UT has an incomplete copy of volumes 31 and 32:
https://search.lib.utexas.edu/permalink/01UTAU_INST/19i7hhk/alma991030494949706011
Hey, traveler.
Wayback Machine has a copy of the original HTML version.
Lassila has a copy of the PDF on her homepage: https://www.lassila.org/publications/2001/SciAm.html
JSTOR (PDF): https://www.jstor.org/stable/26059207
the retailer response is to send me an individual email every time they notice one
It's almost as if link rot is a problem that publishers should, you know, do something about...
this is a problem for print books as well as for the ebooks of course, but I think we’re more content to let the URLs in print books function essentially as decoration—as signs that there is scholarship underlying their claims
baffling
Should we fix these factual errors?
No.
because every ebook looks like a brand-new ebook, and because you’re reading it on your brand-new eighth-generation Kindle Fire, these kinds of factual errors are more jarring than they would be in a print book
In 2010, we thought inserting a picture of the print book index was a reasonable way to do ebook indexes
weird
Truth (New York-based magazine, 1881–1905)
Not to be confused with Truth, the British magazine published 1877–1957.
The book Organization and Success (1923) by William Armstrong Fairburn does not seem to be available online.
Fairburn, William Armstrong, et al.: Merchant Sail
UT purports to have it at PCL, but I haven't verified.
https://search.lib.utexas.edu/permalink/01UTAU_INST/be14ds/alma991024040439706011
Woodroffe, John George, Sir: The Serpent Power
UT PCL has it. https://search.lib.utexas.edu/permalink/01UTAU_INST/3307f/alma991017140839706011
Time and Tide
UT PCL has some of them. https://search.lib.utexas.edu/permalink/01UTAU_INST/9e1640/alma991028908279706011
New York Times Book Review
UT PCL has them, mostly.
https://search.lib.utexas.edu/permalink/01UTAU_INST/19i7hhk/alma991015658839706011
Meeke, Mary: Count St. Blancard
UT has a copy that is a facsimile reprint with a newer (still in-copyright) introduction.
Hays, Mary: The Victim of Prejudice
UT has a copy that is a facsimile reprint with a newer (still in-copyright) introduction.
Denison, Charles W.: Old Slade: or, Fifteen Years Adventures of a Sailor
UT has it in their microfilm deposit.
Nor were we using the pieces in ways inappropriate to their advertised scope of applicability.
Kiczales is fond of the metaphor of implementing a spreadsheet by making each cell its own window under the native platform's windowing system.
This is:
Garlan, David, Robert Allen, and John Ockerbloom. “Architectural Mismatch or Why It’s Hard to Build Systems out of Existing Parts.” In Proceedings of the 17th International Conference on Software Engineering, 179–85. ICSE ’95. New York, NY, USA: Association for Computing Machinery, 1995. https://doi.org/10.1145/225014.225031.
My side projects from 2012-2017 cannot be built or run because of dependencies. My jsbin repo with lots of experiments cannot be run anymore. But I have the sqlite database. I forgot to pin dependencies when I was working. It would take a lot of trial and error and effort to get back to where I was.
The act of authorship is an act of taking fluid human thoughts and structuring them into linear arguments or narratives
I have written down all these thoughts are as ‘remarks’, short paragraphs, of which there is sometimes a fairly long chain about the same subject, while I sometimes make a sudden change, jumping from one topic to another, – it was my intention at first to bring all this together in a book whose form I pictured differently at different times. But the essential thing was that the thoughts should proceed from one subject to another in a natural order and without breaks.

After several unsuccessful attempts to weld my results together into such a whole, I realised that I should never succeed. The best that I could write would never be more than philosophical remarks; my thoughts were soon crippled if I tried to force them on in any single direction against their natural inclination. – And this was, of course, connected with the very nature of the investigation. For this compels us to travel over a wide field of thought criss-cross in every direction.
This precedes Nelson on hypertext.
I have written down all these thoughts are as ‘remarks’
spurious "are" here
Nash’s Magazine: (about) Nash’s Magazine—UK; Apr. 1909-Sep. 1937 (532 issues); merged with The Pall Mall Magazine, Oct. 1914, as Nash’s and Pall Mall Magazine, separated again May 1927-Sep. 1929, re-merged, Oct. 1929 as Nash’s—Pall Mall Magazine; Eveleigh Nash, London (1909-1911), Hearst’s National Magazine Company (1911-1937); monthly; standard format, on pulp paper until Feb. 1910, when better-quality coated stock introduced, with more illustrations; became a large-format slick in 1923; mostly fiction, including Algernon Blackwood, William Hope Hodgson, Oliver Onions, Marie Belloc Lowndes (“The Lodger” Jan. 1911).
I can't seem to locate these issues. If I search Hathitrust or lib.utexas.edu, it just gives me Pall Mall. We know from a 1913 issue of Hearst's, in which Chesterton's "The Treason of a Jingo" was (re-)published, that there is some 1912 issue (apparently September) of Nash's in which the writer Sydney Brooks published "The Conquering English". (Chesterton's piece is a response to Brooks's.) Evidently, it is the September 1912 issue in which Brooks's article appears. However, at the time, Nash's and Pall Mall were still separate. According to Wikipedia, they didn't merge until 1914. And indeed, it looks like there are independent issues for September 1912 of both Nash's and Pall Mall. Viz:
WE are thinking and talking a great deal now-a-days about placing the right man in the right job, about putting round pegs in round holes and square pegs in square holes, and this subject is one of the most vital problems that confronts us all, whether we work for others or employ men to work for us.
Is this a reference to Taylor and the sort of work that the Gilbreths were doing?
As seen in the table above, namespace URIs tend to be long and cryptic, with lots of punctuation and case-sensitive text. In this instance the W3C has compounded the problem by adding dates to ensure that the namespace URIs are unique, as if it were likely that the W3C would create another "XSL/Transform" or "xhtml" namespace in the future. While namespace URIs may be guaranteed to be unique, they are also guaranteed to be impossible to remember. Quick, without checking, can you remember if the namespace URI for W3C XML Schema ends with "xmlschema", "XML/Schema", or "XMLSchema"? Was the namespace URI for SVG allocated in 1999, 2000, or 2001?
It's odd that this is considered to be an issue; I take that to be a consequence of the times.
Does anybody worry about being able to remember the URLs of, say, their Golang imports?
If HTML had been precisely defined as having to have an SGML DTD, it may not have become as popular as fast, but it would have been a lot architecturally stronger.
Alternative take: if the HTML5 parsing algorithm (and its error handling) had been precisely defined, then HTML would have become as popular as fast (maybe even faster?) while being a lot more cross-compatible.
Phillip Hallam-Baker who "wondered about a tag being added to the get protocol to indicate where the text was being accessed from"
There is a set of formats which every client must be able to handle. These include 80-column text and basic hypertext ( HTML ).
TBL says that browsers must be able to handle plain text (and not just that, but 80-column text). I wonder if this mandate appears anywhere else in modern standards (rather than just implemented by convention). It should.
(I am genuinely concerned about the possibility that browsers could/would remove support for plain text.)
Tim Berners-Lee and Robert Cailliau
Those are:
- https://www.w3.org/History/19921103-hypertext/hypertext/Conferences/ECHT90/Authors.html#BernersLee
- https://www.w3.org/History/19921103-hypertext/hypertext/Conferences/ECHT90/Authors.html#Cailliau
attach(target = PocketCastsStarsExport, modules)
Derp. This is the wrong way to do default parameters.
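For posterity, a sketch of the right way: default values belong in the function definition, not at the call site (`target = PocketCastsStarsExport` inside a call is just an assignment expression). `attach`'s actual signature is an assumption here.

```javascript
// Hypothetical signature for attach; the point is only where the
// default lives. Passing undefined (or omitting a trailing argument)
// triggers the default.
function attach(target = PocketCastsStarsExport, modules = {}) {
  Object.assign(target, modules);
}

attach(undefined, { foo: 1 }); // target falls back to the default
```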
PocketCastsStarsExport.UPDATE_ENDPOINT = ( `https://api.pocketcasts.com/sync/update_episode_star` )
The history endpoint is https://api.pocketcasts.com/user/history.
See https://github.com/DanEEStar/listening-history-deno/blob/main/src/pocketCasts.ts
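A minimal sketch of hitting that endpoint; the POST method and bearer-token header are assumptions based on skimming the linked pocketCasts.ts, not verified against any official docs.

```javascript
// Assumes a Pocket Casts session token; error handling kept minimal.
async function fetchListeningHistory(token) {
  const response = await fetch("https://api.pocketcasts.com/user/history", {
    method: "POST", // assumption: the API expects POST, not GET
    headers: {
      "Authorization": `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });
  if (!response.ok) {
    throw new Error(`history request failed: ${response.status}`);
  }
  return response.json(); // presumably a list of episodes
}
```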
I’m gonna go out on a limb and suggest Pratt would not have fared well in the #MeToo era.
#MeToo was about rape, other sexual assault, and harassment. Pratt may very well have been guilty of some or all of these things, but it's not suggested by anything in the preceding passage. (And it's pretty reductive/diminutive of the actual crimes and other transgressions relevant to the #MeToo label to point to that passage and have a response that is essentially, "lol #MeToo amiright?")
Written in Python, Cython
Is this accurate? I don't have a lot of firsthand experience with data science stuff, but usually when looking just past surface-level you find that some Python package is really a shell around some native binary core implemented in e.g. C (or Fortran?).
When I look at the repos for spaCy and its assistant Thinc, GitHub's language analysis shows that it's pretty much Python. Is there something lurking in the shadows that I'm not seeing? Or does this mean that if someone cloned spaCy and Thinc and wrote it in JS, then the subset of data scientists whose work can be done with those two packages (and whatever datavis generators they use) will benefit from the faster runtime and the elimination of figging and other setup?
Eventually, there will be different ways of paying for different levels of quality. But today there are some things we can do to make better use of the bandwidth we have, such as using compression and enabling many overlapping asynchronous requests. There is also the ability to guess ahead and push out what a user may want next, so that the user does not have to request and then wait. Taken to one extreme, this becomes subscription-based distribution, which works more like email or newsgroups. One crazy thing is that the user has to decide whether to use mailing lists, newsgroups, or the Web to publish something. The best choice depends on the demand and the readership pattern. A mistake can be costly. Today, it is not always easy for a person to anticipate the demand for a page. For example, the pictures of the Schoemaker-Levy comet hitting Jupiter taken on a mountain top and just put on the nearest Mac server or the decision Judge Zobel put onto the Web - both these generated so much demand that their servers were swamped, and in fact, these items would have been better delivered as messages via newsgroups. It would be better if the ‘system’, the collaborating servers and clients together, could adapt to differing demands, and use pre-emptive or reactive retrieval as necessary.
It's hard to make sense of these comments in light of TBL's frequent claims that the Web is foremost about URLs. (Indeed, he starts out this piece describing the Web as a universal information space.) It can really only be reconciled if you ignore that and understand "the Web" here to mean HTML over HTTP.
(In any case, the remarks and specific examples are now pretty stale and out of date.)
Bluesky and Mastodon don’t feed off our engagement
The first is the extent of power concentration, which contradicts the decentralised spirit I originally envisioned.
And yet this article was published to Medium.
This foreword is described in the book as being "written as an article in 1997". There's a brief introduction (8 paragraphs dated December 2002), and then what follows is purportedly that same article, which begins, "The Web was designed to be a universal space of information[...]". The acknowledgements of the foreword, too, say that it "is based on a talk presented at the W3C meeting, London, December 3, 1997".
The same material, including acknowledgement, but sans the 8-paragraph introduction, is available on a webpage titled "Realising the Full Potential of the Web" on the W3C site. https://www.w3.org/1998/02/Potential.html
Reversible Object-Oriented Interpreters
A company as big as Intel could obviously write its own OS if it had to
Just because they "could", it doesn't necessarily mean that they could, IYKWIM.
the Berkeley license provides the maximum amount of freedom to potential users
Inrupt's success will depend on good execution
Dries almost but not quite caught that Solid/Inrupt was doomed.
Programming models, user interfaces, and foundational hardware can, and must, be shallow and composable. We must, as a profession, give agency to the users of the tools we produce. Relying on towering, monolithic structures sprayed with endless coats of paint cannot last. We cannot move or reconfigure them without tearing them down.
Counterpoint: the judicious use of abstraction is/can be, in some instances, the solution to giving users agency and reconfigurability.
Software that has to be torn down is the result of building upon bad abstractions. Abstractions are not ipso facto bad. They just need to be chosen on the criterion of whether or not they solve a problem.
Newcomers had a low-cost entry point
The software crisis doesn't just apply to the profession of building software, but to anybody that uses software. Users have little to no control, save for things afforded to them by the author.
curious developers can no longer build software without scaling mountains
It is no longer easy to build software
This is safe to do, because it doesn’t affect any state machine transitions, but merely preserves original 0x00 bytes and delegates their replacement to the parser in the end user’s browser.
Cook, Steve J. (1994) The World isn't Software. Journal of Object-Oriented Programming, 5 (9).
This is wrong. This guest editorial actually appears in Volume 6 (not Volume 5).
feed-published-at is not a standard HTML attribute. validators will complain about it. but i couldn't find an alternative i liked.
As nayuki notes[1], XHTML is usable, and XHTML, being XML, supports namespaces, and this is like the one place where they're perfectly suited and not clunky.
I like to call it inverse vandalism (when vandalism is destroying things just because one can, then inverse vandalism is making things because one can).
again
Previously: https://holzer.online/articles/easteregg-lp-style/index.html
how do I get to this number between 0 and 1? Well, by simply reading on in the spec. There is some pseudo code, which is easily translatable to Javascript (which I'm doing here in the form of a literate program)
We decided that we would like to see better documented code included within web pages for convenient browsing. The motivation behind this peculiar aim is to be able to include high quality documentation alongside working code, hopefully making it easier for programmers to produce more maintainable, readable programs.
In the time we’ve spent together working on this, a strong team culture has formed. The culture is highly optimistic and everyone has a “can do” attitude.
It would be useful to track down the misleading statement that Mozilla PR released that suggested that neither party was receiving kickbacks with the new Pocket integration. The reality is that there was money changing hands related to the decision to integrate Pocket (NB: this was pre-Pocket acquisition by Mozilla), but the statement was worded just so to merely suggest that no money was changing hands without ever explicitly stating so—the idea being no doubt that they could claim plausible deniability wrt any false statements and blame the reader/listener for misunderstanding. The problem with this is that it backfired because it was so successful that Mozilla programmers who weren't in-the-know themselves took the statement to mean exactly the thing implied, and then they took to all sorts of public fora and "refuted" people using the PR piece, only these duped employees were explicitly claiming that there wasn't anything untoward going on, rather than merely implying it the way the PR statement had. Plausible deniability moot.
(I was hoping after stumbling upon this old piece that I'd see the statement here, which would allow me to trace the contamination to e.g. HN comment threads around the same time, but this isn't the statement. It's a good clue as to when, precisely, it might have been issued.)
Just write the stupid dispatch manually and get on with the real work.
Dear wanderer:
You're looking for http://software.rochus-keller.ch/screenshot_oberon_ide_0.5.1.png
Dear wanderer:
You're looking for http://software.rochus-keller.ch/screenshot_oberon_system_in_debugger.png
They never end up in the terminal, because that is a huge jump in complexity, usability, and frustration
I've been saying for years that if, say, the Gnome project wrote a new terminal emulator and replaced the default desktop terminal emulator with it and they made sure that the new one had a scrollback buffer that made your call-and-response session with the machine look more like iMessage/SMS bubbles that people are more comfortable with, people would be a lot less reluctant to use it. Once you've done that, replace Bash with an even better shell (in the same vein as the overhaul I just mentioned—not merely with something as conservative as fish or zsh), and you could magnify that effect 10x.
The real problem is:
- public perception upon showing someone a terminal window—that they immediately adopt the thought, "Uh-oh. I don't know about this computer stuff, and I shouldn't be here"
- the fact that the default terminal emulators do evoke that feeling
- and the fact that tech folks are okay with this (and defensive of it, even)
the right knowledge to make a full-stack app
Worth considering Brooks's distinction of essential versus incidental complexity. It's especially worth considering the instances where the "incidental" part is incidental only in the sense that, if it were easier, it would make a lot of people unhappy, for reasons that I call "the consultant effect".
what I call the command line wall
See also: Philip Guo on "command-line bullshittery".
writing code for its own sake
we need to stop harassing normal nurses, teachers, and therapists to code
I'm not saying everything Mao did was great, but this was a pretty good program. I'm sure he had very little to do with it.
I actually don't want to talk about how advances in AI will affect professional developers. With all due respect, we have it pretty good.
Even those of us who are professionals don't always have the right knowledge to make a full-stack app for ourselves.
showing historical borders or tidal patterns
Isn't that problem a better fit for linked data, rather than an app?
He painted a vision of applications that could be used by dozens of users rather than thousands or millions. This is an absurd target population both then and now.
It's interesting, because there's tons of software out there that has exactly one user, and then it drops off sharply for x>1 and then goes back up when you get into, what, I dunno, the hundreds?
what I've called the barefoot developer
most of these notebooks run entirely in the browser
except for the part that doesn't because it runs on the server and uses the browser as a thin client
This approach of interleaving documentation and source code is called literate programming and was first proposed by Donald Knuth in 1984. He argued that all code should be written in these linear, readable documents. See Chris Granger's Eve project for a beautiful example of mixing prose and code.
See also Raskin on:
* The Woes of IDEs
* Comments are more important than code
"Computational notebooks"Listed as a notebook interface on Wikipedia, which seems like a much worse name to me. have emerged as one of the best solutions to the problem
there's no infrastructure to guide them step by step through what the code does and in what order
When you want to show someone a single function or a quick experiment, it's too much to ask them to install a bunch of command line crap.
A response of sorts to commit robbery.
It looks like it's this:
https://ariel-miculas.github.io/How-I-got-robbed-of-my-first-kernel-contribution/
our founder, Dan Whaley, transitions to the role of President and Chief Product Officer
some non-technical people are just scared of any monospaced writing with syntax highlight
An obvious question arises: why not just not format it like that?
How many people could you trick into using a conventional CLI if the text entry and output weren't in a window that looks like a traditional terminal emulator? What if the commands were more humane (like a step up from PowerShell) and the screen looked like you were interfacing with a not-explicitly-human agent on the other end of a messaging-like app?
what I call "SQL-by-mouse"
This doesn't allow easy creation of bookmarklets like:
javascript:(`foo <b>bar</b>`)
We need explicit detection/support for these.
there is also an honesty problem when contents change or update without record
To underscore this, I've also settled on characterizing this as a problem of honesty. I put it in terms of lying—i.e. people lying about the identity (URL) of their work (Web resources).
URLs should not be considered reusable/recyclable—at least for the duration of the original publishing authority's continued renewal and control of the domain where it appeared (and even then...)
The ratio of time spent on precedence parsing in compilers to utility feels very low
because there is no compilation step
That (a compilation step) is not why you get advance warning of the sort the interviewer means when you're programming in C++ and Java. It's static typechecking that's responsible for that. You can have static typechecking without also requiring compilation.
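TypeScript's checker makes the point concrete: with `// @ts-check` (or `tsc --noEmit`) you get the interviewer's "advance warning" on plain JavaScript without ever producing a compiled artifact. A minimal sketch:

```javascript
// @ts-check
// The checker (or an editor's language server) reads the JSDoc
// annotation and flags the bad call below statically; nothing is
// compiled, and the file runs as ordinary JavaScript.

/** @param {number} n */
function double(n) {
  return n * 2;
}

double("oops"); // flagged before run time: a string is not a number
```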
I went back to Dmitry and asked him if my understanding of “everything is a table” was correct and, if so, why Lua was designed this way. Dmitry told me that Lua was created at the Pontifical Catholic University of Rio de Janeiro and that it was acceptable for Pontifical Catholic Universities to design programming languages this way.
Take three different translations of any book – say, The Brothers Karamazov or Madame Bovary – and compare them to see how many times they have entire sentences exactly the same. Never, or almost never.
I'm curious if Bart has ever actually tried doing this exercise.
basic necessities, like electricity or a quiet place to study
not a bad start at a rubric
plus a strict Content Security Policy ruleset which disallows scripting
Again, you don't need that...
secure
What does "secure" mean?
data:// urls
What's a "data:// url"?
WACZ - Web Archive Collection Zipped - is used by the WebRecorder project, which seems to be an active effort to create an open standard for web archiving, though you wouldn't realize it by the design of their website. I almost thought it was also a dead 1990s effort until I saw the August 2022 update.
What? Neither one of those links is particularly sparse, and both have the marks of modernity.
Pretty cool.
That's it? Where's the analysis?
Zippy
whoever came up with Apple's property list format didn't really understand how XML and/or SGML-like tags actually worked. Does it make sense to you that p-lists have stuff like <key>WebResourceData</key> instead of simply just <WebResourceData> ? It's like they were confused
rather than separating out the various media types - css, images, icons, etc., the browsers just dump them all into a single folder
a CSP header on the index page to prevent malicious scripts from running
Browsers don't need the author to put CSP headers on the page [sic] or the server response to prevent scripts from running. They can just not run the scripts.
Linux is the only OS that actually pays attention to a text-file's MIME type
What does this even mean?
just think how happy TBL will be to finally have Phase 2 completed after 30 years
Not mentioned explicitly by this author, and he does say "completed" here, but it's not the case that TBL never got around to Phase 2. The original WorldWideWeb.app did do document editing.
Using a standard set of semantic HTML and CSS as the underlying markup for documents solves these problems
It doesn't solve anything re using Git for version control. Any problems that exist with docx are going to exist with this proposal, too.
nearly impossible to version using git
Not really.
Rich text, such as bold and italic, among other examples, shouldn't be optional or considered extraneous to language. The fact that computing technology has gone so long ignoring these essential parts of communication is bewildering. You can send a text message from your phone including a variety of customizable emojis in various skin tones, but basic text formats used for literally hundreds of years are either impossible to enter, or lost in transmission. There's more than subtle a difference between, "You really should do something," and "You really should do something". Having to write out ideas using plain text with weird symbols such as _ this _ or * this * is truly a loss, and in the 21st century, completely inexcusable.
I disagree.
You really should do something
There are almost-invisible-to-the-naked-eye mistakes in the markup here. Notable? Telling?
The problem is that HTML can now do so much, that any attempts to create a consumer-focused app to edit it soon get unfocused and unusable.
Controversy and chaos is good for their platform
You can whip up cover letters in no time using ChatGPT! Just paste in your resume text, position title and company name and ask it to write a cover letter for you. It summarizes your skills really well in context of the position and company. Such a time saver. Like everything else AI does lately, it's absurdly good and in Ryan Reynold's words, "mildly terrifying." I have no idea who actually reads cover letters
Software is a brutal industry and appearing to be unintelligent can harm your career badly
social dynamics in tech jobs punish people who say things like, “Uh, this is too complicated to me.”
most of the coding trouble I’ve ever gotten myself into was mainly a result of thinking I was smart
See also: the rise of orthogonal version control systems, aka "language package managers".
This comment is close, but it's also about control.
Safety Tip Always use === (triple equals) and !== when testing for equality and inequality in JavaScript.
An example of how not to approach things when aiming to grok.
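Compare actually grokking it: the tip papers over JavaScript's coercion rules, which are standard, learnable semantics:

```javascript
// What == actually does (abstract equality, with coercion) versus
// === (strict equality, no coercion). All standard JavaScript.
console.log(0 == "0");           // true:  "0" is coerced to the number 0
console.log(0 == "");            // true:  "" coerces to 0
console.log("0" == "");          // false: both strings, compared as strings
console.log(0 === "0");          // false: different types, no coercion
console.log(null == undefined);  // true:  a special case in the spec
console.log(null === undefined); // false: different types
```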
a quick introduction to what happens when you run the basic Git commands
This is the root cause of the issue re failure to understand.
we write functions as functionName rather than functionName(); the latter is more common, but people don’t use objectName{} for objects or arrayName[] for arrays, and the empty parentheses makes it hard to tell whether we’re talking about “the function itself” or “a call to the function with no parameters”
Its performance is not very different from the system versions of grep, which shows that the recursive technique is not too costly and that it's not worth trying to tune the code.
The occurrence of a do-while instead of a while should always raise a question: why isn't the loop termination condition being tested at the beginning
A Window object represents the actual window of the web browser.
No it doesn't. Window is pretty obviously a recapitulation of the W3C DOMWindow.
So What Would a Static Site Generator for the Rest of Us Look Like?
Not like a static site generator, that's for sure. Normal people don't want a step in between input source code and the output. They don't want a difference between input and output at all. Programmers want a compilation step, because they're programmers.
Not a web developer? Sucks to be you. The vast majority of the static site generator tools out there are run from the command line, powered by things you've never heard of like Node, Grunt, or Babel.
can build a site in a jiff using any number of site builders like Jekyll
Wirth himself realized the problems of Pascal and his later languages are basically improved versions of Pascal -- Modula, Modula-2, and Oberon. But these languages didn't even really displace Pascal itself let alone C -- but maybe if he had named them in a way that made it clear to outsiders that these were Pascal improvements they would have had more uptake.
Modula and Oberon should have been codenames rather than independent projects.
"=" to mean assignment and resorting to a special symbol for equality, rather than the obviously better reverse
Pascal largely lost to its design opposite, C, the epitome of permissiveness, where you can (for example) add anything to almost anything
C programmers balk and cry, "JavaScript!"
Englebart
NB: "Engelbart"
I spend an inordinate amount of time helping the kids set up their build environments
in Java, the vulgar Latin of programming languages. I figure if you can write it in Java, you can write it in anything
One of my favorite turns of phrase about programming. I come back to it multiple times a year.
You can do this with recursive descent, but it’s a chore.
Jonathan Blow recently revisited this topic with Casey Muratori. (They last talked about this 3 years ago.)
What's a little absurd is that (a) the original discussion is something like 3–6 hours long and doesn't use recursive descent—instead they descended into some madness about trying to work out from first principles how to special-case operator precedence—and (b) they start out in this video pooh-poohing people who speak about "recursive descent", saying that it's just a really obnoxious way to say "writing ordinary code"—again, all this after they went out of their way, three years ago, to not "just" write "normal" code—and (c) they do this while launching into yet another 3+ hour discussion about how to do it right—in a better, less confusing way this time, with Jon explaining that he spent "6 or 7 hours" working through this "like 5 days ago". Another really perverse thing is that when he talks about Bob's other post (Parsing Expressions) that ended up in the Crafting Interpreters book, he calls it stupid because it's doing "a lot" for something so simple. Again: this is to justify spending 12 hours to work out the vagaries of precedence levels and reviewing a bunch of papers instead of just spending, I dunno, 5 or 10 minutes or so doing it with recursive descent (the cost of which mostly comes down to just typing it in).
So which one is the real chore? Doing it the straightforward, fast way, or going off and attending to one's unrestrained impulse that you for some reason need to special-case arithmetic expressions (and a handful of other types of operations) like someone is going to throw you off a building if you don't treat them differently from all your other ("normal") code?
Major blind spots all over.
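For scale, here's roughly what the dreaded "chore" amounts to for two precedence levels: one function per level, each deferring to the next-tighter one. A minimal sketch assuming a pre-tokenized input; all names are mine.

```javascript
// Recursive descent with precedence baked into the call structure:
// parseAdditive (loosest) -> parseMultiplicative -> parsePrimary.
let tokens = [];
let pos = 0;

function parseExpression(toks) {
  tokens = toks;
  pos = 0;
  return parseAdditive();
}

function parseAdditive() { // handles + and -
  let left = parseMultiplicative();
  while (tokens[pos] === "+" || tokens[pos] === "-") {
    const op = tokens[pos++];
    left = { op, left, right: parseMultiplicative() };
  }
  return left;
}

function parseMultiplicative() { // binds tighter: * and /
  let left = parsePrimary();
  while (tokens[pos] === "*" || tokens[pos] === "/") {
    const op = tokens[pos++];
    left = { op, left, right: parsePrimary() };
  }
  return left;
}

function parsePrimary() { // numbers and parenthesized subexpressions
  if (tokens[pos] === "(") {
    pos++; // consume "("
    const inner = parseAdditive();
    pos++; // consume ")"
    return inner;
  }
  return { value: Number(tokens[pos++]) };
}

// parseExpression(["1", "+", "2", "*", "3"]) nests the "*" node under
// the "+" node, i.e. multiplication binds tighter, with no special
// casing anywhere.
```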
My process is very simple. They are way more technical than me. I start off by entering everything into Word
There’s not much of a market for what I’m describing.
There is, actually. Look at Google Docs, Office 365, etc. Those are all an end-run around the fact that webdevs are self-serving and haven't made desktop publishing for casual users a priority.
The webdev industry subverts users' ability to publish to the Web natively, and Google, MS et al subvert native Web features in order to capture users.
The users are there.
"I've been thinking about the problem with division of labor for 7 years now, and I think I've boiled it down to two sentences. Why division of labor is disempowering: 1. (the setup) Power = capability - supervision. 2. Division of labor tends to discourage supervision."
I think this is too pithy. It's hard to make out what applies to which actors and what's supposed to be good or bad; in order for me to understand this, I have to know a priori Kartik's position on division of labor (it's bad), then work backwards to see what the equations are saying and try to reconstruct his thinking. That's the opposite of what you want! The equations are supposed to be themselves the explanatory aid—not the thing needing explanation.
Division of labor is an extremely mature state for a society. Aiming prematurely for it is counterproductive. Rather than try to imitate more mature domains, start from scratch and see what this domain ends up needing."
Experts without accountability start acting in their own interests rather than that of their customers/users. And we don’t know how to hold programmers accountable without understanding the code they write.
In a healthy community people do their reading in private, and come together to discuss what they read.
Looking at the screen captures, one thing I like about HIEW is that it groups octets into sets of 32 bits in the hex view (by interspersing hyphens (-) throughout). Nice.
(Sounds a little antisocial, sure, but you can imagine good reasons.)
Geez. What?
I'm not even sure Brent actually believes this so much as that he felt the need to post a defense. Or maybe he really does believe it. But it needs no defense.
And then a couple months went by and Apple introduced Swift—decidedly not a scripting language—and eventually Brent bought in almost all the way.
I think librarians, like all users of web-based information systems, should be unpleasantly surprised when they find that their systems haven't been engineered in the common sense ways that make them friendly to ad hoc integration.
peak
Over thirty years later, in 2021, we finally got to see some of the original source code for the World Wide Web. In June of this year, Berners-Lee put an NFT (non-fungible token) of nearly 10,000 lines of the code up for sale at Sothebys.
This suggests that the source code wasn't available before the NFT auction. It's been public domain for 30+ years.
getting into a position to think
Often when I think about the problem of disruptions, environmental distractions, &c. which often results in total productivity death, I'm reminded of Licklider's "getting into a position to think" quip. It's not what he meant when he said it, and when I read him, I understand what he meant, but I somehow always forget and instead most strongly associate it with the process of eliminating disruptions.
However, after finding the magic number, unzip does not check if the comment length correctly describes the comment that must follow. Rather, unzip only checks to make sure the comment length is small enough to not cause an out-of-bounds read beyond the end of the zip file. This means that unzip tolerates arbitrary data appended to the end of a zipfile without even so much as a warning. The zip file spec does not allow this arbitrary data
Yeah, no.
The only way to find the End of Central Directory Record is to do a linear search backwards from the end of the file, but even that is not guaranteed to find it. This is because the comment itself can be anything; it can be any bytes; it can even contain the magic number we're looking for.
It's not that difficult.
Scan backwards for the magic number. If you find it, keep scanning and look for other occurrences. If you only found one, then congratulations: you're done—you found the end of central directory record.
The fact that metadata defining the bounds of the comment block are in the end of central directory record at a fixed offset makes this super easy: for each candidate record, assume that it's a well-formed record and compute the boundaries of the comment block. Also compute what would be the boundaries of the start and end of the central directory record. If any of the boundaries are somehow illegal (e.g. they lie past the end of the file), then clearly this candidate is not the right one. If the offset of any candidate record lies within the boundaries of the comment block defined by an earlier candidate record, then the earlier record takes primacy and the later record should be eliminated as a candidate. Of the candidates that remain, choose the one nearest the end. That's it.
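As a sketch of the above (function and variable names are mine; the magic number and the 22-byte fixed record size come from the zip spec):

```javascript
// Scan backwards for every occurrence of the EOCD magic number, then
// eliminate candidates exactly as described above. Assumes `buf` is a
// Node.js Buffer containing the whole zip file.
const EOCD_MAGIC = 0x06054b50; // "PK\x05\x06" read little-endian
const EOCD_FIXED_SIZE = 22;    // record size excluding the comment

function findEndOfCentralDirectory(buf) {
  // Every offset where the magic number appears, earliest first.
  const candidates = [];
  for (let i = buf.length - EOCD_FIXED_SIZE; i >= 0; i--) {
    if (buf.readUInt32LE(i) === EOCD_MAGIC) candidates.unshift(i);
  }

  // Drop candidates whose implied comment would run past end-of-file.
  const legal = candidates.filter((offset) => {
    const commentLen = buf.readUInt16LE(offset + 20);
    return offset + EOCD_FIXED_SIZE + commentLen <= buf.length;
  });

  // An earlier record takes primacy: eliminate any candidate that lies
  // inside the comment block of a surviving earlier candidate.
  const survivors = [];
  for (const offset of legal) {
    const shadowed = survivors.some((earlier) => {
      const commentStart = earlier + EOCD_FIXED_SIZE;
      const commentEnd = commentStart + buf.readUInt16LE(earlier + 20);
      return offset >= commentStart && offset < commentEnd;
    });
    if (!shadowed) survivors.push(offset);
  }

  // Of the candidates that remain, choose the one nearest the end.
  return survivors.length > 0 ? survivors[survivors.length - 1] : -1;
}
```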
What if there are multiple newlines?
How could this possibly pose a problem...?
Available Formats CSV
This would be a good candidate for WebCSV (also known by the more official but definitely worse name CSVW).
Note that this registry omits things such as NOTIFY and M-SEARCH from SSDP (part of the UPnP spec and described on the Cloudflare blog as "poorly standardised"[1] but used nonetheless for various devices, such as Roku[2]).
HTTP Extension Framework
This is RFC 2774.
it's a miracle actually it's not you know even if somebody's copying something it doesn't mean it's not America it could still be a miracle I'm not precluding a miracle there I'm just saying somebody copied
[Laughter] Said, "Yeah, okay. It's a miracle." And I said, "Actually, it's not— you know, even if somebody's copying something, it doesn't mean it's not a miracle. It could still be a miracle. I'm not precluding a miracle there. I'm just saying somebody copied."
NB: this isn't logically consistent.
No more bugzilla and GitHub morning triage.
Well, that's something you chose, not something imposed upon you.
Reading text with a simple, clear, uncluttered layout without any animation or embedded videos or sidebars full of distracting, unrelated extras. If you use the "Reader Mode" in your web browser a lot and you love it because you think that 99% of the time it makes webpages ten times easier to use by throwing out all the useless clutter and just giving you what you want
So sidestepping the sorts of things that result in dark blue text with red links on darkish green backgrounds?
thanks to the complexity of JDSL it took days to do coding work that should only take minutes
“Let me know if you have any more questions,”
Here's one: "But why?" In other words, "What problem does this solve?"
all you have to do
Scott laughed. “You wouldn’t want to ‘just’ run it.
the non-technical interviewer’s comment that it was all “built on top of Subversion” which he assumed was a simple misunderstanding
The author describes in this article a pathological instance of what I've been calling "orthogonal version control systems".
And I think this whole story was made up in jest, but it's basically the principle behind NodeJS/NPM's package.json design—subvert your project's source tree and well-founded version control discipline with your own cockamamie scheme.
For a full and authoritative description of this object, see the University of Texas Library Catalog.
My work is part of a larger effort to reframe what we think about Victorian life
Okay. What do we think about Victorian life?
JS is used pervasively with Gnome. As prior art, JS had always been a major part of the Firefox codebase—the app was built with XUL widgets and XBL, which was essentially JSX and Web Components before those ever existed. With a lot of focus on making JS engines fast after Google introduced V8 with Chrome, Gnome started looking at alternatives to GTK-with-C for app development on Gnome. About a year or two before GitHub released Atom, the Gnome folks convened and said that JS was going to be not just a tier-1 language for GTK, but the language that the project would push for Gnome desktop app development. By then integration was pretty mature and had proven itself.
This upset a lot of people on Planet Gnome, though, and they basically revolted. Gnome as a project ended up putting out Gnome Shell, but sort of softened the prior commitment to JS. Too bad. Instead what we got was NPM and Electron, which, in addition to being bad enough on their own, have also gone on to infect the places where you'd have traditionally encountered JS (i.e. web development).
Most people who boot into a Gnome desktop and open up Firefox and then proceed to opportunistically rail in online forums against "JS" (when what they mean is "the NodeJS community and the way that NPM programmers do things") are either unaware of the state of affairs, or are aware but constantly forgetting—i.e. acting and speaking indistinguishably from the sort of people who don't know these things. It's weird. JS isn't slow. It isn't bloated. (Certainly not in comparison to, say, Python.) You can write command-line utilities that finish before equivalent programs that are written in Java do, and if you avoid antipatterns peddled as best practices (basically everything that people associated with Electron suggest you do), you can make desktop apps snappy enough that no one even knows what's happening behind the scenes.
It's a massive shame that the package.json cult has cannibalized such a productive approach to computing.
Since it’s a JavaScript package, I had to resign myself to introducing an optional dependency on Node.js
Get 4 hours back every week.
Great pitch.
Every time I changed labs and computers during my postdoc years, I had to spend a day or two to reinstall everything I needed
When the designer on the team, who also writes CSS, went to go make changes, it was a lot harder for them to implement them. They had to figure out which file to look in, open up command line, run a build step, check that it worked as expected, and then deploy the code.
I hate npm so much. I had a situation where I couldn't work on a project because I couldn't get the dev environment running locally.
Hey, traveler. You'll be interested in:
* https://archive.org/details/wholeearthreview00unse_9
* https://wholeearth.info/p/whole-earth-review-spring-1987
“Various people asked to do various things with it, and they referred them to this guy who didn't respond,” Brand says. “And so it was just frustrating for decades.”
What’s a callback? I’ve never truly understood that
Huh?
This comes as an inevitable consequence of the fact that we changed the world once, and are lining up to do so again.
I'd call this quaint in hindsight, but it was obvious with basic levels of foresight that Firefox OS was going to fail.
No author WANTS to mark emphasis or important text.
lol, what?
Can you say that EVERY SINGLE TIME I want bold text that will match the semantics of <strong>? If that's true, then it shouldn't be called <strong>, it should be called <bold>.
You're almost there, buddy. You're so close.
This whole thread feels like an apolitical art project from the types of people who hang out in /r/SelfAwareWolves.
curl (including libcurl) ships a new version at least once every eight weeks. We merge bugfixes at a rate of around three bugfixes per day.
Interesting that the way this is framed tries to give it an incredibly positive spin. In reality, you might as well say, "Look how many bugs we're able to write (and still get people to use the project)."
There’s an idea in the science-fiction community called steam-engine time, which is what people call it when suddenly twenty or thirty different writers produce stories about the same idea. It’s called steam-engine time because nobody knows why the steam engine happened when it did. Ptolemy demonstrated the mechanics of the steam engine, and there was nothing technically stopping the Romans from building big steam engines. They had little toy steam engines, and they had enough metalworking skill to build big steam tractors. It just never occurred to them to do it.
When Ra is active, you’ll see a persistent disposition, in otherwise intelligent people, to misunderstand trade or negotiation scenarios as dominance/submission scenarios.
Fuck. I just noticed that this line was in here!
Nastasya Philipovna, in The Idiot, demonstrates this kind of anger; when she meets the man who embodies her moral ideal, instead of reaching out to him as a lover, she is outraged that he’s being shabby and noble and ignoring the “way of the world”, and she actively ruins his life. It’s not that she doesn’t appreciate goodness; it’s that it freaks her out. People ought not be that good. It disturbs the universe. Myshkin is missing something — it’s not clear what, because if you look at his words and actions explicitly he seems to be behaving quite sensibly and moderately — but he’s missing some intuition about the “way of the world”, and that enrages everyone around him.
Use stable interfaces instead of unstable ones at the appropriate software boundaries. It's a shame that this needs saying.
The presence of such features can be outright dangerous if a web application is used for controlling a medical system or a nuclear plant.
Untrue. It is not the presence of these things that "can be outright dangerous". If the programmer is reckless—doing things he or she shouldn't be—then certainly things can get dangerous. But it's a basic responsibility of the programmer not to be reckless.
This is:
Taivalsaari, Antero, Tommi Mikkonen, Dan Ingalls, and Krzysztof Palacz. 2008. “Web Browser as an Application Platform: The Lively Kernel Experience.”
Boy, this is hard to read. I know Marcel has blogged about this, so I won't mention my usual prescription that every academic article needs to be accompanied by a blog post. But I do wish every academic article were required to come with a single page cover sheet that authors are required to fill out and that starts with the words "check this out" or something else of the author's choosing if it can be shown to be equally compelling. It should not be subject to the template that the journal uses.
This is:
Weiher, Marcel, and Robert Hirschfeld. 2019. “Standard Object out: Streaming Objects with Polymorphic Write Streams.” In Proceedings of the 15th ACM SIGPLAN International Symposium on Dynamic Languages, 104–16. DLS 2019. Athens, Greece: Association for Computing Machinery. https://doi.org/10.1145/3359619.3359748
My NGVCS dream implies defacto centralization.
I'm not seeing it.
It'd be a hell of a lot easier to contribute to open source projects if step 0 wasn't "spend 8 hours configuring environment to build"
I call this implicit step zero.
Side note: I have for a long time (>10 years) been an advocate for the unbundling of browser history and bookmarks from the browser itself—not unlike the way Firefox was extracted as a standalone app from the Mozilla project. Firefox just didn't go far enough. I shouldn't have separate app-managed browsing histories for both Chrome and Firefox. (Syncing is not the answer here.) Each should just read and write to the same place on my machine. Same story for bookmarks. Same story for downloads. (Download management, that is—downloads can be written wherever, but when a download is initiated, it should be managed by the system download manager.)
The natural conclusion of most tools for thought is a relational database with rich text as a possible column type. So that’s essentially what I built: an object-oriented graph database on top of SQLite.
Dude, just embrace the Web already.
(NB: By "the Web" I really do mean the Web (URLs, etc) and not browser-based tech like HTML, JS, and CSS.)
drowned in an ocean of banality
... or an ocean of utility, even. See: https://en.wikipedia.org/wiki/Map–territory_relation.
Organizing collections with the filesystem is difficult, because of the hierarchical nature of the filesystem
Sure, imposing hierarchies on data that doesn't fit is a problem, but file systems support symbolic links. And there's the seldom-exercised option of having multiple hard links, too.
the higher the activation energy to using a tool, the less likely you are to use it. Even a small amount of friction can cause me to go, oh, who cares, can’t be bothered
And yet I don’t use them.
The same is true of most personal sites, generally; the gamedev metaphor can be adapted to blog software vs using Twitter. (Twitter would always win.)
Mutability as a liability
In 1945, Vannevar Bush proposed the idea of memex, a hypertext system.
Bush. As We May Think. The Atlantic. 1945.
They are added as simple, unidirectional links by the original authors of whatever it is you’re reading. You can’t add your own link between two pages on New York Times that you find relevant. You can’t create a “trail” of web documents, photographs and pages that are somehow relevant to a topic you’re researching.
This is confused. You are every bit as able to do that as with the medium described in As We May Think. What you can't do is take, say, a copy of an issue of The Atlantic, add links to it, and expect them to magically show up in all copies of the original. But then you can't do that with memex, either, and Bush doesn't say otherwise.
On the web, documents aren’t yours. Almost universally, what you read on the internet is on someone’s else server. You cannot edit it, or annotate it.
Actually, you can.
A user on HN writes on the topic of blogging that they've reverted to a publishing regime where they "just create github gists now" and "stopped trying to make something fancy". They're not wrong to change their practices, but it's a nonsequitur to give up maintaining control of their own content.
The problem to identify is that they were building thing X—a personal website probably with a traditional (or at least fashionable) workflow centered around a static site generator and maybe even CI/CD—but they never really wanted X, they wanted Y—in this case GitHub Gists (or something like it). Why were they trying to do X in the first place? Probably some memetic notion that this is what it looks like when you do a personal website. Why is that a meme? Who knows!
Consider that if you want a blogging workflow built around a gist-like experience, you can change your setup to work that way instead. In other words, instead of trying to throw up a blog based on some notion that it should look and feel a certain blog-like way, you could just go out and literally clone the GitHub Gists product. Along the way, you'll probably realize you don't actually want that, either. How important is it, really, that there's a link to the GitHub API in the footer, for example?
The point is, though, that you shouldn't start by trying to imagine what your work should look like based on trends of people blogging about blogging setups that they never use and then assume that you'll like it. Start with something that you know you like and then ask, "What can I get rid of such that dropping it either doesn't hurt my experience or actually improves it?"
See also:
- Blogging vs. blog setups.
- New city, new job, new... website?
Rust may be too complex to learn for scientists who aren't paid to write software but to do research
there are probably more infrequent developers for any popular language than you might think
Truth.
if you're going to write a 00:35:14 plugin for an ide prepare for your hello world to be days of learning and pages of code just to do the hello world
If you're going to write a plugin for an IDE, prepare for your hello-world to be days of learning and pages of code just to do the hello-world.
now i would love someday to do a plug-in for intellij that understands all of the 00:33:01 custom stuff for my game code right you know i would love to but you know that's that's a project
Now, I would love someday to do a plug-in for IntelliJ that understands all of the custom stuff for my game code. Right? You know, I would love to, but you know that's that's a project.
started a Patreon to help support the exploding usage
Crazy. Consider how this compares to sharing the same stuff via blog posts + RSS.
Not all of this is necessary to make a fast, fluid API
Mm... These should be table stakes.
You’re meticulous about little micro-optimizations (e.g. debouncing, event delegation
"Meticulous" (and calling these "micro-optimizations") is a really generous way to label what's described here...
There’s not much you can do in a social media app when you’re offline
I dunno. That strikes me as a weird perspective. You should be able to expect that it will do at least as much as a standard email client (which can do a lot—at a minimum, reading/viewing existing messages and searching through them and the ability to compose multiple drafts that can be sent when you go back online).
someone did a recent analysis showing that Pinafore uses less CPU and memory than the default Mastodon frontend
Given what the Mastodon frontend is like, it would be pretty concerning if that weren't true.
the fact that Mastodon has a fairly bog-standard REST API makes it pretty difficult to implement offline support
Huh? This comes across as nonsequitur.
it would be a pure DX (Developer Experience) improvement, not a UX (User Experience) improvement
This raises questions about how much the original approach made for good DX in the first place (and whether or not the new approach would). That is, when measured against not using a framework.
The whole point of these purported DX wins is supposed to be just that: DX wins. When framed in the terms of this post, however, they're clear liabilities...