3,446 Matching Annotations
  1. May 2025
    1. In practice, we can't even get web apps to work on desktop and mobile, to the point where even sites like Wikipedia, the simplest and most obvious place where HTML should Just Work across devices, has a whole fricking separate subdomain for mobile.

      This is a terrible example.

      That is 100% a failing of the folks making those decisions at Wikipedia, not HTML.

    2. I don't want to have to read documentation on how my SSG works (either my own docs or docs on some website) to remember the script to generate the updates, or worry about deploying changes, or fiddling with updates that break my scripts, or anything like that.

      This flavor of fatigue (cf "JavaScript fatigue") is neither particular to nor an intrinsic consequence of static site generators. It's orthogonal and a product of the milieu.

    3. being "followed" isn't always a good thing, it can create pressure to pander to your audience, the same thing that makes social media bad.

      It can also create pressure to hold your tongue on things you might otherwise be comfortable expressing, were it not for the meticulously indexed, reverse-chronological stream of every sequence of words you've authored being made ambiently available to the world.

    1. The way you make code last a long time is you minimize dependencies that are likely to change and, to the extent you must take such dependencies, you minimize the contact surface between your program and those dependencies.

      It's strange that this basic truth is something that has to be taught/explained. It indicates a failure in analytical thinking, esp. regarding those parts of the question which the quoted passage is supposed to be a response to.

    2. That dependency gets checked into your source tree, a copy of exactly the version you use. Ten years later you can pull down that source and recompile, and it works

      In other words: actually practicing version control.

    1. Figure 5.

      The screenshot here (p. 33) reads:

      Introduction

      The intent of this paper is to describe a number of studies that we have conducted on the use of a personal computing system, one of which was located in a public junior high school in Palo Alto, California. In general, the purpose of these studies has been to test out our ideas on the design of a computer-based environment for storing, viewing, and changing information in the form of text, pictures, and sound. In particular, we have been looking at the basic information-related methods that should be immediately available to the user of such a system, and at the form of the user interface for accessing these methods. There will undoubtedly be users of a personal computer who will want to do something that can be, but not already done; so we have also been studying methods that allow the user to build his own tools, that is, to write computer programs.

      We have adopted the viewpoint that each user is a learner and have approached our user studies with an emphasis on providing introductory materials for users at different levels of expertise. Our initial focus has been on educational applications in which young children are taught how to program and to use personalizable tools for developing ideas in art and music, as well as in science and mathematics.

      It is titled "Smalltalk in the Classroom" and attributed to "Adele Goldberg and Alan Kay, Xerox Palo Alto Research Center, Learning Research Group". Searching around indicates that this title was re-used in SSL-77-2, but neither introduction in that report matches the text shown here.

      Searching around for distinct phrases doesn't turn anything up. Is this a lost paper? Is there some Smalltalk image around from which this text can be extracted?

    1. We all play the game we think we can do better at.

      Is that actually true? Surely there are examples where people play the game that they're less suited for—where the decision is driven by desire?

    1. There are a number of ways to become G, but usually you do it by adopting a complainer mindset. You wake up in a bad mood. You find little flaws in everything you see in the world, and focus on those.

      I don't think that's right—

      My life experiences during the (now bygone) Shirky era and the loose school associated with it really inculcated (in me, at least) the value of small, cumulative improvements contributed by folks on a wide scale. See something wrong? Fix it. Can't fix it (like, say, because you don't have the appropriate authorization)? File a bug so the people who can fix it know that it should be fixed. This matches exactly the description of seeing the "little flaws in everything you see in the world, and focus[ing] on those".

      Looking at those flaws and thinking "this thing isn't as good as it could be" is a necessary first step for optimism. That belief, paired with the belief in the possibility of getting it fixed, is the optimist approach.

      When I think of miserable people (and the ones who make me miserable), it's the ones who take the attitude of resignation that everything is shit and you shouldn't bother trying to change it because of how futile it is.

    1. This is a new technology, people (read: businesses) want to take advantage of it. They are often ignorant, or simply too busy to learn to harnass it themselves. Many of them will pay you to weave their way on the web. html programming is one of the most lucrative, and most facile consulting jobs in the computing industry. Setting up basic web sites is not at all hard to do, panicked businesses looking to build their tolllane on the infoway will pay you unprecedented piles of cash for no more than a day's labour.
    1. You probably spew bullshit too. It’s just that you’re not famous, and so it may even be harder to be held accountable. Nobody writes essays poking the holes in some private belief you have.

      The at-oddsness of the two things mentioned here—spewing bullshit and private beliefs that someone could correct you about—is hard to skip over.

    2. most “experts”, in every industry, say some amount of bullshit

      This is my experience working with other people, generally. Another way to put it besides people not "prioritizing the truth" is that they just don't care—about many things, the truth being one of them.

    3. Philosopher Harry Frankfurt described “bullshit” as “speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care whether what they say is true or false.”

      I don't think this is a good definition of a liar. The bullshitter described here would be a liar if what they're saying is a lie.

    1. SMALLTALK (Kay 1973) and PLANNER73 (Hewitt 1973), both embody an interesting idea which extends the SIMULA (Ichbiah 1971) notion of classes, items that have both internal data and procedures. A user program can obtain an "answer" from an instance of a class by giving it the name of its request without knowing whether the requested information is data or is procedurally determined. Alan Kay has extended the idea by making such classes be the basis for a distributed interpreter in SMALLTALK, where each symbol interrogates its environment and context to determine how to respond. Hewitt has called a version of such extended classes actors, and has studied some of the theoretical implications of using actors (with built-in properties such as intention and resource allocation) as the basis for a programming formalism and language based on his earlier PLANNER work. [Although SMALLTALK] and PLANNER73 are both not yet publicly available, the ideas may provide an interesting basis for thinking about programs. The major danger seems to be that too much may have been collapsed into one concept, with a resultant loss of clarity.

      An early mention of Smalltalk.

    1. message-oriented programming language

      That's interesting. This paper is Copyright 1977 (dated June of that year) and is using the term "message-oriented" rather than "object-oriented".

      Rochus maintains that Liskov and Jones are the originators of the term "object-oriented [programming] language".

      Here's Goldberg in an early paper (that nonetheless postdates the 1976 paper by Liskov and Jones) writing at length about objects but calling Smalltalk "message-oriented" (in line with what Kay later said OO was really about).

    1. We're at the point in humanity's development of computing infrastructure where the source code for program texts (e.g. a module definition) should be rich text and not just ASCII/UTF-8.

      Forget that I said "rich text" for a moment and pretend that I was just narrowly talking about the inclusion of, say, graphical diagrams in source code comments.

      Could we do this today? Answer: yes.

      "Sure, you could define a format, but what should it look like? You're going to have to deal with lots of competing proposals for how to actually encode those documents, right?" Answer: no, not really. We have a ubiquitous, widely supported format that is capable of encoding this and more: HTML.

      Now consider what else we could do with that power. Consider a TypeScript alternative that works not by inserting inline type annotations into the program text, but instead by encoding the type of a given identifier via the HTML class attribute.

      Now consider program parametrization where a module includes multiple options for the way that you use it, and you configure it as the programmer by opening up the module definition in your program editor, gesturing at the thing it is that you want to concretely specify, selecting one of those options, and have the program text for the module react accordingly—without erasing or severing the mechanism for configuration, so if another programmer wants to change the module parameters to satisfy some future need—or lift that module from your source tree and use it in another one for a completely different program—then they can reconfigure it with the same mechanism that you used.
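
      For concreteness, here's a minimal sketch of the type-via-class idea. (The markup and the "type:" class convention are hypothetical, not any existing tool.)

      ```` javascript
      // A hypothetical HTML-encoded module where the class attribute carries
      // each identifier's type instead of inline annotations appearing in the
      // visible program text:
      const doc = new DOMParser().parseFromString(
        `<code><span class="type:number">count</span> = <span class="type:number">0</span>;</code>`,
        "text/html"
      );

      // A checker (or editor) recovers the annotations from the markup:
      for (const span of doc.querySelectorAll("[class^='type:']")) {
        console.log(span.textContent, "::", span.className.slice("type:".length));
      }
      ````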

    1. I think that most of the complexity in software development is accidental.

      I feel no strong urge to disagree, but on one matter I do want to raise a question: Is "accidental" even the right term? (To contrast against "essential complexity", that is.)

      A substantial portion of the non-essential complexity in modern software development (granted: something that Brooks definitely wasn't and couldn't have been thinking about when he first came up with his turn of phrase in the 1980s) doesn't seem to be "accidental". It very much seems to be intentional.

      So should we therefore* highlight the contrast between "incidental" vs "intentional" complexity?

      * i.e. would it better serve us if we did?

    1. One of the problems with building a jargon is that terms are vulnerable to losing their meaning, in a process of semantic diffusion - to use yet another potential addition to our jargon. Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition.
    1. Unless techniques to create software increase dramatically in productivity, the future of computing will be very large software systems barely being able to use a fraction of the computing power of extremely large computers.

      In hindsight, this would be a better outcome than what we ended up with instead: very large software systems that, relative to their size—esp. when contrasted with the size of their (often more capable and featureful) forebears—accomplish very little, but demand the resources of extremely powerful computers.

    1. Rust was initially a personal project by Graydon Hoare, which was adopted by Mozilla, his employer, and eventually became supported by the Rust Foundation

      Considering this post is about Swift, describes Lattner's departure, etc., it would have been opportune to mention Graydon Hoare's departure re Mozilla/Rust and his subsequent work at Apple on Swift.

    1. I made it obscenely clear that there was not going going to be an RFC for the work I was talking about (“Pre-RFC” is the exact wording I used when talking to individuals involved with Rust Project Leadership)

      "Pre-RFC" doesn't sound like there's "not going to be an RFC" for it. It rather sounds like the opposite.

    1. I have never seen any form of create generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience - if there’s no experience to share, why bother? If it’s not worth writing, it’s not worth reading.

      I haven't been using LLMs much—my ChatGPT history is scant and filled with the evidence of months going by between the experiments I perform—but I recently had an excellent experience with ChatGPT helping to work out a response that I was incredibly pleased with.

      It went like this: one commenter, let's label them Person A, posted some opinion, and then I, Person B, followed up with a concurrence. Person C then appeared and either totally misunderstood the entire premise and logical throughline in a way that left me at a loss for how to respond, or was intentionally subverting the fundamental, basic (though not unwritten) rules of conversation. This sort of thing happens every so often, and it breaks my brain, so I sought help from ChatGPT.

      First I prompted ChatGPT to tell me what was wrong with Person C's comment. Without my saying so, it discerned exactly what the issue was, and described it correctly: "[…] the issue lies in missing the point of the second commenter's critique […]". I was impressed but still felt like pushing for more detail; the specific issue I was looking for ChatGPT to help explain was how Person C was breaking the conversation (and my brain) by violating Grice's maxims.

      It didn't end up getting there on its own within the first few volleys, even with me pressing for more, so I eventually just outright said that "The third comment violates Grice's maxims". ChatGPT responded with the sycophancy (and the emoji-punctuated bulleted statements) dialed a little too high, but it also went on to prove itself capable of being a useful tool in assisting with crafting a response about the matter at hand that I would not have been able to produce on my own and that came across as a lot more substantive than the prompts alone.

    1. Is my data kept private? Yes. Your chosen files will not leave your computer. The files will not be accessible by anyone else, the provided link only works for you, and nothing is transmitted over a network while loading it.

      This is a perfect use for a mostly-single-file browser-based literate program.

  2. Apr 2025
    1. Around the start of ski season this year, we talked about my plans to go skiing that weekend, and later that day he started seeing skiing-related ads. He thinks it's because his phone listened into the conversation, but it could just as easily have been that it was spending more time near my phone

      Or—get this—it was because despite the fact that he "hasn't been for several years", he used to "ski a lot", and it was the start of ski season.

      You don't have to assume any sophisticated conspiracy of ad companies listening in through your device's microphone or location-awareness and user correlation. This is an outcome that could be effected by even the dumbest targeted advertising endeavor with shoddy not-even-up-to-date user data. (Indeed, those would be more likely to produce this outcome.)

    1. I had the Android Emulator in my laptop, which I used to install the application, add our insurance information and figure out where to go. Just to be safe, I also ordered an Android phone to be delivered to me while I went to the hospital, where I used my iPhone's hotspot to set it up and show all the insurance information to the hospital staff.

      If only there were some sort of highly accessible information system that was designed to make resources available from anywhere in the world without any higher requirement besides relatively simple and ubiquitous client software. Developers might then not be compelled to churn out bespoke programs that effectively lock up the data (and prevent it from being referenced, viewed, and kept within arm's reach of the people that are its intended consumers).

    1. Software gets more complicated. All of this complexity is there for a reason. But what happened to specializing?

      This feels like a sharp left turn, given the way the post started out.

      Overspecialization is the root of the problem.

      Companies want generalists. This is actually reasonable and good. What's bad is that companies also believe that they need specialists (for things that don't actually require specialization).

    2. “But I can keep doing things the way that I’ve been doing them. It worked fine. I don’t need React”. Of course you can. You can absolutely deviate from the way things are done everywhere in your fast-moving, money-burning startup. Just tell your boss that you’re available to teach the new hires

      I keep seeing people make this move recently—this (implicit, in this case) claim that choosing React or not is a matter of what you could call costliness, and that React is the better option under those circumstances—that it's less costly than, say, vanilla JS.

      No one has ever substantiated it, though. That's one thing.

      The other thing is that, intuitively, I actually know* that the opposite is true—that React and the whole ecosystem around it is more costly. And indeed, perversely the entire post in which the quoted passage is embedded is (seemingly unknowingly, i.e. unselfawarely) making the case against the position of React-as-the-low-cost-option.

      * or "know", if the naked assertion raises your hackles otherwise, esp re a comment that immediately follows a complaint about others' lack of evidence

    1. The one recent exception is “Why Can’t We Screenshot Frames From DRM-Protected Video on Apple Devices?”, which somehow escaped the shitlist and garnered 208 comments. These occasional exceptions to DF’s general shitlisting at HN have always made the whole thing more mysterious to me.

      Geez. How clueless can a person be? (Or is feigned cluelessness also a deliberate part of the strategy for increasing engagement?)

    1. The JSON Resource Descriptor (JRD) is a simple JSON object that describes a "resource" on the Internet, where a "resource" is any entity on the Internet that is identified via a URI or IRI. For example, a person's account URI (e.g., acct:bob@example.com) is a resource.

      Wait, the account URI is a resource? Not the account itself (identified by URI)?

    1. Given the constitutional underpinnings for copyright law as it exists in the US, this commenter's sense of the laws' spiritual intent isn't just off, it's actually in opposition to what the US is trying to effect.

  3. Mar 2025
    1. Variable links are often found in online newspapers, for example, where links to top stories change from day to day. The click command can be used to click on a variable link by describing its location in the web page with a LAPIS text constraint pattern. For example: http://www.salon.com/ # Start at Salon click {Link after Image in Column3}

      This is how RSS/Atom should be implemented in every feed reader, but, to my knowledge, never is. That is—

      If the URL to the feed for alice.example.net can be found at <https://alice.example.net/feed.xml> and it is available via feed autodiscovery, it should not require the user to navigate to alice.example.net, obtain the URL to the feed, and then paste that into the feed reader. Rather, when I'm looking at my feed reader, I should be able to merely indicate that I want the feed for alice.example.net. The feed reader's job, then, should be to do BOTH of the following:

      • keep the site URL alice.example.net itself in the user data store, and
      • resolve that site URL to the actual feed URL via autodiscovery.

      (The URL obtained via autodiscovery can be cached.)

      If ever the feed location changes, it should not require the user to then backtrack and go through the process again to obtain the feed URL. With the site alice.example.net itself being kept in the user data store, the feed reader can automate this process. A website could thereby update the URL for its feed whenever it wants—as often as every day, for example, or even multiple times a day—without it being a source of problems for the user.

      To make it even better, the Atom and RSS specs can be extended so that if the user actually pastes in the feed URL <https://alice.example.net/feed.xml>, then that's fine—the specs should permit metadata to advertise that it is the alice.example.net feed (which can be verified by any reader), and the client can store that fact and need not bother the user about the overly-specific URL.
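
      For illustration, a minimal sketch of the resolution step (browser-flavored JavaScript; caching and error handling left out):

      ```` javascript
      // Resolve a site URL to its feed URL via standard feed autodiscovery:
      // fetch the page, then look for <link rel="alternate"> elements that
      // advertise an Atom or RSS feed. The site URL stays the stable key in
      // the user data store; the return value is just a cacheable fact.
      async function discoverFeed(siteUrl) {
        const html = await (await fetch(siteUrl)).text();
        const doc = new DOMParser().parseFromString(html, "text/html");
        const link = doc.querySelector(
          'link[rel="alternate"][type="application/atom+xml"], ' +
          'link[rel="alternate"][type="application/rss+xml"]'
        );
        return link ? new URL(link.getAttribute("href"), siteUrl).href : null;
      }
      ````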

    1. Neat. It's basically a browser with deep integration for Fastmail.

      (Not that Fastmail is neat—it's evidently got a bunch of jerks in charge—but I wholeheartedly approve of user agents operating in their users' best interests, and this qualifies. The fact that this is a site-specific browser made with no involvement of the site operator? A++++)

    1. Please note that if you plan on importing this file back into Hypothesis you will need to export to a JSON file.

      I'm kind of bummed that HTML exports don't contain sufficient information to act as a substitute for the JSON.

    1. to build xz from sources, we need autoconf to generate the configure script

      Should you, though?

      (The author approaches things from a different direction, but it's worth considering this one, too.)

    1. For others, many of whom are voracious readers, the compulsion to read manifests as an exercise in confirmation bias: collecting fragments of ideas that validate existing worldview.

      This is always how I've valued songs and their lyrics.

      I'm aware of it.

    1. If something has value, users should pay for it.

      Forget "value". That's a lateral shift in focus, and therefore a subtle change in subject. This is about costs. So rework the thought:

      If a "user"* is the cause of costs, then we should consider getting the user to pay for it—especially when the costs are big enough.

      (The answer to this is probably resource-proportionate billing for the accumulated expenses.)

      * or "consumer"

    1. Every time you run your bundler, the JavaScript VM is seeing your bundler's code for the first time

      And of course this fares poorly if the input ("your bundler's code") is low-quality.

      It's important to make an apples-to-apples comparison, so you don't end up with the wrong takeaway, like, "Stuff written in JS is always going to be inherently as bad as e.g. Webpack," which is more or less the idea that this paragraph wants you to get behind.

      It shouldn't be surprising that if you reject the way of a bad example, then you avoid the problems that would have naturally followed if you'd have gone ahead and done things the bad way.

      Write JS that has the look of Java and the broad shape of Objective-C, and that feels as boring as Go (i.e. JS as it was intended; I'm never not surprised by NPM programmers' preference to write their programs as if willingly engaging in a continual wrestling match against a language they clearly hate...)

    2. Every time you run your bundler, the JavaScript VM is seeing your bundler's code for the first time without any optimization hints

      This is really a failure of NodeJS & co., not something inherent to the essence of JS or any other programming language.

      It makes sense for Web browsers to throw away everything and re-JIT every time, since the inputs have just streamed in over the network microseconds ago. It doesn't make sense for a runtime that exclusively runs programs that are installed locally and read from disk instead of over the network; such a runtime could do one (or both) of the following:

      • accept AOT-compiled inputs
      • cache the results of the JITting
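
      For what it's worth, the second option is already possible with Node's built-in V8 code-cache support. A minimal sketch (the file names are made up):

      ```` javascript
      const vm = require("node:vm");
      const fs = require("node:fs");

      // Compile the bundler's source, reusing a previously persisted V8 code
      // cache if one exists, so the VM can skip cold parsing/compiling.
      const source = fs.readFileSync("bundler.js", "utf8");
      const script = new vm.Script(source, {
        cachedData: fs.existsSync("bundler.jscache")
          ? fs.readFileSync("bundler.jscache")
          : undefined,
      });

      // Persist the compiled form for next time. (V8 rejects stale caches,
      // in which case we just regenerate.)
      if (!fs.existsSync("bundler.jscache") || script.cachedDataRejected) {
        fs.writeFileSync("bundler.jscache", script.createCachedData());
      }

      script.runInThisContext();
      ````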

    1. (Shift+Left and Shift+Right) to go from one post to the next without having to click any links

      I just discovered this accidentally, and it's really annoying considering Shift+Left and Shift+Right are existing keyboard shortcuts on almost every platform to change the size of the selection (e.g. with selected text).

    2. WordPress uses a quirky subdirectory structure where it groups posts by year and then by month

      I don't see how 2025/01/10/foo-bar.html is any more or less "quirky" than 2025-01-10-foo-bar.html.

    1. We're overloading the class attribute with two meanings. Is "adr" the name of a class that contains styling information, or the name of a data field?

      What gives you the idea that class is for "contain[ing] styling information"? (This is rhetorical; I know the answer. This remark is a giveaway about the mental model of what HTML classes—NB: not "CSS classes"—are "for".)

    1. Git ships with built-in tools for collaborating over email.

      Just as a point of fact: many distributions' package repositories don't have a Git package that "ships with built-in tools for collaborating over email". You can see this in the suggested steps given for the systems in this list—the distros do generally provide packages that back the git send-email command (e.g. Debian's git-email), but it's a separate package.

    1. human-knowledge-based chess researchers were not good losers. They said that ``brute force" search may have won this time, but it was not a general strategy, and anyway it was not how people played chess. These researchers wanted methods based on human input to win and were disappointed when they did not.

      Intuitively, it makes sense that brute force would win, which should not be a surprising result. (I suppose the surprise could be simply that there was enough computation available.)

  4. Feb 2025
    1. The component library: Dream and reality. McIlroy’s idea was a large library of tested, documented components. To build your system, you take down a couple of dozen components from the shelves and glue them together with a modest amount of your own code.

      Even though my comments elsewhere mention NPM, etc., I myself don't think they quite fit to what McIlroy (and Cox) were talking about. If I recall correctly, both mention configuring components in ways that are not generally seen with modern packages. McIlroy in particular talks about parameterization that suggests to me compile-time (or pre-compile-time) configuration, whereas the junk on NPM traditionally blobs all the behavior together and selects behavior at program runtime.

    2. Although it’s difficult to handle complexity in software, it’s much easier to handle it there than elsewhere in a system. A good engineer therefore moves as much complexity as possible into software.

      Tangent: I've likened system designs for Web tech that requires deep control and/or configuration of the server (versus a design that lets someone just dump content onto a static site) to the leap in productivity/malleability of doing something in hardware versus doing it in software.

      Compare: Mastodon-compatible, ActivityPub-powered fediverse nodes that are required to implement WebFinger versus RSS-/Atom-based blog feeds (or podcasts—both of which you could, in theory, author by hand if you wanted to).


    1. Programs that include many tests of the form if (x instanceof C) ... are quite common but undermine many of the benefits of using objects

      See also: the trend of an overabundance of triple equals operators (===) in NPM programmers' code.

    1. can you imagine your GC being randomly interrupted by its own internal garbage collection?

      The author states this like it's prima facie absurd. But, I dunno, why not? It's neither inconceivable nor absurd (no matter the way the author goes about it in this particular post).

    1. The literature on programming methodology contains numerous articles explicating the notion of type. For our purposes it is not necessary to delve into the theology that surrounds the issue of precisely what constitutes a type definition. I will rely on the reader's intuitions.

      And so it goes with programs written for environments with dynamic and loose typing. (In fact, if you swap out the phrase "a type definition" with "this type", then you get what almost amounts to a manifesto for dynamic typing.)


    1. Pratt parsers are a really nice way to handle mathematical expressions like this vs. the usual recursive descent.

      People love to say stuff like this, but then you get things like Jonathan Blow's multi-hour live stream with Casey Muratori where they end up getting bogged down working it all out (while talking about how easy it is and how dumb it is that we even have a term for "recursive descent"). Whereas if they'd just gone with recursive descent, they could have implemented it and re-implemented it four times over by then…
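
      For reference, the whole technique fits in a handful of functions. A minimal sketch (tokenization and error handling omitted; the token-array representation is just for illustration):

      ```` javascript
      // Recursive descent over "+ - * /": one function per precedence level,
      // each looping on its own operators and deferring to the next-tighter
      // level for its operands. `t` is a mutable array of token strings.
      function parseExpr(t) { return parseAdd(t); }

      function parseAdd(t) {
        let left = parseMul(t);
        while (t[0] === "+" || t[0] === "-") {
          const op = t.shift();
          left = { op, left, right: parseMul(t) };
        }
        return left;
      }

      function parseMul(t) {
        let left = parseAtom(t);
        while (t[0] === "*" || t[0] === "/") {
          const op = t.shift();
          left = { op, left, right: parseAtom(t) };
        }
        return left;
      }

      function parseAtom(t) { return { num: Number(t.shift()) }; }

      // parseExpr(["1", "+", "2", "*", "3"]) groups as 1 + (2 * 3).
      ````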

    1. For URLs that are not intended to break, where does that intention reside and how is it communicated to users?

      Unpopular opinion: all URLs should be "not intended to break". (Or, at the very least, that should be the assumed default unless otherwise specified.)

    2. It’s hard to talk about an identifier that “breaks.” When does a string “break,” except perhaps when it sustains damage to its character sequence? What really breaks has to do with the identifier’s reference role

      I like this approach to examination. It bears resemblance to the way that I admonish the public about content "changing". There's no such thing as resources "changing". There are only servers re-using identifiers and lying about their resources' identities.

    3. This is not to say that some naming practices still common today don’t create URLs that contain clues reflecting location information, just that URLs may be just as indirect and location-free as any indirect identifier (e.g., URN or Handle).

      I like the way TBL put it in <https://www.w3.org/Provider/Style/URI>:

      "Most URN schemes I have seen look something like an authority ID followed by either a date and a string you choose, or just a string you choose. This looks very like an HTTP URI. In other words, if you think your organization will be capable of creating URNs which will last, then prove it by doing it now and using them for your HTTP URIs."

    4. Unfortunately, decoupling persistent identification fromaccess was an early decision made by designers of PURLs, URNs, and Handles.

      This could benefit from further elaboration.

    5. It turns out that what people really want is the kind of convenient access to which they have become accustomed on the Web. Instead, what we’re looking for are persistent actionable identifiers, where an actionable identifier is one that widely available tools such as web browsers can use to convert a simple “click” into access to the object or to a doorway leading directly to access (e.g., the doorway could be a password challenge or object summary description page).

      Good description of the widely perceived value proposition of the (conventional, http(s):-based) Web—but as TBL explains, HTTP and HTML are incidental to the true intent and purpose of the Web.


    1. At beginning of empty ZIP archive file.

      Note that the preceding remark that the spec "does not state [that it must] explicitly be at the physical end of the file" applies as much (and more) to this part. Self-extracting archives have explicit support in PKZIP and other implementations.

    1. This is arguably an indicator that the a fixed schedule does not quite work out for me.

      OR:

      There is a sufficiently high mechanical barrier (in the sense of an imposition from the software itself).

    1. There is a great xkcd about how much time you may spend on automation before yielding negative returns.

      This seems like a weird way to put it. By spending even a microsecond on trying to implement automation, there are negative returns by default. It is only with sufficiently effective automation that the lines intersect and you end up on the "positive returns" side.
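
      (Concretely: if building the automation costs c minutes and saves s minutes per use, the return after n uses is n·s − c. It starts at −c and only goes positive once n exceeds c/s.)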

  5. pkwaredownloads.blob.core.windows.net
    1. If one of the fields in the end of central directory record is too small to hold required data, the field should be set to -1 (0xFFFF or 0xFFFFFFFF)

      This is a weird way to put it, since the very first note given in this section is that all fields unless otherwise noted are unsigned...

    1. I find myself revisiting this now due to buggy behavior observed in jszip (discovered through Darius Kazemi's reliance on it for his Twitter archiver project).

      The problem is that jszip, when given a ZIP with Zip64 data in the end of central directory record, handles it poorly (presumably jszip simply doesn't have support for Zip64 despite being advertised as such)—except that that's not even really the problem.

      The fundamental problem with jszip is that it doesn't support Zip64 (whether it says it does or not) and that it assumes the last file header record in the central directory will be immediately followed by the end of central directory signature; when it encounters a file that violates this assumption, its attempts to recover are odd. There's no good reason for jszip or any other software to make that assumption, though, since depending on it is in no way necessary to go ahead and successfully work with the files that are present.

    2. What if there is some local file record that is not referenced by the central directory? Is that valid? This is undefined.

      From an earlier private note (from 2021 July 20):

      A better way to phrase it: "What if there is some byte sequence that coincides with the sequence used for the local file header signature, but nonetheless does not comprise some part of a file record (i.e., one "referenced by the central directory")?"

      And the answer, of course, is right there; it is not UB—that isn't a file record.

      Some block of data within the ZIP can only be considered to comprise a local file header iff it is referenced by the central directory. A byte sequence appearing elsewhere that collides with the file header signature is just noise.

    3. The "end of central directory record" must be at the end of the file and the sequence of bytes, 0x50 0x4B 0x05 0x06, must not appear in the comment.

      What happened to the central directory offset not being allowed to be located there?
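
      A minimal sketch of why the comment rule matters in practice: readers typically locate the record by scanning backwards for the signature, which is only unambiguous if the signature can't also appear inside the comment. (The function here is mine, not from the spec.)

      ```` javascript
      // Scan backwards from the end of the file for the end of central
      // directory signature (0x50 0x4B 0x05 0x06), allowing for a trailing
      // comment of up to 0xFFFF bytes.
      function findEOCD(bytes /* Uint8Array */) {
        const minEOCD = 22; // fixed-size portion of the record
        const stop = Math.max(0, bytes.length - minEOCD - 0xFFFF);
        for (let i = bytes.length - minEOCD; i >= stop; i--) {
          if (bytes[i] === 0x50 && bytes[i + 1] === 0x4B &&
              bytes[i + 2] === 0x05 && bytes[i + 3] === 0x06) {
            const commentLength = bytes[i + 20] | (bytes[i + 21] << 8);
            // A consistent record's comment runs exactly to end-of-file.
            if (i + minEOCD + commentLength === bytes.length) return i;
          }
        }
        return -1;
      }
      ````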

  6. greggman.github.io
    1. unzip and unzipRaw are async functions that take a url, Blob, TypedArray, or ArrayBuffer or a Reader.

      The silliness of method overloading. The problem this is trying to solve can be solved by the publisher providing a consistent interface no matter what the input parameter is, with the consumer choosing the right module to import.
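
      A minimal sketch of what that could look like (module layout and names are hypothetical): each input kind gets its own module exporting the same Reader-producing interface, so nothing dispatches on the argument's runtime type.

      ```` javascript
      // url-reader.js: a Reader backed by HTTP range requests
      export function makeReader(url) {
        return {
          async read(offset, size) {
            const res = await fetch(url, {
              headers: { Range: `bytes=${offset}-${offset + size - 1}` },
            });
            return new Uint8Array(await res.arrayBuffer());
          },
        };
      }
      ````

      A sibling blob-reader.js would export the same makeReader signature backed by Blob.prototype.slice, and the core unzip function would accept only the Reader; the consumer imports exactly the module that matches their input.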

    2. IMO those should be separate libraries as there is little if any code to share between both. Plenty of projects only need to do one or the other.

      This is an interesting comment, because the same logic can be applied to the author's criticism of the APPNOTE spec and his question of whether ZIP can be streamed or not.

    1. I realise that this entire premise falls flat once you include more "dynamic" page components like related content, recent content and so forth.

      In that case, your Web pages are changing more than they probably should be, anyway.

    2. People often talk about build times when referencing SSGs. That always confused me, because how often do you need to rebuild everything? I'm a simple guy, so I like to keep it very simple. To me, that means that when there's one piece of new content, we generate one new page and regenerate the relevant archive pages.
  7. Jan 2025
    1. The one place in the world you get this vibe is probably Japan. Most people just really care. Patrick McKenzie refers to this as the will to have nice things. Japan has it, and the US mostly does not.

      It's a little perverse to link to an inaccessible Twitter/X.Com post here in a post about this topic.

  8. Dec 2024
    1. I don't think the Firefox OS team was ever bigger than maybe 100 people at the absolute most, and I feel like it was closer to half of that. Admittedly it's been over a decade since I left, so my recollection could be wrong, but it was never "most of the resources" by any metric.

      ~50 people and "a tiny percentage of the overall staff" are such bad estimates that it's something you'd expect from the least clued-in segments of the peanut gallery, not someone on payroll. I struggle to come up with words to even react (let alone respond) to it, even in the written medium of online comment threads where you have time to compose your thoughts. It breaks my brain.

  9. Nov 2024
    1. As a gift for a family member who had a non-technical blog, I once gathered posts together, edited them, and turned them into a book.

      I've thought about and would really appreciate a platform like Lulu that would allow you to pay, say, 99¢ each to get printed copies of your favorite blog posts (standalone, not necessarily in book form), where a portion goes to the author. It shouldn't require creators to sign up for e.g. Patreon and consciously designate a priori what is and isn't "subscriber-only" content.

      Like, "I just like this thing that you wrote, dude. Let me get a print copy and also make sure you get a few nickels for it."

    1. 60, 63-68, 70-72, and 74-76

      Okay, seemingly missing from here, and not made up by any of the other sources listed here are volumes:

      • 61
      • 62
      • 69
      • 73
      • and 77(?)

      Anything else? (It doesn't say exactly when it stopped in 1893. Vol 76 was December 1892.)

    1. Bytecode Is Smaller The bytecode generated by SQLite is usually smaller than the corresponding AST coming out of the parser. During initial processing of SQL text (during the call to sqlite3_prepare() and similar) both the AST and the bytecode exist in memory at the same time, so more memory is used then. But that is a transient state. The AST is quickly discarded and its memory recycled

      Does SQLite even need to construct an AST? It's just SQL. Can't it just emit the bytecode directly?

    1. There’s an abyss to cross between using an app and modifying it with code by calling APIs. The user has to switch to a whole other paradigm including setting up a development environment. Consequently, few users take the step from using a tool to customizing or making their own tools.
    1. I've been critical of some of the projects/standards that the author has been involved with and adjacent efforts, precisely because of the tendency of proposals to violate the Principle of Least Power. We'd be a lot better off if more people in that cohort appreciated the architectural style of RSS- and Atom-based Web feeds and how much that impacts adoption.

      Even if it is several years late, it's good that he's starting to ask questions like "is it possible to avoid using a database?" and contemplating things like the ramifications of trying to capture something in static HTML so that it can be relayed by a dumb server, whereas in its original design it required, sometimes for no good reason, an application server with smarts that enabled it to be an active participant.

  10. ben-mini.github.io
    1. It clocks in at 94 pages and has 30 ratings on Amazon! Go IMG_0416! I don’t care what you’re creating- I’m just a fan of creators.

      This is one of those weird positions like, "It doesn't matter who you vote for. Just vote!" that people are sure to regret when faced with the right unforeseen counterexample.

    1. In many of the above examples, once an organizing principle for the system is identified, the details of the solution are quite simple.

      This principle is behind good documentation, too. Too often programmers describe what their solution does and how it does it, but not why. Part of the why is just describing the problem that the solution is meant to address.

    1. I found this really hard to read on archive.is (https://archive.is/YkIyW).

      I used this snippet to reformat the article to manually float the "annotations" (pull-outs) to the margins:

      ```` javascript
      document.getElementById("CONTENT").style.width = "1720px";

      ([ ...document.querySelectorAll("[id^=annotation]") ]).forEach((x, i) => {
        if (i % 2) {
          x.style.left = "";
          x.style.right = "-44ch";
        } else {
          x.style.left = "-44ch";
          x.style.right = "";
        }
      });
      ````

    1. I'm amazed at the lack of thoughtfulness in the original post that this change of heart refers to. From http://rachelbythebay.com/w/2011/11/16/onfire/:

      I assigned arbitrary "point values" to actions taken in the ticketing system. The exact details are lost to the sands of time, but this is an approximate idea. You'd get 16 points for logging into the ticketing system at the beginning of your shift, 8 for a public comment to the customer, 4 for an internal private comment, and 2 for changing status on a ticket. [...] The whole point of writing this was to see who was actually working and who was just being a lazy slacker. This tool made it painfully obvious [...]

      This is, uh, amazingly bad. It goes on, and in a way that makes it sound like self-aware irony, but it's clear by the end that it's not parody.

      The worst support experiences I've had were where it felt like this sort of pressure to conspicuously "perform" was going on behind the scenes, which was directly responsible for the shoddy experience—perfect case studies for Goodhart's Law.

      The author says they've had a change of heart, so surely they've realized this, right? That's what led to the change of heart? Nope. Reading this post, it's this: "my new position on that sort of thing is: fuck them." As in, fuck them for not appreciating the value of this work and needing it to be done for them in the first place. The latter is described at length where they assert that it's the managers' own job to already know these things—that is, the stuff that these metrics would say, if the data were being crunched. "Make them do their own damn jobs", the author says.

      (I often see this blog appear on HN, and I've read plenty of the posts that were submitted to HN but have never exactly grokked what was so appealing about any of it. I think with this series of posts, it's a good signal that I can write it off and stop trying to "get" it, because there's nothing to get—just middling observations and, occasionally, bad conclusions.)

  11. Oct 2024
    1. To get a list of all the public domain scans, as of this writing:

      ```` javascript
      ([ ...document.querySelectorAll("table.auto-style21 a") ]).
        filter((x) => (
          x.textContent.includes("19") &&
          !x.textContent.includes("1929") &&
          !x.textContent.includes("193") &&
          !x.textContent.includes("194"))).
        map((x) => {
          let when = x.textContent;
          // NB: this originally read `when.split()`, which is a bug: split
          // with no separator returns the whole string as a single element.
          if (!when.includes(",")) when = when.split(" ").reverse().join(" ") + " 01";

          try {
            var result = (new Date(when)).toISOString().substr(0, ("1928-10-29T...").indexOf("T"));
          } catch (ex) {
            console.log(x.textContent, when, ex);
          }

          if (when != x.textContent) {
            result = result.substr(0, ("1987-12-09").length - ("-09").length);
          }
          return result;
        })
      ````

    1. It's far more performant than using getter-setters, on top of being more performant than generating getter-setters. Further it's type safe. Eslint or TypeScript can both warn you about non-existing properties and possibly type mis-matches.

      It's also, you know, way more grokkable.

    1. Filter all lists to include just the ones with partial serials:

      ([ ...document.querySelectorAll("li") ]).filter((x) => (!!x.querySelector("img.info"))).filter((x) => (!x.textContent.trim().endsWith("(partial serial archives)"))).forEach((x) => (x.parentNode.removeChild(x)))

    1. by porting ffmpeg to the zig build system, it becomes possible to compile ffmpeg on any supported system for any supported system using only a 50 MiB download of zig. For open source projects, this streamlined ability to build from source - and even cross-compile - can be the difference between gaining or losing valuable contributors.
    1. New products are often incongruent with consumer expectations. Researchers have shown that consumers prefer moderately incongruent products, while being adverse to extremely incongruent products.
  12. Sep 2024
    1. You can't become the I HAVE NO IDEA WHAT I'M DOING dog as a professional identity. Don't embrace being a copy-pasta programmer whose chief skill is looking up shit on the internet.

      Similarly, a few years ago I was running into a bunch of people saying stuff like, "Every programmer uses Stack Overflow. Everyone." Which is weird because it definitely had the feel of a sort of proactive defensiveness every time it came up, plus there's the fact that it's not true that every programmer uses Stack Overflow. At the time I kept running into this kind of thing, I had basically never used it, but not for lack of trying or any sense of superiority. Every time I'd landed there the only thing I encountered was low-quality answers and a realization that Stack Overflow just doesn't specialize in the kind of stuff that's useful to me. (In the years since, I've landed there quite a bit more than before, and I have found it useful—but almost never for actual programming...)

    2. Especially if there are people within your profession who use their diplomas as a logical fallacy to prove why they're right and you're wrong.

      I don't think I've ever seen this in a technical discussion. Credentialism in the form of "X years experience with Y" or someone trying to flex other parts of their résumé (e.g. previous employers)? Definitely.

      Most often, though, I just run into Ra + a fuckton of ho-hoism. This is never tinged by academic credentials, even a little bit.

    1. a total of 26 volumes

      I'm curious where this comes from. As the archives below indicate, Hathitrust only has volumes up to volume 24 (1898). UT PCL stops also at volume 24. Everything available seems to stop there.

      I did chase down the Mott reference from the Wikipedia article that says it ran until 1900, but I don't remember seeing a volume count. I wonder if 1900 is substantiated anywhere else and whether the volume count is an independent claim or a derivative of the claim of cessation in 1900.

      (Maybe the copyright records indicate two more volumes?)

    1. We do not currently know of free online issues of The New Monthly Magazine. If you know of any, please let us know.

      This is listed at https://onlinebooks.library.upenn.edu/new.html, so I'm not sure why there are no volumes listed.

      Hathitrust has part of the magazine under the title The New Monthly Magazine and Universal Register (1814–1820; missing vols 3, 5, 7, 9, and 10), but under the title The New Monthly Magazine, all these volumes are represented. There are at least some under the name The New Monthly Magazine and Literary Journal (1821–1836), and also some under the name The New Monthly Magazine and Humorist (1837–1852). It has the final volume in 1882 under the name The New Monthly.

    1. These numbers may range from 1 to 9999

      So that means we won't see a classification/subclass with a range like AZ57482. But is there a limit on the number of digits after the decimal?

      (Also, the literal reading of the statement here means that the range in EG9999.293 is invalid—because 9999.293 is greater than 9999.)

    1. Nelson said that he would be describing a lot ofthese ideas in an anthology to be released later in 1989called Replacing the Printed Word

      It doesn't look like this happened.

    1. Turco argued in 2016 that the problem was of supply more than it was of demand; while it was certainly the case that the sometimes-bewildering multiplicity of potential user interfaces deployed for different digital editions was one factor putting humanities scholars off using them, more significant was that the coding skillsets (or the resources needed to buy these in) was so alien to those same scholars that it was discouraging them from producing them in the first place
    1. Avoid using a 1 unless specifically instructed to do so in the schedules or in the CSM. If you find that it is absolutely necessary, never use it as the final digit of a cutter because you might have to use a zero in the cutter for the next resource. Instead, add another digit. And finally, avoid using a 2 if at all possible. Using a 2 can force the use of a 1, which can force the use of a 0.
    1. From reading the book, I learned that Cutler had the same mentality for his OS and, in fact, the system wasn’t ported to x86 until late in its development. He wanted developers to target non-x86 machines first to prevent sloppy non-portable coding, because x86 machines were already the primary desktops that developers had. In fact, even as x86 increasingly became the primary target platform for NT, the team maintained a MIPS build at all times, and having most tests passing in this platform was a requirement for launching.

      I'm reminded of the time when it was revealed about 10–15 years or so ago that when Apple switched to x86 from PowerPC, it wasn't the result of a big porting effort. They'd been maintaining portability all along—doing private builds internally that just never saw the light of day. When this came to light, the reaction was huge. People were awed.

      A few years ago, when this piece of trivia was brought back to the forefront of my mind again after having not thought about it for years, I was struck by how silly that reaction was. Of course it makes sense that they'd been maintaining portability. There was nothing stupendous about this.

      I think this is the one time when I saw and felt the effects of Apple's legendary reality distortion field firsthand. (In every other instance, I hadn't been close enough and so only perceived it from afar and only had other sources to trust that it was a real phenomenon.)

    2. This sounds ridiculous, right? Why wasn’t there a CI/CD pipeline to build the system every few hours and publish the resulting image to a place where engineers could download it? Ha, ha. Things weren’t this rosy back then. Having automated tests, reproducible builds, a CI/CD system, unattended nightly builds, or even version control… these are all a pretty new “inventions”. Quality control had to be done by hand, as did integrating the various pieces that formed the system.

      This still describes the way the semiconductor manufacturing world works.

    3. Now, more than 30 years later, NT is behind almost all desktop and laptop computers in the world.

      This is sort of an odd remark. Even excluding servers and focusing only on traditional desktop and laptop computers, Windows' dominance is as weak now as it has been at any other point in the last 30 years.

      Most desktop and laptop computers? Yeah, probably.

      "Almost all"? Surely not.

    1. You have also noticed the blue bar by this time. The bar indicates the number that is selected, and itcan be moved around by double-clicking.Moving it changes the data in the hierarchy pane. Watch the hierarchy pane as I double-click tomove the bar around the screen.

      This characterization is a good case study in the odd (off) conceptualization of computer UI...

    1. number that may be a whole number or a whole number with a decimal, such as 2301, 111, 756.5,

      So the decimal does not indicate the presence (introduction) of a cutter.

      Is the rule, then, that if the character following the decimal is a digit, then it's a decimalized classification, and if it's a non-digit (alpha?) then it's a cutter?

    1. The March 1964 issue has a 1964 copyright notice, but the CCE states that its actual copyright date was December 23, 1963.

      Sean Dudley pointed out something similar to me—even though a given Black Mask issue might be the January 19XX issue, it was probably actually published and distributed sometime in December. This means that if you have a serialization that began in 1928 that you want to use public excerpts from, and it ended in the January 1929 issue, then there's a good chance that the whole thing is actually public domain.

      I referred (indirectly) to this in an annotation on https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/ as "the PDF". As the first page indicates, this is rather a PDF—specifically someone's PDF of the ACM's reprint from 1996 (which can be found hanging off this DOI: https://dl.acm.org/doi/10.1145/227181.227186).

      The Atlantic's PDF can be found here https://cdn.theatlantic.com/media/archives/1945/07/176-1/132407932.pdf (at least for now).

    1. Handles, in contrast, can be homed on any handle server and transferred at will -- so another organization can take over handles for a merging or dying organization. Handle namespace names are not related in any way to the hostname of their home server except perhaps by coincidence.

      This is a superficially attractive argument, but it doesn't hold up.

      In practice, links to handles are tied to the domain of the organization operating the handle resolver service. The argument that the handles themselves are durable and can survive the demise of the domain and the org that controls it is not a good one; for everything that makes this true for handles, it's also true for names based on conventional/traditional URLs—

    1. In order to guarantee persistence, the DOI Foundation has built a social infrastructure on top of the technical infrastructure of the Handle System. Persistence is a function of organizations, not of technology; a persistent identifier system requires a persistent organization, agreed policies and defined processes.
    1. I might point out that the definite and formal techniques and procedures provided us by social heritage mostly involve specialized and idealized aspects of the workload and needs of the individual. There apparently never has been an over-all or “system” approach to the problem of assisting the individual in being effective in his over-all problem-solving role.

      I find this extremely hard to parse.

  13. Aug 2024
    1. the retailer response is to send me an individual email every time they notice one

      It's almost as if link rot is a problem that publishers should, you know, do something about...

    2. this is a problem for print books as well as for the ebooks of course, but I think we’re more content to let the URLs in print books function essentially as decoration—as signs that there is scholarship underlying their claims

      baffling

    1. Nor were we using the pieces in ways inappropriate to their advertised scope of applicability.

      Kiczales is fond of the metaphor of implementing a spreadsheet by making each cell its own window under the native platform's windowing system.


    1. My side projects from 2012-2017 cannot be built or ran because of dependencies. My jsbin repo with lots of experiments cannot be ran anymore. But I have the sqlite database.I forgot to pin dependencies when I was working. It would take a lot of trial and error and effort to get back to where I was.
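
      (For what it's worth, the fix being alluded to is cheap: exact versions instead of ^/~ ranges, plus a committed lockfile. A minimal package.json sketch, with the specific packages and versions here standing in as examples:)

      ```` json
      {
        "dependencies": {
          "left-pad": "1.3.0",
          "moment": "2.24.0"
        }
      }
      ````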