  1. Oct 2023
    1. A resource can map to the empty set, which allows references to be made to a concept before any realization of that concept exist

      A very nice property—

      These are not strictly subject to the constraints of e.g. Git commits, blockchain entities, other Merkle tree nodes.

      You can make forward references that can be fulfilled/resolved when the new thing actually appears, even if it doesn't exist now at the time that you're referring to it.

    1. Messages are delineated by newlines. This means, in particular, that the JSON encoding process must not introduce newlines within a message. Note however that newlines are used in this document for readability.

      Better still: separate messages by double linefeed (i.e., a blank line in between each one). It only costs one byte and it means that human-readable JSON is also valid in all readers—not just ones that have been bodged to allow non-conformant payloads under special circumstances (debugging).
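
      A sketch of that framing (helper names are mine; what makes the blank line a safe delimiter is that JSON.stringify output, even when pretty-printed, never contains a blank line):

        function encodeMessages(messages) {
          // every line of indented output carries at least a bracket or a key,
          // so "\n\n" can never occur inside a frame
          return messages.map((m) => JSON.stringify(m, null, 2)).join("\n\n") + "\n";
        }

        function decodeMessages(text) {
          return text.trim().split(/\n{2,}/).map((chunk) => JSON.parse(chunk));
        }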

    1. without raising an error

      ... since HTML/XML is not part of the JS grammar (at least not in legacy runtimes, i.e. those at the time of this writing).

    2. ECMA-262 grammar

      So, at minimum, we won't get any syntax errors. But the semantics of the constructs we use mean that it's a valid expectation that the browser can execute this code itself—even though it is not strictly JS—because the expected semantics here conveniently overlap with some of JS's semantics.

    3. offline documents

      "[...] that is, ones not technically on the Web"

    4. This poses a problem that we'll need to address.

      Add a liaison/segue sentence here (after this one) that says "Browsers, in fact, were not designed with triple scripts in mind at all."

    5. Browsers

      "Web browsers"

    6. This is of course ideal

      huh?

    7. Our main here is an immediately invoked function expression, so it runs as soon as it is encountered. An IIFE is used here since the triple script dialect has certain prohibitions on the sort of top-level code that can appear in a triple script's global scope, to avoid littering the namespace with incidental values.

      Emphasize that this corresponds to the main familiar from other programming systems—that triple scripts doesn't just permit arbitrary IIFEs at the top level so long as they're written in this shape. This is in fact the correct way to denote the program entry point; it's special syntax.
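
      Something like the following (a sketch of the general form only; the dialect's exact constraints live in its spec):

        (function main() {
          // program entry point: runs as soon as it is encountered,
          // and leaks no names into the global scope
        })();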

    8. The code labelled the "program entry point" (containing the main function) is referred to as shunting block.

      Preface this with "In the world of triple scripts"?

      Also, we can link to the wiki article for shunting blocks.

    9. Note that by starting with LineChecker.prototype.getStats before later moving on to LineChecker.analyze, we're not actually practicing top-down programming here...

    10. It expects the system read call to return a promise that resolves to the file's contents.

      Just say "It expects the read call to resolve to the file contents."?

    11. system.print("\nThis file doesn't end with a line terminator.");

      I don't like this. How about:

      system.print("\n");
      system.print("This file doesn't end with a line terminator.");
      

      (This will separate the last line from the preceding section by two blank lines, but that's acceptable—who said there must only be one?)

    12. What about a literate programming compiler that takes as input this page (as either Markdown or HTML) and then compiles it into the final program?

    13. and these tests can be run with Inaft. Inaft allows tests to be written in JS, which is very similar to the triple script dialect. Inaft itself is a triple script, and a copy is included at tests/harness.app.htm.

      Reword this to say "[...] can be run with Inaft, which is included in the project archive. (Inaft itself is a triple script, and the triplescripts.org philosophy encourages creators to make and use triple scripts that are designed to be copied into the project, rather than being merely referenced and subsequently downloaded e.g. by an external tool like a package manager.)"

    14. We need to embed the Hypothesis client here to invite people to comment on this. I've heard that one of the things that made the PHP docs so successful is that they contained a comment section right at the bottom of every page.

      (NB: I'm not familiar at all with the PHP docs through actual firsthand experience, so it may actually be wrong. I've also seen others complain about this, too. But it seems good, on net.)

    15. The project archive's subtree

      Find a better way to say this. E.g. "The subdirectory for Part 1 from the project archive source tree"

    16. [ 0, 0, 0, 1 ]

      And of course there's a bug here. This should be [1, 0, 0, 1].

    17. returns [ 0, 0, 0, 1 ]

      We can afford to emphasize the TYPE family constants here by saying something like:

      Or, to put it another way, given a statement let stats = checker.getStats(), the following results are true:

      stats[LineChecker.TYPE_NONE] // evaluates to `1`
      stats[LineChecker.TYPE_CR]   // evaluates to `0`
      stats[LineChecker.TYPE_LF]   // evaluates to `0`
      stats[LineChecker.TYPE_CRLF] // evaluates to `1`
      
    18. propertes

      "properties"

    19. returned

      "... by getStats."

    20. In fact, this is the default for DOS-style text-processing utilities.

      Note that the example cited is "a single line of text". We should emphasize that this isn't what we mean when we say that this is the default for DOS-style text files. (Of course DOS supports multi-line text files. It's just that the last line will have no CRLF sequence.)

    1. The hack uses some clever multi-language comments to hide the HTML in the file from the script interpreter, while ensuring that the documentation remains readable when the file is interpreted as HTML.

      flems.io uses this to great effect.

      The (much simpler) triplescripts.org list-of-blocks file format relies on a similar principle.
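
      A minimal sketch of the principle, leaning on the HTML-like comment syntax that Annex B of ECMA-262 grants to classic scripts (the real formats differ in their details):

        <!--
        console.log("This line runs when the file is loaded as a script.");
        /* -->
        <p>This renders when the same file is opened as HTML.</p>
        <!-- */ -->

      As a script, the first line is a one-line comment and the block comment swallows everything from /* to */; as HTML, the comment opened on the first line closes at the first -->, leaving only the markup visible.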

    1. Does the methodology described here (and the way it's actually described here) adequately address the equivalent of its irreducible complexity problem?

    1. I'm reminded of comments from someone on my team a year or two after Chrome was released where they explained that the reason they used it was because it "takes up less space"—on screen, that is; when you installed it, the toolbars took up 24–48 fewer pixels (or whatever) than the toolbars under Firefox's default settings.

      See also: when Moz Corp introduced Personas (lightweight themes) for Firefox. This was the selling point for, like, a stupid amount of people.

    1. the web has become the most resilient, portable, future-proof computing platform we’ve ever created
    2. as the ecosystem around it swirled, the web platform itself remained remarkably stable
    3. There’s a cost to using dependencies. New versions are released, APIs change, and it takes time and effort to make sure your own code remains compatible with them. And the cost accumulates over time. It would be one thing if I planned to continually work on this code; it’s usually simple enough to migrate from one version of a depenency to the next. But I’m not planning to ever really touch this code again unless I absolutely need to. And if I do ever need to touch this code, I really don’t want to go through multiple years’ worth of updates all at once.

      The corollary: you can do that (make it once and never touch it again) if you are using the "native substrate" of the WHATWG/W3C Web platform. Breaking changes in "JavaScript" or "browsers" are rarely actually that. They're project/organizational failures one layer up—someone (who doesn't control users' Web browsers and how they work) decided to stop maintaining something or published a new revision but didn't commit to doing it in a backwards compatible way (and someone decided to build upon that, anyway).

    4. as much as I love TypeScript, it’s not a native substrate of the web
    5. Web components encapsulate all their HTML, CSS and JS within a single file

      Huh? There's nothing inherent to Web Components that makes this true. That's just how the author is using them.

    6. That’s the honest-to-goodness HTML I have in the Markdown for this post. That’s it! There’s no special setup; I don’t have to remember to put specific elements on the page before calling a function or load a bunch of extra resources.1 Of course, I do need to keep the JS files around and link to them with a <script> tag.

      There's nothing special about Web Components; the author could have just as easily put the script block itself there.

    7. Rather than dealing with the invariably convoluted process of moving my content between systems — exporting it from one, importing it into another, fixing any incompatibilities, maybe removing some things that I can’t find a way to port over — I drop my Markdown files into the new website and it mostly Just Works.

      What if you just dropped your pre-rendered static assets into the new system?

    8. although they happened to be built with HTML, CSS and JS, these examples were content, not code. In other words, they’d be handled more or less the same as any image or video I would include in my blog posts. They should be portable to any place in which I can render HTML.
    1. JSON deserializes into common native data types naturally (dictionary, list, string, number, null). You can deserialize XML into the same data types, but

      This is pretty circular reasoning. JSON maps so cleanly to JS data types, for example, because JSON is JS.

      It could trivially be made true that XML maps onto native data types if PL creators/implementors put such a data type (i.e. mixed content trees) into their programming systems... (And actually, given all the hype around XML ~20 years ago, it's kind of weird that that didn't happen—but that's another matter.)

    1. - if (!(typeof data === 'string' || Buffer.isBuffer(data))) {
       + if (!(typeof data === 'string' || isUint8Array(data))) {

      Better yet, just don't write code like this to begin with.

    2. code leveraging Buffer-specific methods needs polyfilling, preventing many valuable packages from being browser-compatible

      ... so don't rely on it.

      If the methods are helpful then reimplement them (as a library, even) and use that in your code. When passing data to code that you don't control, use the underlying ArrayBuffer instance.
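
      E.g., the reimplement-it route can be as small as this (helper name is mine):

        // A just-enough stand-in for Buffer#toString("hex"), written
        // against the portable Uint8Array instead:
        function toHex(bytes) {
          return Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
        }

        toHex(new TextEncoder().encode("hi")); // "6869"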

      The very mention of polyfilling here represents a fundamental misapprehension about how to structure a codebase and decide which abstractions to rely on and which ones not to...

      cf https://en.wikipedia.org/wiki/Occupational_psychosis

    1. For security reasons, the nonce content attribute is hidden (an empty string will be returned).

      Yet another awful technical design associated with CSP.

    1. wokesockets

      wat

    2. this script runs every minute on a cronjob, and rebuilds my site if the git repo has been updated

      Geez, talk about wasteful.

    3. you'd need to wind up with external dependencies, since you'd likely need to rely on javascript
    4. scrape user IP addresses
    5. say you want to print the visiting users IP address - how would you do this on a statically generated website? to be honest i'm not sure. typically, you would scrape a header and display it
    1. (I implement this second stock Firefox environment not with a profile but by changing the $HOME environment variable before I run this Firefox. Firefox on Unix helpfully respects $HOME and having the two environments be completely separate avoids various potential problems.)

      Going by this explanation, what Siebenmann means by "not with a profile" is "not by using the profile manager". The revelation that you can use an explicitly redefined $HOME is a neat trick, but if I understand correctly still results in a different profile being created/used. Again, though: neat trick.

    1. Much better if C vNext would just permit Pascal- (and now Go)-style ident: type declarations. It wouldn't even be hard for language implementers to support, and organizations could gradually migrate their codebases to the new form.

    1. You can see how this would happen after seeing former UT Dean of so-called “Diversity, Equity and Inclusion”(DEI) Skyller Walkes, screaming at a group of students

      I watched the clip and was prepared to see something egregious.

      The characterization of Walkes as "screaming at a group of students" doesn't seem justifiable.

    2. screaming at a group of students
    1. The important part, as is so often the case with technology, isn’t coming up with a solution to the post portability problem, but coming up with a solution together so that there is mutual buy-in and sustainability in the approach.

      The solution is to not keep creating these fucking problems in the first place.

    1. You can do this trick with the “view image”  option in the right-click menu, too – Ctrl-clicking that menu item will open that image in its own new tab.

      Not anymore; that menu item has been removed—and you can only use the "Open Image in New Tab" item now.

  2. Sep 2023
    1. Yesterday I spent a few hours on setting up a website for my music, but then instead of launching it I created a Substack.
    1. A big problem with what's in this paper is that its logical paths reflect the déformation professionnelle of its author and the technologists' milieu.

      Links are Works Cited entries. Works Cited entries don't "break"; the works at the other end don't "change".

    2. zero, it is reasonable to delete the grave-stone

      No. It is never reasonable.

    3. One response, suggested in Ashman and Davis [1998], is that referential integrity is more of a social problem than a technical problem

      Yes.

    4. This is:

      Ashman, Helen. “Electronic Document Addressing: Dealing with Change.” ACM Computing Surveys 32, no. 3 (September 2000): 201–12. https://doi.org/10.1145/367701.367702

    1. If you think about it, even the callback function in a standard Array.prototype.filter() call is a selector.

      Huh?

    2. his changes
    3. With his additional changes

      NB: as of this writing, the user jviide has no public esquery repo. The merged pull request is here: https://github.com/estools/esquery/pull/134.

    4. else if (i === key.length - 1)

      This redundant check could be taken out of the loop. Since last is already "allocated" at function scope, a single line obj = obj[key.slice(last)] after the loop would do the same job, result in shallower cyclomatic nesting depth, and should be faster, too.
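
      Roughly (a reconstruction; the real function has more going on):

        function getPath(obj, key) {
          let last = 0;
          for (let i = 0; i < key.length; i++) {
            if (key[i] === ".") {
              obj = obj[key.slice(last, i)];
              last = i + 1;
            }
            // the per-iteration `else if (i === key.length - 1)` branch is gone...
          }
          obj = obj[key.slice(last)]; // ...handled once, after the loop
          return obj;
        }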

    5. this netted another 200ms improvement

      Takeaway: real-world case studies have shown that insisting on using for...of and then transpiling it can cost you over half a second versus just writing a standard for loop.

    6. we know that we're splitting a string into an array of strings. To loop over that using a full blown iterator is totally overkill and a boring standard for loop would've been all that was needed.

      Yes! J*TDT applies, which in this case is: Just Write The Damn Thing.

    7. Given that the array of tokens grows with the amount of code we have in a file, that doesn't sound ideal. There are more efficient algorithms to search a value in an array that we can use rather than going through every element in the array. Replacing that line with a binary search for example cuts the time in half.
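
      For instance (a sketch, assuming the token array is sorted by each token's start offset):

        function indexOfTokenAt(tokens, offset) {
          let lo = 0, hi = tokens.length - 1;
          while (lo <= hi) {
            const mid = (lo + hi) >> 1;
            const start = tokens[mid].range[0];
            if (start < offset) lo = mid + 1;
            else if (start > offset) hi = mid - 1;
            else return mid;
          }
          return -1; // no token starts exactly at `offset`
        }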
    1. JavaScrip is an interpreted language, not a compiled one, so by default, it will be orders of magnitude slower than an app written in Swift, Rust, or C++.

      Languages don't fall into the category of being either compiled or not. Implementations do. And the misconception of compiled code being ipso facto faster is a common one, but it's a misconception nonetheless (I suspect most often held by people who've never implemented a language).

    1. This is a good example of something that deserves an upvote on the basis of being a positive contribution and/or providing a thought-provoking insight, even though I don't strictly agree with their conclusions or the opinionated parts of what they're saying; a modern package set of memory-safe implementations is something to consider along with what the failure to produce one will do to the project in the long-term. Whether ripgrep, exa, etc. are objectively or subjectively better than their forebears is a separate matter that is beside the point.

    1. Tumblr (owned by Yahooath!)

      NB: now Automattic (Wordpress)

    2. I was browsing someone’s site yesterday, hosted on Wordpress, yay! Except it was throwing plugin error messages. Wordpress is still too hard to maintain. Wordpress is not the answer.
    1. Princes! listen to the voice of God, which speaks through me! Become good Christians! Cease to consider armed soldiers, nobles, heretical clergy, and perverse judges, as your principal supporters: united in the name of Christianity, learn to accomplish all the duties which it imposes on the powerful. Remember that it commands them to employ all their force to increase, in the most rapid manner possible, the social happiness of the poor.

      Markham's translation reads:

      Princes,

      Hearken to the voice of God which speaks through me. Return to the path of Christianity; no longer regard mercenary armies, the nobility, the heretical priests and perverse judges, as your principal support, but, united in the name of Christianity, understand how to carry out the duties which Christianity imposes on those who possess power. Remember that Christianity commands you to use all your powers to increase as rapidly as possible the social welfare of the poor!

    2. The spirit of Christianity is meekness, gentleness, charity, and, above all, loyalty; its arms are persuasion and demonstration.

      Markham's translation reads:

      The spirit of Christianity is gentleness, kindness, charity, and above all, honesty; its weapons are persuasion and example

    1. Whats the total power consumption of all Android devices? Shaving just 1% is probably a couple of coalfired power plants worth of CO2.

      This is one of those times that makes me think, "Okay, is this person saying this because they're coming at it from a position of principle, or is it opportunism?" I.e., are they just reaching for plausible arguments that will serve as means to their desired ends?

      Because whatever that number is, it probably pales in comparison to the waste that has followed from the corruption of the fundamentals of the Web—in which every other site is using SPA frameworks and shooting Webpacked-and-bundled agglomerations of NPM modules down the tubes, resulting in 10x the waste associated with the widespread use of e.g. jQuery 10+ years ago—with jQuery itself being the original posterchild for profligate waste wrt the Web. And yet, I'd bet many of the people supporting the commenter's position here would also be among the ones to celebrate monstrously complicated and bloaty geegaws that exist for the express purpose of letting you use "native" C/C++ libraries in Web apps through transcompilation.

    1. Bare-bones setups are either presented as temporary – something you grow out of – or as some sort of hair-shirt hippie modernity avoidance thing – a refusal to engage with “modern” web development.
    2. The amount of boilerplate and number of dependencies involved in setting up a web development project has exploded over the past decade or so. If you browse through the various websites that are writing about web development you get the impression that it requires an overwhelming amount of dependencies, tools, and packages.
    3. my theory is that you can get a modern web dev setup without node or a package manager, using only a tiny handful of standalone utilities and browser dev tools
    1. Reading through your link I caught myself thinking if I would put up with all those boilerplate nix steps just to add a new page to the site.
    1. compute-heavy

      What does "compute-heavy" mean here? How heavy, exactly?

    2. We're setting these chemists up with conda in Ubuntu in WSL in a terminal whose startup command activates the conda environment. Not exactly a recipe for reproducibility after they get a new laptop.

      First step: stop perpetuating the circularity of the reasoning behind the belief that Python is good for computational science.

    1. I admire how nimble you are. I aspire to write blog posts at the drop of a hat like this, but I rarely do.

      But you wrote this comment.

    1. I was a great marketer. I was getting feedback from customers, and I’d pass on every list of what customers wanted to engineering and tell them that’s the features our customers needed.
    1. The best way to learn is through apprenticeship -- that is, by doing some real task together with someone who has a different set of skills.

      This is an underappreciated truth.

    1. <ol><oln>(b)</oln><oli>No employer shall discriminate in any way on the basis of gender in the payment of wages, or pay any person in its employ a salary or wage rate less than the rates paid to its employees of a different gender for comparable work; [...]</oli></ol>

      Mmmm... I dunno. HTML already has <dl>, <dt>, and <dd>. It seems adequate to just (re)-use it for this purpose. That's what a document of statutory law really is—a list of definitions, not an ordered list. They happen to be in order, usually. But what if Congress passed an act that put an item labeled 17 between items 1 and 3? Or π? Or 🌭 (U+1F32D)? (Or "U+1F32D" for that matter?) What fundamental thing is <ol> communicating that <dl> would fail at—to the point that it would compel someone to argue against the latter and insist only on the former?
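
      Concretely, the reuse I have in mind (adapting the quoted clause):

        <dl>
          <dt>(b)</dt>
          <dd>No employer shall discriminate in any way on the basis of
          gender in the payment of wages [...]</dd>
        </dl>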

    2. There is one particular type of document in which the correct handling of the ordinal numbers of lists is paramount. A document type in which the ordinal numbers of the lists cannot be arbitrarily assigned by computer, dynamically, and in which the ordinal numbers of the lists are some of the most important content in the document. I'm referring of course to law. HTML, famously, was developed to represent scientific research papers, particularly physics papers. It should come as no surprise that it imagines documents to have things like headings and titles, but fails to imagine documents to have things like numbered clauses, the ordinal numbers of which were assigned by, for example, an act of the Congress of the United States of America. Of course this is not specific to any one body of law - pretty much all law is structured as nested ordered lists where the ordinal numbers are assigned by government body. It is just as true for every state in the Union, every country, every province, every municipality, every geopolitical subdivision in the world. HTML, from the first version right up to the present version, is fundamentally inimical to being used for marking up and serving legal codes as web pages. It can be done, of course - but you have to fight the HTML every step of the way. You have no access to any semantic markup for the task, because the only semantic markup for ordered lists is OL, which treats the ordinal numbers of ordered lists as presentation not content.
    1. This is problematic if we wish to collect widespread metadata for an entity, for the purposes of annotation and networked collaboration. While nothing in the flat-hash ID scheme stops someone from attempting to fork data by changing even a single bit, thereby resulting in a new hash value, this demonstrates obvious malicious intention and can be more readily detected. Furthermore, most entities should have cryptographic signatures, making such attacks less feasible. With arbitrary path naming, it is not clear whether a new path has been created for malicious intent or as an artifact of local organizational preferences. Cryptographic signatures do not help here, because the original signed entity remains unchanged, with its original hash value, in the leaf of a new Merkle tree.

      Author is conflating multiple things.

    2. Retrieving desired revisions requires knowing where to look

      This is one failure of content-based addressing. When the author controls the shape of identifiers (and the timing of publication), they can just do the inverse of Git's data model: they publish forward commitments—i.e., the name that they intend the next update to have. When they want to issue an update, they just install the content on their server and connect that name to it.
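
      Sketch of the shape (names are mine, not the paper's):

        // each published revision commits, ahead of time, to the name the
        // *next* revision will be published under:
        const r1 = {
          name: "https://example.org/spec/1",
          next: "https://example.org/spec/2", // resolvable once it's installed
          body: "...",
        };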

    3. Two previously-retrieved documents cannot independently reference each other because their identities are bound to authoritative network services.

      Well, they could. You could do it with an implementation of a URI-compatible hypertext system that uses really aggressive caching.

    1. Hard-Copy Print Options to Show Address of Objects and Address Specification of Links so that, besides online workers being able to follow a link-citation path (manually, or via an automatic link jump), people working with associated hard copy can read and interpret the link-citation, and follow the indicated path to the cited object in the designated hard-copy document.
    2. Link Addresses That Are Readable and Interpretable by Humans
    3. Every Object Addressable in principal, every object that someone might validly want/need to cite should have an unambiguous address

      This is a good summation of what the Web was supposed to be about. Strange, 30 years on, how little we've chipped away at achieving this goal.

    4. designated targets in other mail items

      MIME has ways to refer internally to content delivered in the same message. But what about other (existing) content? Message-ID-based URIs (let alone URLs) are non-existent (to the best of my knowledge).

      I know the imap URI scheme exists (I see imap URIs all the time in Thunderbird), but they seem unreliable (not universally unambiguous), although I could be wrong.

      Newsgroup URIs are also largely inadequate.

    5. The Hyperdocument "Library System" where hyperdocuments can be submitted to a library-like service that catalogs them and guarantees access when referenced by its catalog number, or "jumped to" with an appropriate link. Links within newly submitted hyperdocuments can cite any passages within any of the prior documents, and the back-link service lets the online reader of a document detect and "go examine" any passage of a subsequent document that has a link citing that passage.

      That this isn't possible with open systems like the Web is well-understood (I think*). But is it feasible to do it with as-yet-untested closed (and moderated) systems? Wikis do something like this, but I'm interested in a service/community that behaves more closely in the concrete details to what is described here.

      * I think that this is understood, that is. That it's impossible is not what I'm uncertain about.

    6. or "execute the process identified a the other end."

      Interesting that this is considered "basic".

    7. Knowledge-Domain Interoperability and an Open Hyperdocument System
    1. case Tokenizer.LDR: return 0x00 * RSCAssembler.U_BIT;
       case Tokenizer.LDB: return 0x00 * RSCAssembler.U_BIT;
       case Tokenizer.STR: return 0x01 * RSCAssembler.U_BIT;
       case Tokenizer.STB: return 0x01 * RSCAssembler.U_BIT;

      Huh?

    2. 0x00 * RSCAssembler.U_BIT

      Huh?

    1. Comparing pancakes to file management is an apples to oranges comparison.

      From this point onwards, I'm going to insist that anything that uses the phrase "[...] apples and oranges" drop it in favor of the phrase "like comparing filesystems and pancakes".

    1. You'll likely use some libraries where people didn't use type checkers and wrote libraries in a complicated enough way that the analysis cannot give you an answer.
      1. You've chosen a bad library and complain about how bad that library is. That's dumb. (There's no line of reasoning for the argument being made here that doesn't reveal a double standard.)

      2. The entire premise (you'll "likely" be using libraries you don't want to—as if it's something you're forced into doing) is flawed. It basically reduces down to the joke from Annie Hall—A: "The food here is terrible" B: "Yes, and such small portions!"

    1. And, of course, just to be completely clear, this is valid syntax:

        let _true = true;
        _true++;
        _true; // -> 2

      Of course it is. Why wouldn't it be?

    1. also don't ever give someone an unsolicited code review on Twitter. It's rude.)

      This reminds me of people who have encountered others complaining about/getting involved with something that the speaker has decided "isn't any of their business" (e.g. telling someone without a handicap placard not to park in a handicap space) who then go on and rant about it and demand that others not tell them what to do.

      In other words:

      Don't ever make unprompted blanket criticism+demands like saying "Don't ever [do something]. It's rude." That's rude.

  3. Aug 2023
    1. Another way I get inspiration for research ideas is learning about people's pain points during software development. Whenever I hear or read about difficulties and pitfalls people encounter while programming, I ask myself "What can I do as a programming language researcher to address this?" In my experience, this has also been a good way to find new research problems to work on.
    1. The society as a whole is neither better nor worse off.
    2. Non-stupid people always underestimate the damaging power of stupid individuals. In particular non-stupid people constantly forget that at all times and places and under any circumstances to deal and/or associate with stupid people always turns out to be a costly mistake.

      Despite its ordinality, the Fourth law is the one most worth keeping in mind.

    1. people who were wise from the beginning

      That is, people for whom their present misfortunes have nothing to do with any past (or present) tendencies of stupidity.

    1. This is why I build my personal projects in PHP even though I'm not really a fan. I use PHP and JQuery. It'll work basically forever and I can come back to it in 15 years and it'll still work.

      When people mistakenly raise concerns about the Web platform being fragile, point to this common meme.

    1. The worst part is that Let's Encrypt is preventing us from building a real solution to the problem. The entire certificate authority system is a for-profit scam. It imparts no security whatsoever. But Google gets its money, so it's happy. That means Chrome is happy, and shows no warnings, so the end user is happy too. That makes the website owner happy, and everyone is happy happy happy. But everything is still quite fundamentally fucked. Before Let's Encrypt, people were at least thinking about the problem

      The validity of the author's conclusions notwithstanding, there needs to be a name for this phenomenon.

      Previously: https://www.colbyrussell.com/2019/02/15/what-happened-in-january.html#unacknowledged-un-

    1. TypeTest(x, obj.type, FALSE) ; x.type := ORB.boolType

      The explicit x.type assignment here is redundant, because TypeTest will have already done it (in this case because the third argument is false).

    2. IF sym = ORS.ident THEN ORS.CopyId(modid); ORS.Get(sym); Texts.WriteString(W, modid); Texts.Append(Oberon.Log, W.buf) ELSE ORS.Mark("identifier expected") END ;

      This "IF...ELSE Mark, END" region could be reduced by replacing the three lines corresponding to those control flow keywords with a single call to Check:

      Check(ORS.ident, "identifier expected");
      
    1. I do kind of wish I had learned about big-endian dating sooner, though. But alea iacta est and everything.

      Not at all (re "alea iacta est"). Get this: you can at any time make new, perfected labels and affix them to the spines, covering the old ones, but leaving them in place—just like you augmented the original manufactured product with the first labels. This would not be a destructive act like rebinding all the Novel Novel workbooks.

    2. The sketchbook should be workman-like; it’s not a fussy tool for self expression, it’s a daily tool.

      This should be the mindset of people self-publishing on the Web, too. Too bad it's not.

    3. (Yes, my handwriting is atrocious, yes I can read it, yes I apologize to all my grade school teachers who gave me Cs in Penmanship. You tried.)

      So crazy seeing this from an art school person.

    1. throw new Error("panic!"); // XXX

      This could be reduced to throw Error("panic!");. And nowadays I prefer to order the check for document ahead of the one for window, just because.
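
      Roughly, the ordering I mean (a reconstruction; the surrounding block isn't reproduced here):

        if (typeof document != "undefined") {
          // DOM available; proceed
        } else if (typeof window != "undefined") {
          // window but no usable document; checked second
        } else {
          throw Error("panic!"); // XXX
        }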

    2. Code for injecting that button and piggy-backing off the behavior of the BrowserSystem module follows.

      Need to explain how this IIFE works, incl. the logic around events and readyState, etc.

    3. Other elements used in this document include code, dfn, em, and p for denoting inline text comprising a snippet of code, a defined term that is not an abbreviation, inline text that should be emphasized, and a paragraph, respectively.

      I failed to cover the use of ul and li tags.

    4. as of this writing in 2021

      As of today (and for some time before this), and at least as I recall, the status quo with Firefox has changed so monospace text uses the same size as other code, like in Chrome. I may be mistaken, though.

    5. Note that the use of the text/plain+css media type here

      NB: this should be "Note the use [...]"

    6. between style tags and not in a script element

      Note that I bungled the rule in the code block that precedes it, so it looks like it's in a hybrid style/script block. Spot the error:

        body script[type="text/plain+css"]::before {
          content: '\3Cstyle type="text/plain+css"\3E';
        }
      
    1. Additionally, with the old wiki, only registered users could edit the wiki. With the new docs, because it's in a repo on GitHub, anyone can contribute to the documentation

      This is such a weird fuckin' sentence. It's framed as if it's going from narrow to wide-open, but it's actually the opposite.

      wat

    1. it asks for the street address of the lot. I have never seen this information printed on any parking lot in my life. it suggests several "nearby" options; they are actually half a mile away. unable to figure this conundrum out even for myself, i sigh and walk her through installing Park Mobile

      Instead of opening Google Maps...?

    1. This is a double whammy: at the time, it gets dismissed almost outright for the reason that, essentially, "everyone has an opinion", and then months or years later, when it's evident that it did know better and the official tack was flawed, it doesn't even get the acknowledgment that, yes, in fact that's the mindset that should have gotten buy-in.
    1. I get frustrated whenever I have knowledge (specifically Web Platform knowledge) to solve a problem, but the abstraction prevents me from using my knowledge.

    1. We really f'ed up the web didn't we?
    2. I think I get what you're saying but I have some difficulty moving past the fact that you're claiming it doesn't need to be a website because it would be sufficient if it was a bunch of hosted markup documents that link to each other.
    1. With Go, I can download any random code from at least 2018, and do this: go build and it just works. all the needed packages are automatically downloaded and built, and fast. same process for Rust and even Python to an extent. my understanding is C++ has never had a process like this, and its up to each developer to streamline this process on their own. if thats no longer the case, I am happy to hear it. I worked on C/C++ code for years, and at least 1/3 of my development time was wasted on tooling and build issues.
    1. @1:14:37:

      when you have a Dynabook and you have simulations and multidimensional things as your major way of storing knowledge, the last thing you want to do is print any of it out. Because you are destroying the multidimensionality and you can't run them

    1. It is not unrealistic to foresee the costs of computation and memory plummeting by orders of magnitude, while the cost of human programmers increases. It will be cost effective to use large systems like ~. for every kind of programming, as long as they can provide significant increases in programmer power. Just as compilers have found their way into every application over the past twenty years, intelligent program-understanding systems may become a part of every reasonable computational environment in the next twenty.
    1. A close-up photograph taken by DART just two seconds before the collision shows a similar number of boulders sitting on the asteroid’s surface — and of similar sizes and shapes

      Where's that photograph available, and why isn't it either included or linked here?

    1. This is probably a good place to comment on the difference between what we thought of as OOP-style and the superficial encapsulation called "abstract data types" that was just starting to be investigated in academic circles. Our early "LISP-pair" definition is an example of an abstract data type because it preserves the "field access" and "field rebinding" that is the hallmark of a data structure. Considerable work in the 60s was concerned with generalizing such structures [DSP *]. The "official" computer science world started to regard Simula as a possible vehicle for defining abstract data types (even by one of its inventors [Dahl 1970]), and it formed much of the later backbone of ADA. This led to the ubiquitous stack data-type example in hundreds of papers. To put it mildly, we were quite amazed at this, since to us, what Simula had whispered was something much stronger than simply reimplementing a weak and ad hoc idea. What I got from Simula was that you could now replace bindings and assignment with goals. The last thing you wanted any programmer to do is mess with internal state even if presented figuratively. Instead, the objects should be presented as sites of higher level behaviors more appropriate for use as dynamic components.

      I struggle to say with confidence that I understand what Kay is talking about here.

      What I glean from the last bit about goals—if I understand correctly—is something I've thought a lot about and struggled to articulate, but I wouldn't characterize it as "object-oriented"...

  4. www.dreamsongs.com
    1. Foreword
    2. Preface
    3. 1 3 4 5 7 9 8 6 4 2

      Why does the number 4 appear twice in the printer's key? Mistake?

      Because this PDF does not include outline metadata, I have inserted jump points by highlighting the name of the chapter on the page where that chapter begins, for each chapter in the book. These can be filtered by the "chapter heading" tag.

    5. A Personal Narrative: Stanford
    6. A Personal Narrative: Journey to Stanford
    7. Writing Broadside
    8. What We Do
    9. Productivity: Is There a Silver Bullet?
    10. The End of History and the Last Programming Language
    11. Language Size
    12. The Bead Game, Rugs, and Beauty
    13. The Quality Without a Name
    14. The Failure of Pattern Languages
    15. Pattern Languages
    16. Abstraction Descant
    17. Habitability and Piecemeal Growth
    18. Reuse Versus Compression
    19. This is:

      Gabriel, Richard P. Patterns of Software: Tales from the Software Community. New York: Oxford University Press, 1996. https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf

    1. a 1985 broadcast of Computer Chronicles (13:50) on UNIX: As for the future of UNIX, he [Bill Joy] says its Open Source Code

      That's not what she says (but of course you're already aware of this).

      Compare:

      • "open Source Code" (read like German)
      • "open-source code"

      The claim here is that she's using the latter meaning. She is not. It's the former.

    1. @1:26:22

      I wasn’t really thinking about this until sometime in the ’90s when I got an email from someone who said, “Can you tell me if this is the correct meaning of the Liskov substitution principle?” So that was the first time I had any idea that there was such a thing, that this name had developed.[...] I discovered there were lots and lots of people on the Internet having arguments about what the Liskov substitution principle meant.

    2. @41:15

      I used to feel kind of jealous of the electrical engineers because I thought, “At least they have these components and they connect them by wires, and that forces you to really focus on what those wires are.” Whereas software was so plastic that people used to design without thinking much about those wires, and so then they would end up with this huge mess of interconnections between the pieces of the program, and that was a big problem.

    1. Our goal is not to argue about proper nouns

      And yet you are arguing (instead of just fixing your mistake). Why?

      You even went out of your way to change the post: it used to say "Codecov is now Open Source"[1]. In the time since, you have changed it to say "Code is Now Open Source"[2].

      This is notable for two reasons: it means that it's not outside the bounds of reasonableness to ask why the post hasn't changed since you've been confronted about the discontent, but it also raises questions about why you made that particular change in the first place. By a reasonable guess, I'd bet it has something to do with the fact that writing it as "Open Source" (rather than merely "open source") does real damage to any argument that the latter is generic and doesn't have any particular significance, thus allowing you to repudiate the OSI and the OSD. Which, of course, means that you guys are total fuckin' slimeballs, since you are now actively taking steps to cover your tracks.

      1. https://archive.is/aSH9K

      2. https://archive.is/pyd5b

    1. JS's birth and (slightly delayed) ascent begins roughly contemporaneous with its namesake—Java. Java, too, has managed to go many places. In the HN comments section in response to a recent look back at a 2009 article in IEEE Spectrum titled "Java’s Forgotten Forebear", user tapanjk writes: Java is popular [because] it was the easiest language to start with https://news.ycombinator.com/item?id=18691584 In the early 2000s in particular, this meant that you could expect to find tons of budding programmers adopting Java on university campuses, owing to Sun's intense campaign to market the language as a fixture in many schools' CS programs. Also around this time, you could expect its runtime—the JRE—to be already installed on upwards of 90% of prospective users' machines. This was true even when the systems running those machines were diverse. There was a (not widely acknowledged) snag to this, though: As a programmer, you still had to download the authoring tools necessary for doing the development itself. So while the JRE's prevalence meant that it was probably already present on your own machine (in addition to those of your users), its SDK was not. The result is that Java had a non-zero initial setup cost for authoring even the most trivial program before you could get it up and running and putting its results on display. Sidestepping this problem is where JS succeeded.

      Fielding actually has a whole section in his dissertation about this (6.5.4.3 "Java versus JavaScript").

    1. A JSON engineer attempting to meet Level 3 of the Richardson Maturity Model

      I love the epithet used here: a "JSON engineer".

    1. It can be amusing to see authors taking pains to describe recommended paths through their books, sometimes with the help of sophisticated traversal charts — as if readers ever paid any attention, and were not smart enough to map their own course. An author is permitted, however, to say in what spirit he has scheduled the different chapters, and what path he had in mind for what Umberto Eco calls the Model Reader — not to be confused with the real reader, also known as “you”, made of flesh, blood and tastes. The answer here is the simplest possible one. This book tells a story, and assumes that the Model Reader will follow that story from beginning to end, being however invited to avoid the more specialized sections marked as “skippable on first reading” and, if not mathematically inclined, to ignore a few mathematical developments also labeled explicitly.

      Great attitude.

    1. let's keep a universally understood specific compiler reference language so that all of us, no matter what our computer, can share what work we have done
    1. You don’t necessarily know who or what server B blocks or doesn’t block. You may not even know that server C exists or who Adolf is. But all it takes is someone to put a post in server C’s eyeline and they can take it and keep it, and then ignore any and all requests to delete it. Meanwhile, Adolf and his friends Rudolf and Hermann can have a lovely little laugh at your expense in your replies… on their server.

      Yes, and people can also get together at the nearest bar, bookstore, coffee shop, library, etc. and snicker while making fun of you over drinks... and there is absolutely no mechanism to stop them or to get around this.

      They can also start a small club where they perform skits about how dumb they think you are and then start inviting other people to their twice-monthly Bloonface Is So Stupid get-togethers that are open to the public and write plays and put on stage productions about it. And there is absolutely no mechanism to stop them or to get around this.

    2. it can start making API requests to your server, anonymously, to get your account information and any other public posts you have

      If you're giving stuff out to anyone who asks for it, then you're giving stuff out to anyone who asks for it.

    3. what is expected of them

      Obnoxious application of this turn of phrase.

    4. If your server closes down, and does not run the “self-destruct” command that tells all servers it has ever federated with to delete all record of it, its users and its posts, then they will stay on those servers indefinitely with no simple means of deleting them, or even knowing that they are there. And that’s assuming that the other servers would have honoured that deletion request anyway. Again, a bad actor doesn’t have to.

      The fact that this strikes the writer as being notable means there's something crazy afoot wrt expectations.

      If you send me an email to delete all record of your conversations, I can choose to honor it or not. If you send it to my email service provider, you'll have to (a) somehow convince them to do it, and (b) hope that I am relying on them to store my copies, so that in the event they do honor your asinine request, my access is actually severed because I haven't (read: my client hasn't) already e.g. fetched the material in question.

    5. If you have any objection at all to your posts and profile information being potentially sucked up by Meta, Google, or literally any other bad actor you can think of, do not use the fediverse. Period. Even if your personal chosen bogey-man does not presently suck down every single thing you and your contacts post, absolutely nothing prevents them from doing so in the future, and they may well already be doing so, and there’s next to nothing you can do about it.

      Compare: if you have any objection at all to your GeoCities pages being sucked up by Yahoo!, AltaVista, or literally any other bad actor you can think of, do not publish to the Web. Period. Absolutely nothing stops your personal chosen bogey-man from sucking down every single thing you post. They may well already be doing so, and there's next to nothing you can do about it.

    6. a bad actor simply has to behave not in good faith and there is absolutely no mechanism to stop them or to get around this

      So "bad actor" here means someone who asks for a copy of your stuff, and you send it to them, and then when you decide you don't want to have given it to them and so you tell them to please get rid of it, they say "no thanks, I'll keep it"?

  5. Jul 2023
    1. I tried precompiling the JavaScript code to QuickJS bytecode (to avoid parsing overhead), but that only saved about 40 milliseconds (I guess parsing is really fast!).
    1. Alternative approach to consider: don't rebuild older posts. Transform them from Markdown into HTML in situ and then never worry about recompiling that post again. You could do this by either keeping both the Markdown source and the output document around, or by crafting the output in such a way that the compilation process is reversible—so you could delete the Markdown source after it's compiled and derive it from the output in the event that you ever wanted to recover it.
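
      A sketch of the reversible variant (markdownToHtml stands in for whatever converter is on hand; not a real API):

        function renderReversible(markdown) {
          const html = markdownToHtml(markdown);
          // a real implementation must escape any "</script" sequence in the
          // source (and undo that escaping on recovery)
          const src = markdown.replaceAll("</", "<\\/");
          return html + '\n<script type="text/markdown" hidden>\n' + src + '\n</script>\n';
        }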

    1. The problem with this approach is that output files never specify their dependencies. Looking at it from the other direction, if I modify this post's Markdown file, the only change to Goldsmith's initial data model is the content of this Markdown file. The problem is that this one input file could impact numerous output files: the post itself, the Atom feed, any category/keyword index pages (especially if keywords are added or removed), the home page, and, of course, the archive page.

      This is a good summary of the problems affecting static site generators (and program compilers) generally.

    1. Has tons of native packages... but is it portable to Windows?

      Note that jart has been doing a bunch of interesting stuff with truly cross-platform binaries in the form of Actually Portable Executables and has settled on embedding Lua.

    2. Python for tools and scripts JavaScript on the web

      Nah. Use JS for your scripts, too. Python is far from "ubiquitous".

    3. it's a shame because I really like C# and the .NET standard library

      You can, by the way, target the design of the .NET APIs in your non-C# program and then just fill in your own re-implementation that works just well enough to service your application's needs. This strategy is way too undervalued.
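
      E.g., a just-enough stand-in in JS (Combine here is modeled on, not taken from, .NET, and services only the simple relative-path case):

        const Path = {
          Combine(...parts) {
            return parts.join("/").replace(/\/{2,}/g, "/");
          },
        };

        Path.Combine("a", "b/c"); // "a/b/c"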

    4. C/C++ support cross-compiling through a painful process of setting up an entire compilation environment for the target.
    5. That leaves C#

      There's also Java which supports AOT compilation to native via Graal. I haven't done a comparison, but I would think it's similar in bloat to the experiments with C#.

    6. Obviously, it's not fair to compare the compile-to-native languages to scripting languages

      Sure it is! Particularly since those "scripting" languages also contain compile-to-native code generators of their own—they just execute at runtime rather than ahead of time. (Although, in the case of QuickJS, which was covered in an earlier episode of a related series[1], it permits you to use it just like ahead-of-time compilers.)

      1. https://log.schemescape.com/posts/programming-languages/minimal-dev-env-3.html
    7. the Rust SDK is disappointingly heavy

      Yes. The Rust team had the opportunity to fix more than one thing wrong with C++, one of them being very important to making hacking on Gecko more approachable—the heft of the build requirements—and they went the opposite direction. Major fumble which should have resulted in a vote of no confidence to anyone thinking clearly (i.e. not intoxicated by hype).

      To put it succinctly: Rust blew it.

    1. What about an old laptop?

      Almost certainly, I'd bet.

    2. Would a newer Raspberry Pi hit the sweet spot between performance and cheapness?

      Almost certainly, I'd bet.

    3. though I worry what will happen as the language continues to evolve beyond QuickJS's implementation

      Just because the technical committee adds more stuff to the language spec, it doesn't mean you have to use those things. Whatever exists now will exist as a subset of the future version, so just stick with that. (You probably don't need the difference, anyway.)

    4. I'll admit that md2blog hasn't been optimized. It always regenerates the entire site, and it's written in JavaScript. On my desktop computer, I can build my site in maybe 1 second. On the Raspberry Pi (admittedly using a much simpler JavaScript runtime), it took over 6 minutes!

      This shouldn't be the case, "written in JavaScript" or not. I suspect it's rather a consequence of the dependencies. Large parts of Firefox circa 15 years ago were written in JS, ran on similar-ish hardware, and executed in an interpreter (not a JITting VM) that should, when measured, work out to be less performant than QuickJS.

      On the other hand, I am aware of the (original?) Pi's deficient floating point handling, which involves emulating floating point operations with software routines—and JS numbers are all, in theory, IEEE 754 doubles, but QuickJS is a capable runtime that should be smoothing over that wrinkle behind the scenes—and I have no idea if the Pi used here has the same limitation in the first place.

    5. Navigating large C files in Vim was slow (frequently, I could see the screen re-drawing individual lines of text)

      This seems odd, especially so since the experience using w3m was described as "pretty snappy". What's Vim doing wrong?

    6. Installing packages and opening web pages in w3m was pretty snappy, but compiling QuickJS was a rude awakening. I typed make and then left to do something else when it became apparent that it might be a while. Later, I came back, and it was still compiling! With link-time optimization, it took almost half an hour to build QuickJS.

      I wonder what the experience is like compiling QuickJS using TCC instead of GCC.

    1. a factor of 10 did go into faster responses to the user’s actions

      We've seen the opposite trend in the last 10 years or so.

    2. a reusable component costs 3 to 5 times as much as a good module. The extra money pays for:

       · Generality: A reusable module must meet the needs of a fairly wide range of ‘foreign’ clients, not just of people working on the same project. Figuring out what those needs are is hard, and designing an implementation that can meet them efficiently enough is often hard as well.

       · Simplicity: Foreign clients must be able to understand the interface to a module fairly easily, or it’s no use to them. If it only needs to work in a single system, a complicated interface is all right, because the client has much more context.

       · Customization: To make the module general enough, it probably must be customizable, either with some well-chosen parameters or with some kind of programmability, which often takes the form of a special-purpose programming language.

       · Testing: Foreign clients have higher expectations for the quality of a module, and they use it in more different ways. The generality and customization must be tested as well.

       · Documentation: Foreign clients need more documentation, since they can’t come over to your office.

       · Stability: Foreign clients are not tied to the release cycle of a system. For them, a module’s behaviour must remain unchanged (or upward compatible) for years, probably for the lifetime of their system.

       Regardless of whether a reusable component is a good investment, it’s nearly impossible to fund this kind of development.

      a reusable component costs 3 to 5 times as much as a good module. The extra money pays for: Generality[...] Simplicity[...] Customization[...] Testing[...] Documentation[...] Stability[...] ¶ Regardless of whether a reusable component is a good investment, it’s nearly impossible to fund this kind of development.

    1. In a true document-centered system, you start a spreadsheet by just putting in columns (e.g. with tabs)
    1. Some applications, like Microsoft Word, come with a sophisticated customization subsystem that allows users to change the menus and keyboard accelerators for commands. However, most applications still do not have such a facility because it is difficult to build.

    1. Maybe part of the problem is that I'm grossly under-estimating the amount of work involved in "post an interesting technical article to it once or twice a year" for people who don't already spend a lot of their time writing.

      Writing is one thing. Writing for a public audience (read: writing persuasively) is another thing. Case in point: "how much push-back this one [blog post] gets", which comes as a surprise to Simon, he says.

    1. my advice is very much focused on "working for ambitious technology companies"

      Good start at an ontology for different kinds of work? Samsung Austin Semiconductor, for example, does not fall within the class that Simon calls "ambitious technology companies", despite nominally being a "tech" company and ostensibly "ambitious".

    1. This post presumes that a given candidate is looking for a career in show business. There's no good reason to make that logical leap.

      Bank managers (or HR folks at tech companies for that matter..) don't seem to be getting told to curate the equivalent of a GitHub profile. (LinkedIn notwithstanding—Microsoft, stop trying to make "fetch" happen.) Why should a software engineer?

    1. pulling some code that could be inline into a function allows someone to more easily replace that function

      There are shades of OO ideology in this, unintentionally.

    2. As anyone familiar with software development knows, the difficulty of adding new features or modifying existing ones grows very quickly, much faster than linearly, with the total number of features. They interfere with one another. By reducing the number of shipped features, we reduce the difficulty of modification. Anybody can do it (or have somebody do it for them). The more users we try to appease out of the box, the harder things become for those we haven’t served yet. A more rigorous analysis would attempt to model costs and benefits, do the math, etc. I’ll leave it at noting that the combination of the 80/20 rule and superlinear complexity growth means we probably aren’t amortizing as much effort as we would hope by adding every feature to a single code base.

      Simple but non-obvious truth.

    1. Sure but if the job listings are saying “College Degree in something” applicants without a degree are likely to get rejected well before interviews because it is an easy filter for HR.

      Why do we never try to address how obviously inadequate most who are hired into HR are? Filter those.

    1. tech’s focus on prioritizing output over credentials

      Is this even real? It feels to me, as someone outside the Bay Area, like something that is either a result of Bay Area parochialism (for lack of a better word) or a mistake: tech does prioritize "credentials"—they're just credentials in the form of résumé-driven data points related to tech stacks (e.g. React, Kubernetes, Docker...) and employment history, rather than academic credentials.

    2. Crow and Dabars explain that most universities aspire towards offering a Michelin star experience to students, but what we actually need is a ‘fast casual’, Cheesecake Factory-like option that can provide an affordable, quality education to millions

      I'm conflicted by this analogy, because I think that in a certain sense a fast-casual Cheesecake Factory for degrees is exactly what universities have turned themselves into. NB: that's "for degrees", not for "quality education" as nayafia relates here.