started a Patreon to help support the exploding usage
Crazy. Consider how this compares to sharing the same stuff via blog posts + RSS.
Not all of this is necessary to make a fast, fluid API
Mm... These should be table stakes.
You’re meticulous about little micro-optimizations (e.g. debouncing, event delegation
"Meticulous" (and calling these "micro-optimizations") is a really generous way to label what's described here...
There’s not much you can do in a social media app when you’re offline
I dunno. That strikes me as a weird perspective. You should be able to expect that it will do at least as much as a standard email client (which can do a lot—at a minimum, reading/viewing existing messages, searching through them, and composing multiple drafts to be sent once you go back online).
someone did a recent analysis showing that Pinafore uses less CPU and memory than the default Mastodon frontend
Given what the Mastodon frontend is like, it would be pretty concerning if that weren't true.
the fact that Mastodon has a fairly bog-standard REST API makes it pretty difficult to implement offline support
Huh? This comes across as nonsequitur.
it would be a pure DX (Developer Experience) improvement, not a UX (User Experience) improvement
This raises questions about how much the original approach made for good DX in the first place (and whether or not the new approach would). That is, when measured against not using a framework.
The whole point of these purported DX wins is supposed to be just that—DX wins. When framed in the terms of this post, however, they're clear liabilities...
it’s a lot of work to manually migrate 200+ components to what is essentially a new framework
The web started off as a simple, easy-to-use, easy-to-write-for infrastructure. Programmers have remodelled HTML in their own image, and made it complicated, hard to implement, and hard to write for, excluding many potential creators.
When can we expect the Web to stop pretending to be the old things, and start being what it really ought to be?
The Web already is what it is, at least—and what that is is not an imitation of the old. If anything, it ought to be more like the old, cf Tschichold.
Things like citability are crucial, not just generally, but in that they are fundamental to what the Web was supposed to have been, and modern Web practices overwhelmingly sabotage it.
This conference is imitating the old:

Providing papers for this conference is a choice between latex (which is a pre-web technology) or Word! There's a page limit! There's a styleguide on how references should be visually displayed! IT'S ALL ABOUT PAPER!
This post is a narrative rant (in the same vein of Dan Luu's "Everything is Broken" post) about my problems one afternoon getting a Fancy New Programming Language to work on my laptop.
We weren't able to compile this code
Some people are extremely gifted mathematicians with incredible talent for algorithmic thinking, yet can be totally shut down by build configuration bullshit.
The repo was 3 years old. Surely it wouldn't be that hard to get running again? Ha!

Here's what went well.

Installing Android Studio. I remember when this was a chore - but I just installed the Flatpak, opened it, and let it update.

Cloning the repo. Again, simple.

Importing the project. Couple of clicks. Done.

Then it all went to hell.
Since infrequent developers spend relatively little time dealing with the language, setting up and running additional pieces of software is a much higher overhead for them and is generally not worth it if they have a choice.
I sometimes find myself hacking together a quick console-based or vanilla JS prototype for an idea and then just stop there because messing with different cloud providers, containers, react, webpack and etc is just soul draining. I remember when I was 14 I'd throw up a quick PHP script for my project, upload it to my host and get it up and running in just a few minutes. A month ago I spent week trying to get Cognito working with a Serverless API and by the time I figured it out I was mentally done with the project. I cannot ever seem to get over this hump. I love working on side projects but getting things up and running properly is just a huge drag these days.
My husband reviews papers. He works a 40h/wk industry job; he reviews papers on Saturday mornings when I talk to other people or do personal projects, pretty much out of the goodness of his heart. There is no way he would ever have time to download the required third party libraries for the average paper in his field, let alone figure out how to build and run it.
I was trying to make it work with Python 2.7 but, after installing the required packages successfully I get the following error:
Cidraque · 2016-Oct-23
Only linux? :(

Matt Zucker · 2016-Oct-23
It should work on any system where you can install Python and the requirements, including windows.
Hi there, I can't run the program, it gives me this output and I can't solve the problem by myself
This is, like THE foundational keystone document to the problem I'm trying to communicate for "builds and burdens".
I think I tried to install Jekyll once and I had the wrong version of Ruby so I gave up
@14:55
[Note: I am doing edits directly in dist/source-map.js because I did not want to spend time figuring out the build process]
My first experience with Scheme involved trying and failing to install multiple Scheme distributions because I couldn’t get all the dependencies to work.
The hidden curriculum consists of the unwritten rules, unspoken norms, and field-specific insider knowledge that are essential for student success but are not taught in classes. Examples include social norms about how to interact with authority figures, where to ask for unadvertised career-related opportunities, and how to navigate around the official rules of a bureaucracy.
Clever. I also like the framing of MIT's "Missing Semester" https://missing.csail.mit.edu/
The fact that most free software is privacy-respecting is due to cultural circumstances and the personal views of its developers
Thereafter, I would need to build an executable, which, depending on the libraries upon which the project relies could be anything from straightforward to painful.
@1:24:40
Starting from `main` isn't actually a good way to explain a program almost ever, unless the program is trivial.
We estimate that by 2025, Signal will require approximately $50 million dollars a year to operate—and this is very lean
Nah. Wrong.
This is:
Hsu, Hansen. 2009. “Connections between the Software Crisis and Object-Oriented Programming.” SIGCIS: Michael Mahoney and the Histories of Computing.
We undoubtedly produce software by backward techniques. We undoubtedly get the short end of the stick in confrontations with hardware people because they are the industrialists and we are the crofters. Software production today appears in the scale of industrialization somewhere below the more backward construction industries. I […] would like to investigate the prospects for mass production techniques in software.
Hsu only cites Mahoney for this, but the original McIlroy quote is from "Mass Produced Software Components".
well the real world of course isn't statically typed
Firefox seems to impose a limit (at least in the latest release that I tested on) of a length* of 2^16 i.e. 65536. You can test this by creating a bookmarklet that starts javascript:"65525+11/// followed by 65512 other slashes and then a terminating quote. If you modify it to be any longer, the bookmarks manager will reject it (silently failing to apply the change). If you select another bookmarklet and then reselect the one you edited, it will revert to original "65525+11" one.
* haven't checked whether this is bytes or...
This snippet removes some of the empty a elements to make the headings anchors instead:
```javascript
[ ...document.querySelectorAll("a[name] + h1, a[name] + h2, a[name] + h3, a[name] + h4, h1 + a[name], h2 + a[name], h3 + a[name], h4 + a[name]") ].forEach((x) => {
  let link, heading;
  if (x instanceof HTMLHeadingElement) {
    link = x.previousElementSibling;
    heading = x;
  } else {
    link = x;
    heading = x.previousElementSibling;
  }
  link.parentElement.removeChild(link);
  heading.setAttribute("id", link.name);
});
```
The HTML encoding of this document contains several errors, some of which substantially affect the way it's read. This fixes one of those problems in Appendix II:
```javascript
[ ...document.querySelectorAll("op") ].reverse().forEach((op) => {
  let f = document.createDocumentFragment();
  f.append(document.createTextNode("<OP>"), ...op.childNodes);
  op.parentElement.replaceChild(f, op);
});
```
The problem should be apparent on what is, at the time of this writing, line 4437:
```html
<code>IF ?w THEN ?x<OP>?y ELSE ?z<OP>?y</code>
```
(The angle brackets around the occurrences of "OP" should be encoded as HTML entities. Because they aren't, they end up getting parsed as HTML op elements (which isn't a thing) and screwing up the document tree.)
This is:
Buckland, Michael K. 1997. “What Is a ‘Document’?” Journal of the American Society for Information Science 48 (9): 804–9. https://doi.org/10.1002/(SICI)1097-4571(199709)48:9%3C804::AID-ASI5%3E3.0.CO;2-V.
I keep repeating this in the hopes that it sticks, because too much OO code is written like Java, and too many programmers believe that OO is defined by Java.
This reads like a total non-sequitur at this point in the post.
The key and only feature that makes JavaScript object-oriented is the humble and error-prone this
If you don’t own your platform (maybe you’re publishing to Substack or Notion), you can at least save your website to the Wayback Machine. I would also advise saving your content somewhere you control.
The Wayback Machine should provide an easy way for website authors to upload archives that can be signed and validated with the same certificate you're serving on your domain, so that neither you nor the Internet Archive needs to waste more resources than necessary having the Wayback Machine crawl your site in the ordinary way.
When talking to Ollie about this, he told me that some people leave their old websites online at <year>.<domain> and I love that idea
At the expense of still breaking everyone's links.
If you know you're going to do this, publish all your crap at <year>.<domain> now. Or even <domain>/<year>/. Oh wait, we just partially re-invented the recommendations of a bunch of static site generators.
Better advice: don't touch anything once you've published it. (Do you really need to free up e.g. /articles/archive-your-old-projects articles from your namespace? Why?)
This is, unfortunately, not the dumbest thing Liam has written that I've come across.
we should be able to utilize tabs for any application and combine tabs between them
Microsoft had a demo of this. It got shelved.
You're probably looking for https://riku.miso.town/.
I've mentioned it before, but what I find interesting is the idea of really parsing shell (scripts) like a conventional programming language—e.g. where what would ordinarily be binary invocations are actually function calls i.e. to built-ins (and all that implies, such as inlining, etc).
Thompson observed that backtracking required scanning some parts of the input string multiple times. To avoid this, he built a VM implementation that ran all the threads in lock step: they all process the first character in the string, then they all process the second, and so on.
What about actual concurrency (i.e. on a real-world CPU using e.g. x86-64 SMP) and not just a simulation? This should yield a speedup on lexing, right? Lexing a file containing n tokens under those circumstances should then take about as long as lexing the same number of tokens in a language that only contains a single keyword foo—assuming you can parallelize up to the number of keywords you have, with no failed branching where you first tried to match e.g. int, long, void, etc before finally getting around to the actual match.
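To make the lock-step idea concrete with the keyword example, here's a toy sketch (my own, and sequential as written; the per-candidate work is what you'd try to spread across cores):

```javascript
// Lock-step matching of a set of keywords against an input string: every
// surviving candidate is advanced one character per step, so the input is
// scanned exactly once no matter how many keywords there are.
function matchKeyword(keywords, input) {
  let candidates = keywords;
  for (let i = 0; i < input.length && candidates.length > 0; i++) {
    candidates = candidates.filter((k) => k[i] === input[i]);
  }
  return candidates.find((k) => k.length === input.length) ?? null;
}

matchKeyword(["int", "long", "void"], "int");   // -> "int"
matchKeyword(["int", "long", "void"], "index"); // -> null
```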
The next article in this series, “Regular Expression Matching: the Virtual Machine Approach,” discusses NFA-based submatch extraction. The third article, “Regular Expression Matching in the Wild,” examines a production implementation. The fourth article, “Regular Expression Matching with a Trigram Index,” explains how Google Code Search was implemented.
Russ's regular expression article series makes for a good example when demonstrating the Web's pseudomutability problem. It also works well to discuss forward references.
A more efficient but more complicated way to simulate perfect guessing is to guess both options simultaneously
NB: Russ talking here about flattening the NFA into a DFA that has enough synthesized states to represent e.g. in either state A or state B. He's not talking about CPU-level concurrency. But what if he were?
A contributor license agreement, or CLA, usually (but not always) includes an important clause: a copyright assignment.
Mm, no.
There are CLAs, and there are copyright assignments, and there are some companies that have CLAs that contain a copyright assignment, but they don't "usually" include a copyright assignment.
People are greedy. They tend to be event-gluttons, wishing to receive far more information than they actually intend to read, and rarely remember to unsubscribe from event streams.
Relative economies of scale were used by Nikunj Mehta in his dissertation to compare architectural choices: “A system is considered to scale economically if it responds to increased processing requirements with a sub-linear growth in the resources used for processing.”
Wait, why is sub-linear growth a requirement...?
Doesn't it suffice if there are some c₁ and c₂ such that costs are characterized by U(x) = rᵤx + c₁ and returns are V(x) = rᵥx + c₂ where rᵤ < rᵥ and the business has enough capital to reach the point where U(x) < V(x)?
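To make that concrete (toy numbers, mine): take U(x) = 3x + 1000 and V(x) = 5x. Resource costs grow linearly, not sub-linearly, yet once x > 500 the returns exceed the costs and the margin only widens as the system scales, which looks like "scaling economically" by any reasonable standard.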
the era of specialization: people writing about technical subjects in a way that only other scientists would understand. And, as their knowledge grew, so did their need for specialist words to describe that knowledge. If there is a gulf today, between the man-in-the-street and the scientists and the technologists who change his world every day, that’s where it comes from.
Vannevar Bush on this phenomenon:
A few people even complained that my dissertation is too hard to read. Imagine that!
To be fair: it's not an example of particularly good writing. As Roy himself says:
["hypertext as the engine of hypermedia state"*] is fundamental to the goal of removing all coupling aside from the standardized data formats and the initial bookmark URI. My dissertation does not do a good job of explaining that (I had a hard deadline, so an entire chapter on data formats was left unwritten) but it does need to be part of REST when we teach the ideas to others.
I'm actually surprised that Fielding's dissertation gets cited so often. Fielding and Taylor's "Principled Design of the Modern Web Architecture" is much better.
* sic
The problem is that various people have described “I am using HTTP” as some sort of style in itself and then used the REST moniker for branding (or excuses) even when they haven’t the slightest idea what it means.
It isn’t RESTful to use POST for information retrieval when that information corresponds to a potential resource, because that usage prevents safe reusability and the network-effect of having a URI.
Controversial opinion: response bodies should never have been allowed for POST requests.
the methods defined by HTTP are part of the Web’s architecture definition, not the REST architectural style
See also: Roy's lamentations in "On software architecture".
most folks who use the term are talking about REST without the hypertext constraint
what application means in our industry: applying computing to accomplish a given task
Another day, another series of bombastic Dalewyn shitposts.
This is:
Dahl, Ole-Johan, and Kristen Nygaard. “SIMULA: An ALGOL-Based Simulation Language.” Communications of the ACM 9, no. 9 (September 1966): 671–78. https://doi.org/10.1145/365813.365819
How about an example that doesn't make you cringe: a piece of code known as Foo.java from conception through all its revisions to the most recent version maintains the same identity. We still call it Foo.java. To reference a specific revision or epoch is what Fielding is getting at with his "temporally varying membership function MR(t), where revision r or time t maps to a set of spatial parts" stuff. In short, line 15 of Foo.java is just as much a part as version 15 of Foo.java; they just reference different subsets of its set of parts (one spatial and one temporal).
it’s definitely too late for a clearer naming scheme so let’s move on
No way. Not too late for a better porcelain that keeps the underlying data model but discards the legacy nomenclature entirely.
it sounds like it’s some complicated technical internal thing
it is
after almost 15 years of using git, I’ve become very used to git’s idiosyncracies and it’s easy for me to forget what’s confusing about it
I think that the website code started to feel like it had bitrotted, and so making new blog posts became onerous.
almost every other time I've had the misfortune of compiling a c(++) application from scratch it's gone wildly wrong with the most undiagnose-able wall of error messages I've ever seen (and often I never manage to figure it out even after over a day of trying because C developers insist on using some of the most obtuse build systems conceivable)
where I have access to the full reply chain, of which my own instance often captures only a subset
extremely frustrating
The experience is so bad, I don't know why Mastodon even bothers trying to synthesize and present these local views to the user. I either have to click through every time, or I'm misled into thinking that my instance has already shown me the entire discussion, so I forget to go to the original.
I realized that what I wanted is not a better Mastodon client, but a better Mastodon workflow
If you remove the word "Mastodon" from this sentence, this insight holds for a lot of things.
In many ways, computing security has regressed since the Air Force report on Multics was written in June 1974.
the modern textual archive format
The ar format is underrated.
The solution, Hickey concludes, is that we ought to model the world not as a collection of mutable objects but a collection of processes acting on immutable data.
Compelling offer when you try to draw upon your experience to visualize the opportunity cost of proceeding along the current path by focusing on the problem described, but it's basically a shell game; the solution isn't a solution. It rearranges the deck chairs—at some cost.
HTML had blown open document publishing on the internet
... which may have really happened, per se, but it didn't wholly incorporate (subsume/cannibalize) conventional desktop publishing, which is still in 2023 dominated by office suites (a la MS Word) or (perversely) browser-based facsimiles like Google Docs. That's because the Web as it came to be used turned out to be a sui generis medium, not exactly what TBL was aiming for, which was giving everything (everything—including every existing thing) its own URL.
Hixie does have a point (though he didn't make it explicitly) and that is that the script doesn't really add anything semantic to the document, and thus would be better if it was accessed as an external resource
Interesting distinction.
the thought of a richly self-documenting script
or slightly more honestly as “RESTful” APIs
I don't think that arises from honesty. I'm pretty sure most people saying "RESTful" don't have any clue what REST really is. I think they just think that RESTful was cute, and they're not trying to make a distinction between "REST" and "RESTful" (i.e. "REST... ish", or "REST-inspired" if we're being really generous). Not most of them, at least.
REST purists
I really hate this phrase. It's probably one of the leading causes of misunderstanding. It's unfortunate that it's used here.
Fielding’s dissertation isn’t about web service APIs at all
how REST became a model for web service APIs
It didn't, though. It became a label applied to Web service APIs, despite having nothing to do with REST.
today it would not be at all surprising to find that an engineering team has built a backend using REST even though the backend only talks to clients that the engineering team has full control over
It's probably not REST, anyway.
why REST is relevant there
"could be relevant" (if you don't really understand it)
Fielding came up with REST because the web posed a thorny problem of “anarchic scalability,” by which Fielding means the need to connect documents in a performant way across organizational and national boundaries. The constraints that REST imposes were carefully chosen to solve this anarchic scalability problem.
There are better ways to put this.
the common case of the Web
Good way to put it.
REST gets blindly used for all sorts of networked applications now
the label¹, at least
inspired by Unix pipes
More appropriate might be "extracted from (the use of) UNIX pipes".
should
I don't know if that's totally accurate. "Could", maybe.
We could ask, I guess, but.
He was interested in the architectural lessons that could be drawn from the design of the HTTP protocol; his dissertation presents REST as a distillation of the architectural principles that guided the standardization process for HTTP/1.1.
I don't think this is the best way to describe it. He was first interested in extracting an abstract model from the implementation of the Web itself (i.e. how it could be and was often experienced at the time—by simply using it). His primary concern was using that as a rubric against which proposals to extend HTTP would have to survive in order to be accepted by those working on standardization.
The biggest of these misconceptions is that the dissertation directly addresses the problem of building APIs.
"The biggest of these misconceptions [about REST] is that [Fielding's] dissertation directly addresses the problem of building APIs."
For example (another HN commenter you can empathize with), danbruc insists on trying to understand REST in terms of APIs—even while the correct description is being given to him—because that's what he's always been told: https://news.ycombinator.com/item?id=36963311
This is:
Ciortea, Andrei, Olivier Boissier, and Alessandro Ricci. “Engineering World-Wide Multi-Agent Systems with Hypermedia.” In Engineering Multi-Agent Systems, edited by Danny Weyns, Viviana Mascardi, and Alessandro Ricci, 11375:285–301. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2019. https://doi.org/10.1007/978-3-030-25693-7_15.
To illustrate this principle, an HTML page typically provides the user with a number of affordances, such as to navigate to a different page by clicking a hyperlink or to submit an order by filling out and submitting an HTML form. Performing any such action transitions the application to a new state, which provides the user with a new set of affordances. In each state, the user's browser retrieves an HTML representation of the current state from a server, but also a selection of next possible states and the information required to construct the HTTP requests to transition to those states. Retrieving all this information through hypermedia allows the application to evolve without impacting the browser, and allows the browser to transition seamlessly across servers. The use of hypermedia and HATEOAS is central to reducing coupling among Web components, and allowed the Web to evolve into an open, world-wide, and long-lived system.

In contrast to the above example, when using a non-hypermedia Web service (e.g., an implementation of CRUD operations over HTTP), developers have to hard-code into clients all the knowledge required to interact with the service. This approach is simple and intuitive for developers, but the trade-off is that clients are then tightly coupled to the services they use (hence the need for API versioning).
Finally, it allows an author to reference the concept rather than some singular representation of that concept, thus removing the need to change all existing links whenever the representation changes
I'm against this, because on net it has probably been more harmful than beneficial.
At the very least, if the mapping is going to change—and it's known/foreseeable that it will change, then it should be returning 3xx rather than 200 with varying payloads across time.
A resource can map to the empty set, which allows references to be made to a concept before any realization of that concept exist
A very nice property—
These are not strictly subject to the constraints of e.g. Git commits, blockchain entities, other Merkel tree nodes.
You can make forward references that can be fulfilled/resolved when the new thing actually appears, even if it doesn't exist now at the time that you're referring to it.
Messages are delineated by newlines. This means, in particular, that the JSON encoding process must not introduce newlines within a message. Note however that newlines are used in this document for readability.
Better still: separate messages by double linefeed (i.e., a blank line in between each one). It only costs one byte and it means that human-readable JSON is also valid in all readers—not just ones that have been bodged to allow non-conformant payloads under special circumstances (debugging).
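A toy sketch of what producing and consuming such a stream could look like (helper names are mine, and it assumes serializers never emit a blank line inside a message, which holds for compact and conventionally pretty-printed JSON alike):

```javascript
// Hypothetical writer: pretty-printed for human readability, terminated by a
// blank line so the end of the message is unambiguous.
function writeMessage(write, message) {
  write(JSON.stringify(message, null, 2) + "\n\n");
}

// Hypothetical reader: split on runs of two or more newlines, parse each chunk.
function* readMessages(text) {
  for (const chunk of text.split(/\n{2,}/)) {
    if (chunk.trim() !== "") {
      yield JSON.parse(chunk);
    }
  }
}

[ ...readMessages('{"a": 1}\n\n{\n  "b": 2\n}\n\n') ]; // -> [ { a: 1 }, { b: 2 } ]
```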
without raising an error
... since HTML/XML is not part of the JS grammar (at least not in legacy runtimes, i.e. those at the time of this writing).
ECMA-262 grammar
So, at minimum, we won't get any syntax errors. But the semantics of the constructs we use means that it's a valid expectation that the browser itself can execute this code itself—even though it is not strictly JS—because the expected semantics here conveniently overlap with some of JS's semantics.
offline documents
"[...] that is, ones not technically on the Web"
This poses a problem that we'll need to address.
Add a liaison/segue sentence here (after this one) that says "Browsers, in fact, were not designed with triple scripts in mind at all."
Browsers
"Web browsers"
This is of course ideal
huh?
Our main here is an immediately invoked function expression, so it runs as soon as it is encountered. An IIFE is used here since the triple script dialect has certain prohibitions on the sort of top-level code that can appear in a triple script's global scope, to avoid littering the namespace with incidental values.
Emphasize that this corresponds to the main familiar from other programming systems—that triple scripts doesn't just permit arbitrary use of IIFEs at the top level, so long as you write them that way. This is in fact the correct way to denote the program entry point; it's special syntax.
The code labelled the "program entry point" (containing the main function) is referred to as shunting block.
Preface this with "In the world of triple scripts"?
Also, we can link to the wiki article for shunting blocks.
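For orientation, a minimal sketch of the shape being described (my own approximation, not the tutorial's exact code):

```javascript
// Program entry point, i.e. the shunting block. It's written as an IIFE so
// nothing incidental leaks into the global scope, and in the triple script
// dialect this form is the designated way to mark main, not just a style
// preference.
(function main() {
  // program body goes here
})();
```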
Note that by starting with LineChecker.prototype.getStats before later moving on to LineChecker.analyze, we're not actually practicing top-down programming here...
It expects the system read call to return a promise that resolves to the file's contents.
Just say "It expects the read call to resolve to the file contents."?
system.print("\nThis file doesn't end with a line terminator.");
I don't like this. How about:
system.print("\n"); system.print("This file doesn't end with a line terminator.");
(This will separate the last line from the preceding section by two blank lines, but that's acceptable—who said there must only be one?)
What about a literate programming compiler that takes as input this page (as either Markdown or HTML) and then compiles it into the final program?
and these tests can be run with Inaft. Inaft allows tests to be written in JS, which is very similar to the triple script dialect. Inaft itself is a triple script, and a copy is included at tests/harness.app.htm.
Reword this to say "[...] can be run with Inaft, which is included in the project archive. (Inaft itself is a triple script, and the triplescripts.org philosophy encourages creators to make and use triple scripts that are designed to be copied into the project, rather than being merely referenced and subsequently downloaded e.g. by an external tool like a package manager.)"
We need to embed the Hypothesis client here to invite people to comment on this. I've heard that one of the things that made the PHP docs so successful is that they contained a comment section right at the bottom of every page.
(NB: I'm not familiar at all with the PHP docs through actual firsthand experience, so it may actually be wrong. I've also seen others complain about this, too. But seems good, on net.)
The project archive's subtree
Find a better way to say this. E.g. "The subdirectory for Part 1 from the project archive source tree"
[ 0, 0, 0, 1 ]
And of course there's a bug here. This should be [1, 0, 0, 1].
returns [ 0, 0, 0, 1 ]
We can afford to emphasize the TYPE family constants here by saying something like:
Or, to put it another way, given a statement `let stats = checker.getStats()`, the following results are true:

```javascript
stats[LineChecker.TYPE_NONE] // evaluates to `1`
stats[LineChecker.TYPE_CR]   // evaluates to `0`
stats[LineChecker.TYPE_LF]   // evaluates to `0`
stats[LineChecker.TYPE_CRLF] // evaluates to `1`
```
propertes
"properties"
returned
"... by getStats."
In fact, this is the default for DOS-style text-processing utilities.
Note that the example cited is "a single line of text". We should emphasize that this isn't what we mean when we say that this is the default for DOS-style text files. (Of course DOS supports multi-line text files. It's just that the last line will have no CRLF sequence.)
The hack uses some clever multi-language comments to hide the HTML in the file from the script interpreter, while ensuring that the documentation remains readable when the file is interpreted as HTML.
flems.io uses this to great effect.
The (much simpler) triplescripts.org list-of-blocks file format relies on a similar principle.
please. If I want to open a link in a new window, I'll do it myself!
Does the methodology described here (and the way it's actually described here) adequately address the equivalent of its irreducible complexity problem?
I'm reminded of comments from someone on my team a year or two after Chrome was released where they explained that the reason they used it was because it "takes up less space"—on screen, that is; when you installed it, the toolbars took up 24–48 fewer pixels (or whatever) than the toolbars under Firefox's default settings.
See also: when Moz Corp introduced Personas (lightweight themes) for Firefox. This was the selling point for, like, a stupid amount of people.
the web has become the most resilient, portable, future-proof computing platform we’ve ever created
as the ecosystem around it swirled, the web platform itself remained remarkably stable
There’s a cost to using dependencies. New versions are released, APIs change, and it takes time and effort to make sure your own code remains compatible with them. And the cost accumulates over time. It would be one thing if I planned to continually work on this code; it’s usually simple enough to migrate from one version of a depenency to the next. But I’m not planning to ever really touch this code again unless I absolutely need to. And if I do ever need to touch this code, I really don’t want to go through multiple years’ worth of updates all at once.
The corollary: you can do that (make it once and never touch it again) if you are using the "native substrate" of the WHATWG/W3C Web platform. Breaking changes in "JavaScript" or "browsers" are rarely actually that. They're project/organizational failures one layer up—someone (who doesn't control users' Web browsers and how they work) decided to stop maintaining something or published a new revision but didn't commit to doing it in a backwards compatible way (and someone decided to build upon that, anyway).
as much as I love TypeScript, it’s not a native substrate of the web
Web components encapsulate all their HTML, CSS and JS within a single file
Huh? There's nothing inherent to Web Components that makes this true. That's just how the author is using them.
That’s the honest-to-goodness HTML I have in the Markdown for this post. That’s it! There’s no special setup; I don’t have to remember to put specific elements on the page before calling a function or load a bunch of extra resources.1 Of course, I do need to keep the JS files around and link to them with a <script> tag.
There's nothing special about Web Components; the author could have just as easily put the script block itself there.
Rather than dealing with the invariably convoluted process of moving my content between systems — exporting it from one, importing it into another, fixing any incompatibilities, maybe removing some things that I can’t find a way to port over — I drop my Markdown files into the new website and it mostly Just Works.
What if you just dropped your pre-rendered static assets into the new system?
although they happened to be built with HTML, CSS and JS, these examples were content, not code. In other words, they’d be handled more or less the same as any image or video I would include in my blog posts. They should be portable to any place in which I can render HTML.
JSON deserializes into common native data types naturally (dictionary, list, string, number, null). You can deserialize XML into the same data types, but
This is pretty circular reasoning. JSON maps so cleanly to JS data types, for example, because JSON is JS.
It could trivially be made true that XML maps onto native data types if PL creators/implementors put such a data type (i.e. mixed content trees) into their programming systems... (And actually, given all the hype around XML ~20 years ago, it's kind of weird that that didn't happen—but that's another matter.)
```diff
- if (!(typeof data === 'string' || Buffer.isBuffer(data))) {
+ if (!(typeof data === 'string' || isUint8Array(data))) {
```
Better yet, just don't write code like this to begin with.
code leveraging Buffer-specific methods needs polyfilling, preventing many valuable packages from being browser-compatible
... so don't rely on it.
If the methods are helpful then reimplement them (as a library, even) and use that in your code. When passing data to code that you don't control, use the underlying ArrayBuffer instance.
The very mention of polyfilling here represents a fundamental misapprehension about how to structure a codebase and decide which abstractions to rely on and which ones not to...
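A sketch of the kind of boundary this implies (my own code, with a hypothetical helper name; nothing here relies on Node APIs beyond Buffer and the standard typed arrays):

```javascript
// A Node Buffer may be a view into a larger shared ArrayBuffer, so take its
// byteOffset/byteLength into account instead of handing off buf.buffer whole.
function toPortableBytes(buf) {
  return new Uint8Array(buf.buffer, buf.byteOffset, buf.byteLength);
}

// Consumers then only ever deal in strings and Uint8Array.
function accept(data) {
  if (!(typeof data === "string" || data instanceof Uint8Array)) {
    throw new TypeError("expected a string or Uint8Array");
  }
  // ...
}
```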
For security reasons, the nonce content attribute is hidden (an empty string will be returned).
Yet another awful technical design associated with CSP.
wokesockets
wat
this script runs every minute on a cronjob, and rebuilds my site if the git repo has been updated
Geez, talk about wasteful.
you'd need to wind up with external dependencies, since you'd likely need to rely on javascript
scrape user IP addresses
say you want to print the visiting users IP address - how would you do this on a statically generated website? to be honest i'm not sure. typically, you would scrape a header and display it
(I implement this second stock Firefox environment not with a profile but by changing the $HOME environment variable before I run this Firefox. Firefox on Unix helpfully respects $HOME and having the two environments be completely separate avoids various potential problems.)
Going by this explanation, what Siebenmann means by "not with a profile" is "not by using the profile manager". The revelation that you can use an explicitly redefined $HOME is a neat trick, but if I understand correctly still results in a different profile being created/used. Again, though: neat trick.
Much better if C vNext would just permit Pascal- (and now Go)-style ident: type declarations. It wouldn't even be hard for language implementers to support, and organizations could gradually migrate their codebases to the new form.
You can see how this would happen after seeing former UT Dean of so-called “Diversity, Equity and Inclusion”(DEI) Skyller Walkes, screaming at a group of students
I watched the clip and was prepared to see something egregious.
The characterization of Walkes as "screaming at a group of students" doesn't seem justifiable.
screaming at a group of students
This links to https://twitter.com/realchrisrufo/status/1654154804942258177
The important part, as is so often the case with technology, isn’t coming up with a solution to the post portability problem, but coming up with a solution together so that there is mutual buy-in and sustainability in the approach.
The solution is to not keep creating these fucking problems in the first place.
You can do this trick with the “view image” option in the right-click menu, too – Ctrl-clicking that menu item will open that image in its own new tab.
Not anymore; that menu item has been removed—and you can only use the "Open Image in New Tab" item now.
This is great stuff!
If you consider destructive to be great, then sure, it's great.
Yesterday I spent a few hours on setting up a website for my music, but then instead of launching it I created a Substack.
A big problem with what's in this paper is that its logical paths reflect the déformation professionnelle of its author and the technologists' milieu.
Links are Works Cited entries. Works Cited entries don't "break"; the works at the other end don't "change".
zero, it is reasonable to delete the grave-stone
No. It is never reasonable.
One response, suggested in Ashman andDavis [1998], is that referential integrityis more of a social problem than a tech-nical problem
Yes.
This is:
Ashman, Helen. “Electronic Document Addressing: Dealing with Change.” ACM Computing Surveys 32, no. 3 (September 2000): 201–12. https://doi.org/10.1145/367701.367702
If you think about it, even the callback function in a standard Array.prototype.filter() call is a selector.
Huh?
his changes
Ibid.
With his additional changes
NB: as of this writing, the user jviide has no public esquery repo. The merged pull request is here: https://github.com/estools/esquery/pull/134.
else if (i === key.length - 1)
This redundant check could be taken out of the loop. Since last is already "allocated" at function scope, a single line obj = obj[key.slice(last)] after the loop would do the same job, results in shallower cyclomatic nesting depth, and should be faster, too.
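Something like this, that is (a hypothetical reconstruction of the shape of the function, not the actual esquery source):

```javascript
// Hypothetical sketch: walk a dotted path like "a.b.c" through obj without
// testing for the final segment on every iteration; the last segment is
// resolved exactly once, after the loop.
function getPath(obj, key) {
  let last = 0;
  for (let i = 0; i < key.length; i++) {
    if (obj == null) { return undefined; }
    if (key[i] === ".") {
      obj = obj[key.slice(last, i)];
      last = i + 1;
    }
  }
  return obj == null ? undefined : obj[key.slice(last)];
}

getPath({ a: { b: { c: 42 } } }, "a.b.c"); // -> 42
```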
this netted another 200ms improvement
Takeaway: real world case studies have shown, insisting on using for...of and then transpiling it can cost you over half a second versus just writing a standard for loop.
we know that we're splitting a string into an array of strings. To loop over that using a full blown iterator is totally overkill and a boring standard for loop would've been all that was needed.
Yes! J*TDT applies, which in this case is: Just Write The Damn Thing.
Given that the array of tokens grows with the amount of code we have in a file, that doesn't sound ideal. There are more efficient algorithms to search a value in an array that we can use rather than going through every element in the array. Replacing that line with a binary search for example cuts the time in half.
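For reference, a generic version of that kind of replacement (my own sketch over a sorted array of numbers; the actual structures being searched in ESLint differ):

```javascript
// Binary search: index of the last element <= target in a sorted array of
// numbers (-1 if none), in O(log n) rather than a linear scan.
function lastIndexAtOrBelow(sorted, target) {
  let lo = 0, hi = sorted.length - 1, result = -1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] <= target) {
      result = mid;
      lo = mid + 1;
    } else {
      hi = mid - 1;
    }
  }
  return result;
}

lastIndexAtOrBelow([0, 14, 37, 90, 120], 40); // -> 2
```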
Standards are made by those who show up. But.. it is a privilege to have the opportunity to show up.
JavaScript is an interpreted language, not a compiled one, so by default, it will be orders of magnitude slower than an app written in Swift, Rust, or C++.
Languages don't fall into the category of being either compiled or not. Implementations do. And the misconception of compiled code being ipso facto faster is a common one, but it's a misconception nonetheless (I suspect most often held by people who've never implemented one).
This is a good example of something that deserves an upvote on the basis of being a positive contribution and/or provides a thought-provoking insight, even though I don't strictly agree with their conclusions or the opinionated parts of what they're saying; a modern package set of memory-safe implementations is something to consider along with what the failure to produce one will do to the project in the long-term. Whether ripgrep, exa, etc. are objectively or subjectively better than their forebears is a separate matter that is beside the point.
Tumblr (owned by Yahooath!)
NB: now Automattic (Wordpress)
I was browsing someone’s site yesterday, hosted on Wordpress, yay! Except it was throwing plugin error messages. Wordpress is still too hard to maintain. Wordpress is not the answer.
Princes! listen to the voice of God, which speaks through me! Become good Christians! Cease to consider armed soldiers, nobles, heretical clergy, and perverse judges, as your principal supporters: united in the name of Christianity, learn to accomplish all the duties which it imposes on the powerful. Remember that it commands them to employ all their force to increase, in the most rapid manner possible, the social happiness of the poor.
Markham's translation reads:
Princes,
Hearken to the voice of God which speaks through me. Return to the path of Christianity; no longer regard mercenary armies, the nobility, the heretical priests and perverse judges, as your principal support, but, united in the name of Christianity, understand how to carry out the duties which Christianity imposes on those who possess power. Remember that Christianity commands you to use all your powers to increase as rapidly as possible the social welfare of the poor!
The spirit of Christianity is meekness, gentleness, charity, and, above all, loyalty; its arms are persuasion and demonstration.
Markham's translation reads:
The spirit of Christianity is gentleness, kindness, charity, and above all, honesty; its weapons are persuasion and example
Whats the total power consumption of all Android devices? Shaving just 1% is probably a couple of coalfired power plants worth of CO2.
This is one of those times that makes me think, "Okay, is this person saying this because they're coming at from a position of principle, or is it opportunism?" I.e., are they just reaching for plausible arguments that will serve as the plausible means in service to their desired ends?
Because whatever that number is, it probably pales in comparison to the waste that has followed from the corruption of the fundamentals of the Web—in which every other site is using SPA frameworks and shooting Webpacked-and-bundled agglomerations of NPM modules down the tubes, resulting in 10x the waste that the widespread use of e.g. jQuery imposed 10+ years ago—with jQuery itself being the original posterchild for profligate waste wrt the Web. And yet, I'd bet many of the people supporting the commenter's position here would also be among the ones to celebrate monstrously complicated and bloaty geegaws that exist for the express purpose of letting you use "native" C/C++ libraries in Web apps through transcompilation.
Bare-bones setups are either presented as temporary – something you grow out of – or as some sort of hair-shirt hippie modernity avoidance thing – a refusal to engage with “modern” web development.
See also: https://www.colbyrussell.com/2019/03/06/how-to-displace-javascript.html
The amount of boilerplate and number of dependencies involved in setting up a web development project has exploded over the past decade or so. If you browse through the various websites that are writing about web development you get the impression that it requires an overwhelming amount of dependencies, tools, and packages.
my theory is that you can get a modern web dev setup without node or a package manager, using only a tiny handful of standalone utilities and browser dev tools
Highlighted by judell here: https://social.coop/@judell/111007954269882776
Reading through your link I caught myself thinking if I would put up with all those boilerplate nix steps just to add a new page to the site.
compute-heavy
What does "compute-heavy" mean here? How heavy, exactly?
We're setting these chemists up with conda in Ubuntu in WSL in a terminal whose startup command activates the conda environment. Not exactly a recipe for reproducibility after they get a new laptop.
First step: stop perpetuating the circularity of the reasoning behind the belief that Python is good for computational science.
I admire how nimble you are. I aspire to write blog posts at the drop of a hat like this, but I rarely do.
But you wrote this comment.
I was a great marketer. I was getting feedback from customers, and I’d pass on every list of what customers wanted to engineering and tell them that’s the features our customers needed.
The best way to learn is through apprenticeship -- that is, by doing some real task together with someone who has a different set of skills.
This is an underappreciated truth.
<ol><oln>(b)</oln><oli>No employer shall discriminate in any way on the basis of gender in the payment of wages, or pay any person in its employ a salary or wage rate less than the rates paid to its employees of a different gender for comparable work; [...]</oli></ol>
Mmmm... I dunno. HTML already has <dl>, <dt>, and <dd>. It seems adequate to just (re)-use it for this purpose. That's what a document of statutory law really is—a list of definitions, not an ordered list. They happen to be in order, usually. But what if Congress passed an act that put an item labeled 17 between items 1 and 3? Or π? Or 🌭 (U+1F32D)? (Or "U+1F32D" for that matter?) What fundamental thing is <ol> communicating that <dl> would fail at—to the point that it would compel someone to argue against the latter and insist only on the former?
There is one particular type of document in which the correct handling of the ordinal numbers of lists is paramount. A document type in which the ordinal numbers of the lists cannot be arbitrarily assigned by computer, dynamically, and in which the ordinal numbers of the lists are some of the most important content in the document.

I'm referring of course to law.

HTML, famously, was developed to represent scientific research papers, particularly physics papers. It should come as no surprise that it imagines documents to have things like headings and titles, but fails to imagine documents to have things like numbered clauses, the ordinal numbers of which were assigned by, for example, an act of the Congress of the United States of America.

Of course this is not specific to any one body of law - pretty much all law is structured as nested ordered lists where the ordinal numbers are assigned by government body. It is just as true for every state in the Union, every country, every province, every municipality, every geopolitical subdivision in the world.

HTML, from the first version right up to the present version, is fundamentally inimical to being used for marking up and serving legal codes as web pages. It can be done, of course - but you have to fight the HTML every step of the way. You have no access to any semantic markup for the task, because the only semantic markup for ordered lists is OL, which treats the ordinal numbers of ordered lists as presentation not content.
This is problematic if we wish to collect widespread metadata for an entity, for the purposes of annotation and networked collaboration. While nothing in the flat-hash ID scheme stops someone from attempting to fork data by changing even a single bit, thereby resulting in a new hash value, this demonstrates obvious malicious intention and can be more readily detected. Furthermore, most entities should have cryptographic signatures, making such attacks less feasible. With arbitrary path naming, it is not clear whether a new path has been created for malicious intent or as an artifact of local organizational preferences. Cryptographic signatures do not help here, because the original signed entity remains unchanged, with its original hash value, in the leaf of a new Merkle tree.
Author is conflating multiple things.
Retrieving desired revisions requires knowing where to look
This is one failure of content-based addressing. When the author controls the shape of identifiers (and the timing of publication), they can just do the inverse of Git's data model: they publish forward commitments--i.e., the name that they intend the next update to have. When they want to issue an update, they just install the content on their server and connect that name to it.
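A sketch of how such a forward commitment might be represented (the field names and URLs here are entirely made up):

```javascript
// Each published revision names, in advance, the URL where the *next*
// revision will appear; that URL 404s until the author installs content there.
const revision3 = {
  id: "https://example.org/notes/essay/rev-3",
  prev: "https://example.org/notes/essay/rev-2",
  next: "https://example.org/notes/essay/rev-4", // forward commitment
  content: "…",
};
```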
Two previously-retrieved documents cannot independently reference each other because their identities are bound to authoritative network services.
Well, they could. You could do it with an implementation of a URI-compatible hypertext system that uses really aggressive caching.
Any addressable thing will have an identifier.
Hard-Copy Print Options to Show Address of Objects and Address Specification of Links so that, besides online workers being able to follow a link-citation path (manually, or via an automatic link jump), people working with associated hard copy can read and interpret the link-citation, and follow the indicated path to the cited object in the designated hard-copy document.
Link Addresses That Are Readable and Interpretable by Humans
Every Object Addressable: in principle, every object that someone might validly want/need to cite should have an unambiguous address
This is a good summation of what the Web was supposed to be about. Strange, 30 years on, how little we've chipped away at achieving this goal.
designated targets in other mail items
MIME has ways to refer internally to content delivered in the same message. But what about other (existing) content? Message-ID-based URIs (let alone URLs) are non-existent (to the best of my knowledge).
I know the imap URI scheme exists (I see imap URIs all the time in Thunderbird), but they seem unreliable (not universally unambiguous), although I could be wrong.
Newsgroup URIs are also largely inadequate.
The Hyperdocument "Library System" where hyperdocuments can be submitted to a library-like service that catalogs them and guarantees access when referenced by its catalog number, or "jumped to" with an appropriate link. Links within newly submitted hyperdocuments can cite any passages within any of the prior documents, and the back-link service lets the online reader of a document detect and "go examine" any passage of a subsequent document that has a link citing that passage.
That this isn't possible with open systems like the Web is well-understood (I think*). But is it feasible to do it with as-yet-untested closed (and moderated) systems? Wikis do something like this, but I'm interested in a service/community that behaves more closely in the concrete details to what is described here.
* I think that this is understood, that is. That it's impossible is not what I'm uncertain about.
or "execute the process identified a the other end."
Interesting that this is considered "basic".
Knowledge-Domain Interoperability and an Open Hyperdocument System
It's this: https://doi.org/10.1145/99332.99351.
(Also available from https://www.dougengelbart.org/content/view/114/.)
```javascript
case Tokenizer.LDR: return 0x00 * RSCAssembler.U_BIT;
case Tokenizer.LDB: return 0x00 * RSCAssembler.U_BIT;
case Tokenizer.STR: return 0x01 * RSCAssembler.U_BIT;
case Tokenizer.STB: return 0x01 * RSCAssembler.U_BIT;
```
Huh?
0x00 * RSCAssembler.U_BIT
Huh?
Comparing pancakes to file management is an apples to oranges comparison.
From this point onwards, I'm going to insist that anything that uses the phrase "[...] apples and oranges" omit it in lieu of the phrase "like comparing filesystems and pancakes".
You'll likely use some libraries where people didn't use type checkers and wrote libraries in a complicated enough way that the analysis cannot give you an answer.
You've chosen a bad library and complain about how bad that library is. That's dumb. (There's no line of reasoning for the argument being made here that doesn't reveal a double standard.)
The entire premise (you'll "likely" be using libraries you don't want to—as if it's something you're forced into doing) is flawed. It basically reduces down to the joke from Annie Hall—A: "The food here is terrible" B: "Yes, and such small portions!"
And, of course, just to be completely clear, this is valid syntax:

```javascript
let _true = true;
_true++;
_true; // -> 2
```
Of course it is. Why wouldn't it be?
also don't ever give someone an unsolicited code review on Twitter. It's rude.)
This reminds me of people who have encountered others complaining about/getting involved with something that the speaker has decided "isn't any of their business" (e.g. telling someone without a handicap placard not to park in a handicap space) who then go on and rant about it and demand that others not to tell them what to do.
In other words:
Don't ever make unprompted blanket criticism+demands like saying "Don't ever [do something]. It's rude." That's rude.
Another way I get inspiration for research ideas is learning about people's pain points during software development. Whenever I hear or read about difficulties and pitfalls people encounter while programming, I ask myself "What can I do as a programming language researcher to address this?" In my experience, this has also been a good way to find new research problems to work on.
The society as a whole is neither better nor worse off.
Non-stupid people always underestimate the damaging power of stupid individuals. In particular non-stupid people constantly forget that at all times and places and under any circumstances to deal and/or associate with stupid people always turns out to be a costly mistake.
Despite its ordinality, the Fourth law is the one most worth keeping in mind.
people who were wise from the beginning
That is, people for whom their present misfortunes have nothing to do with any past (or present) tendencies of stupidity.
This is why I build my personal projects in PHP even though I'm not really a fan. I use PHP and JQuery. It'll work basically forever and I can come back to it in 15 years and it'll still work.
When people mistakenly raise concerns about the Web platform being fragile, point to this common meme.
The worst part is that Let's Encrypt is preventing us from building a real solution to the problem. The entire certificate authority system is a for-profit scam. It imparts no security whatsoever. But Google gets its money, so it's happy. That means Chrome is happy, and shows no warnings, so the end user is happy too. That makes the website owner happy, and everyone is happy happy happy. But everything is still quite fundamentally fucked. Before Let's Encrypt, people were at least thinking about the problem
The validity of the author's conclusions notwithstanding, there needs to be a name for this phenomenon.
Previously: https://www.colbyrussell.com/2019/02/15/what-happened-in-january.html#unacknowledged-un-
TypeTest(x, obj.type, FALSE) ; x.type := ORB.boolType
The explicit x.type assignment here is redundant, because TypeTest will have already done it (in this case because the third argument is false).
I believe that comma thing was added recently.
I do kind of wish I had learned about big-endian dating sooner, though. But alea iacta est and everything.
Not at all (re "alea iacta est"). Get this: you can at any time make new, perfected labels and affix them to the spines, covering the old ones, but leaving them in place—just like you augmented the original manufactured product with the first labels. This would not be a destructive act like rebinding all the Novel Novel workbooks.
The sketchbook should be workman-like; it’s not a fussy tool for self expression, it’s a daily tool.
This should be the mindset of people self-publishing on the Web, too. Too bad it's not.