This type of complexity has nothing to do with complexity theory
Also not to be confused with the notion of Kolmogorov complexity from information theory. (At least not directly—but that isn't to say there is no relation there.)
Pretty nuts that Safari isn't open source. I thought for sure that Edge was going to be fully open source, both before and after the Blink conversion. Why even build closed source browsers in 2023?
In the end, they added a special browser quirk that detects our engine and disables OffscreenCanvas. This does avoid the compatibility disaster for us. But tough luck to anyone else.
I agree that this approach is bad. I hate that this exists. The differences between doctype-triggered standards and quirks mode was bad enough. This is so much worse—and impacts you even when you're in ostensible standards mode.
I tried my best to persuade Apple to delay it, but I only got still-fairly-vague wording around it being likely to ship as it was.
Huh? Why? Why even waste the time? Just go fix your code.
preserves web compatibility
"... you keep using that word"
Safari is shipping OffscreenCanvas 4 years and 6 months after Chrome shipped full support for it, but in a way that breaks lots of content
I don't think that has been shown here? The zip.js stuff breaking is one thing, but the poor error detection regarding off-screen canvas doesn't ipso facto look like part of a larger pattern.
doesn't Apple care about web compatibility? Why not delay OffscreenCanvas
Answer: because they care about Web compatibility. If they delay X because Y is not ready, then that's ΔT where their browser remains incompatible with the rest of the world, even though it doesn't have to be.
Firstly, my understanding of the purpose of specs was to preserve web compatibility - indeed the HTML Design Principles say Support Existing Content. For example, when the new Array flatten method name was found to break websites, the spec was changed to rename it to flat so it didn't break things. That demonstrates how the spec reflects the reality of the web, rather than being a justification to break it. So my preferred solution here would be to update the spec to state that HTML canvas and OffscreenCanvas should support the same contexts. It avoids the web compatibility problem we faced (and possibly faced by others), and also seems more consistent anyway. Safari should then delay shipping OffscreenCanvas until it supports WebGL, and then all the affected web content keeps working.
This is a huge reach.
Although it's debatable whether having mismatched support is a good idea for a vendor, arguing that it breaks the commitment to compatibility is off. Construct broke not because something was removed, but because something was added and your code did not handle that well.
MDN documentation mentioned nothing about inconsistent availability of contexts
Two things:
* Why would it have mentioned anything? It wouldn't have. It hadn't shipped yet.
* MDN is not prescriptive; it's written by volunteers
typeof OffscreenCanvas !== "undefined"
The second = sign is completely superfluous here. Only one is necessary.
Construct requires WebGL for rendering. So it was seeing OffscreenCanvas apparently supported, creating a worker, creating OffscreenCanvas, then getting null for a WebGL context, at which point it fails and the user is left with a blank screen. This is in fact the biggest problem of all.
Well, the biggest problem is that anything can ever lead to a blank screen because Construct isn't doing simple error detection.
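For illustration, here's a minimal sketch of the kind of detection that avoids the blank screen (the function name and the fallback suggestion are mine, not Construct's actual code):

// Verify the whole rendering path before committing to it.
function canRenderInWorker() {
  if (typeof OffscreenCanvas === "undefined") return false;
  try {
    // The constructor existing doesn't mean the context you need exists.
    var canvas = new OffscreenCanvas(1, 1);
    return canvas.getContext("webgl") !== null;
  } catch (e) {
    return false;
  }
}

// If this returns false, render to a regular canvas on the main thread
// (or at least show an error message) rather than a blank screen.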
My father owed this man money, and this was his revenge.
If you are allowed by someone to owe them money, then what are you getting revenge for...?
the expert blind spot effect [25], when tutorial creators do not anticipate steps where novice tutorial takers may have difficulty
I call this "developer tunnel vision".
After 10 years of industry and teaching nearly 1000 students various software engineering courses, including a specialized course on DevOps, I saw one common problem that has not gotten better. It was amazing to see how difficult and widespread the simple problem of installing and configuring software tools and dependencies was for everyone.
Even notebooks are still problematic; for example, this study found that only 25% of Jupyter notebooks could be executed, and of those, only 4% actually reproduced the same results.
engage
In other words: spam
This will also take the stress away from the developers in maintaining the SublimeText core, which will be supported by the community while they can focus on pro features for the text editor.
I feel that open sourcing SublimeText is the only way for SublimeText to be relevant and compete against VSCode.
The purpose of SublimeText is not to be "relevant" or "compete" against VSCode in the social media influencer sense of relevance and competition. It is to be a text editor that makes the author money both directly and indirectly (i.e. by selling licenses and being the kind of text editor that the author themselves uses to make software).
Is It Time to Open Source SublimeText?
This is such a bizarre article and headline. It's almost clickbait.
Some links are sensitive, and their owners do not want them easily discovered.
This is folly.
Identifiers are public information.
https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-806
computer-supported collaborative work (CSCW)
Glenn, a seasoned pilot and astronaut, had just purchased an Ansco Autoset camera for a mere $40 from a drugstore
Whether it was from a drugstore or not, that $40 in 1962 was like $400 today...
On the one hand, it's a drag to do two different implementations, but on the other hand, it's a drag to have one of the two problems be solved badly because of baggage from the other problem; or to have the all-singing-all-dancing solution take longer than two independent solutions together.
Premature generalization is the root of all evil?
Well really the requirement is "small changes should be fast", right?
Calling out the X/Y Problem.
differentiating between using a database for indexing and as a canonical data store
Most people who think they need a database really just need a cache? See jwz on the Netscape mail client:
So, we have these "summary files," one summary file for each folder, which provide the necessary info to generate that message list quickly.
Each summary file is a true cache, in that if it is deleted (accidentally or on purpose) it will be regenerated when it is needed again
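That's the defining property of a true cache, and it's easy to state in code. A hypothetical sketch (the helper names are mine, not jwz's actual implementation):

// Derived data you can always throw away and rebuild on demand.
function getSummary(folder) {
  var summary = readSummaryFile(folder);    // may have been deleted
  if (summary === null) {
    summary = scanFolderForHeaders(folder); // slow path: regenerate
    writeSummaryFile(folder, summary);      // repopulate the cache
  }
  return summary;
}

The canonical store is the folder itself; the summary file is never the only copy of anything.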
100 lines of Python
Why would you pick Python for this?
There are no broader legal stakes here
That's not a position supported by the opinion itself. It claims (explicitly) that there are.
“For example, I personally believe that Visual Basic did more for programming than Object-Oriented Languages did,” Torvalds wrote, “yet people laugh at VB and say it's a bad language, and they've been talking about OO languages for decades. And no, Visual Basic wasn't a great language, but I think the easy DB interfaces in VB were fundamentally more important than object orientation is, for example.”
never once reasoning about physical locations, hardware, operating systems, runtimes, or servers
... but instead redirecting all the cognitive load that would have gone to that task to reasoning about AWS infrastructure...
Coincidentally or not, the demise of Visual Basic lined up perfectly with the rise of the web as the dominant platform for business applications.
... which, as it turns out, is exactly what Yegge said he thought was going to happen in his response to the question that Linus was answering.
Almost all Visual Basic 6 programmers were content with what Visual Basic 6 did. They were happy to be bus drivers: to leave the office at 5 p.m. (or 4:30 p.m. on a really nice day) instead of working until midnight; to play with their families on weekends instead of trudging back to the office. They didn't lament the lack of operator overloading or polymorphism in Visual Basic 6, so they didn't say much.

The voices that Microsoft heard, however, came from the 3 percent of Visual Basic 6 bus drivers who actively wished to become fighter pilots. These guys took the time to attend conferences, to post questions on CompuServe forums, to respond to articles. Not content to merely fantasize about shooting a Sidewinder missile up the tailpipe of the car that had just cut them off in traffic, they demanded that Microsoft install afterburners on their buses, along with missiles, countermeasures and a head-up display. And Microsoft did.
It gave me the start in understanding how functions work, how sub-procedures work, and how objects work. More importantly though, Visual Basic gave me the excitement and possibility that I could make this thing on my family's desk do pretty much whatever I wanted
“The prevailing method of writing Windows programs in 1990 was the raw Win32 API. That meant the 'C' Language WndProc(), giant switch case statements to handle WM_PAINT messages. Basically, all the stuff taught in the thick Charles Petzold book. This was a very tedious and complex type of programming. It was not friendly to a corporate ‘enterprise apps' type of programming,”
Yes, that is true. There's nothing you can do about that without breaking basic web expectations of URLs staying the same. The new endpoints can serve up the old content or Announce references to them, but the old URLs do need to continue resolving and at a minimum serve up a redirect if you want maximum availability. It would be a nice improvement to have a URL scheme that allowed referencing posts relative to a webfinger lookup to reduce the impact of that.
Consider also a change to the conventions of UGC, where service operators give control of the objects (URLs) "owned" by a given user over to them, the owner. You should be able to connect your account with a request servicer like Cloudflare. You upload a document specifying how a worker should handle the request to your servicer of choice, inform the website operator that you'd like to route your requests through the servicer, and you're good.
Browser-based interfaces are slow, clumsy, and require you to be online just to use them.
No they don't.
This conflates the runs-in-a-browser? property with the depends-on-mobile-code? property.
MDN is a documentation/tutorial website run by the creators of Firefox
mm... not really
To achieve the goals of e-Science, we must change research culture globally
This is such a good title.
NOTE: Cyren URL Lookup API has 1,000 free queries per month per user. COMPLETELY SEPARATE NOTE: You can use services like temp-mail to create temporary email addresses.
Just be honest about your scumminess, instead of trying to be cute.
Substack is growing fast: they now have 1M+ paid subscriptions but apparently generate no revenue. Which is already worrying to me. Because it means that yes, they can keep running like this if they keep getting investments but at some point something has to change.
What if someone exploited this for a conversion strategy—and it was planned that way all along?
Startup A receives VC investment. They spend it wooing creators to their platform, and those creators are able in turn to make a profit. The startup doesn't take a cut. The signs that they are going to crash and burn appear. Suddenly, 6 months before even the most pessimistic critics would have guessed, the startup announces that it is the end of their incredible journey. They warn everyone that in two weeks it will flip to read-only, and then 4 weeks after that it will go dark. Everything goes nuts. The creators were relying on the platform themselves to make money. No platform means no money.

Suddenly a solution appears: Startup B. They offer more or less a drop-in replacement for Startup A's highest revenue-generating users. The only catch is that Startup B's plans cost money. Startup C also appears, along with Startup D, each catering to a different segment of stalwarts who haven't signed a deal with B. In fact, B then announces that they're investing in C and D, in order to promote a healthy ecosystem. Somehow A appears and says that they're investing in C and D, too.

Meanwhile, A's sunset never happened—a month after the site went read-only, it's still in read-only mode. Then A announces low-cost paid plans, flips back to read-write, and opens for new signups, having successfully converted the most lucrative clients to the more expensive plans with B, since they knew it was in their best interests to maintain continuity of revenue no matter the cost.
I could port it to Hugo or Jekyll but I think the end result would make it harder to use, not simpler.
Could this same design—or a similar one—be made available in a simpler form?
Yes.
I dislike the concept of editing old content on personal sites.
Does that dislike extend to the reformatting of old publications e.g. when you pick a new template for your site? I'd guess not, but I'd argue that you should at least consider not doing that, either.
haha. let’s not do that.
... unless...?
perhaps subconsciously i have carried over those principles here
Better analogy: in real life you can't actually unpublish something. At best you can go around trying to snatch up copies and then burn them. More realistically, you can publish a new edition with corrections to errata incorporated, but—notably—the "bad" version will always be "Blah Blah Blah, first ed." The existence of the second edition doesn't erase the first one from people's bookshelves.
It's a consequence of the Web and of how poorly orgs' information architecture is carried out that foo.example.com/bar can be one thing one day and then another thing entirely the next day. If we used URLs like <https://mitpress.mit.edu/Zachary, G. Pascal. Endless Frontier: Vannevar Bush, Engineer of the American Century. 1st ed. MIT Press, 1999.> would this be as big of a problem? Would doing so nudge born-digital documents in the same direction?
that’s not to say things should NEVER be edited
Edit it, sure. But don't clutter the ability to unambiguously refer to the previous version (by the same name it was given initially) as a way to distinguish it from later versions.
i don’t think it was designed to replicate the file and file folder model
In my experience the file-and-folder model is the reason behind so much URL breakage. People don't seem to realize that even if you have foo/bar/ and foo/baz/, that doesn't mean you need a view where bar/ and baz/ (among other things) contribute to the "clutter" perceived when gazing upon something called foo/ (with foo/, in turn, cluttering up whatever it's "inside").
It's this perceived clutter and the compulsion to declutter that leads to people moving stuff around in the pursuit of a more legible model.
in my experience, google docs documents are very rarely if ever deleted. every organization i’ve ever done work for that uses google workspace has a problem with document bloat where google drive is just a mess of disorganized files, and document management is a job in itself
There's a lot wrong with Google Docs, but the fact that documents stick around is not one of them. Documents should stick around.
Use GitHub issues for blog comments, wiki pages and more!
No.
The only exception is a page which is deliberately a "latest" page
Nah. The latest URI should be a (temporary) redirect to the canonical URI of whatever the latest version is.
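Concretely, the exchange would look something like this (paths illustrative):

GET /spec/latest HTTP/1.1

HTTP/1.1 302 Found
Location: /spec/v4.2

The "latest" URI stays stable, every version keeps its own permanent URI, and nothing ever has to mutate in place.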
There are no reasons at all in theory for people to change URIs (or stop maintaining documents)
"Don't change your URIs" and "don't stop maintaining your documents" is contradictory.
If Kartik has a document published at /about in 2022, and then when I visit /about in 2023 what it said before is no longer there and it says something else instead (because he has been "maintaining" "it"), then that's a URI change. These are two separate documents, and the older one used to be called /about but now the newer one is.
During 1999, http://www.whdh.com/stormforce/closings.shtml was a page I found documenting school closings due to snow. An alternative to waiting for them to scroll past the bottom of the TV screen! I put a pointer to it from my home page.
It's actually the expectation that /stormforce/closings.shtml should mutate, reflecting recency, that is anathema to the project here...
By reducing the duration of operations, he increased the chances of patient survival, saved thousands of lives, and pissed off the surgeons after they found out that he used the same methods for bricklayers. Surely those holier-than-thou doctors deserved better than to be compared to a bricklayer. 😉
The tone immediately makes me question the credibility/reliability of the information in this piece. (Perhaps, though, I made a mistake in not realizing not to expect too much from a site calling itself "allaboutlean.com"...)
He also optimized surgeons’ work, establishing the now-common method of a nurse handing the instruments to the surgeon rather than the surgeon turning around and looking for the right tool.
Not unlike shopping carts and the modern grocery store (Piggly Wiggly), footnotes, and page numbers, this is something that had to be invented.
Notably, though, it is not a market product.
likely the new people learning to code and yelling about the new shiny libraries they found
Bart asked me about what it is that I think causes NPM to be so bad, generally (or something like that), and I responded with the one-word answer "insecurity".
I think "striving for acceptance" is a better, more diplomatic way to put it.
What do you typically use?
Imagine if they answered "Google Docs, and then pick 'HTML' when you save it".
It's the sort of thing that seems obviously wrong on its face, but in practice is more expedient and not egregiously worse than the accepted alternative.
A hypertext link is a goto.
This is an unfinished form of the paper that's available (without broken inline images) here:
http://diglib.stanford.edu:8091/diglib/pub/reports/commentor.html
A draft(?) version with inline comments by Winograd: http://hci.stanford.edu/~winograd/papers/annotations.html
Traveller:
Try https://dougengelbart.org/content/view/114/ instead.
If you truly want to understand NLS, you have to forget today. Forget everything you think you know about computers. Forget that you think you know what a computer is. Go back to 1962. And then read his intent.
Alternatively, try cajoling yourself to invert the "[kind of like] the present, but cruder" thinking and frame the present in terms of the past—with present systems being "Engelbart, implemented poorly".
If CSS names change
wat
I should be able to edit after I publish.
'k, but we should also be able to see (a) that it has been edited, (b) what was edited, (c) how to unambiguously refer to a particular revision. To not offer the ability to do so is to take advantage of something that is technically achievable given the architecture of the Web but violates the original intent (i.e. to give someone a copy that looks like this at one point and then when they or someone else asks for that thing at a later date to then lie and say that it really looks like that).
One of the reasons this is so complicated is that there’s no simple or fast way to pay out musicians or labels for songs that are streamed in podcasts over RSS.
WTF? This has fuck-all to do with RSS.
The burden of resubscribing on a per-podcast basis every 7-15 days goes up exponentially as the podcasts being monitored grows into 6 or 7 digits.
Mmm... how? It's just linear, unless I'm missing something.
If you want to know within 1 minute if a podcast has a new episode
You don't need to do that.
Also: this problem is not specific to podcasting. It affects everything to do with RSS, generally.
contains
read: links to
when you try to simulate it on the screen it not only becomes silly but it slows you down
We've come up with a rule that helps us here: a change that updates node_modules may not touch any other code in the codebase.
This makes it sound like a hack/workaround, but to want to do otherwise is to want to do something that is already on its face wrong. So there's no issue.
Yes, this can be managed by a package-lock.json
This shouldn't even be an argument. package-lock.json isn't free. It's like cutting all foods with Vitamin C out of your diet and then saying, "but you can just take vitamin supplements." The recommended solution utterly fails to account for the problem in the first place, and therefore fails to justify itself as a solution.
It's unlikely to matter from a performance perspective. If you're only going to load, it doesn't really matter.
Uh... what? This is a total shifting of the goalposts.
“Why don’t you just” is not an appropriate way to talk to another adult. It’s shorthand for, “I have instantaneously solved your problem because I am The Solution Giver.”
Excellent summation.
It isn't a good long term solution unless you really don't care at all about disk space or bandwidth (which you may or may not).
Give this one another go and think it through more carefully.
This web site is maintained by Tim Kindberg and Sandro Hawke as a place for authoritative information about the "tag" URI scheme. It is expected to stay small and simple.
Emphasis: last sentence
The common perception of the Web as a sui generis medium is also harmful. Conceptually, the most applicable standards for Web content are just the classic standards of written works, generally. But because it's embodied in a computer, people end up applying the standards they have in mind for e.g. apps.
You check out a book from the library. You read it and have a conversation about it. Your conversation partner later asks you to tell them the name of the book, so you do. Then they go to the library and try to check it out, but the book they find under that name has completely different content from what you read.
He would say, “To be early is to be on time. To be on time is to be late.”
I abhor tardiness and agree that being early is good advice, but this is a fucking stupid saying.
Being on time is on time.
(I mean this generally—not specific to the setting of this article.)
required us to show up for concerts at least 30 minutes early. If we were not 45 minutes early, we were marked as tardy
"30 minutes", immediately followed by "45 minutes"... What?
As for committing node_modules, there are pros and cons. Google famously does this at scale and my understanding is that they had to invest in custom tooling because upgrades and auditing were a nightmare otherwise. We briefly considered it at some point at work too but the version control noise was too much.
If you don't want version control, then that's your choice, but admit (ideally out loud for others to hear, but failing that then at least to yourself) that that's what you're about.
What problem does this try to solve?
Funny (and ironic) that you should ask...
I myself have been asking lately, what problem does the now-standard "Run npm install after you clone the repo" approach solve? Can you state the NPM hypothesis?
See also: builds and burdens
I'm not going to make the same defenses that folks on HN prefer.
But the problem with Casey's worldview is that it provides no accommodations for the notion of zero-cost abstractions.
absolute gem of a book, I use it for my compilers class: https://grugbrain.dev/#grug-on-parsing
I didn't realize recursive descent was part of the standard grugbrain catechism, too, but it makes sense. Grugbrain gets it right again.
Not unrelated—I always liked Bob's justification for using Java:
I won't do anything revolutionary[...] I'll be coding in Java, the vulgar Latin of programming languages. I figure if you can write it in Java, you can write it in anything.
https://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-expression-parsing-made-easy/
As is evident, this is a structured document. The structure is specified as HTML using tags to denote HTML elements, and a styling language called CSS is used to specify rules that use selectors to match elements. The desired styles can be applied to matched elements by specifying the properties which should take effect for each rule.
This goes on to provide a bunch more info with the express purpose of making it possible to 1. Print out this page, and then 2. Recreate the whole thing by hand if you wanted to, using only the printed page as reference.
Could easily add a section that describes a bookmarklet that you could use to transform the "live" (in-browser) document into something formatted like the one at "This page is a truly naked, brutalist html quine" https://secretgeek.github.io/html_wysiwyg/html.html.
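A crude sketch of such a bookmarklet, which just swaps the page for its own escaped source (the secretgeek page instead restyles the live elements in place, which takes more work):

javascript:(function () {
  var src = document.documentElement.outerHTML;
  var pre = document.createElement("pre");
  pre.textContent = src; // textContent displays the markup as visible text
  document.body.replaceChildren(pre);
})();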
Note that if you add any highlights using Hypothes.is to any of the CSS code blocks here, it will break them.
If you're looking for Stavros's "no-bullshit image host", that's https://imgz.org/
After running code to load all of the outages by loading zoomed-in parts of the map, we verify that the number of outages we found matches the summary’s number of total outages. If it doesn’t, we don’t save the data, and we log an error.
NB: there may be a race condition here? In which case, running into errors should be (one) expected outcome.
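i.e., something like this hypothetical sketch (all the helper names are stand-ins, not their actual code):

// The summary can change between reading it and finishing the
// per-tile scan, so treat a mismatch as an expected outcome.
const before = await fetchSummaryTotal();
const outages = await scanAllZoomedTiles();
const after = await fetchSummaryTotal();
if (before !== after) {
  logError("summary changed mid-scan; discarding and retrying");
} else if (outages.length !== after) {
  logError("tile scan disagrees with summary; not saving");
} else {
  save(outages);
}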
currentEtor
ETOR probably stands for Estimated Time of Restoration
If HTML was all the things we wanted it to be, we designed it to be, if reality actually matched the fantasies we tell ourselves in working group meetings, then mobile apps wouldn't be written once for iOS and once for Android and then once again for desktop web, they'd be written once, in HTML, and that's what everyone would use. You wouldn't have an app store, you'd have the web.
This is stated like unanimous agreement is a foregone conclusion.
The Web is for content. Just because people do build in-browser facsimiles of mobile-style app UIs doesn't mean that the flattening of content and controls into a single stream is something that everyone agrees is a good thing and what should be happening. They should be doing the opposite—curbing/reining it in.
for all of the work that we've put into HTML, and CSS, and the DOM, it has fundamentally utterly failed to deliver on its promise
You mean your promise—the position of the Web Hypertext Application Technology Working Group.
Have you considered that the problem might have been you and what you were trying to do? You're already conceding failure at what you tried. Would it be so much worse to say that it was the wrong thing to have even been trying for?
we will only gain as we unleash the kinds of amazing interfaces that developers can build when you give them the raw bedrock APIs that other platforms already give their developers
You mean developers will gain.
they're holding developers back
Fuck developers.
Jesus fucking Christ. Fuck this shit.
Developers are scrambling to get out of the web and into the mobile app stores.
This isn't new. Also: good—application developers shouldn't be the only ones holding the keys to the kingdom when it comes to making stuff available on the Web. Authors* and content editors should have those keys.
* in the classic sense; not the post-millennium dilution/corruption where "authors" is synonymous with the tautologically defined "developers" that are spoken of when this topic is at the fore
Checking your own repos on a new computer is one thing… inheriting someone else’s project and running it on your machine in the node ecosystem is very rough.
On HN, the user bitwize (without knowing he or she is doing so) summarizes the situation described here (the first half of it, at least):
The appeal of JavaScript when it was invented was its immediacy. No longer did you have to go through an edit-compile-debug loop, as with Java, or even an edit-upload-debug loop as with a Perl script, to see the changes you made to your web-based application. You could just mash reload on your browser and Bob's your uncle!
The JavaScript community, in all its wisdom, reinvented edit-compile-debug loops for its immediate, dynamic language and I'm still assmad about it. So assmad that I, too, forgo all that shit when working on personal projects.
more tips for no-build-system javascript
Basically, ignore almost everything that Modern JS practitioners tell you that you need to be doing. You're halfway there with this experiment.
One of the most interesting and high-quality JS codebases that has ever existed is all the JS that powers/-ed Firefox, beginning from its conception through to the first release that was ever called "Firefox", the Firefox 3 milestone release, and beyond. To the extent that there was any build system involved (for all intents and purposes, there basically wasn't), the work it performed was very light. Basically a bunch of script elements, and later Components.utils.import calls for JSMs (NB: not to be confused with NodeJS's embarrassing .mjs debacle). No idea what things are like today, but in the event that there's a lot of heavy, NodeJS-style build work at play, it would be wrong to conclude that it has anything to do with necessity e.g. after finally reaching the limits of what no-build/low-build can give you (rather than just the general degradation of standards across Mozilla as a whole).
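For reference, the heavy end of that "very light" work looked roughly like this (from memory, so treat the specifics loosely):

<script type="application/x-javascript" src="chrome://browser/content/utilityOverlay.js"/>

Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");

Declarative includes plus one-line imports: no bundler, no transpiler, no build graph.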
But my experience with build systems (not just Javascript build systems!), is that if you have a 5-year-old site, often it’s a huge pain to get the site built again.
Together we seek the best outcome for all people who use the web across many devices.
The best possible outcome for everyone likely includes a world where MS open sourced (at least as much as they could have of) Trident/EdgeHTML—even if the plan still involved abandoning it.
The compiler recognizes the syntax version by the MODULE keyword; if it is written in small caps, the new syntax is assumed.
Ooh... this might benefit from improvement. I mentioned to Rochus the benefits of direct (no build step) runnability in the vein of Python or JS, and he responded that he has already done this for Oberon+ on the CLI/CLR.
For the same reasons that direct runnability is attractive, so too might be the ability to copy and paste to mix both syntaxes. (Note that this is an entirely different matter from whether or not it is a good idea to commit files that mix upper and lower case; I'm talking about friction.) Then again, maybe not—how much legacy Oberon is being copied and pasted?
I agree, of course, with the criticism of the price point. As I often say, $9.99/month (or even $4.99/month) is more expensive than premium email—and no matter how cool you think your thing is, it's way less important than email. You should always return something for ~$20, especially if you already have a free tier. (When I say "for $20" here, I'm talking about a one time payment, or on a subscription basis that maxes out at $20/yr.)
The following musings are highly specific to the market for what's being sold here.
Paying $20 should get you something that you aren't bothered about again for the next year. Maybe to make it even easier, enable anyone to request a refund of their $20 for any reason within the first 7 days. This gives a similar feel to a free trial, but it curbs abuse and helps target serious buyers in the first place. In the event that 7 days is not enough time even for people to convince themselves that they need it, maybe keep open the ability to use a severely limited version of the service for the remainder of the year. E.g. you can continue to log in and simulate what you'd get with the full version, but the results are only accessible to you, because you can't publish them and/or share links with anyone who doesn't have access to your account.
Despite being a Rust apologist and the fact that this paper makes Rust look better than its competitors, Steve Klabnik says this paper is quite bad and that he wishes people would stop referencing it.
For Oberon+, try https://oberon-lang.github.io/
We have CSV files as a kind of open standard
The W3C actually chartered a CSV working group for CSV on the Web. Their recommendation resolves ambiguities of the various CSV variants, and they even went on to supercharge the format in various well-defined, nice-to-have ways.
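For instance, here's a minimal CSVW metadata file (for a hypothetical example.csv) that pins down the dialect and column types that plain CSV leaves ambiguous:

{
  "@context": "http://www.w3.org/ns/csvw",
  "url": "example.csv",
  "dialect": { "delimiter": ",", "header": true },
  "tableSchema": {
    "columns": [
      { "name": "date", "datatype": "date" },
      { "name": "amount", "datatype": "decimal" }
    ]
  }
}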
Here is a larger example. In this case, the directory structure of the modules corresponds to the import paths.
Huh? This sounds like it's saying the opposite of what was said two paragraphs earlier.
Another point made by Wirth is that complexity promotes customer dependence on the vendor. So there is an incentive to make things complex in order to create a dependency on the part of the customer, generating a more stable stream of income.
Title
In order to make it way easier to keep track of things in bookmarklet workspaces, there needs to be an option that adds an incrementing counter (timestamp?) to the bookmarklet title, so when dragging and dropping into your bookmarks library, you don't lose track of what's going on with a sea of bookmarklets all named the same thing.
window.history.pushState(null, "unescaped bookmarklet contents");
This has been giving me problems in the latest Firefox releases. It ends up throwing an error.
I probably spend 50% of my time trying to figure out why the build system is breaking, again.
@34:00
In theory? RDF, it's awesome. I like it a lot. This is why I'm working on this. But in practice [...] the developer experience it's not great, and when people when they see Turtle and RDF and all these things, they don't like it, you know? My opinion is that it's because of lack of learning materials and the developer experience to get started.
Usability and accessibility can impact where a technology falls on the spectrum: not paying attention to these dimensions makes it harder to move to higher levels of agency, staying more exclusive as "Look at what I/you/we can do, as the capable ones"
Miguel de Icaza (@migueldeicaza), Jun 17, 2022: The foundation should fund, promote and advance a fully open source stack. And the foundation should remove every proprietary bit from the code in http://dotnet.org.
Microsoft can and should compete on the open marketplace on their own. [...] And we should start with the debugger: we should integrate the Samsung one, it should be the default for OmniSharp and this is how we get contributions and improvements, not by ceding terrain to someone that can change the rules to their advantage at will.
I tried (perhaps not valiantly, and only as an outsider) to convince Miguel and the then-Director of the .NET Foundation in 2015 that this state of affairs was probably coming and that he/they should reach out to the FSF/GNU to get RMS to lift the .NET fatwa, become a stakeholder/tastemaker in the .NET ecosystem, and encourage free software groupies to take charge, so that FSF/GNU would be around as a failsafe for the community and would inevitably benefit greatly, esp. from any of MS's future failures on this front. I tried the same in reverse, too. They seemed to expect me to be a liaison, and I couldn't get them to talk to each other directly, even though that's what needed to happen.
Nobody but Eric would have thought of that shortcut to solve the problem.
I find it odd that this is framed here as an example of an "unusual thinker". The solution seems natural, if underappreciated, for a domain where any tool's output target is one that was specifically crafted to intersect with what is ordinarily a (or in this case, the) "preferred form for modification".
You can (and we probably all more often should) do the same thing with e.g. HTML+CSS build pipelines that sit untouched for years and in that course become unfashionable and undesirable...
you can't hang useful language features off static types. For example, TypeScript explicitly declares as a design goal that they don't use the type system to generate different code
This is a good thing. https://bracha.org/pluggableTypesPosition.pdf
Refer to §6 #3.
It would even use eye contact correction software to make it feel like you were actually looking at each other.
If this were done right, using professionally installed videoconferencing hardware, there would be no need for "eye contact correction software" at all. The use of such software would be an indicator of failure elsewhere.
the NABC model from Stanford. The model starts with defining the Need, followed by Approach, Benefits, and lastly, Competitors. Separating the Need from the Approach is very smart. While writing the need, the authors have to understand it very well. The approach and benefits sections are pretty straightforward, where authors define their strategy and list the advantages. Since most people focus on them when they talk about ideas, it's also easy to write. Then the competition section comes. It is the part where the authors have to consider competitors to their proposal. Thinking about an alternative solution instead of their suggestion requires people to focus on the problem instead of blindly loving and defending their solutions. With these four parts, the NABC is a pretty good model. But it's not the only one.
Publish content to your website using Indiekit’s own content management system or any application that supports the Micropub API
"... assuming you rebase your site on top of Indiekit beforehand" (which is a big leap).
I’m formally launching Indiekit, the little Node.js server with all the parts needed to publish content to your personal website and share it on social networks. Think of Indiekit as the missing link between a statically generated website and the social web protocols developed by the IndieWeb community and recommended by the W3C.
Now just get rid of the server part.
The real missing link between (conventional) static sites and the W3C's social protocols is still static. This post itself already acknowledges the reasons why.
Still, installing Indiekit on a web server can be a bit painful.
Publishing them to the modern web is too hard and there are few purpose-built tools that help
It’s too hard to build these kinds of experiences
"... with current recommended practices", that is.
On the other hand, it means that you now need to trust that Apple isn’t going to fuck with the podcasts you listen to.
There really is no substantial increase in trust. You were already trusting their player to do the right thing.
One convenience feature is that if you paste the Apple Podcasts directory listing instead of the feed URL, I’ll look up the feed URL from the listing and treat it as a redirect.
Some thoughts:
This is indeed good UX.
What's not good UX—and which I discovered/realized this week—is that there seems to be no easy/straightforward way to map a podcasts.apple.com podcast page to either its feed URL or to the original publisher/podcast's Web site. In reality, this would be as trivial as embedding a <link>.
Additionally, there's a missed opportunity in the podcasting space to make judicious use of the Link header—every piece of media served as part of a podcast should include in the HTTP response a link back to the canonical feed URL and/or the original site! (And they should also have CORS enabled, while they're at it.) Why isn't this already a thing? Answer: because it's a trivial detail; podcasters could do this, but what's the point, they'd say—almost no one is going to attach a network inspector to the requests and check to see whether they're sending these headers for the sake of steadfast adherence to hypermedia ideals. Worth noting that this is the exact opposite of Jobs's principle of carrying out good craftsmanship throughout e.g. a chest of drawers or when building a cabinet, even for the parts that no one will see, in order to "sleep well at night". Maybe this could be used to shame Apple?
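For the record, the headers in question really are one-liners; something like this (URLs hypothetical):

HTTP/1.1 200 OK
Content-Type: audio/mpeg
Link: <https://podcast.example/feed.xml>; rel="alternate"; type="application/rss+xml"
Link: <https://podcast.example/>; rel="via"
Access-Control-Allow-Origin: *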
the author complains that an Apple Podcast user has to go through the app (and all its restrictions), but again, not that different from Instagram posts. As a user, you must go through Instagram to see photos.
And cyclists need to make sure they have wheels attached before riding a bicycle.
This is one of those things that superficially seems like a relevant retort, but as a reply it's actually total nonsense.
Or, if you wanted to put it more abrasively: Instagram photos are not podcasts, dumbass.
Good example of the "always return something (for $20)" principle.
this is misplaced outrage on the author's part, since Apple has never produced RSS feeds
Another casualty of the Copenhagen interpretation of ethics (or close enough, at least)?
Premium feeds are rehosted by Apple and it's a huge PITA because we have ad-supported public feeds and ad-free premium feeds and need to build them twice.
The author here makes it sound like they have to reach out and grab content stream chunks, stitch them together with their own hands, and then plonk them down on the assembly line for 14 hours a day or something.
It's a program. You write a program that does the building.
There is no predetermined correlation between this import path and the file system, and the imported module doesn’t have to know anything about the import path used in an importing module.
This is not a good approach. It's the opposite of what you want. Module resolution remains easy for computers (because of their speed), but tedious for humans.
As a writer, maybe there's some benefit for no correlation. As a reader trying to understand a foreign codebase, esp. one who is in the moment trying to figure out, "Where is this thing defined? Where can I read the source code?" when jumping through procedure definitions, not being able to trivially ascertain which file a given thing is in is unnecessary friction. Better to offload a tiny bit of work onto the author who knows the codebase (or at least their own immediate intention) well rather than to stymie the progress of dozens/hundreds of readers trying to work things out.
it quickly became clear that this approach would reach its limits as soon as several people contributed modules
Worth noting that for many of Rochus's own projects (esp. ones related to Oberon/Oberon+), it's just a bunch of .cpp and .h files in one directory...
Having purchased a new laptop, I find myself having to install additional software to compile
igal needs Perl to run and it also relies on a few other programs that come standard with most Linux distributions.
Hyperdocuments may be submitted to a library-like service (an administratively established AUGMENT Journal) that catalogs them, and provides a permanent, linkable address and guaranteed as-published content retrieval. This Journal system handles version and access control, provides notifications of supersessions, and generally manages open-ended document collections.
Imagine an arxiv-like depository that dealt with native hypertext, rather than TeX/PDF.
Food for thought: PWP as a prereq? What about RASH+SingleFileZ?
Meta-level referencing (addresses on links themselves) enables knowledge workers to comment upon links and otherwise reference them.
Individual application subsystems (graphical editors, program language editors, spreadsheets) work with knowledge products, but do not “own” hyperdocuments in the sense of being responsible for their storage
The opposite of current Web app norms (contra desktop).
The overriding class Shape has added a slot, color. Since Shape is the superclass of all other classes in ShapeLibrary, they all inherit the new slot.
This is the one thing so far where the procedural syntactic mechanism isn't doing obvious heavy lifting.
The slot definition of List fills the role of an import statement, as do those of Error and Point.
... at some expense to ergonomics.
It's odd that they didn't introduce special syntax for this. They could have even used import to denote these things...
The factory method is somewhat similar to a traditional constructor. However, it has a significant advantage: its usage is indistinguishable from an ordinary method invocation. This allows us to substitute factory objects for classes (or one class for another) without modifying instance creation code. Instance creation is always performed via a late bound procedural interface.
The class semantics for new in ES6 really bungled this in an awful way.
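The problem in miniature:

// An ES6 class is welded to `new` at every call site:
class Point { constructor(x, y) { this.x = x; this.y = y; } }
const p = new Point(1, 2); // Point(1, 2) without `new` throws

// A late-bound factory keeps creation an ordinary call...
const makePoint = (x, y) => ({ x: x, y: y });
const q = makePoint(1, 2);
// ...so the implementation (caching, a proxy, a different class)
// can be swapped without modifying any instance creation code.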
Newspeak programs enjoy the property of representation independence
Is that mechanism or culture?
All names are late bound
Not all names, I think. Local identifiers, for example...?
Every item filed in a bugtracker should correspond to a defect.
Patch based systems are idiotic, that's RCS, that is decades old technology that we know sucks (I've had a cocktail, it's 5pm, so salt away). Do you understand the difference between pass by reference and pass by value?
Larry makes a similar analogy (pass by value vs pass by reference) to my argument about why patches are actually better at the collaboration phase—pull requests are fragile links. Transmission of patch contents is robust; they're not references to external systems—a soft promise that you will service a request for the content when it comes. A patch is just the proposed change itself.
Literate programming worked beautifully until we got to a stage where we wanted to refactor the program. The program structure was easy to change, but it implied a radical change to the structure of the book. There was no way we could spend a great deal of time on restructuring the book, so we ended up with writing appendices and appendices to appendices that explained what we had done. The final book became unreadable and only fit for the dustbin.

The lesson was that the textbook metaphor is not applicable to program development. A textbook is written on a stable and well known subject while a program is under constant evolution. We abandoned literate programming as being too rigid for practical programming. Even if we got it right the first time, it would have failed in the subsequent maintenance phases of the program's life cycle.
How do we package software in ways that maximize its reusability while minimizing the level of skill required to achieve reuse?
Is that really the ultimate, most worthy goal? It seems that "minimizing the level of skill required[...]" is used as a proxy here for what we're really after—minimizing the total cost of producing the thing we want. Neither the minimization of skilled use nor reuse should be held as a priori goals.
if you are running a software business and you aren't at, like, Google-tier scale, just throw it all in a monorepo
The irony* of this comment is that Google and Google engineers are famously some of the most well-known users/proponents of monorepos.
* not actual irony; just the faux irony—irony pyrite, or "fool's irony", if you like
I would argue that it’s simply more fun to engage with the digital world in a read-write way, to see a problem and actually consider it fixable by tweaking from the outside
He doesn't exactly say it here, but many others making the same observations will pair it with the suggestion that this is because of some intrinsic property of the digital medium. If you think about it, that isn't true. If you consider paper, people tend to be/feel more empowered to tweak it for their own use (so long as they own the copy); digital artifacts seem more hands-off, despite their potential, because the powers involved are reserved for wizards, largely thanks to the milieu that those who are the wizards have cultivated to benefit themselves and their livelihood first, rather than empowering the ordinary computer user.
Software should be a malleable medium, where anyone can edit their tools to better fit their personal needs. The laws of physics aren’t relevant here; all we need is to find ways to architect systems in such a way that they can be tweaked at runtime, and give everyone the tools to do so.
It's clear that gklitt is referring to the ability of extensions to augment the browser, but:
* it's not clear that he has applied the same thought process to the extension itself (which is also software, after all)
* the conception of in-browser content as software tooling is likely a large reason why the perspective he endorses here is not more widespread—that content is fundamentally a copy of a particular work, in the parlance of US copyright law (which isn't terribly domain-appropriate here so much as its terminology is useful)
a platform with tremendous potential, but somewhat disorganized and neglected under current management
This has almost always been the case—at least as far back as 10+ years ago with addons.mozilla.org, too.
CSS classes
NB: there's no such thing as a "CSS class". They're just classes—which you may use to address things using CSS's selector language, since it was conveniently (and wisely) designed from the beginning to incorporate first-class* support for them.
* no pun intended
it’s getting harder to engineer browser extensions well as web frontends become compiled artifacts that are ever further removed from their original source code
because it’s building on an unofficial, reverse-engineered foundation, there are no guarantees at all about when things might change underneath
This is an unfortunate reality about the conventions followed by programmers building applications with Web-based interfaces: no one honors the tradition of the paper-based forms that their digital counterparts are supposed to mimic; they're all building TOSS-style APIs (and calling that REST) instead of actual, TURN-style REST interfaces.
too much focus on the ‘indie’ (building complicated self-hosted everything-machines) and not enough on the ‘web’
a special/reserved GET param could be used in order to specify the version hash of the specific instance of the resource you want
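e.g. https://example.com/notes/foo?rev=sha256-9f2c0d (param name and hash scheme made up for illustration); the bare URL keeps resolving to the latest version while the versioned one pins a specific instance.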
we have one of the most powerful languages for manipulating everything in the browser (ecmascript/javascript) at our disposal, except for manipulating the browser itself! Some browsers are trying to address this (e.g. http://conkeror.org/ -- emacs styled tiled windows in feature branch!) and I will be supporting them in whatever ways I can. What we need is the bash/emacs/vim of browsers -- e.g. coding changes to your browser (emacs style) without requiring recompiling and building.
That was what pre-WebExtensions Firefox was. Mozilla Corp killed it.
See Yegge's remarks on The Pinocchio Problem:
The very best plug-in systems are powerful enough to build the entire application in its own plug-in system. This has been the core philosophy behind both Emacs and Eclipse. There's a minimal bootstrap layer, which as we will see functions as the system's hardware, and the rest of the system, to the greatest extent possible (as dictated by performance, usually), is written in the extension language.
Firefox has a plugin system. It's a real piece of crap, but it has one, and one thing you'll quickly discover if you build a plug-in system is that there will always be a few crazed programmers who learn to use it and push it to its limits. This may fool you into thinking you have a good plug-in system, but in reality it has to be both easy to use and possible to use without rebooting the system; Firefox breaks both of these cardinal rules, so it's in an unstable state: either it'll get fixed, or something better will come along and everyone will switch to that.
Something better didn't come along, but people switched anyway—because they more or less had to, since Mozilla abandoned what they were switching from.
Sciter. Used for rendering the UI of apps. There's no browser using Sciter to display websites, and the engine is closed source.
Worth noting that c-smile, the creator of Sciter, put out an offer during COVID lockdowns to make Sciter open source if someone would fund it for $100k. That funding never came through.
You're looking for https://mek.fyi/math
You're looking for https://mek.fyi/essays/software/what-the-browser-is-missing
https://michaelkarpeles.com/math.html
https://michaelkarpeles.com/essays/philosophy/what-the-browser-is-missing.html
My central goal is to further Paul Otlet, et al's, vision and head toward an amalgamous World Wide Web (a Universal Knowledge Repository) freed of arbitrary, discrete "document" boundaries.
My central goal is a universal knowledge repository freed of discrete "document" boundaries
Readers must learn specific reflective strategies. “What questions should I be asking? How should I summarize what I’m reading?” Readers must run their own feedback loops. “Did I understand that? Should I re-read it? Consult another text?”
I generally don't have to do that when reading except when reading books or academic papers. This suggests that there's not really anything wrong with the form of the book, but rather its content (or the stylistic presentation of that content, really).
I've said it a bunch: the biggest barrier to accessibility of academic articles specifically is the almost intolerable writing style that almost every non-writer adopts when they're trying to write something to the standards for acceptance in a journal. Every journal article written for joyless robots should be accompanied by a blog post (or several of them) on the author's own Web site that says all the same things but written for actual human beings.
Readers can’t just read the words. They have to really think about them. Maybe take some notes. Discuss with others. Write an essay in response. Like a lecture, a book is a warmup for the thinking that happens later.
What if, when you bought a book, it included access to a self-administered test for comprehension? Could this even solve the paying-for-things-limits-access-to-content problem? The idea would be to make the thing free (ebooks, at least), but your dead tree copy comes with access to a 20-minute interactive testing experience (in a vendor-neutral, futureproof format like HTML and inline JS—not necessarily a Web-based learning portal that could disappear at any moment).
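A sketch of how small and self-contained that artifact could be (the content is entirely illustrative):

<!doctype html>
<meta charset="utf-8">
<title>Chapter 3 self-check</title>
<p>Q1: What does the author claim is the main cost of X?</p>
<label><input type="radio" name="q1" value="a"> Time</label>
<label><input type="radio" name="q1" value="b"> Money</label>
<p><button onclick="grade()">Check my answer</button></p>
<p id="result"></p>
<script>
// No server, no portal, no login: one file that works anywhere
// a browser does, for as long as browsers exist.
function grade() {
  var picked = document.querySelector("input[name=q1]:checked");
  document.getElementById("result").textContent =
    picked && picked.value === "a" ? "Correct." : "Re-read section 3.2.";
}
</script>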
I saw this tech talk by Luis von Ahn (co-creator of reCAPTCHA) and learned about the idea of harnessing human computation
Consider: a version of the game 20 Questions that helps build up a knowledge base that can be relied upon for Mek's aforementioned Michael Jackson-style answers.
How did it work? GNUAsk (the aspirational, mostly unreleased search engine UI) relied on hundreds of bots, running as daemons, listening in on conversations within AOL AIM, IRC, Skype, and Yahoo public chat rooms and recording all the textual conversations.
See also: Universal Access to Knowledge raises a concern of misuse
and developers are required to have the Ruby runtime in their environment, which isn’t ideal.
In one of our early conversations with developers working on CLIs outside of Shopify, oclif came up as an excellent framework of tools and APIs to build CLIs in Node. For instance, it was born from Heroku’s CLI to support the development of other CLIs. After we decided on Node, we looked at oclif’s feature set more thoroughly, built small prototypes, and decided to build the Node CLI on their APIs, conventions, and ecosystem. In hindsight, it was an excellent idea.
Minority(?) viewpoint: oclif-based command-line apps (if they're anything like Heroku's, at least) follow conventions that are alien and make them undesirable.
There’s a caveat that we’re aware of—while Hydrogen and App developers only require one runtime (Node), Theme developers need two now: Ruby and Node.
Well, you could write standards-compliant JS... Then people could run it on the runtime everyone already has installed, instead of needing to download Node.
Of all the programming languages that are used at Shopify, Ruby is the one that most developers are familiar with, followed by Node, Go, and Rust.
Node is not a programming language.
And my mom is getting older now and I wish I had all the comments, posts, and photos from the past 14 years to look back on and reminisce. Can’t do that now.
This reminds me of, during the height of the iPod era, when someone I know was gifted* a non-Apple music player and some iTunes gift cards—their first device for their first music purchases not delivered on physical media. They created an iTunes account, bought a bunch of music on the Music Store, and then set about trying to get it onto their non-Apple device, coming to me when their attempts to get it to work themselves weren't going well. I explained how Apple had (at the time) made iTunes Music Store purchases incompatible with non-Apple devices. Their response was baffling to me:
Rather than rightly getting pissed at Apple for this state of affairs, they did the opposite—they expressed their disdain for the non-Apple MP3 player they were given** and resolved to get it exchanged for credit so they could buy a (pricier, of course) Apple device that would "work". That is, they felt the direct, non-hypothetical effects of Apple's incompatibility ploy, and then still took exactly the wrong approach by caving despite how transparently nefarious it all was.
Returning to this piece: imagine if all that stuff hadn't been locked up in the social media silo. Imagine if all those "comments, posts, and photos from the past 14 years" hadn't been unwisely handed over for someone else to keep out of reach unless you assimilated. Imagine just having it delivered directly to your own inbox.
* NB: not by me
** NB: not as a consequence of mimetic desire for the trendiest device; they were perfectly happy with the generic player before they understood the playback problem
It’s not feasible to constantly be texting and phone calling Paula from 10th grade geometry, etc.
This was initially confusing. What makes texting infeasible, but doing it through Facebook feasible? I realized upon reaching the end of the next paragraph: "I cant make a new Facebook in 2023 and add all these old friends. Literally psychotic behavior."
When this person talks about "keeping up", they don't mean "interacting". They mean non-interactively keeping tabs on people they once knew but that they don't really have an ongoing relationship with.
See also: the work of Jeannette Wing on (the value of (teaching)) computational thinking.
It's interesting how few comments are engaging with the substance of the piece. They are encountering for the first time the idea that Rikard is providing a commentary on—that is, giving students their own big-kid Web site, an idea that "belongs" to the "Domain of One's Own" effort—and expressing enthusiasm for it here as comments nominally about this piece, which is really intended as a specific, reflective/critical response to the overall idea, and does not pretend to present that idea as a novel suggestion...
References to "the World Wide Wruntime" is a play on words. It means "someone's Web browser". Viz this extremely salient annotation: https://hypothes.is/a/i0jxaMvMEey_Elv_PlyzGg
Missed opportunity here to simply name this GET.
the patriotic or religious bumper-stickers
College graduates in 2005 could understand what this meant. I'm skeptical that college graduates in 2023 can really grok this allusion, even if it were explained.
See also:
this previous comment thread with a minority detractor view on Idiocracy [...] argues it’s a little more dated to its specific Bush-era cultural milieu than everyone remembers
https://news.ycombinator.com/item?id=29738799
E.g.:
[Idiocracy's] "you talk faggy" [...] sadly was common in real life during the mid-00s [...] but would be completely taboo now
how annoying and rude it is that people are talking loudly on cell phones in the middle of the line. And look at how deeply and personally unfair this is
That's actually not (just) seemingly "personally unfair"—it's collectively unfair. The folks responsible for these things serve as the better example of self-centeredness...
Because my natural default setting is the certainty that situations like this are really all about me. About MY hungriness and MY fatigue and MY desire to just get home, and it’s going to seem for all the world like everybody else is just in my way.
The fact that we're not talking about a child here, but that it was considered normal for a 43-year-old man in 2005 to have this as his default setting, perhaps explains quite a lot about the evident high skew of self-centeredness in folks who are now in their sixties and seventies.
I didn't notice this in 2005, but maybe I wasn't paying close enough attention.
clichés
thought-terminating ones, even
there is actually no such thing as atheism. There is no such thing as not worshipping. Everybody worships
Very sophomoric argument, and it's hard not to point out the irony of this claim, given everything preceding it wrt self-assuredness.
Is it impossible for there to exist people to whom this description doesn't apply, or is it merely annoying and inconvenient to consider the possibility that they might?
none of this is likely, but it’s also not impossible
By way of example, let’s say it’s an average adult day, and you get up in the morning, go to your challenging, white-collar, college-graduate job, and you work hard for eight or ten hours, and at the end of the day you’re tired and somewhat stressed and all you want is to go home and have a good supper and maybe unwind for an hour, and then hit the sack early because, of course, you have to get up the next day and do it all again. But then you remember there’s no food at home. You haven’t had time to shop this week because of your challenging job, and so now after work you have to get in your car and drive to the supermarket. It’s the end of the work day and the traffic is apt to be: very bad. So getting to the store takes way longer than it should, and when you finally get there, the supermarket is very crowded, because of course it’s the time of day when all the other people with jobs also try to squeeze in some grocery shopping. And the store is hideously lit and infused with soul-killing muzak or corporate pop and it’s pretty much the last place you want to be but you can’t just get in and quickly out; you have to wander all over the huge, over-lit store’s confusing aisles to find the stuff you want and you have to manoeuvre your junky cart through all these other tired, hurried people with carts (et cetera, et cetera, cutting stuff out because this is a long ceremony) and eventually you get all your supper supplies, except now it turns out there aren’t enough check-out lanes open even though it’s the end-of-the-day rush. So the checkout line is incredibly long, which is stupid and infuriating. But you can’t take your frustration out on the frantic lady working the register, who is overworked at a job whose daily tedium and meaninglessness surpasses the imagination of any of us here at a prestigious college. But anyway, you finally get to the checkout line’s front, and you pay for your food, and you get told to “Have a nice day” in a voice that is the absolute voice of death. Then you have to take your creepy, flimsy, plastic bags of groceries in your cart with the one crazy wheel that pulls maddeningly to the left, all the way out through the crowded, bumpy, littery parking lot, and then you have to drive all the way home through slow, heavy, SUV-intensive, rush-hour traffic, et cetera et cetera. Everyone here has done this, of course. But it hasn’t yet been part of you graduates’ actual life routine, day after week after month after year.
how to keep from going through your comfortable, prosperous, respectable adult life dead, unconscious, a slave to your head and to your natural default setting of being uniquely, completely, imperially alone day in and day out
"All of humanity's problems stem from man's inability to sit quietly in a room alone" —Blaise Pascal
It is not the least bit coincidental that adults who commit suicide with firearms almost always shoot themselves in: the head. They shoot the terrible master.
For better or worse, people will continue to run things to inspect the results manually—before grumbling about having to duplicate the effort when they make the actual test. The ergonomics are too tempting, even when they're an obvious false economy.
What about a test-writing assistant that lets you just copy and paste your terminal session into an input field/text file, which the assistant would then process and transform into a test?
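A naive version of the core transform is easy to sketch, assuming commands in the pasted session are prefixed with "$ " and everything up to the next prompt is that command's expected output (every name and file here is invented):

const fs = require("fs");
const { execSync } = require("child_process");

// Split a pasted transcript into {command, expected} pairs.
function parseTranscript(text) {
  const cases = [];
  for (const line of text.split("\n")) {
    if (line.startsWith("$ ")) {
      cases.push({ command: line.slice(2), expected: [] });
    } else if (cases.length > 0) {
      cases[cases.length - 1].expected.push(line);
    }
  }
  return cases;
}

// Each pair becomes an assertion: rerun the command, diff the output.
function toTest({ command, expected }) {
  return () => {
    const actual = execSync(command, { encoding: "utf8" }).trimEnd();
    const want = expected.join("\n").trimEnd();
    if (actual !== want) {
      throw new Error(command + " produced:\n" + actual + "\nexpected:\n" + want);
    }
  };
}

parseTranscript(fs.readFileSync("session.txt", "utf8"))
  .map(toTest)
  .forEach((test) => test());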
My first startup, Emu, was a messaging app. My second startup, Miter, was not a messaging app but sometimes acted suspiciously like one. Am I obsessed with communication software? Well…maybe. I’m fascinated by the intersection of people, communities, and technology
and, apparently, business—which definitely explains the author's overall position, specific recommendations, and this blind spot (failing to mention the intersection of business with their interest in messengers).
anything is better than SMS
Happy to hear this is the author's position, at least, because delta.chat and/or something like it is really the only reasonable way forward.
(This isn't to say the current experience with delta.chat doesn't have problems itself. I'm not even using it, in fact.)
perpetuates SMS’s dependence on the mobile carrier and device
This isn't true: where does the "and device" part come in?
you can message anyone, anywhere, without thinking about it too much
But we have already heard plenty of evidence about why this isn't true...
Your Mom has an iPhone so she loved it. Your brother has an Android, so he saw a blue-green blob with a couple tan blobs in the middle.
"I'll blame Apple" is both an acceptable and reasonable response to this.
typing indicators, read receipts, stickers, the ability to edit and delete messages.
Yeah, I don't want any of those. It's not that I'm merely unimpressed—I am actively opposed to several of them for good reasons.
Messages get lost.
The only reason I "switched" to Signal ~5 years ago was that it became clear some of my messages weren't coming/going through.
When I switched to Signal, the experience was even worse. Someone would send a message or attempt a voice call, but Signal would not reliably notify me that this had happened. I'd open the app to find notifications for things that should've been delivered hours/days earlier.
This had nothing to do with my app/phone settings. Signal did deliver some notifications, just unreliably. Eventually I switched back to SMS, partly because I was baffled by how the experience with Signal could be so much worse, and partly because of a bunch of dodgy decisions by the Signal team (which were actually the main catalyst for the switch back, despite the deliverability problems).
might lose all your messages if you switch phones
As a point of fact, this has nothing to do with SMS per se....
That’s pretty much it
Something that should be corrected here: the lack of emphasis on the Web's original design motivation as an analog for someone sitting at the reference desk in e.g. a corporate library, fielding your requests for materials.
The usefulness of JSON is that while both systems still need to agree on a custom protocol, it gives you an implementation for half of that custom protocol - ubiquitous libraries to parse and generate the format, so the application needs only to handle the semantics of a particular field.
To be clear: when PeterisP says parse the format, they really mean lex the format (and do some minimal checks concerning e.g. balanced braces and brackets). To "handle the semantics of a particular field" is a parsing concern.
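A small JS illustration of that division of labor (the message shape here is invented):

// What the library hands you: a generic tree. JSON.parse is perfectly
// happy even though "temperature" arrives as a string here.
const msg = JSON.parse('{"temperature": "-40", "unit": "C"}');

// What the application still owes: the semantics of each field,
// i.e. types, ranges, required-ness, meaning.
function readTemperature(m) {
  if (typeof m.temperature !== "number") {
    throw new TypeError("temperature must be a number");
  }
  if (m.unit !== "C" && m.unit !== "F") {
    throw new RangeError("unknown unit: " + m.unit);
  }
  return { value: m.temperature, unit: m.unit };
}

readTemperature(msg); // throws: the format was fine; the protocol wasn't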