3,145 Matching Annotations
  1. Apr 2022
    1. File not found (404 error)

      This is perverse (i.e. an instance of Morissette's false irony).

    1. it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise
    1. At a higher level of grouping, we have more trouble. This is the level of DLLs, microservices, and remote APIs. We barely have language for talking about that. We have no word for that level of structure.
  2. small-tech.org
    1. Ongoing research Building on our work with Site.js, we’ve begun working on two interrelated tools: NodeKit The successor to Site.js, NodeKit brings back the original ease of buildless web development to a modern stack based on Node.js that includes a superset of Svelte called NodeScript, JSDB, automatic TLS support, WebSockets, and more.

      "How much of your love of chocolate has to do with your designs for life that are informed by your religious creed? Is it incidental or essential?"

    1. The percentage of Democrats who are worried about speaking their mind is just about identical to the percentage of Republicans who self-censor: 39 and 40 percent, respectively

      What are Republicans worrying about when they self-censor? Being perceived as too far right and trying to appear more moderate, or catching criticism from their political peers if they were to express skepticism about some of the goofiest positions that Republicans are associated with at the moment?

    2. knowing that we could lose status if we don’t believe in something causes us to be more likely to believe in it to guard against that loss. Considerations of what happens to our own reputation guides our beliefs, leading us to adopt a popular view to preserve or enhance our social positions

      Belief, or professed belief? Probably both, but how much of this is conscious/strategic versus happening in the background?

    3. Interestingly, though, expertise appears to influence persuasion only if the individual is identified as an expert before they communicate their message. Research has found that when a person is told the source is an expert after listening to the message, this new information does not increase the person’s likelihood of believing the message.
    4. Many have discovered an argument hack. They don’t need to argue that something is false. They just need to show that it’s associated with low status. The converse is also true: You don’t need to argue that something is true. You just need to show that it’s associated with high status.
    1. This comment makes the classic mistake of mixing up the universal quantifier ("for all X") and the existential quantifier ("there exists [at least one] X"), when (although neither are used), the only thing implied is the latter.

      https://en.wikipedia.org/wiki/Universal_quantification

      https://en.wikipedia.org/wiki/Existential_quantification

      What the average teen is like doesn't compromise Stallman's position. If one "gamer" (is that a taxonomic class?) follows through, then that's perfectly in line with Stallman's mission and the previously avowed position that "Saying No to unjust computing even once is help".

      https://www.gnu.org/philosophy/saying-no-even-once.html

    1. the C standard — because that would be tremendously complicated, and tremendously hard to use

      "the C standard [... is] tremendously complicated, and tremendously hard to use [...] full of wrinkles and [...] complex rules"

    2. It should be illegal to sell a computer that doesn't let users install software of their own from source code.

      Should it? This effectively outlaws a certain type of good: the ability to purchase a computer of that sort, even if that's what you actually want.

      Perhaps instead it should be illegal to offer those types of computers for sale if the same computer without restrictions on installation isn't also available.

    1. myvchoicesofwhatto.attemptandxIiwht,not,t)a-ttemptwveredleterni'ine(toaneiiubarma-inohiomlreatextentbyconsidlerationsofclrclfea-sibilitv

      my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility.

    1. I am now ready to start rendering in GIMP. I use GIMP because it's good and free, but I prefer PAINT for the drawing.

      GIMP needs an MSPaint "persona".

    1. In his response, Shirky explains that he was unable to keep up with the overhead of running a WordPress blog, so it fell into disrepair and eventually disappeared.

      Is this proof, from one of the biggest social critics on the topic of crowdsourcing (known for taking a favorable stance on it), that it doesn't work? Separately, is the promise of the Web itself a false one?

      I'm not going to say "yes", but I will say—and have said before—that the "New Social" era of the 2010s saw a change of environment from when Here Comes Everybody was written. I think it highlights a weakness of counter-institutional organization—by definition the results aren't "sticky"—that's the purview of institutions. What's more, even institutions aren't purely cumulative.

    1. I have a theory that most people conceptualize progress as this monotonically increasing curve over time, but progress is actually punctuated. It's discrete. And the world even tolerates regress in this curve.
    1. I was already aware that images cannot be inserted in the DOM like you would any normal image. If you write <img src="https://my-pod.com/recipes/ramen.jpg">, this will probably fail to render the image. That happens because the image will be private, and the POD can't return its contents without proper authentication.
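      The usual way around this (a hedged sketch; `authFetch` stands in for a session-authenticated fetch such as the one a Solid auth library provides, and the function name is mine) is to fetch the private bytes with credentials and hand them to the `<img>` as a local object URL, so the element itself never needs to authenticate:

```javascript
// Sketch (names are assumptions, not a particular library's API): fetch a
// private resource with an authenticated fetch, then expose it as a local
// object URL usable as an <img src>.
async function privateImageURL(authFetch, url) {
  const res = await authFetch(url);          // request carries the session's credentials
  if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
  const blob = await res.blob();
  return URL.createObjectURL(blob);          // local blob: URL, no auth needed to display
}

// usage (illustrative): img.src = await privateImageURL(session.fetch, imgURL);
```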
    1. their

      "there"

    2. What if the offset to the central directory is 1,347,093,766? That offset is 0x504b0506 so it will appear to be end central directory header.

      This is, I think, the only legitimate criticism here so far. All the others that amount to questions of "back-to-front or front-to-back?" can be answered: back-to-front.

      This particular issue, however, can be worked around by padding the central directory by one byte (or four) so that it's not at offset 1,347,093,766. Even then, the flexibility in the format and this easy workaround mean that this criticism is mostly defanged.
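      For what it's worth, the reader-side convention is to locate the End of Central Directory (EOCD) record by scanning backward for its signature bytes "PK\x05\x06" rather than trusting any fixed offset; a hedged sketch (not any particular library's API):

```javascript
// Sketch: find the EOCD record by scanning backward for its signature bytes
// 0x50 0x4b 0x05 0x06 ("PK\x05\x06"). The record is at least 22 bytes long,
// so the scan starts 22 bytes from the end of the buffer.
function findEOCD(buf) {
  for (let i = buf.length - 22; i >= 0; i--) {
    if (buf[i] === 0x50 && buf[i + 1] === 0x4b &&
        buf[i + 2] === 0x05 && buf[i + 3] === 0x06) {
      return i; // candidate offset; a field that happens to hold these bytes is a false match
    }
  }
  return -1;
}
```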

    1. hopefully feed readers can treat permanent redirects as a sign to permanently update their feed URLs, then I can remove it. They probably don't, much like bookmarks don't
    2. I saw the need for a JSON feed format

      Did you really, though? Probably there was a need for a feed data source that was as easy to work with as parsing a JSON object, I'm betting. And I'd wager that there was no real obstacle to writing a FeedDataSource that would ingest the Atom feed and then allow you to call toString("application/json") to achieve the same effect.

    1. The model of high fixed cost, low marginal cost applies to pretty much every consumer good or service sold in the industrial age. If you think publishing the first copy of a piece of software is hard, try setting up a new production line in a factory, or opening a restaurant, or producing a music album or movie.
    1. The marginal cost to Zoom of onboarding a new customer is almost zero
    2. When you’re building a house, you have a pretty good idea of how many people that house will impact. The market has already demonstrated that they’ll pay for a roof and a walk-in shower and a state of the art heated toilet seat. If you erect a sturdy 4 bedroom, 2 ½ bathroom house with these amenities in a desirable neighborhood, you can rest easy knowing that you’ll be able to sell it. This is not how software works. The main problem here is that houses and software have wildly different marginal costs. If I build a house and my neighbor really likes it and wants to live in a house that’s exactly the same, it will cost them almost as much to build as it cost me. Sure, they might be able to save a few thousand dollars on architectural fees, but they’ll still need wood, wires, and boatloads of time from skilled plumbers, electricians, and carpenters to assemble those raw materials into something resembling a house. Marginal cost is just the cost to produce one more unit of something - in this case, one more house. Contrast this with software, which lives in the realm of often-near-zero marginal costs.

      Devops-heavy software changes this. Is GitHub standard fare the way it is because developers are knowingly or unknowingly trying to keep themselves employed—like the old canard/conspiracy theory about how doctors aren't interested in healing anyone, because they want them sick?

    1. To drive this point home:

      I sometimes get people who balk at my characterization of GitHub-style anti-wikis as being inferior to, you know, actual wikis. "You can just use the GitHub UI to edit the files", they'll sometimes say.

      A case study: a couple days ago, I noticed that the lone link in the current README for Jeff Zucker's solid-rest project is a 404. I made a note of it. Just now, I reset my GitLab password, logged in to solid/chat, and notified Jeff https://gitter.im/solid/chat?at=611976c009a1c273827b3bd1. Jeff's response was, "I'll change it".

      This case is rich with examples of what makes unwikis so goddamn inefficient to work with. First, my thought upon finding the broken link was to take note of it (i.e. so that it can eventually be taken care of) rather than fixing it immediately, as would have been the case with a wiki. More on this in a bit. Secondly, my eventual action was still not something that directly addressed the problem—it was to notify someone else† of the problem so that it might be fixed by them, due to the unwiki nature of piles of Git-managed markdown. Thirdly, even Jeff's reflex is not to immediately change it—much like my reaction, his is to note the need for a fix himself and then to tell me he's going to change it, which he will presumably eventually do. Tons of rigamarole just to get a link fixed‡ that remains broken even after having gone through all this.

      † Any attempt to point the finger at me here (i.e. coming up short from having taken the wrong action—notifying somebody rather than doing it myself) would be getting it wrong. First, the fact that I can't just make an edit without taking into account the myriad privacy issues that GitHub presents is materially relevant! Secondly, even if I had been willing to ignore that thorn (or jump through the necessary hoops to work around it) and had used the GitHub web UI as prescribed, it still would have ended up as a request for someone else to actually take action on, because I don't control the project.

      ‡ Any attempt to quibble here that I'm talking about changing a README and not (what GitHub considers) a wiki page gets it wrong. We're here precisely because GitHub's unwikis are a bunch of files of markdown. The experience of changing an unwiki page would be rife with the same problems as encountered here.

    1. Yes, the website itself is a project that welcomes contributions. If you’d like to add information, fix errors, or make improvements (especially to the visual design), talk to @ivanreese in the Slack or open a PR.

      Contribution: make it a wiki. (An actual wiki, not GitHub-style anti-wikis.)

    2. If that project doesn’t pan out, we’ll set up an existing wiki or similar collaborative knowledge system at the start of 2021.

      Oops.

    1. Each entry is a Markdown file stored in the _pages directory.

      Nah, dude. That's not a wiki.

    1. Meta note: I always have difficulty reading Linus Lee. I don't know what it is. Typography? Choices in composition (that is, writing)? Other things to do with composition (that is, visually)?

    1. JavaScript is popular outside of the browser almost entirely on the merit of its ecosystem, its tooling, and the trivial debugging experience enabled by the repl.

      Disagree. JS is popular because its tooling was easy to get started with (i.e. did have merit)—but that's not really the case any more.

    1. I find the tendency of people to frame their thinking in terms of other people's involvement in the projects in question being to make money (as with GitHubbers "contributing"/participating by filing issues only because they're trying to help themselves on some blocker they ran into at work) pretty annoying.

      The comments here boil down to, "You say it should be rewarding? Yeah, well, if you're getting paid for it, you shouldn't have any expectations that the act itself will be rewarding."

      I actually agree with this perspective. It's the basis of my comments in Nobody wants to work on infrastructure that "if you get an infusion of cash that leaves you with a source of funding for your project[...] then the absolute first thing you should start throwing money at is making sure all the boring stuff that [no] one wants to work". I just don't think it's particularly relevant to what Kartik is arguing for here.

      Open source means that apps are home-cooked meals. Some people get paid to cook, but most people don't. Imagine, though, if the state of home cooking were such that it took far more than one hour's effort (say several days, or a week) before we could reap an appreciable reward—tonight's dinner—and that this were true, by and large, for almost every one of your meals. That's not the case with home cooking, fortunately, but that is what open source is like. The existence of professional chefs doesn't change that. There's still something wrong that could stand to be fixed.

    2. great, I'm genuinely happy for you. But

      In other words, "allow me to go on and recontextualize this to the point of irrelevancy to the original purpose of this piece of writing".

    3. the employment contract I've signed promises a year of payment after a year of effort

      I find that hard to believe. It could be true—I might be wrong—but it's pretty odd if so and still far from the norm.

      The very fact that this "year of payment after a year of effort" probably really means "an annual salary of c$100k per year (for some c > 1), paid fractionally every two weeks" pretty much undermines this bad attempt at a retort.

    1. I feel like better code visualization would solve a lot of my problems. Or at least highlight them.

      The other commenter talks about a typical sw.eng. approach to visualization (flame graphs), but I want programs visualized as a manufacturing/packing/assembly line on a factory floor. Almost like node editors like Unreal's Blueprints, but in three dimensions, and shit visibly moving around between tools on the line in a way that you can actually perceive. Run the testcase on a loop, so you have a constant stream of "shit visibly moving around", and it runs at fractional speed so the whole process takes, say 10 seconds from front-to-back instead of near instantaneously like it normally would (and allow the person who's debugging to control the time scaling, of course). You find bugs by walking the line and seeing, "oh, man, this purple shit is supposed to be a flanged green gewgaw at this point in the line", so you walk over and fix it.

      (This is what I want VR to really be used for, instead of what's capturing people's attention today—games and lame substitutes for real world interaction like Zuckerberg is advocating for.)

    1. I want an hour of reward

      marktani asks (not unreasonably):

      what is an hour of reward?

      Kartik's response[1] is adequate, I feel.

      1. https://news.ycombinator.com/item?id=30041447
    2. NB: This piece has been through many revisions (or, rather, many different pieces have been published with this identifier).

      Check it out with the Wayback Machine: https://web.archive.org/web/*/http://akkartik.name/about

    3. A big cause of complex software is compatibility and the requirement to support old features forever.

      I don't think so. I think it's rather the opposite: churn is one of the biggest causes of what makes modifying software difficult. I agree, however, with the later remarks about making it easy to delete code where it's no longer useful.

    4. An untrusted program must inspire confidence that it can be run without causing damage.

      I don't think you can get this with anything other than a sandbox, cf. Web browsers.

      I'm not sure I understand what it means to say that programs must "tell the truth about their side effects". What if they don't? (How do you make sure what they're telling is the truth?)

    5. Most programs today yield insight only after days or weeks of unrewarded effort. I want an hour of reward for an hour (or three) of effort.
    1. to allow ample space for editing the source code of scripts, the form overflows the page, which requires scrolling down to find the "save" button and avoid losing changes

      The "Edit" button should morph into a "Save" button instead.
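      A minimal sketch of the suggestion (the element wiring and `onSave` hookup are assumptions): one button that toggles between "Edit" and "Save", so the save action sits exactly where the user's attention already is.

```javascript
// Sketch: a single button that alternates between entering edit mode and
// saving. The button object and onSave callback are supplied by the caller.
function makeEditSaveButton(btn, onSave) {
  btn.textContent = "Edit";
  btn.addEventListener("click", () => {
    if (btn.textContent === "Edit") {
      btn.textContent = "Save";   // entering edit mode
    } else {
      onSave();                   // persist the changes...
      btn.textContent = "Edit";   // ...and leave edit mode
    }
  });
}
```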

    2. bag

      In honor of Haketilo's original name, Hachette, bags might be called "sachets" instead.

    1. it's worthwhile to learn the ins and outs of coding a page and having the page come out exactly the way you want

      No. Again, this is exactly what I don't want. (I don't mean as an author—e.g. burdened with hand-coding my own website—I mean as a person whose eyes will land on the pages that other people have authored with far more frequency than I do look at my own content.)

      As I mentioned—not just in my earlier responses to this piece, but elsewhere—the constant hammering on about how much control personal homepages give the author over the final product is absolutely the wrong way to appeal to people. In the first place, it only appeals to probably about a tenth of as many people as the people advocating think it does. In the second place, the results can be, and usually are, bad; cf. MySpace.

      The best thing about Facebook, Twitter, etc? Finally getting people to separate content from presentation. Hand-coded CSS doesn't get you there alone. That's basically a talisman (false shibboleth). Facebook content is in a database somewhere. The presentation gets layered on through subsequent transformations into different HTML+CSS views.

    2. The web is a mire of toxic and empty content because we, as users, took the easy path and decided to consume content instead of creating it.

      As I've said elsewhere about the fediverse: a funny thing to me is that Mastodon, while architecturally guilty of the same sort of things as the big social networks outlined in this post (centralization around nodes instead of small, personal, digital homes), is often touted as being able to deliver a lot of the same benefits, but I don't see them. For one thing, start a webring and get to know your virtual neighbors sounds a lot like "you get to pick your own instance". But secondly, and most relevant to the passage here, is that the types of people I run across in the fediverse for the most part all seem to be of a certain "type". As someone who doesn't use Twitter, joining Mastodon put me in contact with a lot more toxicity than not. I don't see how followthrough on the grand vision here—which will certainly almost exclusively be undertaken by the same sorts of people you find in the fediverse (with this piece, the author shows they know their audience and go right in on the pandering)—won't result in much difference from what you can get on any typical Mastodon instance—i.e. exactly the sort of thing that's basically the template for pieces like this one.

    3. Tim Berners-Lee released the code for the web so that no corporation could control the web or force users to pay for it.

      While the precondition is true, there is an ahistorical suggestion/understanding of the history of the Web at play here.

    4. So seek out new and interesting sites, and link to them on your site. Reach out to them, and see if they'll link to you. Start a dialog. The way to build a better web is to build a better web of people.

      About half the touted benefits of the approach this piece advocates for could be better achieved by instead convincing people to be more forthcoming with making themselves available for contact by e-mail.

      Hand-coding websites (something that I actually do for myself) does not inherently presage the sorts of things the author thinks that it does. It could be true that we would be just as well off—if not better—if everyone were to go out and sign up for Owlstown and treat it like a content depository.

    5. Building amateur web pages increases the quality of content on the web as well.
    6. The anti-vaccine and anti-mask groups are a prime example

      Anti-vax positions are the product of anonymity? Pretty sure the harbingers of the anti-vax nonsense are groups of Facebook moms...

    7. Hidden behind a veil of anonymity

      I'm not sure that modern conventions have resulted in heightened use (and abuse) of anonymity. The trend seems to be in the opposite direction. Most major social networks, incl. Facebook, Twitter, and GitHub, either require/favor use of real names or manage to cultivate a sense of obligation from most of their users to do so. Reddit is more old-style Web than any of the others, and anonymity is by far the norm. It's actually very, very weird to go on Reddit and use your real name unless you're a public figure doing an AMA, and it's only slightly less uncommon to post under a pseudonym but have your real-life identity divulged elsewhere, out-of-band. Staying anonymous on Reddit is almost treated as sacred.

      Likewise in the actual old Web, during the age of instant messengers and GeoCities, people pretty much did everything behind a "screen name".

    8. the real reason nobody was collecting massive amounts of data on users was that nobody had thought to do it yet. Nobody had foreseen that compiling a massive database of individual users likes, dislikes, and habits would be valuable to advertisers.
    9. This appeal would have a greater effect if it weren't itself published in a format that exhibits so much of what was less desirable of the pre-modern Web—fixed layouts that show no concern for how I'm viewing this page and causes horizontal scrollbars, overly stylized MySpace-ish presentation, and a general imposition of the author's preferences and affinity for kitsch above all else—all things that we don't want.

      I say this as someone who is not a fan of the trends in the modern Web. Responsive layouts and legible typography are not casualties of the modern Web, however. Rather, they exhibit the best parts of its maturation. If we can move the Web out of adolescence and get rid of the troublesome aspects, we'd be doing pretty good.

    1. Not realizing that you need to remove the roadblocks that prevent you from scaling up the number of unpaid contributors and contributions is like finding a genie and not checking to see if your first wish could be for more wishes.

      Corollary: nobody wants janky project infrastructure to be a roadblock to getting work done. It should not take weeks or days of unrewarded effort in the unfamiliar codebase of a familiar program before a user is confident enough to make changes of their own.

      I used to use the phrase 48-hour competency. Kartik says "an hour (or three)". Yesterday I riffed on the idea that "you have 20 seconds to compile". I think all of these are reasonable metrics for a new rubric for software practice.

    1. except its codebase is completely incomprehensible to anyone except the original maintainer. Or maybe no one can seem to get it to build, not for lack of trying but just due to sheer esotericism. It meets the definition of free software, but how useful is it to the user if it doesn't already do what they want it to, and they have no way to make it do so?

      Kartik made a similar remark in an older version of his mission page:

      Open source would more fully deliver on its promise; are the sources truly open if they take too long to grok, so nobody makes the effort?

      https://web.archive.org/web/20140903010656/http://akkartik.name/about

    1. CuteDepravity asks:

      With minimal initial funding (ex: $10k) How would you go about making 1 million dollars in a year ?

      My response is as follows:

      Buy $10k of precious metals (or pick your preferred funge, e.g. cryptocurrency), and then turn around and sell it for a net loss of $1. Repeat this 100 more times, using the sale price of your last sale as the capital for your next purchase and selling it for a $1 net loss over the purchase price. A year should give you enough time to make 202 trades. Your last trade should take you from under a million to just over a million in revenue—from $994,950 to $1,004,849 and with your balance at EOY being $9,899. Deduct your losses from the reward you collect from whomever you made this million dollar bet with. (Hopefully the wager was for greater than $101 and whatever this does to your taxes.)
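      The arithmetic checks out; a quick sketch to verify the figures quoted above:

```javascript
// Verify the round-trip arithmetic: start with $10,000 and make 101 buy/sell
// round trips (202 trades), each sale at a $1 net loss over its purchase price.
let capital = 10000;
let revenue = 0;
let trades = 0;
for (let i = 0; i < 101; i++) {
  const sale = capital - 1; // sell at a $1 net loss
  revenue += sale;          // cumulative sale proceeds
  trades += 2;              // one buy plus one sell
  capital = sale;           // next purchase uses the last sale's proceeds
}
// trades === 202, revenue === 1004849 (just over $1M), capital === 9899
```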

    1. a complex problem should not be regarded immediately in terms of computer instructions, bits, and "logical words," but rather in terms and entities natural to the problem itself, abstracted in some suitable sense

      Likewise, a program being written (especially one being written anew instead of by adapting an existing one) should be written in terms of capabilities from the underlying system that make sense for the needs of the greater program, and not by programming directly against the platform APIs. In the former case, you end up with a readable program (that is also often portable), whereas in the latter case, what you end up writing amounts to a bunch of glue between existing system components that may not work together in any way comprehensible to the half of the audience that is not already intimately familiar with the platform in question, but is no less capable of making meaningful contributions.

    2. must not consist of a bag of tricks and trade secrets, but of a general intellectual ability

      I often think about how many things like Spectre/Meltdown are undiscovered because of how esoteric and unapproachable the associated infrastructure is that might otherwise better allow someone with a solid lead to follow through on their investigation.

    3. In short, it became clear that any amount of efficiency is worthless if we cannot provide reliability

      Ousterhout says:

      The greatest performance improvement of all is when a system goes from not-working to working

      https://hypothes.is/a/yfo-zAB-EeyUTqcA1iAC2g#https://web.stanford.edu/~ouster/cgi-bin/sayings.php

    4. The law of the "Wild West of Programming" was still held in too high esteem! The same inertia that kept many assembly code programmers from advancing to use FORTRAN is now the principal obstacle against moving from a "FORTRAN style" to a structured style.
    5. The amount of resistance and prejudices which the farsighted originators of FORTRAN had to overcome to gain acceptance of their product is a memorable indication of the degree to which programmers were preoccupied with efficiency, and to which trickology had already become an addiction
    1. Why not add one more category — Personal Wiki — tied to nothing specific, that I can reuse wherever I see fit?

      This is insufficiently explained.

    2. why the heck am I going to write a comment that is only visible from this one page? There are hundreds (maybe thousands) of pages on the internet making use of the fact that there is no clear explanation of this on the web.

      I.

      In You can't tell people anything, Chip Morningstar recalls, 'People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “who’s going to pay to make all those links?”'

      II.

      Ted's vision for Xanadu was that all things connected would be shown to be connected.

      III.

      The people we're supposed to laugh at—the ones asking who's going to create all the links—had it right. There's just no way to enforce Ted's vision—no way for all links to be knowable and visible. That's because Ted's vision is fundamentally at odds with reality, just like the idea of unbreakable DRM. The ability of two people to whisper to one another in the corner of a cozy pub about a presentation you gave to the team during work hours, without your ever knowing that commentary is being exchanged (or even the existence of the pub meeting where it happens), is Xanadu's analog hole.

    1. Meanwhile I’ve been humming dackolupatoni to myself. Haven’t come up with a song yet but it feels like it has “Giacomo fina ney” potential.

      Prisencolinensinainciusol is probably more appropriate.

    2. But I feel like once you truly discover the web, you can’t turn your back on it.

      Empirically, this appears to be untrue.

    3. it would nearly impossible to understand the web if you have never used it

      But TBL is talking about "finding out about" the Web by experiencing it, which is not the same thing as understanding it. As the personal reflection that this post opened with showed, people are capable of experiencing the Web without understanding it.

    1. You need to be in the triplescripts.org group to see this annotation.

      Membership is semi-private, but only as a consequence of current limitations of the Hypothes.is service. Anyone can join the group.

  3. www.research-collection.ethz.ch
    1. Ihavelearnttoabandonsuchattemptsofadaptationfairlyquickly,andtostartthedesignofanewprogramaccordingtomyownideasandstandards

      I have learnt to abandon such attempts of adaptation fairly quickly, and to start the design of a new program according to my own ideas and standards

    1. The two notable exceptions are the Lisp machine operating system, which is simply an extension to Lisp (actually a programming language along with the operating system) and the UNIX operating system, which provides facilities for 'patching' programs together.

      Oberon is a better example of this than UNIX. Internally, the typical UNIX program (written in C) uses function calls and shared access to in-memory "objects", but must use a different mechanism entirely (file descriptors) for programs to communicate.

    1. Explain clipboard macros as one use case for bookmarklets. E.g.:

      navigator.clipboard.writeText(`Hey there.  We don't support custom domains right now, but we intend to allow that in the future.`)
      

      Explain that they can be parameterized, too. E.g.:

      let name = String(window.getSelection());
      navigator.clipboard.writeText(`Hey, ${name}...`);
      

      They can create a menu of bookmarklets for commonly used snippets.
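      It may also be worth showing how a snippet like the ones above becomes an actual bookmarklet: wrap the macro body in an IIFE and URI-encode it into a javascript: URL that can be saved as a bookmark (a sketch; the snippet text is illustrative):

```javascript
// Sketch: turn a clipboard-macro body into a javascript: bookmarklet URL.
// Wrapping in an IIFE keeps the macro from leaking variables into the page.
const body = 'navigator.clipboard.writeText("Hey there.")';
const bookmarklet = "javascript:" + encodeURIComponent(`(() => { ${body} })()`);
```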

    1. we're susceptible to CSP if we try to write an inline script into the manager doc

      Too late! We're already doing that to post the key during the handshake...


    1. Feature request (implement something that allows the following): 1. From any page containing a bookmarklet, invoke the user-stored bookmarklet בB 2. Click the bookmarklet on the page that you wish to be able to edit in the Bookmarklet Creator 3. From the window that opens up, navigate to a stored version of the Bookmarklet Creator 4. Invoke bookmarklet בB a second time from within the Bookmarklet Creator

      Expected results:

      The bookmarklet from step #2 is decoded and populates the Bookmarklet Creator's input.

      To discriminate between invocation type II (from step #2) and invocation type IV (from step #4), the Bookmarklet Creator can use an appropriate class (e.g. https://w3id.example.org/bookmarklets/protocol/#code-input) or a meta-based pragma or link relation.

    2. <title>Bookmarklet Creator</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

      We should perhaps include a rel=canonical or rel=alternate here. But what are the implications for shares and remixes? This also perhaps exposes a shortcoming in Hypothes.is's resource equivalence scheme, cf:

      Maybe the document itself, when the application "wakes up" should check the basics and then insert them as appropriate?

    3. output.addEventListener("click", ((event) => { if (event.target == output.querySelector("a.bookmarklet")) { alert("It's not an ordinary link; you need to bookmark it");

      This should use the registered control service pattern (or something). It's too hard to override this behavior. For example, I could remix the page and remove it, but I should also be able to write a bookmarklet that achieves the same effect.

    1. they walked into that cafe, looked around, and decided I was the easy prey.

      I had a similar feeling in 2016 after a recruitment attempt matching exactly the process described in theogravity's comment on this post. We first came into contact at the grocery store, exchanged numbers, and met up at a coffee shop (a meeting arranged via text message within a couple of days/weeks). I was surprised to find out that it was a MLM something-or-other, then I made it clear that I wasn't interested (and subtly made him feel that he should leave while I stayed and finished the hot cider I ordered). But for weeks afterward, I felt embarrassed and insulted that I must have been giving off some vibe of being easily dupable. I couldn't reconcile it with the fact that in our first "chance" meeting, I mentioned that I'd left Samsung a few weeks earlier and didn't exactly have any concrete work plans and wasn't especially worried about it. Somehow, though, I guess I was perceived as being susceptible to some get-rich-quick nonsense...?

      Having said that—the author here recounts four meetings and still no revelation about the specifics of what their agenda was. That doesn't sound like "easy" prey to me. That's a pretty involved trap, assuming it is one.

    2. And again, there were these repeated implications that we were special, that we were deeper than other people.

      Maybe they were Fourth Dimensionists. (NB: Not actually a book I can recommend.)

    1. “I didn’t mean to back into your car in the parking lot.” Or, “I didn’t intend to hurt you with my remarks.”

      Worth noting that one of these is concrete (and concretely bad) and avoidable, the other one less so. It even permits claims that are unfalsifiable.

    2. tenet

      should be "tenant"

    3. “We judge ourselves by our intentions and others by their behavior.” Stephen Covey This statement, made by author Stephen Covey

      It doesn't look like it; seems that Covey actually wrote "motives", not "intentions", and the maxim is not original to him https://quoteinvestigator.com/2015/03/19/judge-others/

    1. Pop up with a mouse hover, effortless

      Mmm… no, it just takes moderately less effort than clicking. And it doesn't apply to mouseless form factors.

    1. You don’t need semantic triples, you just need links

      What can triples do that you couldn't do with pairs?
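      One answer, sketched below with illustrative data: a bare link is an untyped pair, while a triple names the relation. Pairs can emulate a triple only by reifying the link itself as a node, at the cost of indirection.

```javascript
// A plain hyperlink is an untyped pair: it says "A points to B" and no more.
const pair = ["PageA", "PageB"];

// A semantic triple additionally names the relationship.
const triple = ["PageA", "cites", "PageB"];

// Pairs can recover that expressiveness by reifying the link as a node
// ("link42") and hanging three pairs off of it—but now every typed link
// costs three untyped ones plus an extra identifier.
const reified = [
  ["link42", "PageA"],
  ["link42", "cites"],
  ["link42", "PageB"],
];
```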

    1. Reading the nodes straight through from top to bottom of the index will result in one kind of landscape for the text, but not the only or probably the best
    1. In Firefox, click "Bookmarks," then select "Bookmark this Page"

      Fails if the page is already bookmarked.

    1. This is pretty much a requirement if you intend to implement something like custom tooltips on your site which need to be dynamically absolutely positioned.

      wat

    2. Unless unsafe-inline is set on style-src, all inline style attributes are blocked.

      This is where things really go off the rails. There is no legitimate use case for this regime no matter how much people have looked for a reason to justify it or use sleight of hand to make it seem appropriate.

    1. So far it works great. I can now execute my bookmarklets from Twitter, Facebook, Google, and anywhere else, including all https:// "secure" websites.

      In addition to the note above about this being susceptible to sites that deny execution of inline scripts, this also isn't really solving the problem. At this point, these are effectively GreaseMonkey scripts (not bookmarklets), except initialized in a really roundabout way...

    2. work-around

      Bookmarklets and the JS console seem to be the workaround.

      For very large customizations, you may run into browser limits on the effective length of the bookmarklet URI. For a subset of well-formed programs, there is a way to store program parts in multiple bookmarklets, possibly loaded with the assistance of a separate bookmarklet "bootloader", although this would be tedious. The alternative is to use the JS console.

      In Firefox, you can open a given script that you've stored on your computer by pressing Ctrl+O/Cmd+O, selecting the file as you would in any other program, and then pressing Enter. (Note that this means you might need to press Enter twice, since opening the file in question merely puts its contents into the console input and does not automatically execute it—sort of a hybrid clipboard thing.) I have not tested the limits of the console input for e.g. input size.

      As far as I know, you can also use the JS console to get around the design of the dubious WebExtensions APIs—by ignoring them completely and going back to the old days and using XPCOM/Gecko "private" APIs. The way you do this is to open about:addons by pressing Ctrl+Shift+A (or whatever), open or paste the code you want to run, and then press Enter. This should, I think, give you access to all the old familiar Mozilla internals. Note, though, that all bookmarklet functionality is disabled on about:addons (not just bookmarklets that would otherwise violate CSP by loading e.g. an external script or dumping an inline one on the page).
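      The multi-bookmarklet "bootloader" idea above can be sketched like this (names and source fragments are hypothetical): each "part" bookmarklet stashes a string of source, and a final bookmarklet concatenates and evaluates them.

```javascript
// In the browser, each "part" bookmarklet would do something like:
//   window.__parts = window.__parts || [];
//   window.__parts[0] = "...source fragment...";
// Here the stash is simulated with a plain array.
const parts = [];
parts[0] = "const greet = (name) => `Hello, ${name}`;";
parts[1] = "greet('bootloader');";

// The "bootloader" joins the fragments in order and evaluates them as one
// program. This only works for fragments split on statement boundaries—
// hence "for a subset of well-formed programs" above.
const result = eval(parts.join("\n"));
```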

    3. CSP is taking away too much of the user's power and control over their browser use
    4. Apparently there is a CSP ability to stop inline scripts from executing. I have not come across any sites that use that feature and/or the browser I am using does not support it.

      There're lots.

    1. The SE server is also responsible for building ebooks when they get released or updated. This is done using our ebook production command line toolset.

      It would be great if these tools were also authored to be a book—a comprehensive, machine-executable runbook.

    2. inscrutible

      Should be "inscrutable".

    3. There's way too much excuse-making in this post.

      They're books. If there's any defensible* reason for making the technical decision to go with "inert" media, then a bunch of books has to be it.

      * Even this framing is wrong. There's a clear and obvious impedance mismatch between the Web platform as designed and the junk that people squirt down the tubes at people. If there's anyone who should be coming up with excuses to justify what they're doing, that burden should rest upon the people perverting the vision of the Web and treating it unlike the way it's supposed to be used—not folks like acabal and amitp who are doing the right thing...

    4. Everything is rendered server-side before it reaches your browser.

      How about removing the "rendering" completely and making it a static site?

    1. This article is crazy. Ostensibly, it's about a somewhat disappointing Emacs package (Emacs menus), but it's filled with all sorts of asides that are treasures.

    1. future software development should increasingly be oriented toward making software more self-aware, transparent, and adaptive

      From "Software finishing":

      once software approaches doneness[...] pour effort into fastidiously eliminating hacks around the codebase [...] presents [sic] the affected logic in a way that's clearer [...] judiciously cull constructs of dubious readability[...]

      one of the cornerstones of the FSF/GNU philosophy is that it focuses on maximizing benefit to the user. What could be more beneficial to a user of free software than ensuring that its codebase is clean and comprehensible for study and modification?

    2. When something goes wrong with a computer, you are likely to be stuck. You can't ask the computer what it was doing, why it did it, or what it might be able to do about it. You can report the problem to a programmer, but, typically, that person doesn't have very good ways of finding out what happened either.
    3. Another obstacle is the macho culture of programming.
    4. Lieberman, H., Guest Ed. The debugging scandal special section. Commun. ACM 40, 3 (Mar. 1997).

      That should be CACM 40, 4.

      The article can be found here: https://cacm.acm.org/magazines/1997/4/8423-introduction/abstract

      Also available through Lieberman's homepage: https://web.media.mit.edu/~lieber/Lieberary/Softviz/CACM-Debugging/#Intro

    5. Fry's Law states that programming-environment performance doubles once every 18 years, if that.
    6. software will work only if we provide the tools to fix it when it goes wrong
    1. Knowledge work should accrete

      Related: I'm fond of the position that technological progress should be cumulative—we should avoid churn.

    1. I end up with responsibility (friends complaining to me about this, that, and the other) without control (I can't affect any of those things)

    Tags

    Annotators

    1. Email is local-first.Email is social. You own your own social graph.

      See also Email is your electronic memory. (NB: Not an endorsement of Fastmail. The company is apparently full of assholes-in-tech-type dudes.)

    2. What if every button in your app had an email address? What if apps could email each other?

      Also, what if every app could email you? I mean the app itself—not the product team.

    1. it's minified before encoding the link (with encoding) is only 224 characters, instead of 337

      Not even close to the dumbest thing I've ever read, but still very, very dumb.

    1. quietly interesting
    2. I think there are some systems design insights here that might be valuable for p2p, Web3, dweb, and other efforts to reform, reboot, or rethink the internet.

      Indeed. Why did Dat/Beaker/Fritter fizzle out? Answer: failure to exapt.

    3. Software has more cybernetic variety than hardware
    4. This is possible because the internet isn’t designed around telephone networking hardware. It isn’t designed around any hardware at all. Instead, the internet runs on ideas, a set of shared protocols. You can implement these protocols over a telephone, over a fiberoptic cable, or over two tin cans connected with string.
    5. Infrastructure has decades-long replacement cycles
    1. Scale is also killing open source, for the record. Beyond a certain codebase size or rate of churn, you need to be a mega-corp to contribute to a nominally open-source project.

      What Stephen ignores is that the sort of software that this applies to is pretty much limited to the sort of software that only megacorps are interested in to begin with. As Jonathan pointed out in the post, most "user-facing" software is still not open source. (Side note: that's the real shame of open source.)

      Who cares how hard it is to contribute to the kinds of devops shovelware that GitHubbers have disproportionately concerned themselves with?

    1. The difficulty encountered by authors today when they create metadata for hypertexts points to the risk that adaptive hypermedia and the semantic Web will be initiatives that fit only certain, well-defined communities because of the skills involved
    2. The cognitive overhead that is introduced when the user needs to input a formal representation that collides with his immediate task-dependent needs has to be minimized.
    1. Here's a xanalink to a historical piece-- Here's a xanalink to a scientific work-- Here's a xanalink to an early definition of "hypertext"

      No, no, and no. This is an interesting fallout from Ted's fatwa that links be kept "outside the file".

    1. tools for thought

      The origin of this phrase, as I understand it, is that it comes from Rheingold's 1985 book "Tools For Thought", which includes a chapter on Engelbart (among other people e.g. Licklider, etc.).

      The book is available online: http://www.rheingold.com/texts/tft/

  4. Mar 2022
    1. Why spend time, effort and ultimately money on improving productivity when you can just get stuff for free?

      As a rejoinder, have you ever undertaken any serious endeavor to work out exactly how difficult it would be to pay for the software you use that happens to be open source? Massively.

      It's hard enough getting people who sell SaaS/PaaS stuff to let you pay them a fair price for letting you use their "free" tier if none of their other (often pricey, enterprise-/B2B-oriented) plans offer you anything of value—and these are entities that already have payment processing infrastructure set up!

    2. This is a surprisingly bad piece of writing (i.e. thought), given the author.

    3. Open source strongly favors maintenance and incremental improvement on a stable base. It also encourages cloning and porting. So we get an endless supply of slightly-different programming languages encouraging people to port everything over.

      Huh? The argument here is that what's killing the progress of software development is... forks of programming languages? We live in different worlds.

    4. Open source is the ideology that all software should be free.

      It's weird that in 2021 this canard associated with conflating libre and gratis is still showing up.

    1. she became wise for a world that's no longer here

      This might just be true for Tim's grandma, but something people neglect generally is that you can find just as much bad advice elsewhere—where wisdom from another time is not the reason. Some people just have no idea what they're talking about—and they're never going to be subjected to a fitness function that culls the nonsense they dispense.

    1. Many of the items in the docuverse are not static, run-of-the-mill materials, i.e. unformatted text, graphics, database files, or whatever. They are, in fact, executable programs, materials that from a docuverse perspective can be viewed as Executable Documents (EDs). Such programs run the gamut from the simplest COBOL or C program to massive expert systems and FORTRAN programs. Since the docuverse address scheme allows us to link documents at will, we can link together compiled code, source code, and descriptive material in hypertext fashion. Now, if, in addition, we can prepare and link to an executable document an Input-Output Document (IOD), a document specifying a program's input and output requirements and behavior, and an RWI describing the IOD, we can entertain the notion of integrating data and programs that were not originally designed to work together.
    2. Real-world Interpretation (RWI)
    1. If you happen to annotate page three, and then weeks or years later visit the single page view wouldn’t you want to see the annotation you made? If the tool you are using queries for annotations using only the URL of the document you are viewing you won’t see it.
    2. A primary challenge is that documents, especially online ones, change frequently.

      But they don't. They get re-issued (https://hypothes.is/a/FWEDwInLEeyhtVeF1gArUw) and their associated identifier (URI) gets re-used for the new version, and the original issues get routinely "disappeared". This is a failure to carry out TBL's original vision for the Web in practice–where every article is given a name that can be systematically (mechanically) resolved, or if not that, then at least used as a handle to unambiguously refer to the thing.

    3. anarchic and populist
    4. this page
    5. On-Demand Web Archiving Project

      Perversely, this link is a 404.

    6. With Hypothesis, you can add suggestions and additions as an overlay on current content easily and quickly.  For example, you can provide proper citations or additional information on a topic, note grammatical errors or factual inaccuracies. Experienced Wikipedia editors can then follow up and work with you to add your recommendations to the article.

      The problem with this, generally, but esp. affecting wikis in particular, is that you end up with orphaned and irrelevant/out-of-date annotations.

      Hypothes.is should select an appropriate link relation (in the vein of what it now does with canonical) and scope the annotation appropriately—even if the user does not actually have his or her browser pointed at the exact revision that is "current".

    1. I wish education was based around this principle.

      This is a recurring grievance of mine with the way most people approach writing documentation. Closely related: code comments should explain the why, not the what (or worse the how—which just ends up rehashing the code itself).

      Too many people try to launch right in and explain what they're doing—their solution to a problem—without ever adequately outlining the problem itself. This might seem like too much of a hassle for your readers, but often when the problem is explained well enough, people don't actually need to read your explanation of the solution.

    1. wabac.js 1.0 also included a built-in UI component. This version is still available at https://wab.ac/

      Nah.

    1. What is a document in this world? A list of block addresses.If I read it right, this is more or less how Roam’s block-based hypertext works.

      This is also, more or less, how Git works, albeit at a different granularity. In Git, you want your working directory to include a series* of files. The Git "database" knows about a bunch of different file contents (blobs) which could be in your working directory, and the Git command-line tool puts such a collection together from a tree of those parts. With the Nelson-inspired "thought legos" described here, the files themselves comprise a list of more granular objects further still, arranged in a specific order* to form a document.

      * This might be considered another difference: we generally don't think of a directory of files as being ordered—we do make use of various machine-sorted views, e.g. inside a file explorer, but that ordering is not inherent to the model itself, just how a given client chooses to display it.

    1. While these two annotations depend upon one another in order to make sense and point to different parts of the documentation, Hypothesis does not allow for these annotations to reference one another, suggesting a need for better tooling support for multiple anchors for annotations
    1. Native Cocoa interface for macOS Qt based interface for Linux.

      Let's just all agree that Cocoa should be the cross-platform interface toolkit. The time is better than ever, given the Oracle v. Google decision, and considering that SwiftUI has materialized, Apple is less likely to regard it as "theirs", anyway.

    1. Streaming services made long days and nights of confinement more bearable, delivering content to our devices with video streaming services reaching 1.1 billion global subscribers in 2020. Netflix alone added 36 million subscribers.

      Netflix is not particularly "Webby", though...

  5. citeseerx.ist.psu.edu citeseerx.ist.psu.edu
    1. The complete overlapping of readers’ and authors’ roles are important evolution steps towards a fully writable web, as is the ability of deriving personal versions of other authors’ pages.
    2. Writing for the web is still a complex and technically sophisticated activity. Too many tools, languages, protocols, expectations and requirements have to be considered together for the creation of web pages and sites.
    1. I chose HTML not to be a programming language

      This sort of thing comes up a lot and results in really stupid discussions—almost as stupid as the late '90s/early '00s tendency to distinguish between "programming languages" and "scripting languages". Say hello to category error. (Worse, I once ran across someone insisting that inert sequences written in e.g. declarative languages cannot be referred to as "code".)

      The distinction people want to make in the case of HTML versus other languages tends to be "markup languages" as distinct from languages that are regularly used for doing von Neumann-style computation. Sometimes, then, people will offer up a reference to the term "Turing-complete", but this is off, because often things are accidentally Turing-complete.

      I think this can be tamped down by using a more descriptive and more accurate term, and by avoiding "programming language" altogether when trying to make categorical distinctions. We could call them "algorithmic languages", for example, which would be shorthand for "languages whose primary intent is to be used to encode stored programs featuring algorithmic control".

    1. For example, Figure 1 shows a screenshot of a news site. If we look at the visual rendering of the page, the segments are clearly differentiated. For example, one can easily look at the visual rendering and can differentiate the header which has red background colour, top menu which has black and grey background colour, headline news which has a bigger image and a larger text for the header, etc. However, when we look at the source code we cannot see such kind of clear segmentation

      Something to consider regarding TBL's comments in the Axioms of Web Architecture:

      If, for example, a web page with weather data has RDF describing that data, a user can retrieve it as a table, perhaps average it, plot it, deduce things from it in combination with other information. At the other end of the scale is the weather information portrayed by the cunning Java applet. While this might allow a very cool user interface, it cannot be analyzed at all. The search engine finding the page will have no idea of what the data is or what it is about. The only way to find out what a Java applet means is to set it running in front of a person.

      https://www.w3.org/DesignIssues/Principles.html#PLP

    1. This paper is from the Proceedings of the Eighteenth Conference on Hypertext and Hypermedia (HT '07) https://doi.org/10.1145/1286240.1286303.

    2. One document is assumed to be one file-- a strange requirement, but it reinforces the one-way linkage restriction currently in place on the World Wide Web.

      See "From PDF to PWP: A vision for compound web documents" https://blog.jonudell.net/2016/10/15/from-pdf-to-pwp-a-vision-for-compound-web-documents/

      I'd argue that the property described is not a property of the Web, to its discredit. Udell's piece gives a good overview of why it would be desirable if it were, contra Nelson et al.

    3. (Since the HTML file contains the links, they can only point outward.)

      Huh?

    4. The standard paradigm of files, unquestioned almost everywhere, drastically affects how everything works.

      See more of Ted on files in "The Tyranny of the File", Datamation, 1986 December 15.

    1. Why Is HCI Deprioritized?

      The author omitted cognitive errors and stupid macho bullshit.

    2. the complexity is not intellectually rich. Substantial coordination between tools is required to perform simple operations. This busywork draws from a finite pool of cognitive resources
    1. if you can grant semi-permanent access to a folder - then web applications can also be local-first applications: you can own your data, in spite of the cloud

      I've looked a lot into the file-saving problem over the years. Tom describes the problem in a succinct enough way:

      Let’s say you’re building a graphics editor and someone can save their work as file.svg. If this were a desktop application, they could press ⌘s to save again. Or you might even auto-save to the file. You can’t do this on the web: every time you save a file, it’ll go through the same “Save as…” workflow, which asks you to choose a location and confirms if you overwrite an existing file.

      The reality is that this is cumbersome, but it's not a showstopper. Any application author caring to let you run local-first applications where you own the data can do so.

      The effect that the new file system API is likely to have is to see it abused to the point that lots of people lose even more control of their data and (because of that) others are wary of granting permissions at all, due to overbroad privilege requests. The status quo, even cumbersome as it is, is actually more of an enabler for user-controlled, local-first applications and strong data ownership "rights".

      The API designers should step back from the current kitchen sink design for file access and narrowly address the specific ergonomics issues described in the quoted bit relating to ⌘s and file overwrites.

      (One workaround that I've come up with for the file overwrite problem is to pack up opaque "checkpoint" files and educate the user that the correct way to work is to save them all to a common directory. You generate a unique name each time, so even though the user must explicitly do a little bit of a dance with the system filepicker dialog, they are never prompted with the "[Are you sure you want to overwrite...?]" warning. The dance with the filepicker is also minimal, since the browser remembers the last location, so it more or less amounts to an explicit notification that the app is saving new data, with an optional way for the user to seize control and write it elsewhere if they choose. When it comes time to open up a previous project, rather than picking a single file, they point at the directory containing all the checkpoint files. The app reads these in and reconstructs the state. [Side note: the ZIP format's internal data structures work passably for this. The concatenation of the content of all the checkpoint files comprises the ZIP bytestream.] It sounds janky, but on reflection this is really only true because it violates some expectations about how things should work based on familiarity with other workflows. Doing things this way also means that you can get version history almost for free—something that otherwise takes a lot of intentional and clever design and generally only shows up in mature apps with lots of engineering resources. But, to repeat: using this "ZIP-chunks" approach, you get it as a byproduct of having to work within the constraints of the browser sandbox. Sometimes constraints are useful for creation.)
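      The unique-name trick can be sketched as follows. The naming function is runnable anywhere; the picker call (browser-only File System Access API) is shown as a comment, and the sequence counter is a hypothetical detail of the app.

```javascript
// Generate a fresh checkpoint filename per save; because the name never
// repeats, the browser's overwrite confirmation never appears.
const checkpointName = (seq) =>
  `checkpoint-${Date.now()}-${String(seq).padStart(4, "0")}.bin`;

// In the browser, each save would then look something like:
//   const handle = await window.showSaveFilePicker({
//     suggestedName: checkpointName(seq++),
//   });
//   const writable = await handle.createWritable();
//   await writable.write(chunkBytes);
//   await writable.close();
```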

    1. front-end dumpster fires, where nothing that is over 18 months old, can build, compile or get support anymore. In my day job, I inherit "fun" tasks as 'get this thing someone glued together with webpack4 and frontend-du-jour to work with webpack5 in 2022
    1. Literate programs allow you to answer these questions naturally.

      And in environments without tangle and weave, you can get pretty far by just making sure to write code top-down.

    1. An element in XML (in particular XUL and HTML elements) can point to a specific binding in an XBL file. The location of this file is specifiable using CSS.

      Very unfortunate design choice. This was always a rough spot dealing with XBL.

    1. A third quality that we think is necessary for end-user programming is an in-place toolchain. The user should be able to edit their programs without installing additional tools or programs.
    1. it's usually due to the misapplication of healthy open source principles

      The effect of handling open source the way it's popularly practiced on GitHub does not get nearly enough scrutiny for its role in e.g. maintainer burnout. Pretty much every project I see on GitHub does things that are obviously bad—or at least it should be obvious*—and neither are they sustainable, nor even a particularly good way to try to get work done, assuming that that's your imperative. It is probably the case, however, that that assumption is a bad one.

      * I've slowly come to realize and accept that this stuff is not obvious to lots of people, because it's all they know—whether that means that that should affect whether its negative consequences are obvious is something that I'm inclined heavily to argue that "no, it shouldn't affect it; it should still be obvious that it's bad even then", but empirically, it seems that my instinct is wrong.

    1. I tried building Firefox once but I wasn't able to, it's slightly outside of my competences at the moment
    1. Need better evidence for cosmic unfairness and against the just-world fallacy?

      It's a shame that such a high earner can't/doesn't spot their error in associating $33,000 of "annual post-tax household income" with "$33,000 USD/month" (even after encountering surprising results that one would expect to cause them to question themselves).

      I wonder how many people who could benefit from the suggested $39,600 donation wouldn't have made the same mistake (and could go on to create that much value many times over for humanity—maybe even greater value than that amount to the donor themselves, indirectly).

    1. Don’t read the code before learning to build the project. Too often, I see people get bogged down trying to understand the source code of a project before they’ve learned how to build it. For me, part of that learning process is experimenting and breaking things, and its hard to experiment and break a software project without being able to build it.
    1. why would step 1 be "become a user"? I ask because I don't fully grok why I would want to contribute to a project I don't use

      Then it sounds like you're in full agreement that step 1 should be "become a user". So why is this comment written as if it dissents? (The question should be worded "Under what circumstances would step 1 be anything other than 'become a user'?", to avoid sounding like criticism of what is actually a shared belief.)

    1. YAGNI (You Ain’t Gonna Need It) trumps DRY (Don’t Repeat Yourself)

      When people err on the side of advising for things to be overly DRY, I prefer to counter it by advocating for TRY (Try Repeating Yourself). Only after you've actually tried the copy-and-paste approach and have run into actual (vs imagined) issues should you refactor more for the DRY side.

      See also: "A little copying is better than a little dependency." * https://go-proverbs.github.io/ * https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s

  6. people.csail.mit.edu people.csail.mit.edu
    1. Lesson: avoid use of this; work around by defining that

      No. Actual lesson: don't do crazy shit; if you're trying to code defensively against this, then your code is too hard to understand, anyway, and you've got bigger problems. (See also: overuse of triple equals and treating it like an amulet to ward off unwanted spirits.)

    2. Lesson: design constructors that don't use new

      wat

    3. Lesson: use === and !==

      No. Actual lesson: know your types.

    1. this study found that only 25% of Jupyter notebooks could be executed, and of those, only 4% actually reproduced the same results

    Tags

    Annotators

    1. but they would have to find it for it to be of any use to them, and this is something I only have so much energy to advertise.

      This is where widespread awareness of annotations would be useful. A service like Hypothes.is inherently functions as a sort of hub for relaying gossip about a given resource.

    2. That’s a time savings of several orders of magnitude, but what would it take to also relieve me (or whoever) of this burden? Probably not much more than the initial effort, if it was done in the right place.

      the need for an ombudsman or viable "fanclub economy"

    3. Indeed, it almost certainly took longer to write this code than it would have taken to just transcribe the information by hand directly into iCal.
    4. the department knew it was happening and almost certainly has a master calendar somewhere that features months' worth of programming for every facility under its control. So a couple questions: Why so stingy with that data? Why not publish it in a format that people will actually use? This information is not very useful on some random, deep-linked webpage
    5. Linked data makes it possible to completely decouple computable information from the system that ordinarily houses it.
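
      As a sketch of what "decoupled, computable information" could look like here, a schema.org Event serialized as JSON-LD (all event details hypothetical); any consumer that speaks the vocabulary can reuse it without scraping the page that happens to display it:

      ```javascript
      // A hypothetical facility event expressed as linked data (JSON-LD).
      // Nothing about it is tied to the system that ordinarily houses it.
      const event = {
        "@context": "https://schema.org",
        "@type": "Event",
        name: "Open Swim",
        startDate: "2022-04-01T10:00:00-05:00",
        location: { "@type": "Place", name: "City Pool" },
      };
      console.log(JSON.stringify(event, null, 2));
      ```
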
    1. Isn't this piece a long-winded way of saying that the adage "If you want something done right you have to do it yourself" is accurate?

    2. you'll often see people claiming "that's just how the world is" and going further and saying that there is no other way the world could be
    3. for the most part, people bought the party line and pushed for a migration regardless of the specifics

      Tailorism requires Taylorism. If you fail at the latter, you'll never be able to reason accurately about the former.

    4. he assumed that someone whose job it is to work on air conditioners would give him non-terrible advice about air conditioners

      He made the mistake of not realizing that ninety percent of everything is crap. ("Everything" includes jobs filled by people who may or may not have any competence in spite of their insider status.)

    5. there also isn't a mechanism that allows actors in the system to exploit the inefficiency in a way that directly converts money into more money
    6. It's a truism that published results about market inefficiencies stop being true the moment they're published because people exploit the inefficiency until it disappears.

      I learned from Scott Alexander that these systems are "anti-inductive".

    7. markets enforce efficiency, so it's not possible that a company can have some major inefficiency and survive

      A memo to myself last week on this topic: people—even smart ones—still have trouble wrangling the implications of natural selection.

      Natural selection does not mean survival of the fittest. It means survival of the good enough.

      There was more to it than that, but funnily enough, there's this part:

      Dan Luu writes of Twitter and companies generally that they should continue to hire so long as it makes financial sense, and what makes financial sense is defined by an equation where savings at the margin is one component

      Understanding this is important, not just for the reasons that Dan mentions in the linked ("I could do that in a weekend!") piece, but because it gives companies a budget for inefficiency.

      Companies can make non-optimal decisions. (Deleterious ones, even.)

      Instant death is not really a thing, although most misunderstandings demand that it is (as in the case of the parable of the "non-existent" 20-dollar bill). This is the timeless fallacy.

      When I was working this into a starter for a potential standalone piece, I drafted the following passage, hoping to use it as an intro:

      This is not a piece about biology. It is not even a piece about humans' understanding of biology. It is a piece about humans and humans' inability to grapple with particularly counterintuitive concepts (like natural selection).

      I also jotted down the phrase "cognitive fluency".

    1. if you're dominating such a market, you can make a massive number of bad decisions

      Market domination is not even a precondition. Of course, non-market-dominance plus bad decisions won't beget dominance, but so-so organizations can continue to be so-so for a really long time.

    1. should the CSS WG even care that a third-party code base’s syntax gets trampled on

      No.

    1. A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses.

      This is a great piece of writing with an extremely unfortunate title. On review, it's shown to describe a model with fairly rigorous definitions.

    1. The truth of the matter is that people do most of their computing on phones.People conduct serious business on phones. Mobile apps and sites are more important to products that we would all assume are “sit down at a desktop computer” type of products.Any effort you put into a desktop-only app is basically leaving most of the money on the table
    1. There was at one time a leaderboard.

      Key word "was". Perversely, now it's a 404.