3,465 Matching Annotations
  1. Jul 2022
    1. I have 35 MB of node_modules, but after webpack walks the module hierarchy and tree-shakes out all module exports that aren't reachable, I'm left with a couple hundred kilobytes of code in the final product.

      This directly contradicts the earlier claim that irreducible complexity is the culprit behind the size of the node_modules directory.

      35 MB → "a couple hundred kilobytes"? Clear evidence of not just reducibility but a case of actual reduction...
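
      For anyone unfamiliar with what tree-shaking actually does, here's a minimal sketch (hypothetical module names; any bundler with dead-code elimination, e.g. webpack in production mode, behaves roughly this way):

        // util.js: two exports, only one of which is ever imported
        export function usedHelper(x) {
          return x * 2;
        }
        export function neverCalled() {
          // imagine a large dependency graph reachable only from here
        }

        // app.js: the bundler's entry point
        import { usedHelper } from "./util.js";
        console.log(usedHelper(21));

      Because neverCalled is unreachable from the entry point, the bundler can drop it, along with everything only it pulls in, from the final output. Scale that up across a node_modules tree and 35 MB of installed code collapsing to a few hundred kilobytes of shipped code is unremarkable.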

    1. The early phase of technology often occurs in a take-it-or-leave-it atmosphere. Users are involved and have a feeling of control that gives them the impression that they are entirely free to accept or reject a particular technology and its products. But when a technology, together with the supporting infrastructures, becomes institutionalized, users often become captive supporters of both the technology and the infrastructures.

      the illusion of preference-revealing actions

    1. In terms of this analogy, a lot of objections to end-user programming sound to me like arguing that Home Depot is a waste of time because their customers will never be able to build their own skyscrapers. And then on the other side are the people arguing that people will be able to build their own skyscrapers and it will change the world. I just think it would be nice if people had the tools to put up their own shelves if they wanted to.
    2. It took me an hour to rewrite my ui code and two days to get it to compile. The clojurescript version I started with miscompiles rum. Older clojurescript versions worked with debug builds but failed with optimizations enabled, claiming that cljs.react was not defined despite it being listed in rum's dependencies. I eventually ended up with a combination of versions where compiling using cljs.build.api works but passing the same arguments at the command line doesn't.
    1. I recently started building a website that lives at wesleyac.com, and one of the things that made me procrastinate for years on putting it up was not being sure if I was ready to commit to it. I solved that conundrum with a page outlining my thoughts on its stability and permanence:

      It's worth introspecting on why any given person might hesitate to feel that they can commit. This almost always comes down to "maintainability"—websites are, like many computer-based endeavors, thought of as projects that have to be maintained. This is a failure of the native Web formats to appreciably make inroads as a viable alternative to traditional document formats like PDF and Word's .doc/.docx (or even the ODF black sheep). Many people involved with Web tech themselves have difficulty conceptualizing Web documents in these terms, which is unfortunate.

      If you can be confident that you can, today, bang out something in LibreOffice, optionally export to PDF, and then dump the result at a stable URL, then you should feel similarly confident about HTML. Too many people have mental guardrails preventing them from grappling with the relevant tech in this way.

    2. if I died today, thoughts.page would probably only last until my credit card expires and DigitalOcean shuts down my servers

      I've noted elsewhere that NearlyFreeSpeech.Net has a billing system where anyone can deposit funds for a hosted account. That still leaves the matter of dealing with breakage, but for static sites, it should work more or less as if on autopilot. In theory, the author could die and the content would remain accessible for decades (or so long as fans have the account ID and are willing to continue to add funds to it), assuming the original registrant is also hosting their domain there and has auto-renewal turned on.

    3. Trying to keep websites around forever is struggling against the nature of the web.

      As above, I think this is more a consequence of struggling against the nature of the specific publishing pipelines that most people opt for. Many (most) Web-focused tech stacks are not well-suited to fulfill the original vision of the Web, but people select them as the foundation for their sites, anyway.

    1. it's very easy to measure how many github back and forths people have

      Bad example. The way most GitHub-adjacent subjects are handled, and the overhead involved, is already evidence that most people are not interested in operational efficiency, let alone in measuring it to figure out how to do it better.

    2. computation it's the most important cost the e right how much does it cost to execute this thing on the end user's computer

      It's very hard to take this seriously from someone whose main endeavor is making video games. Essentially every CPU cycle ever spent running a game was superfluous.

    3. This is as good an example as any of why I'm not a fan of Casey Muratori.

      I'm 25% of the way through this video (a "lecture")—10+ whole minutes—and he hasn't said anything insightful or of any substance whatsoever. He certainly communicates that he has strong opinions, and expresses them (I guess?) in a very emphatic way, but holy shit, dude. What is your point? Say something that makes sense. Hell, just say anything at all.

  2. Jun 2022
    1. A story: when I wanted to meet with a really busy friend of mine in SF, I first sent him 2 twitter DMs, then 2 emails, and then 3 text messages, letting him know that I will keep sending one text a day, until an email from him finally landed in my inbox letting me know that he would love to get lunch.

      This whole piece is filled with this, but this story in particular comes across strongly as "I'm happy to impose my habits upon you." It's obnoxious.

    1. There’s not much implementations can do, and it’s up to the debugger to be smarter about it.

      This is fatalistic thinking.

      Here's what both implementations and debuggers can do together:

      1. implementations can document these things
      2. debuggers can read the documentation and act accordingly

    1. This is a great, ancient browser feature, but as developers we often engineer it away.

      Right. Leading to such questions as, "If you're putting all this effort in just to get things wrong and end up with something that's worse than what you get for free—worse than just doing nothing at all—then what is your actual contribution?"

    1. What they didn’t consider is that Google had a crack team of experts monitoring every possible problem with SPAs, right down to esoteric topics like memory leaks.

      I've had conversations where I had to walk other people through why garbage collection doesn't mean that memory use is automatically a solved problem—that you still have to be conscious of how your application uses memory and esp. of the ownership graph. They were fully under the illusion that steady growth of memory was just a non-issue, an impossibility in the world of garbage collectors.
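
      A minimal sketch of the kind of thing I mean (hypothetical browser code): the collector can only reclaim what's unreachable, so a structure the program keeps reachable grows forever, GC or not.

        // A long-lived cache with no eviction policy and keys that never repeat.
        const responseCache = new Map();

        window.addEventListener("message", (event) => {
          responseCache.set(`${event.origin}:${Date.now()}`, event.data);
        });

        // Every entry stays reachable through responseCache, so none of it is
        // "garbage" as far as the collector is concerned. Memory use climbs
        // steadily until something owns the cache's lifecycle (eviction, a
        // WeakMap/WeakRef scheme, removing the listener, etc.).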

    2. web dev culture war

      What the linked piece is not: an analysis of the web dev's self-interested perspective on fairness, economic balance, and the implicit societal mandate for their skillset/work product (and the associated subsidies they benefit from and fight to maintain).

    3. Want to animate navigations between pages? You can’t (yet). Want to avoid the flash of white? You can’t, until Chrome fixes it (and it’s not perfect yet). Want to avoid re-rendering the whole page, when there’s only a small subset that actually needs to change? You can’t; it’s a “full page refresh.”

      an impedance mismatch, between what the Web is (infrastructure for building information services that follow the reference desk model—request a document, and the librarian will come back with it) versus what many Web developers want to be (traditional app developers—specifically, self-styled product designers with near 100% autonomy and creative control over the "experience")—and therefore what they want the Web browser to be (the vehicle that makes that possible, with as little effort as possible on the end of the designer–developer)

    1. First thing I noticed is that I spent a bunch of time writing tests that I later deleted. I would have been better off writing the whole thing up-front and just doing end-to-end tests.

      need for cheaper throwaway tests

    1. the expected lifespan of even very successful SaaS companies is typically much shorter than the lifespan of personal data

      A strength of boring tech that relies on the traditional folders-of-files approach, incl. e.g. the byproducts of using office suites.

    2. I suspect because most software is optimized for industrial use, not personal use. For industrial uses the operations overhead is not a big deal compared to the development and operational efficiency gained by breaking things up into communicating services. But for personal uses the overwhelming priority is reducing complexity so that nothing fails.
    1. Yesterday evening (London, UK time) something amazing happened which can best be described in a single picture:

      That doesn't describe anything. It's a grid of GitHub avatars. What am I supposed to be seeing?

    1. In saner bugtrackers like e.g. Bugzilla the community is empowered to step in and triage bugs. GitHub infamously chose to go for a "simpler" system with fewer frills, pomp, and circumstance. Perversely, this has the opposite of the intended effect. The net result is that for the community to have these powers, the project owner has to explicitly grant them to individual users, which is considered to be a lot more heavyhanded/ceremonial than how it works on bugzilla.mozilla.org.

      I'd have no problem, for example, stepping in and enforcing these things if it weren't such a chore to go through the ceremony of getting approval to police this kind of stuff. GitHub's lackluster approach to user privacy, of course, doesn't help.

    1. Signal, which is damn good crypto work, announced MobileCoin support, and I stopped donating, bummed.

      Signal trades on some other stuff of dubious* merit, like the "guarantees" of SGX, and does other user-hostile stuff: requiring a PIN, doing user data backups without permission, locking out third-party clients... (What's worse is that the latter is excused as being the only way to reliably enable the undesirable "enhancements").

      * Even calling it merely "dubious" here is pretty generous.

    1. I've come to much the same conclusion, I think: to wit, most people secretly enjoy their problems and suffering.

      See also, from Brooke Allen's "How to hire good people instead of nice people" https://brookeallen.com/2015/01/14/how-to-hire-good-people-instead-of-nice-people/:

      I won’t get between you and your dreams. If you have a dream, I need to know what it is so we can figure out if this job gets you closer. If you don’t have a dream then that’s fine, as long as you really want one and you’re not addicted to wishing and complaining.

  3. buckyworld.files.wordpress.com
    1. enormous an ounce of energy;

      I can't parse this. Ravasio says any typo is probably her fault. The best I can come up with is "an enormous amount of energy", which doesn't make sense as a typo, but does sort of sound the same.

    1. Rephrasing Brian Smith: Some thing is on the Web such that if the Web itself was destroyed, that thing would also be destroyed. If not, it's not fully on the Web. If someone destroyed the Web, this would not damage me if I were being denoted by a URI, but my homepage at that URI would be up in smoke if that's what people were using to refer to me by. I am not on the Web in a strong sense, but my homepage sure is.

      I don't think this is a good definition. The example, at least, is a bad one. That resource could still exist (the same way a .docx that lives in the Documents directory and has been uploaded but later had the file host go down would still exist)—it just wouldn't be resolvable by URL.

    2. In theory, through content negotiation a news website could communicate with your browser and determine where you live and then serve your local news. This rather simple example shows that the relationship between resource and representation can not be one-to-one.

      I don't think this is a good example. I'd call it bad, even. It's self-defeating.

    1. > If I understand your critique, it's this: "How dare you critique their use of Ra? You have no standing! You have no right!" Which is basically an ad hominem attack that doesn't address any of the substance of my complaint.Sorry, no, making up your own caricature of what I said isn't an effective way of responding to it.

      Yeah, why has this become so normalized? It's gotten to the point where people will respond to something by posting nothing but an attempt at false attribution: rewording the other person's position—typically in the most convenient, pithy, hackneyed, and strawmannish way—and then putting quotes around it, all while drowning in plaudits from those who already agree, often for reasons no better than shameless tribal affiliation.

      The basic precondition to summarizing the other's position in order to refute it is that the other side actually agrees that it's an accurate summary of their position. If you don't have that, then you don't have anything.

    1. How can you let people know that you’re “in the market”? How can you assemble a portfolio or set of case studies?

      Yes, those are the questions for people interested in pieces like this one. What are the answers?

    1. There are plenty of troublesome assumptions and unanswered pragmatic issues in that sketch.

      Here's one: suppose Alice reads a book and adds it to her library, then learns through this system that Tom, Dick, and Mary have it in their libraries, too. Thing is, Tom finished reading it 6 weeks ago, Dick read it last summer, and Mary read it 12 years ago as an undergrad. Would this be a good reading group?

    1. the physical shape and color of each command limits makes clear where you can and can't put it

      As a sort of "gutter bumper" approach, guiding you on what is and isn't accepted in the language, they're nice. But I can imagine that as a child it would have annoyed me to want to express something and find that the "only" reason I couldn't follow through was that the pieces were the wrong shape, being kept from doing the thing without understanding why.

      Contrast this with the remark below about pattern languages being "a set of design rules that loosely define how a system should work, rather than a strict specification".

    1. There's a data layer. There's a security layer. There's a visual design layer. There's a hypertext layer that links to other locations. There's an authentication layer. There's an algorithmic layer.

      Is "layer" the right choice here?

    2. Out of all of these metaphors, the second most dominant after paper is physical space.

      NB: these two metaphors are at odds. You can see this in the way that authors treat user agent overlays as intrusions into their space—a place for them to control, instead of e.g. a note clipped to the copy that belongs to the reader.

      The owned-space one is definitely worse, but I fear that for many people it's now the default—for both those seeking control and those who are the indirect objects of control.

    1. In 2010, we didn’t have ES modules, but once it was standardized it should have been brought into Node.

      Fun fact: the amount of time between 2010—the year Dahl mentions here—and ES2015—aka ES6, where modules appeared—is less than the amount of time between ES2015 and today. And yet people act like modules are new (or worse, just over the horizon, but still not here). It's a people problem.
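
      To underline the point, here's roughly what has worked natively in Node for years now (a sketch assuming a reasonably recent Node version, with "type": "module" in package.json or an .mjs extension):

        // no transpiler, no bundler, no flags
        import { readFile } from "node:fs/promises";

        const text = await readFile("./notes.txt", "utf8");
        console.log(`read ${text.length} characters`);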

    2. If you have some computational heavy lifting, like image resizing, it probably makes sense to use Wasm rather than writing it in JS. Just like you wouldn’t write image resizing code in bash, you’d spawn imagemagick.

      This is misleading/hyperbolic. The performance characteristics of WASM vs JS are nothing like a native binary vs the Bash interpreter.

    1. It's the ethos of html-energy, combining the minimalism of htmldom.dev programming

      The fact that the author is describing it in these terms is really evidence that the main achievement here is overcoming the limitations of his or her own mental blocks.

    1. Developer documentation is incredibly important to the success of any product. At Cloudflare, we believe that technical docs are a product – one that we can continue to iterate on, improve, and make more useful for our customers.One of the most effective ways to improve documentation is to make it easier for our writers to contribute to them.

      Ibid

    1. The interconnectivity features need a server though, and that involves either using a third-party service, or spinning up your own VPS, which means added cost, and you’ll probably have to do that at some point anyway if you choose the third-party option at first.
  4. May 2022
    1. @5:50:

      In this portion of the machine, we keep these brass matrices.

      Throughout, "matrices" can be heard to be pronounced as "mattresses". Could this have something to do with the origin of the phrase "going to the mattresses"? It seems more likely than the conventional explanation that involves safehouses and literal mattresses, which strikes me as really dubious.

      Compare "capicola" → "gabagool".

    1. Until sometime last year I'd been coding socii in the open

      NB: I'm pretty sure this is referring to the fact that the site was live, and it had open registration. The source code was not being worked on in the "open", even under lax definitions of that word.

    1. Our build bots do it by parsing your HTML files directly at deploy time, so there’s no need for you to make an API call or include extra JavaScript on your site. # HTML forms Code an HTML form into any page on your site, add data-netlify="true" or a netlify attribute to the <form> tag

      gross

    1. JS is plenty fast and can be used for "large, complicated programs"[1] just fine. The problem with most JS written today is in programmer practices—the way that the community associated with NodeJS pushes one another to write code (which is, ironically, not even a good fit for the JavaScript language). It turns out that how you write code actually matters.
    1. I feel like the point of the article isn't so much "how do I solve this specific issue" as "this is the general state of JS packaging", and the solution you present doesn't work in the general case of larger, less trivial dependencies

      Much of the (apparent) progress (i.e. activity—whether it constitutes busywork is another matter) in the world of "JS Prime" (that is, pre-Reformation, NodeJS/NPM-style development) is really about packaging problems.

    1. Anyway: I think the underlying problem is that it has been hidden that Node is NOT JavaScript. It uses (some) of the JavaScript syntax, but it doesn't use its standard library, and its not guaranteed that a Node package will also run on the browser.

      NodeJS development is often standards-incompatible with JS-the-language and other (actually standardized) vendor-neutral APIs.

    1. Today I tried to help a friend who is a great computer scientist, but not a JS person use a JS module he found on Github. Since for the past 6 years my day job is doing usability research & teaching at MIT, I couldn’t help but cringe at the slog that this was. Lo and behold, a pile of unnecessary error conditions, cryptic errors, and lack of proper feedback.
    1. okay how about ruby? oh I have old ruby , hmmm , try to install new ruby, seems to run, but it can't find certain gems or something. oh and this other ruby thing I was using is now broken ? why do I have to install this stuff globally? You don't but there are several magic spells you must execute and runes you must set in the rigtt places.
    1. I've watched a bunch of very smart, highly-competent people bounce off JS; that's not a fault with them but with the ever-changing and overly-complicated ecosystem.

      It helps to be accurate (if we ever want to see these things fixed):

      They didn't "bounce off JS". They "bounced" after seeing NodeJS—and its community's common practices.

    1. > Movim is easy to deploy> Movim is lightweight (only a few megabytes) and can be deployed on any server. We are providing a Docker image, a Debian package or a simple installation tutorial if you want to deploy it yourself.I imagine a typical Tumblr user landing on this page and not getting a single word of the "easy to deploy" section.

      Related: comments about deployability of PHP over in the comments about "The Demise of the Mildly Dynamic Website".

    1. Knuth recommended getting familiar with the program by picking up one particular part and "navigating" the program to study just that part. (See https://youtu.be/D1jhVMx5lLo?t=4103 at 1:08:25, transcribed a bit at https://shreevatsa.net/tex/program/videos/s04/) He seems to find using the index (at the back of the book, and on each two-page spread in the book) to be a really convenient way of "navigate" the program (and indeed randomly jumping through code, as you said), and he thinks that one of the convenient things about the "web" format is that you can explore it the way you want. This is really strange (to us) as the affordances we're used to from IDEs / code browsers etc are really not there

      I can't help but think that currentgen programmers are misunderstanding Knuth and anachronizing him. We're products of the current programming regime, where most of us never lived in a world without structured programming, for example. So when we hear "literate programming", we attempt to understand it by building off our conception of current programming practices and try to work out what Knuth could mean, taking widespread modern affordances as a precondition. Really, Knuth is just advocating for something that approximates (with ink and paper) currentgen tooling. His LP is therefore more primitive than the reference point we're using to understand it as capable of being improved upon, but it is nonetheless an improvement over something even more primitive still.

    1. I still stick by the fact that web software used to be like: point, click, boom. Now it is like: let's build a circuit board from scratch, make everyone learn a new language, require VPS type hosting, and get the RedBull ready because it's going to take a long time to figure out.
    1. I always joke that I'm fluent in jQuery, but no absolutely no javascript[...] There are still banks and airlines that run COBOL

      The joke being that both jQuery and COBOL were crummy right from the beginning, and now they're crummy and old, right?

    1. i noted first that the headline performance gain was 10% & 11% this for a team given far more time and resources to optimize for their goal than most

      What does this even mean?

      I also find the comparison between jQuery and greybeardism (or, elsewhere, a bicycle[1]) laughable/horrifying. jQuery is definitely not that. It is the original poster child for bloat.

      For all the people feigning offense at the "greybeard" comments at the thread start, it's much more common, unfortunately, to find comments like this one, with people relishing it when it comes to jQuery, because it confers an (undeserved) sense of quasi-moral superiority, wisdom, and parsimony, even though—once again—jQuery represents anything but those qualities.

      1. https://news.ycombinator.com/item?id=31440670
    1. I’d just like to point out that the problem with jQuery today is it was a library built to smooth over and fix differences between browser JavaScript engines

      jQuery is primarily a DOM manipulation library and incidentally smoothed over differences in browsers' DOM implementations. To the extent that there were any significant differences in browsers' JS implementations, jQuery offered little if anything to fix that.

    1. jQuery-style syntax for manipulating the DOM

      This is 70+% of the reason why I end up ripping out jQuery from old/throwaway projects when I start trying to hack on them. The jQuery object model is really confusing (read: it tries to be too cute/clever), the documentation sucks, relatively speaking, and the code is an impenetrable blob, which means reading through it to figure out WTF it's supposed to do is a non-option.

    1. The problem is that a lot of old school website devs can write jQuery and very very little actual JavaScript.

      This happens to be true of many of the new/up-to-date Web developers I see, too.

      Anecdote: I never really did StackOverflow, either as a reader or a contributor. One day several years ago (well after StackOverflow had taken off), I figured that since I see people complain about JS being confusing all the time, and since I know JS well, I'd go answer a bunch of questions. The only problem was that when I went to the site and looked at the JS section, it was just a bunch of jQuery and framework shit—too much to simply ignore and try to find the ones that were actually questions about JS-the-language. "I know," I thought. "I'm in the JS section. I'll just manually rewrite the URL to jump to the ECMAScript section, which surely exists, right?" So I did that, and I just got redirected to the JS section...

    1. I'm not going to write document.querySelector every time I have to select some nodes, which happens quite often.

      This remark manages to make this one of the dumbest comments I've ever read on HN.
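
      If the objection really is just the length of document.querySelector, a two-line helper (hypothetical $/$$ names) covers most of what the jQuery selector syntax gets kept around for:

        const $  = (sel, scope = document) => scope.querySelector(sel);
        const $$ = (sel, scope = document) => [...scope.querySelectorAll(sel)];

        // usage
        $$(".comment").forEach((el) => el.classList.add("read"));
        $("#submit")?.addEventListener("click", () => console.log("clicked"));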

    1. now something breaks elsewhere that was unsuspected and subtle. Maybe it’s an off-by-one problem, or the polarity of a sign seems reversed. Maybe it’s a slight race condition that’s hard to tease out. Nevermind, I can patch over this by changing a <= to a <, or fixing the sign, or adding a lock: I’m still fleshing out the system and getting an idea of the entire structure. Eventually, these little hacks tend to metastasize into a cancer that reaches into every dependent module because the whole reason things even worked was because of the “cheat”; when I go back to excise the hack, I eventually conclude it’s not worth the effort and so the next best option is to burn the whole thing down and rewrite it…but unfortunately, we’re already behind schedule and over budget so the re-write never happens, and the hack lives on.

      I'm having real difficulty understanding what is going on here and in what situations such cascading problems occur.

      Is it a case of under-abstraction?

    1. Building and sharing an app should be as easy as creating and sharing a video.

      This is where I think Glitch goes wrong. Why such a focus on apps (and esp. pushing the same practices and overcomplicated architecture as people on GitHub trying to emulate the trendiest devops shovelware)?

      "Web" is a red herring here. Make the Web more accessible for app creation, sure, but what about making it more accessible (and therefore simpler) for sharing simple stuff (like documents comprising the written word), too? Glitch doesn't do well at this at all. It feels less like a place for the uninitiated and more like a place for the cool kids who are already slinging/pushing Modern Best Practices hang out—not unlike societal elites who feign to tether themself to the mast of helping the downtrodden but really use the whole charade as machine for converting attention into prestige and personal wealth. Their prices, for example, reflect that. Where's the "give us, like 20 bucks a year and we'll give you better alternative to emailing Microsoft Office documents around (that isn't Google Sheets)" plan?

    2. as if the only option we had to eat was factory-farmed fast food, and we didn’t have any way to make home-cooked meals

      See also An app can be a home-cooked meal along with this comment containing RMS's remarks with his code-as-recipe metaphor in the HN thread about Sloan's post:

      some of you may not ever write computer programs, but perhaps you cook. And if you cook, unless you're really great, you probably use recipes. And, if you use recipes, you've probably had the experience of getting a copy of a recipe from a friend who's sharing it. And you've probably also had the experience — unless you're a total neophyte — of changing a recipe. You know, it says certain things, but you don't have to do exactly that. You can leave out some ingredients. Add some mushrooms, 'cause you like mushrooms. Put in less salt because your doctor said you should cut down on salt — whatever. You can even make bigger changes according to your skill. And if you've made changes in a recipe, and you cook it for your friends, and they like it, one of your friends might say, “Hey, could I have the recipe?” And then, what do you do? You could write down your modified version of the recipe and make a copy for your friend. These are the natural things to do with functionally useful recipes of any kind.

      Now a recipe is a lot like a computer program. A computer program's a lot like a recipe: a series of steps to be carried out to get some result that you want. So it's just as natural to do those same things with computer programs — hand a copy to your friend. Make changes in it because the job it was written to do isn't exactly what you want. It did a great job for somebody else, but your job is a different job. And after you've changed it, that's likely to be useful for other people. Maybe they have a job to do that's like the job you do. So they ask, “Hey, can I have a copy?” Of course, if you're a nice person, you're going to give a copy. That's the way to be a decent person.

    1. Before deploying, we need to do one more thing. goStatic listens on port 8043 by default, but the default fly.toml assumes port 8080.

      When I created a blank app with flyctl launch, it gave me a fly.toml with 8080. The fly.toml cloned from the repo, however, already has it set to 8043.

      It's possible that the quoted section is still correct, but it's ambiguous.
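
      For reference, the knob in question is the internal_port setting in fly.toml (a sketch from memory; surrounding fields may differ depending on the flyctl version that generated the file):

        # fly.toml: the port the Fly proxy forwards to must match what goStatic listens on
        [[services]]
          internal_port = 8043  # the file generated by `flyctl launch` says 8080
          protocol = "tcp"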

    1. To keep tiny mistakes from crashing our software or trashing our data, we write more software to do error checking and correction.

      This is supposed to be the justification for increasing code size. So what's the excuse for projects today? Software of today is not exactly known for adding more "error checking and correction". It feels more like growth for growth's sake, or stimulating some developer's sense of "wouldn't it be cool if [...]?".

    1. because of the "LP will never be mainstream" belief, I'm still thinking of targeting mainstream languages, with "code" and "comments"

      No need to constrain yourself to comments, though. Why comments? We can do full-fledged, out-of-line doclets.

    2. in other words, there would be no "weave" step

      Well, there could still be a weave step—same as there is with triple scripts (to go from compilation form back to the original modules) or with Markdown, which should be readable both as plain text and in rendered form.

    1. in an ideal LP system, the (or at least a) source format would just simply be valid files in the target language, with any LP-related markup or whatever in the comments. The reason is so that LP programs can get contributions from "mainstream" programmers. (It's ok if the LP users have an alternative format they can write in, as long as edits to the source file can be incorporated back.)

      (NB: the compilation "object" format here would, much like triple scripts, be another form of human readable source.)
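
      A hypothetical sketch of what "valid file in the target language, LP markup in the comments" could look like (the @chunk/@prose markers are made up for illustration):

        // @chunk config-loading
        // @prose Loading is deferred so tools that never touch the config
        // @prose don't pay for the filesystem access.
        import { readFileSync } from "node:fs";

        export function loadConfig(path = "./config.json") {
          return JSON.parse(readFileSync(path, "utf8"));
        }

      An LP-aware tool could weave the @prose runs into a document and reorder chunks for presentation, while a "mainstream" contributor just sees (and edits) ordinary code.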

  5. geraldmweinberg.com
    1. Code can't explain why the program is being written, and the rationale for choosing this or that method. Code cannot discuss the reasons certain alternative approaches were taken.

      Having trouble sourcing this quote? That's because some shithead who happens to run a popular programming blog changed the words but still decided to present it as a quote.

      Raskin's actual words:

      the fundamental reason code cannot ever be self-documenting and automatic documentation generators can’t create what is needed is that they can’t explain why the program is being written, and the rationale for choosing this or that method. They cannot discuss the reasons certain alternative approaches were taken. For example:

      :Comment: A binary search turned out to be slower than the Boyer-Moore algorithm for the data sets of interest, thus we have used the more complex, but faster method even though this problem does not at first seem amenable to a string search technique. :End Comment:

      From "Comments Are More Important Than Code" https://dl.acm.org/ft_gateway.cfm?id=1053354&ftid=300937&dwn=1

    1. This is a good case study for what I mean when I talk about the fanclub economy.

      Wouldn't it be better if gklitt were a willing participant in this aggregation and republishing of his thoughts? Even if that only meant a place set up in the same namespace as his homepage that would let volunteers ("fans") attach notes you wouldn't otherwise be aware of if you made the mistake of thinking that his homepage were his digital home, rather than the place he's actually chosen to live—on Twitter.

    1. I loved the Moto G (the original—from during the brief time when Google owned Motorola). I used it for 6 years. It's not even an especially small phone. Checking the dimensions, it's actually slightly smaller (or, arguably, about the same size) when compared to either the iPhone 12 Mini and the iPhone 13 Mini—which I'd say makes those deceivingly named. They're nothing like that one that Palm made, which is called, uh... "Palm", I guess. (Described by Palm as about the size of a credit card.)

    1. memory usage and (lack of) parallelism are concerns

      Memory usage is a concern? wat

      It's a problem, sure, if you're programming the way NPMers do. So don't do that.

      This is a huge problem I've noticed when it comes to people programming in JS—even, bizarrely, people coming from other languages like Java or C#, where you'd expect them to at least try to keep doing things in JS the way they're comfortable doing them in their own language. Just because it's there (i.e. possible in the language, e.g. dynamic language features) doesn't mean you have to use it...

      (Relevant: How (and why) developers use the dynamic features of programming languages https://users.dcc.uchile.cl/~rrobbes/p/EMSE-features.pdf)

      The really annoying thing is that the NPM style isn't even idiomatic for the language! So much of what the NodeJS camp does is so clearly done in frustration and the byproduct of a desire to work against the language. Case in point: the absolutely nonsensical attitude about always using triple equals (as if to ward off some evil spirits) and the undeniable contempt that so many have for this.

    1. You give up private channels (DMs)

      Consider ways to build on the static node architecture that wouldn't require you to give this up:

      • PGP blobs instead of plain text (see the sketch after this list)
      • messages relayed through WebRTC when both participants are online
      • you could choose to delegate to a message service for your DMs, to guarantee availability, just like in the olden days with telephones
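
      A minimal sketch of the first bullet, using the Web Crypto API as a stand-in for PGP (all names hypothetical): the sender derives a shared key against the recipient's published public key, and only the resulting blob needs to appear on the static node.

        // Encrypt a DM for a recipient's public ECDH key.
        const enc = new TextEncoder();

        async function encryptDM(recipientPublicKey, text) {
          // fresh ephemeral key pair per message
          const ephemeral = await crypto.subtle.generateKey(
            { name: "ECDH", namedCurve: "P-256" },
            true,
            ["deriveKey"]
          );
          // shared AES key derived from the ephemeral private key + recipient public key
          const aesKey = await crypto.subtle.deriveKey(
            { name: "ECDH", public: recipientPublicKey },
            ephemeral.privateKey,
            { name: "AES-GCM", length: 256 },
            false,
            ["encrypt"]
          );
          const iv = crypto.getRandomValues(new Uint8Array(12));
          const ciphertext = await crypto.subtle.encrypt(
            { name: "AES-GCM", iv },
            aesKey,
            enc.encode(text)
          );
          // This object is safe to publish as-is on the static site.
          return {
            ephemeralPublicKey: await crypto.subtle.exportKey("jwk", ephemeral.publicKey),
            iv: Array.from(iv),
            ciphertext: Array.from(new Uint8Array(ciphertext)),
          };
        }
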
  6. www.mindprod.com
    1. local: a (e.g. aPoint); param: p (e.g. pPoint); member instance: m (e.g. mPoint); static: s (e.g. sPoint)

      This is really only a problem in languages that make the unfortunate mistake of allowing references to unqualified names that get fixed up as if the programmer had written this.mPoint or Foo.point. Even if you're writing in a language where that's possible, just don't write code like that! Just because you can doesn't mean you have to.

      The only real exception is distinguishing locals from parameters. Keep your procedures short and it's less of a problem.

    2. Show me a switch statement as if it had been handled with a set of subclasses. There is underlying deep structure here. I should be able to view the code as if it had been done with switch or as if it had been done with polymorphism. Sometimes you are interested in all the facts about Dalmatians. Sometimes you are interested in comparing all the different ways different breeds of dogs bury their bones. Why should you have to pre-decide on a representation that lets you see only one point of view?

      similar to my strawman for language skins
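
      A tiny sketch (hypothetical Dog/breed example) of the two views of the same dispatch, which is the underlying structure the quote wants to be able to flip between:

        // View 1: a switch over breeds
        function buryBone(breed) {
          switch (breed) {
            case "dalmatian": return "digs a shallow hole";
            case "terrier": return "digs until the bone disappears";
            default: return "drops it behind the couch";
          }
        }

        // View 2: the same facts, spread across subclasses
        class Dog { buryBone() { return "drops it behind the couch"; } }
        class Dalmatian extends Dog { buryBone() { return "digs a shallow hole"; } }
        class Terrier extends Dog { buryBone() { return "digs until the bone disappears"; } }

      The ask in the quote is for tooling that can present either view of the same underlying facts, instead of forcing the author to commit to one representation in the source.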

    3. We would never dream of handing a customer such error prone tools for manipulating such complicated cross-linked data as source code. If a customer had such data, we would offer a GUI-based data entry system with all sorts of point and click features, extreme data validation and ability to reuse that data, view it in many ways and search it by any key.

      This old-hat description captures something not usually brought up by CLI supremacists: GUIs as ways to validate and impose constraints on structured data.

    1. I'd have to set up the WP instance and maintain it.

      (NB: this is in response to the question Why not just use wordpress + wysiwyg editor similar to *docs, and you're done?.)

      This is as good an explanation as any for local-first software.

      A natural response (to potatolicious's comment) is, "Well, somebody has to maintain these no-code Web apps, too, right? If there's someone in the loop maintaining something, the question still stands; wouldn't it make more sense for that something to be e.g. a WordPress instance?"

      Answer: yeah, the no-code Web app model isn't so great, either. If service maintenance is a problem, it should be identified as such and work done to eliminate it. What that would look like is that the sort of useful work that those Web apps are capable of doing should be captured in a document that you can copy to your local machine and make full use of the processes and procedures that it describes in perpetuity, regardless of whether someone is able to continue propping up a third-party service.

    1. software engineers who do web development are by far among the worst at actually evaluating solutions based on their engineering merit

      There's plenty of irrationality to be found in opposing camps, too. I won't say that it's actually worse (because it's not), but it's definitely a lot more annoying, because it usually also carries overtones that there's a sort of well-informed moral and technological high ground—when it turns out it's usually just a bunch of second panel thinkers who themselves don't even understand computers (incl. compilers, system software, etc.) very well.

      This is what makes it hard to have discussions about reforming the practices in mainstream Web development. The Web devs are doing awful things, but at least ~half of the criticism that these devs are actually exposed to ends up being junk because lots of the critics unfortunately just have no fucking idea what they're talking about and are nowhere near the high ground they think they're standing on—often taking things for granted that just don't make sense when actually considered in terms of the technological uppercrust that they hope to invoke. Just a kneejerk "browser = bad" association from people who can't meaningfully distinguish between JS (the language), browser APIs, and the NPM corpus (though most NPM programmers are usually guilty of exactly the same...).

      It's a very "the enemy of my enemy is not my friend" sort of thing.

    1. Instead of being parsed, it was `import`-ed and `include`-d

      Flems does something like this:

      To allow you to use Flems with only a single file to be required the javascript and the html for the iframe runtime has been merged into a single file disguised as flems.html. It works by having the javascript code contained in html comments and the html code contained in javascript comments. In that way if loaded like javascript the html is ignored and when loaded as html the javascript part is ignored.

      https://github.com/porsager/flems#html-script-tag
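
      A minimal sketch of the same trick (not Flems's actual file, just an illustration): one file that is simultaneously a valid classic script and a valid HTML document, because each syntax hides the other inside its own comments.

        <!--
        // Loaded as a (non-module) script, the HTML comment markers above and at the
        // very end are legacy comment syntax that JS engines accept, so only this runs.
        console.log("loaded as JavaScript");
        /* -->
        <p>Loaded as HTML: the script above is hidden inside the HTML comment.</p>
        <!-- */ -->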

    1. I've referred to a similar (but unrelated) architecture diagram for writing platform-independent programs as being a Klein bowtie. There is a narrow "waist" (the bowtie knot), with either end of the bowtie being system-specific routines (addressed by a common interface in the right half of the diagram).

    1. wrt the sentiment in the McHale tweet:

      See Tom Duff's "Design Principles" section in the rc manual http://doc.cat-v.org/plan_9/4th_edition/papers/rc, esp.:

      It is remarkable that in the four most recent editions of the UNIX system programmer’s manual the Bourne shell grammar described in the manual page does not admit the command who|wc. This is surely an oversight, but it suggests something darker: nobody really knows what the Bourne shell’s grammar is.