2,977 Matching Annotations
  1. Aug 2021
    1. Copy and paste the module’s code into your system and make whatever changes you find necessary. This is usually the right thing to do for a small component
    2. the skills of IT staff
    3. Their first step was to spend several weeks watching their customers
    4. there’s no spec for a search engine, since you can’t write code for “deliver links to the 10 web pages that best match the customer’s intent”
    1. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms
    1. The three most powerful words for building credibility are "I don't know"
    1. Data which is accessible through the web relies on upkeep of paying for domain names and server costs. Data which is contained in a widely shareable, open format, such as PDF, on the same level as the ‘contents’ and which connects using the citation method of specifying the bibliographic details of a source so that it can be located and used from any location (like a traditional printed journal) rather than only from a web addressed repository, makes for a robust, long term solution

      Really, this is an argument for self-containedness and not an argument against HTML and HTTP.

      Granted, the Web doesn't handle compound documents very well (embedded graphics, unless they're SVG—and even then...). See https://blog.jonudell.net/2016/10/15/from-pdf-to-pwp-a-vision-for-compound-web-documents/.

    2. Shouldn't the attribution info on this page ("admin", "July 26, 2021"...) be presented instead as Visual-Meta? Is Visual-Meta on the Web not important?

    3. as the content of the document is available, the metadata will also be available, even to the point of printing the document, then scanning it and performing OCR on it
    1. on the topic of interoperability one idea that I'm excited about is thinking about better ways to synchronize across existing cloud applications. I think there's a way in which if you're using one app and I'm using a different app and if we can establish a bridge between them, where let's say I'm editing a doc in google docs and your using Dropbox Paper or your preferred editor[...] that starts to create this more flexible feeling where the data's not locked in any individual app and it more kind of lives between the apps. And so one new project that I'm sort of embarking on now is trying to create tools that mediate that kind of synchronization across tools.

      What if we used... files?

    1. Abstraction and Reuse Increases Code Productivity

      There is such a thing as inappropriate use of abstraction.

      For more on this, see https://news.ycombinator.com/item?id=27662074, and refer, for example, to NPM.

    2. In software development itself, if you unleash a bunch of programmers on a problem and allow them to pursue their whims, you will observe that code tends toward bloat. This is not necessarily widely recognized. More broadly understood is the corollary to this that goes by the name Wirth's law, which states that software gets slower faster than hardware gets faster.

      Outside of computer programming, there is a general awareness that organizations become less efficient as they grow. For the same reason that this happens, procedural bloat afflicts standard operating procedures just as code bloat afflicts programmers' code.

      There's a widespread belief that capitalism seeks out efficiency. With most organizations being capitalist enterprises, so the belief continues, they are an extension of this. You can see this show up in arguments about the gender pay gap. If we could cut costs just by hiring women to do the same job, they say, then we would. The veracity of the claims about the size of the pay gap notwithstanding, the claim that corporations would seize the opportunity to cut costs like this doesn't jibe with reality. Corporations are not observed to be a perfect extension of the law of capitalist efficiency. A corporation as an entity is not a perfectly rational actor operating in its own self-interest: this follows both from the irrationality of the people who make it up and from the instances where those people do behave rationally, each pursuing their own individual self-interest, counter to the organization's.

      There is hardly ever a Taylor-like figure assigned to the problem of wrangling inefficiency.

    1. when you're reading some fresh code in your browser, do you really want to stop to configure that test harness

      Running the tests should be as easy as opening something in the browser.
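
      A harness that clears that bar can be a handful of lines you just open alongside the code. This is an illustrative sketch only; the `it`/`expectEqual` names are made up here, not any framework's:

```javascript
// A zero-configuration "harness": open the page, open the console, done.
// Nothing needs installing or configuring; all names are illustrative.
const failures = [];

function it(name, fn) {
  try {
    fn();
  } catch (e) {
    failures.push(`${name}: ${e.message}`);
  }
}

function expectEqual(actual, expected) {
  if (actual !== expected) {
    throw new Error(`expected ${expected}, got ${actual}`);
  }
}

it("adds", () => expectEqual(1 + 1, 2));
it("concatenates", () => expectEqual("a" + "b", "ab"));
// failures is empty when everything passes; report it however you like.
```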

    1. It is passed as the second parameter to the 'request' event.

      How do we keep people from falling into the kind of rut that results in documentation like this?

      I think it comes from an imposed milestone to document everything, so people end up phoning it in like this. In Graham Nelson's Narrascope talk (the one that was a followup to his broken promise that Inform 7 would be open sourced), you can see in his screenshots various passages filled with similar kinds of (frankly unhelpful) "prose".

    1. everybody knew this. Everybody saw this formula and yet nobody thought to do with it the thing that Newton did

      See also, McCarthy's "ho, ho..." moment. https://hypothes.is/a/A_hmqoHZEeuEaEfF5uimPw

    1. The reason we keep using email is that for that set of tasks requiring more than plaintext but less than an app we have nothing. MS Word maybe.
    2. chances are it’s a worthless piece of junk to you compared to the email method
    3. When Nicole shops, she writes it out on a sheet of paper
  2. astralcodexten.substack.com
    1. The institutions through which Americans build have become biased against action rather than toward it. They’ve become, in political scientist Francis Fukuyama’s term, “vetocracies,” in which too many actors have veto rights over what gets built. That’s true in the federal government. It’s true in state and local governments. It’s even true in the private sector.

      Antidotes:

      • Carefully entrusting veto power to those who know the distinction between thoughtful caution as in the case of Chesterton's fence versus the kneejerk antidisestablishment response—similar to the case of Rhesus ladders
      • Teaching the value of yes-anding over no-don'ting.
    1. However, "scientific management" came to national attention in 1910 when crusading attorney Louis Brandeis (then not yet Supreme Court justice) popularized the term.[3] Brandeis had sought a consensus term for the approach with the help of practitioners like Henry L. Gantt and Frank B. Gilbreth.
    1. preserved the sequence of elements with the same primary key

      Is smoothsort stable? This quote seems to be saying it is, but elsewhere people say otherwise.

    2. EWD209

      "A Constructive Approach to the Problem of Program Correctness."

    1. I have to say that now I regret that the syntax is so clumsy. I would like http://www.example.com/foo/bar/baz to be just written http:com/example/foo/bar/baz

      Agree with the sentiment, disagree with the "remedy". (I realize it's not actually being proposed as such.)

      Colon-slash-slash was great in hindsight because there's sufficient entropy that if you want to do URL sniffing, you can get by with writing a really naive implementation that just looks for that sequence—you don't even have to worry about the "http" or "https" parts. In fact, I think it would be great if user agents came to grips with the dominance of HTTP and allowed links of the form starting with "://"—where the scheme name can be omitted entirely, and it is just presumed to be HTTP/HTTPS. (Use the same discovery method that browser address bars already use for working out whether "http" or "https" is the correct one to go with.)
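
      As a sketch of just how naive such a sniffer can afford to be (the function and pattern are illustrative, not from any spec):

```javascript
// Naive URL sniffing keyed on "://" alone: the scheme part is optional,
// so a hypothetical scheme-less "://example.com" link is caught too.
// Illustrative only; a production sniffer needs more care at the edges.
function sniffUrls(text) {
  const pattern = /[A-Za-z0-9+.-]*:\/\/[^\s<>"')\]]+/g;
  return text.match(pattern) || [];
}
```

      Calling `sniffUrls("see https://example.com/foo or ://bare.example")` picks up both the conventional and the scheme-less form.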

    2. his demo of a recent smalltalk re-implementation

      Huh?

    1. This is really just making a strong case for the Web's notion of content negotiation, which receives pitifully little attention. The idea is that there should be some nugget comprising a cruft-free version of the resource's content itself, and it should be possible for you to specify, e.g., "No really, just elide all the JS and other accoutrements of the modern Web from the representation that you send me; basically, just give me the meat of it."
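
      A hedged sketch of what asking for "just the meat" looks like from the client side. The q-value scheme follows HTTP's Accept header syntax; the idea that servers would actually serve a lean variant is the aspiration, not current practice, and `buildAccept` is a made-up helper name:

```javascript
// Build an Accept header from an ordered preference list, assigning
// descending q-values per HTTP content negotiation syntax.
function buildAccept(preferences) {
  return preferences
    .map((type, i) => (i === 0 ? type : `${type};q=${(1 - i * 0.1).toFixed(1)}`))
    .join(", ");
}

// e.g. fetch(url, { headers: { Accept: buildAccept(["text/plain", "text/html"]) } })
// says "give me the cruft-free representation if you have one."
```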

    1. A thought by way of the Nelsonian school of hypertext:

      Tim is quoted several times in this piece, which is to say that there is a larger body of content (say a recording of the interview, whether audio or a full transcript) from which this piece is borrowing snippets. Within the WWW worldview, that full record comprises a resource. Within the Nelsonian worldview of visible connections, at every place in the document where Tim is quoted, there should be an edge which the reader can traverse to reveal the unabridged record.

    2. "Whether those slashes were forward slashes or back slashes didn't affect how the Web worked," he says, "but it does affect how other developers react to it
    1. If believing that people shouldn't live in fear of their tech betraying them is ideological, I'll take it.

      Not exactly an if-by-whisky, but (unrelated to the content of this article) worth serving as the launch pad for a series of examinations about why people feel compelled to couch their messages in this way—usually because the other "side" is using an appeal to emotion or appeal to reflex.

    1. One thing I'd like forward-looking hypertext toolmakers to keep in mind is the ability for the tools to help people answer questions like "What led to legalanthology.ch hosting a copy of this document? Given a URL from one organization, is it possible to look at the graph of internal backlinks (let me focus narrowly on incoming edges originating from the same host)?"

    1. If we find ourselves needing this pattern in more than one places in our codebase, we could abstract it into a helper

      This is where people with a tendency to participate in JS development frequently start to go off the rails. Exercise some restraint and tell that voice in your head that has been influenced by years in the community "no".

      (PS: typeof checks need not and should not be written to use triple equals. Again, this is an example of where the dominant culture of modern JS development is a bad influence—pushing people towards doing things poorly. It's like radio and TV announcers who go out of their way to say "backslash" in URLs—"stop! you're going through extra effort just to get it wrong.")
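
      To make the typeof point concrete (a minimal sketch; `isString` is a made-up helper name):

```javascript
// typeof always evaluates to a string, so both sides of the comparison are
// strings and == cannot coerce anything here; the === buys nothing.
function isString(value) {
  return typeof value == "string";
}
```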

    1. The World Wide Web is the most powerful medium for information sharing and distribution in the history of humankind


    1. Pluggable view system

      See also the sepsis inspector project.

    2. Storage: HTML form like server POST, or annotation server protocol maybe.

      or BYFOB.

    3. The web should be a two-way thing
    4. Establish a local gateway on the user's machine. This would not be easy to do portably

      With S4, we can do better.

    5. the problem that the script is only allowed to access content on the same server
    1. Firefox security won't in general let a script from a given DNS domain (like www.w3.org) read web data from a different domain. To change this,

      How not to futureproof your work.

    1. I joined Caldera in November of 1995, and we certainly used "open source" broadly at that time. We were building software. I can't imagine a world where we did not use the specific phrase "open source software". And we were not alone. The term "Open Source" was used broadly by Linus Torvalds (who at the time was a student...I had dinner with Linus and his then-girlfriend Ute in Germany while he was still a student)

      From Linus Torvalds Remembers the Days Before ‘Open Source’:

      Torvalds counters that “I wouldn’t trust Lyle Ball’s recollection 100% about me… since my girlfriend-at-the-time (now wife) name was Tove, not Ute.”

    1. it indicates that the situation will occur again, elsewhere
    2. The incident described in this post provides a good case study for why the GitHub pull request process is not a good substitute for wikis.

    1. Referenced from http://info.cern.ch/hypertext/WWW/Bibliography.html.

      This page is now a 404, and the Wayback Machine (whose first access attempt is dated 2013) doesn't have a copy. (It was already a 404 by then.)

    1. All 2 versions

      Actually just 1 version here; the two are the same, with one URL available over plain HTTP and the other over HTTPS.

      I have found reference, however, to "udi1.ps" and "udi2.ps". Unclear at this point what the difference is between them.

    1. The quote that begins about halfway through this trailer in full is, "He tells a lie, and people go to track this down, and by the time you've responded to that, he's told three others. It's a sheer exercise in fatigue."

      Speaker is Jelani Cobb.

    1. millions of people that “using it wrong”

      Dubious claim. It doesn't even appear that those using it wrong are a substantial minority — as much as those with a commercial interest in manufacturing alternative facts would like for people to believe there are.

    1. 211= 2,097,152

      "2,097,152 = 2^21"

    2. a significant barrier to progress in computer science was the fact that many practitioners were ignorant of the history of computer science: old accomplishments (and failures!) are forgotten, and consequently the industry reinvents itself every 5-10 years


    1. Funnily enough, I've been on an intellectual bent in the other direction: that we've poisoned our thinking in terms of systems, for the worse. This shows up when trying to communicate about the Web, for example.

      It's surprisingly difficult to get anyone to conceive of the Web as a medium suited for anything except the "live" behavior exhibited by the systems typically encountered today. (Essentially, thin clients in the form of single-page apps that are useless without a host on the other end for servicing data and computation requests.) The belief/expectation that content providers should be given a pass for producing brittle collections of content that should be considered merely transitory in nature just leads to even more abuse of the medium.

      Even actual programs get put into a ruddy state by this sort of thinking. Often, I don't even care about the program itself, so much as I care about the process it's applying, but maintainers make this effectively inextricable from the implementation details of the program itself (what OS version by which vendor does it target, etc.)

    1. you interpret the past as "the present, but cruder"

      Like people who can't wrap their head around the idea that evolution has nothing to do with any kind of purpose to produce humans. (Even starting from the position that humans are the "end result" is fundamentally flawed.)

    1. the Web has gradually evolved from the original static linked document model whose language was HTML, to a model of interconnected programming environments whose language is JavaScript
  3. Jul 2021
    1. We didn’t watch a few seconds of a TV show and then click a remote and watch a few seconds of another.

      This is overall a good piece, though it does contain some errors; this claim is probably the weirdest and wrongest.

    1. See also a lightly differing follow-up (billed as a crosspost): https://www.w3.org/People/Berners-Lee/1991/08/art-6692.txt

    2. real or virtual

      interesting taxonomy; useful for communicating about a concerted effort towards a more document-oriented correction to the modern Web?

    3. 6484@cernvax.cern.ch
    1. This information was previously posted to <alt.hypertext> and to <comp.sys.next>, but popular request prompts this cross-posting.
    1. over Years
    2. in toe

      "in tow"

    3. Cornel

      "Cornell"

    4. It also typically goes hand in hand with another concept known as scratching your own itch. Building a product or service or website that you yourself would like to see in the world.

      Lots of punctuation mistakes in these two "sentences"—one being that this isn't really two sentences, since the second is a fragment, but it could be fixed by swapping the period for a colon or em dash.

    5. it;s

      "it's"

    1. 1'-"'+,..._:h'v~ ...-·~/...-1..-.f ... ;~ ~ fS f.v(. .;.. bt ~t-tno..;..,· .... ) ~ tv fA. '"""V"o-..1 t,..;-{_ ea.... ~# ;.r ,.; ~ !vl ., ue......~ 1 ~;lk...) g~~t. "\ '""'"~""<

      Actually reads:

      I like the browsing style this should make possible. Intuitively right and potentially user-friendly. But it could be frustrating—how do we know we can get it right? Demo of existing systems? Invaluable tool for services

    2. Content also available (including the original word processor file) from https://www.w3.org/History/1989/proposal.html

    1. Other versions which are available are:

      From CERN, a PDF scan of the original (includes the infamous handwritten note "Vague but exciting..."): https://cds.cern.ch/record/1405411/files/ARCH-WWW-4-010.pdf

    2. The original document file (I think - I can't test it)

      Referenced in an HN thread:

      https://news.ycombinator.com/item?id=12793157

      In the thread, William Woodruff mentions that LibreOffice is capable of displaying this file.

    1. I might as well note that I'm forced to link to a special pseudo bboard system from here as a workaround for the moronic robots.txt standard.

      Huh?

    1. while still holding tab down

      Alt, you mean? (No Windows here; can't test, but it seems like that would be the right way to do it.)

    1. I'm partial to the "Principle of Least Power" in the Axioms of Web Architecture document cited in the bibliography. (The language there better captures the thought and presents it more convincingly, in my opinion.)

      Shortcut: https://www.w3.org/DesignIssues/Principles.html#PLP

    1. Most users crave pleasant UX

      I don't even think that's it. Plenty of people are willing to put up with poor UX. (Look at GitHub.) The overriding factor is actually a consistently familiar interface. (Look at GitHub.)

      Related: https://www-archive.mozilla.org/unity-of-interface.html

    2. In the end, nobody came.

      Makes sense. As I've said before, you should not fool yourself—"you have to create a compelling product before you can ever realistically even start thinking about selling people on a general platform".

      The thing that most of these projects' fans think that these projects have going for them is the technology, but that's not really interesting to anyone except enthusiasts who spend all their time talking amongst themselves.

    1. “But how can I automate updates to my site’s look and feel?!”

      Perversely, the author starts off getting this part wrong!

      The correct answer here is to adopt the same mindset used for print, which is to say, "just don't worry about it; the value of doing so is oversold". If a print org changed their layout sometime between 1995 and 2005, did they issue a recall for all extant copies and then run around trying to replace them with ones consistent with the new "visual refresh"? If an error is noticed in print, it's handled by correcting it and issuing another edition.

      As Tschichold says of the form of the book (in The Form of the Book):

      The work of a book designer differs essentially from that of a graphic artist. While the latter is constantly searching for new means of expression, driven at the very least by his desire for a "personal style", a book designer has to be the loyal and tactful servant of the written word. It is his job to create a manner of presentation whose form neither overshadows nor patronizes the content [... whereas] work of the graphic artist must correspond to the needs of the day

      The fact that people publishing to the web regularly do otherwise—and are expected to do otherwise—is a social problem that has nothing to do with the Web standards themselves. In fact, it has been widely lamented for a long time that with the figurative death of HTML frames, you can no longer update something in one place and have it spread to the entire experience using plain ol' HTML without resorting to a templating engine. It's only recently (with Web Components, etc.) that this has begun to change. (You can update the style and achieve consistency on a static site without the use of a static site generator—where every asset can be handcrafted, without a templating engine.) But it shouldn't need to change; the fixity is a strength.

      As Tschichold goes on to say of the "perfect" design of the book, "methods and rules upon which it is impossible to improve have been developed over centuries". Creators publishing on the web would do well to observe, understand, and work similarly.

    2. Let the browser vendors keep developing forever more.

      Odd choice of a pairing between context and link destination. Again, this seems to come down to a misconception (or, less charitably, a misrepresentation) of how Web standards progress.

      CADT most aptly describes the NPMers on GitHub, not the rudiments of the Web platform. (Or if anything, the types of folks pushing such misguided efforts as Gemini, ironically enough...)

    3. set up a website

      Something which should be standardized, by the way. Signing up for an account on Neocities or Netlify should be just as readily available over a neutral, non-HATEOAS client as their bespoke APIs for updating content. (Their APIs, for that matter, should be deprioritized where vanilla HTTP would suffice.)

      Furthermore, it's nice that reading from DNS is standardized, but proprietary control panels are anathema to the general accessibility (that is, to the general public) of this aspect of Internet infrastructure. The mechanisms for writing/editing DNS records should be just as standardized as the ones for doing lookups.

    4. But stable standards are incredibly important.

      Right. Which is why the folks working on Web standards have endeavored to make stability a goal up to this point and beyond; the Web is one of (if not the) most stable piece of widely adopted computing infrastructure that exists. The author's conception of Web standards is at odds with reality.

    5. it is impossible to build a new web browser

      Perhaps it's not possible. (Probably not, even.) It would be very much possible to build a web browser capable of handling this page, on the other hand, and to do so in a way that produces an appreciable result in 10 minutes of hacking around with the lowliest of programming facilities: text editor macros—that is, if only it had actually been published as a webpage. Is it possible to do the same for if not just this PDF but others, too? No.

    6. Taking my own advice, this document was written in the world’s greatest web authoring tool: LibreOffice Writer.

      Great. This is something that I advocate for technical people to put forth as a "serious" solution more often than I see today (which is essentially never). But next time, save it as HTML. (And ditch the stylistic "rubbish"; don't abuse "the sanctity of the written word by coercing it to serve the vanity of a graphic artist incapable of discharging his duty as a mere lieutenant".)

    7. Eh, they look alright to me.

      I have a rule that any response that begins with someone having typed out "Eh" is empty and/or junk. (The response here is no proof by contradiction.) In other words, one is free—or perhaps obligated—to meet the zero-effort dismissal with a similarly dismissive response.

    8. we should use PDF/A instead, which forbids interactive content

      (The author purports to address the following, but just uses some rhetorical flourishes and misdirection. In an effort to not let that go unnoticed and to hold his or her feet to the fire...)

      How does this type of "we should" differ at all from saying "we should use HTML 4 with no JS" or "we should use EPUB"?

    9. There used to be an internet middle class, of non-commercial users who were not overtly technical, but were still able to self-publish.

      This is probably the least flawed claim in the entire piece.

    10. Overall, I'm pretty happy with the level of scrutiny the claims here are being subjected to over on HN. https://news.ycombinator.com/item?id=27880905

      (One of the rare times on HN in recent memory where a potentially attractive position on what could have been a contentious issue involving techno-pessimism related to the Web seems to thankfully be overwhelmingly opposed, in recognition that the arguments are not sound.)

    11. fake plastic human-interest

      The formatting of this link is messed up.

    12. PDFs used to be large, and although they are still larger than equivalent HTML, they are still an order of magnitude smaller than the torrent of JavaScript sewage pumped down to your browser by most sites

      It was only 6 days ago that an effective takedown of this type of argument was hoisted to the top of the discussion on HN:

      This latter error leads people into thinking the characteristics of an individual group member are reflective of the group as a whole, so "When someone in my tribe does something bad, they're not really in my tribe (No True Scotsman / it's a false flag)" whereas "When someone in the other tribe does something bad, that proves that everyone in that tribe deserves to be punished".

      and:

      I'm pretty sure the combination of those two is 90% of the "cyclists don't obey the law" meme. When a cyclist breaks the law they make the whole out-group look bad, but a driver who breaks the law is just "one bad driver."¶ The other 10% is confirmation bias.

      https://news.ycombinator.com/item?id=27816612

    1. [Huh? Pass-by-reference ALWAYS requires passing a reference by value. That's how it works. The question is whether the referenced object is a COPY of the caller's object, or an ALIAS for the user's value. Most modern languages pass by reference for non-primitive types.]

      This person is confused, though it's obvious and understandable to those who've been down the road before how it happens. Most mainstream languages that are taught to be "pass by reference" are actually of the "pass by reference value" sort. Lack of exposure to languages that actually implement pass by reference is the culprit. Of course if your experience is limited to C, C++, Java, and others that use the "pass by reference value" approach, then you'll come away thinking that "pass by reference value" is what "pass by reference" means and this is what any and every language "ALWAYS requires" when setting out to implement "pass by reference"—you just don't have the appropriate frame of reference to see how it could be otherwise.
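
      A minimal demonstration of the distinction, in JavaScript (which, like Java, passes object references by value; the function name is made up for illustration):

```javascript
// Mutating through the reference is visible to the caller, because both
// copies of the reference point at the same object. Reassigning the
// parameter is not visible to the caller; that invisibility is exactly
// what true pass-by-reference (e.g. a C++ T& parameter) would take away.
function mutateAndReassign(obj) {
  obj.touched = true;     // seen by the caller
  obj = { fresh: true };  // rebinds only the local copy; caller unaffected
}

const thing = {};
mutateAndReassign(thing);
// thing.touched is true; thing.fresh is undefined
```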

    1. body script, body style {

      This doesn't work well with scripts (and style elements) injected by the Hypothesis bookmarklet or the Wayback Machine's toolbar. On that note, it's pretty poor hygiene on their part to (a) inject this stuff in the body to begin with, and (b) not include at the very least a class attribute clearly defining the origin/role of the injected content. As I described elsewhere:

      set the class on the injected element to an abbreviated address like <style class="example.org/sidebar/2.4/injected-content/">. And then drop a page there explaining the purpose and requirements (read: assumptions) of your injected element. This is virtually guaranteed not to conflict with any other class use (e.g. CSS rules in applied style sheets), and it makes it easier for other add-ons (or the page author or end user) to avoid conflicts with you.
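
      A sketch of that convention (the address is the hypothetical example from above, not a real endpoint; returning markup as a string just keeps the sketch runnable anywhere):

```javascript
// Tag injected content with a class that doubles as a documentation address.
// In a real extension you'd create the element with document.createElement
// and append it to <head>, not <body>.
const INJECTED_CLASS = "example.org/sidebar/2.4/injected-content/";

function injectedStyleMarkup(css) {
  return `<style class="${INJECTED_CLASS}">${css}</style>`;
}
```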

    2. * Monospace fonts always render at 80% of normal body text for some * reason that I don't understand but is still annoying all the same.

      Dealing with it this way is a mistake. The only reasonable thing to do is to tell the user to adjust their browser's default font settings or deal with it. (This seems to only affect Firefox's default UA stylesheet/preferences, not Chrome.)

      Check out how the most recent iteration of the w2g streamline "client" https://graph.5apps.com/LP/streamline approaches styling.

    1. It comes down to "what is the work?" Take one of Ray Charles' records. Is it just the music itself? Does it include the physical media and its condition?

      This is important. See for example, the recent revelation that even with all the book digitization efforts, archive.org does not necessarily have imagery for a given volume's spine, even if the front cover and back cover were photographed!

  4. greggman.github.io
    1. One other thing is that many libraries seem bloated. IMO the smaller the API the better. I don't need a library to try to do 50 things via options and configuration.
    1. The world could benefit from a curated set of bookmarklets in the style of Smalltalk ("doIt", "printIt", etc buttons) that you can place in your bookmarks bar (or copy into a bookmarks document and open in it in your browser), where the purpose would be to allow you to:

      1. access a new scratch area (about:blank) for experimentation
      2. make it editable, or make any given element on a page editable
      3. let you evaluate any code written into the scratch space

      scratch.js aims for something similar, and though laudable it falls short of what I actually crave (and what I imagine would be most beneficial/appreciated by the public).
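
      Item 2 in that list is already a one-liner today. A hedged sketch, factored as a function so it can run outside a bookmarklet (behind a `javascript:` prefix, the same body is the whole bookmarklet):

```javascript
// Toggle edit mode for a whole document. designMode is a long-standing,
// standard document property whose value is "on" or "off".
function toggleDesignMode(doc) {
  doc.designMode = doc.designMode === "on" ? "off" : "on";
  return doc.designMode;
}

// As a bookmarklet:
//   javascript:void(document.designMode = document.designMode === "on" ? "off" : "on");
// A scratch area (item 1) is window.open("about:blank") plus the same toggle.
```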

    1. put people first

      Putting people first or persons first (i.e. egos first)? Because the latter is what you get under the current social paradigm.

    2. we have discovered a game-changing way of structuring cyberspaces: the Social Web, where content orbits the author like planets orbit a star

      I've also articulated this point, but in a negative context. This piece speaks of actor-indexed content in a positive light. I maintain that actor-indexed content is on the whole negative. It's a direct and indirect cause of the familiar social media ills and has wide-reaching privacy implications, which is bad in and of itself, but especially so when you realize that this in turn leads to a chilling effect where people simply opt not to play because it's the only remedy.

      We'd do well socially to abandon the "game-changing" practice of indexing based on actor and return to topic-based indexing, which better matches the physical world.

    3. The real threat comes with giant closed cyberspaces that disguise themselves as public spaces.

      Useful analysis.

    1. they don't get counted towards the total amount of friction in the system

      You can say the same for any number of things that GitHub natives usually put their thumb on the scale for in order to not count them among the costs of using GitHub. This type of "blindness" is a recurring issue that has come up every time I've tried to have a discussion about the costs of GitHub.

    1. "I don't want to interact with anyone who uses GitHub" is developer-hostile

      In fact, the reverse ("I won't interact with anyone who isn't using GitHub") is the default for many (most?) projects hosted on GitHub.

    1. I see way too many projects that have a mess of unwrapped (or purely auto-wrapped) wall of text for commit messages.

      You can blame "GitHub-Flavored" "Markdown" for that.

    1. being provably terminating is a problem dealing with the full body of C programs written in the world. The OP is dealing with their own self-published content. That's a different problem

      Far too few programmers understand this, which creates a conversational nuisance. I'm not sure why, though. Charged with writing out explicitly the explanation for why it's true, you might come away thinking it's so because the issue is deceptively nuanced.

      ... but it's not that nuanced.

      This should really not pose as big a problem as it does, and yet I see the "cache miss" occur way too often.

    1. I mean, over 40M devs from over 41 countries on GitHub? Pretty amazing.

      Is it, though? From where I'm sitting, GitHub has been good for exactly two things. Getting the uninteresting one out of the way first: free hosting. Secondly, convincing the long tail of programmers who would not otherwise have been engaged with the FOSS scene to free their code, by presenting a shimmering city on the horizon where cool things are happening that you'd be missing out on if you were to remain stuck in the old ways that predate the GitHub era's collective mindset of free-and-open by default. That is, GitHub strictly increases the amount of free software by both luring in new makers (who wouldn't have created software otherwise, free or not) and rooting out curmudgeons (who would have produced software of the non-free variety) with social pressure.

      I'm less convinced of the positive effects on "the choir"—those who are or were already convinced of the virtues of FOSS and would be producing it anyway. In fact, I'm convinced of the opposite. I grant that it has "changed the way [people] collaborate", but not for the better; the "standard way of working" referred to here by my measures looks like a productivity hit every time I've scrutinized it. The chief issue is that undertaking that level of scrutiny isn't something that people seem to be interested in...

      Then you have GitHub's utter disregard for the concerns of people interested in maintaining their privacy, i.e. in not having a top-level index of their comings and goings exposed, as GitHub does whether you asked for it or not, to anyone who requests it (and even to those who don't).

    1. I deliver PDFs daily as an art director; not ideal, but they work in most cases. There's certainly nothing rebellious or non-commercial about them

      It reminds me of The Chronicle's exhorting ordinary people to support the then-underway cause intended to banish Uber and Lyft from Austin, on ostensibly populist grounds, when in reality the cause was aimed at preserving the commercial interests of an entrenched, unworthy industry. I saw a similar thing with the popular sentiment against Facebook's PATENTS.txt—a clever hack on par with copyleft which had the prospect of making patent trolls' weapons ineffective, subverted by a meme that ended with people convinced to work against their own interests and in favor of the trolls.

      Maybe it's worth coining a new term ("anti-rebellion"?) for this sort of thing. See also: useful idiot

    1. It's great to enhance the Internet Archive, but you can bet I'm keeping my local copy too.

      Like the parent comment by derefr, my actual, non-hypothetical practice is saving to the Wayback Machine. Right now I'm probably saving things at a rate of half a dozen a day. For those who are paranoid and/or need offline availability, there's Zotero https://www.zotero.org. Zotero uses Gildas's SingleFile for taking snapshots of web pages, not PDF. As it turns out, Zotero is pretty useful for stowing and tracking any PDFs that you need to file away, too, for documents that are originally produced in that format. But there's no need to (clumsily) shoehorn webpages into that paradigm.

      If you do the print-to-PDF workflow outlined earlier in the thread, you'll realize that it doesn't scale well (it requires too much manual intervention and discipline, including taking care to make sure everything is filed correctly; hopefully you remember the ad hoc system you thought up the last time you saved something), that it's destructive, and that it ultimately gives you an opaque blob. SingleFile-powered Zotero mostly solves all of this, and it does so in a way that's accessible in one or two clicks, depending on your setup. If you ever actually need a PDF, you can of course go back to your saved copy and produce one on demand, but it doesn't follow that you should archive the original source material in that format.

      My only reservation is that there is no inverse to the SingleFile mangling function, AFAIK. For archival reasons, it would be nice to be able to perfectly reconstruct the original, pre-mangled resources, perhaps by storing some metadata in the file that details the exact transformations that are applied.

    1. it seems that most of these links are rehash of ES6 spec, which is pretty technical

      Yes. The problem also with relying on programmers' blogged opinions for advice and understanding is that a lot of the material is the result of people trying to work things out for themselves in public—hoping to solidify their own understanding by blogging—and it's not expert advice. Aspiring programmers further run the risk of mistaking any given blogger's opinion for deep and/or widely accepted truths. (And JS in particular certainly has lots of widely accepted "truths" that aren't actually true. Something about intermediate JS programmers has led to an abundance of bad conventional folk wisdom.) Indeed, spot-checking just a few of the links collected in the list here reveals plenty of issues—enough to outright recommend against pointing anyone in its direction.

      On the other hand, the problem with the ECMAScript spec is that it has gotten incredibly complicated (in comparison to the relative simplicity of ed. 3). There is a real need for something that is as rigorously correct as the spec, but more approachable. This was true even in the time of the third edition. In the early days of developer.mozilla.org, the "Core JavaScript Reference" filled this hole, but unfortunately editorial standards have dropped so low in the meantime that this is no longer true. Nowadays, there is not even any distinction between what was originally the language reference versus the separate, looser companion for learners that was billed as the JavaScript guide. The effect that it has had is the elevation of some of the bad folk wisdom to the point of providing it with a veneer of respectability, perhaps even a "seal of approval"—since it lives on MDN, so it's gotta be right, right?

    1. this._figureElement = this._page.querySelector("script");

      Better to use querySelectorAll and then filter based on figure designation (e.g. nodeValue.startsWith(" (Fig. 1) ")).
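
      A minimal sketch of that approach (the helper name and the sample designations are mine, not taken from the page in question):

      ```javascript
      // Hypothetical helper: scan a list of <script> nodes and pick the one
      // whose text content starts with the given figure designation, instead
      // of trusting querySelector("script") to return the right element.
      function findFigure(nodes, designation) {
        return Array.from(nodes).find(
          (el) =>
            el.firstChild != null &&
            String(el.firstChild.nodeValue).startsWith(designation)
        );
      }

      // In the page itself this would presumably be called as:
      //   findFigure(document.querySelectorAll("script"), " (Fig. 1) ");
      ```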

    2. document

      Unused!

      (Caller should also probably look more like:

      let system = new SystemB(document);
      system.attach(document.querySelectorAll("script"));
      

      ... where the querySelectorAll return value's role is to act as a list of document figures, for the reasons described elsewhere https://hypothes.is/a/-n-RYt4WEeu5WIejr9cfKA.)

    1. something called federated wiki which was by ward cunningham if anyone knows the details behind that or how we got these sliding panes in the first place i'm always interested

      it looks like my comment got moderated out, and I didn't save a copy. Not going to retype it here, but the gist is that:

      • Ward invented the wiki, not just the sliding panes concept.
      • Sliding panes are a riff on Miller columns, invented by Mark S. Miller
      • Miller columns are like a visual analog of UNIX pipes
      • One obvious use case for Miller columns is in web development tools, but (surprisingly) none of the teams working on browsers' built-in devtools at this point have managed to get this right!

      Some screenshots of a prototype inspector that I was working on once upon a time which allowed you to infinitely drill down on any arbitrary data structures:

      https://web.archive.org/web/20170929175241/https://addons.cdn.mozilla.net/user-media/previews/full/157/157212.png?modified=1429612633

      https://web.archive.org/web/20170929175242/https://addons.cdn.mozilla.net/user-media/previews/full/157/157210.png?modified=1429612633

      Addendum (not mentioned in my original comment): the closest "production-quality" system we have that does permit this sort of thing is Glamorous Toolkit https://gtoolkit.com/.

    2. if you have any doubts about a plug-in use the most popular ones because

      Possibly true, but dubious.

  5. www.dreamsongs.com
    1. changing the base code can be expensive and dangerous
    2. The primary feature for easy maintenance is locality: Locality is that characteristic of source code that enables a programmer to understand that source by looking at only a small portion of it.
    3. mistake

      Should be "mistakes".

    1. I only allowed smaller closures in the code and refactored the rest into separate top-level functions. This is a deliberate move against the common practice of js programmers. Why? Because I noticed closures make code harder to read.
    1. as a more experienced user I know one can navigate much more quickly using a terminal than using the hunt and peck style of most file system GUIs

      As an experienced user, this claim strikes me as false.

      I often start in a graphical file manager (nothing special, Nautilus on my system, or any conventional file explorer elsewhere), then use "Open in Terminal" from the context menu, precisely because of how much more efficient desktop file browsers are for navigating directory hierarchies in comparison.

      NB: use of a graphical file browser doesn't automatically preclude keyboard-based navigation.

    1. object-orientation offers a more effective way to let asystem make good use of parallelism, with each objectrepresenting its own behavior in the form of a privateprocess

      something, something, Erlang

    2. Functional programming implies much more thanavoiding goto statements, however.It also implies restriction to localvariables, perhaps with the excep-tion of a few global state variables.It probably also considers the nest-ing of procedures as undesirable.
    1. although it probably needs a longer essay with more worked examples and better explanations of the various moving parts to really work for that
    1. http://weblog.infoworld.com/udell/gems/ju_mynewgig.mp3
    2. http://weblog.infoworld.com/udell/gems/ju_mcgraw.mp3
    3. http://weblog.infoworld.com/udell/gems/ju_burbeck.mp3
    4. http://weblog.infoworld.com/udell/gems/ju_gemignani.mp3
    5. http://weblog.infoworld.com/udell/gems/ju_rosenfeld.mp3
    6. http://weblog.infoworld.com/udell/gems/ju_frost.mp3
    7. http://weblog.infoworld.com/udell/gems/ju_idehen.mp3
    8. http://weblog.infoworld.com/udell/gems/ju_HillMcFarland.mp3
    9. http://weblog.infoworld.com/udell/gems/ju_linqMay06.mp3
    10. http://weblog.infoworld.com/udell/gems/ju_singleton.mp3
    11. http://weblog.infoworld.com/udell/gems/ju_martinez.mp3
    12. http://weblog.infoworld.com/udell/gems/ju_rodgers.mp3
    13. http://weblog.infoworld.com/udell/gems/ju_rayhill.mp3
    14. http://weblog.infoworld.com/udell/gems/ju_dcstat.mp3
    15. http://weblog.infoworld.com/udell/gems/ju_hudack.mp3
    16. http://weblog.infoworld.com/udell/gems/ju_glushko.mp3
    17. http://weblog.infoworld.com/udell/gems/ju_patrick.mp3
    18. http://weblog.infoworld.com/udell/gems/ju_mayfield.mp3
    19. http://weblog.infoworld.com/udell/gems/ju_xbrl.mp3
    20. http://weblog.infoworld.com/udell/gems/ju_suber.mp3
    21. http://weblog.infoworld.com/udell/gems/ju_fielding.mp3
    22. http://weblog.infoworld.com/udell/gems/ju_windley.mp3
    23. http://weblog.infoworld.com/udell/gems/ju_navizon.mp3
    24. http://weblog.infoworld.com/udell/gems/ju_fahlberg.mp3
    25. http://weblog.infoworld.com/udell/gems/ju_ullman.mp3
    26. http://weblog.infoworld.com/udell/gems/ju_ericson.mp3
    27. http://weblog.infoworld.com/udell/gems/ju_liu.mp3
    28. http://weblog.infoworld.com/udell/gems/ju_schneider.mp3
    29. http://weblog.infoworld.com/udell/gems/ju_russell.mp3
    30. http://weblog.infoworld.com/udell/gems/ju_securent.mp3
    31. http://weblog.infoworld.com/udell/gems/ju_wilkin.mp3
    1. You can use LibreOffice's Draw

      Never mind LibreOffice Draw, you can use LibreOffice Writer to author the actual content. That this is never seriously pushed as an option (even, to my knowledge, by the LibreOffice folks themselves) is an indictment of the computing industry.

      Having said that, I guess there is some need to curate a set of templates for small and medium size businesses who want their stuff to "pop".

    1. it is also clear that there would be no need for copyleft licences to govern the exercise of copyright in software code by third-party developers at all if copyright did not guarantee rightsholders such a high degree of exclusive control over intellectual creations in the first place

      This is simply not true. The unique character of software under the conventions by which most software is published (effectively obfuscated, albeit not for the purpose of obfuscation itself but for the purpose of producing an executable binary) means that reciprocal licenses like the GPL are very much reliant on the existing copyright regime. Ubiquitous and pervasive non-destructive compilation would be a prerequisite for a world where copyright's role in free software were nil.

    1. Simply select some text on this page and make a comment!

      Nope, doesn't work:

      Loading failed for the <script> with source “http://assets.annotateit.org/annotator/v1.2.10/annotator-full.min.js”. annotatorjs.org:242:1

      (assets.annotateit.org is unreachable, and the annotateit.org landing page contains a farewell message "July 14, 2019. The service has now been fully shut down.")

      It would be nice if these were still functional, for historical reasons.

    1. I’m not confident I’ll be able to keep a server running to serve up my notes, so I bundled them up into an archive of pregenerated HTML, which anyone who has a copy can unpack and read, without requiring any online resources.
    1. The array prototype is syntax sugar. You can make your own Array type in pure JavaScript by leveraging objects.

      At the risk of saying something that might not now be correct due to recent changes in the language spec, this has historically not been true; Array objects are more than syntax sugar, with the spec carving out special exceptions for arrays' [[PutValue]] operation.
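
      A quick sketch of the kind of exotic behavior at issue (variable names are mine): writes past the end of a real Array update `length`, and shrinking `length` deletes elements, neither of which a plain object imitating an array gets for free.

      ```javascript
      // A real Array is an exotic object per the spec: length tracks writes.
      const realArray = [];
      realArray[2] = "x";      // length becomes 3 automatically

      // A plain object posing as an array: length is just a data property.
      const arrayLike = { length: 0 };
      arrayLike[2] = "x";      // length stays 0; nothing intercepts the write

      realArray.length = 1;    // shrinking length truncates: index 2 is deleted
      ```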

    1. Of course not. Reading some copyrighted code can have you entirely excluded from some jobs

      There is a classic mistake being committed here. Private policy does not the law make.

      The Wine project wants to exclude you if you've laid eyes on Windows sources? That's fine; that's their right. But the Wine devs are neither writing legislation nor issuing binding opinion.

      We see this everywhere. Insurance company wants to have their adjusters follow some guideline when investigating/settling claims? Again, that's fine, but their stance may or may not have anything to do with actual law. Shop proprietor wants to exclude you from the store if you are (or are not) wearing a mask? Okay, just don't infer that this necessarily has any bearing on what is law in the eyes of the courts. Imposing keto on yourself except one cheat meal on Sundays? Fine again, but not law.

    1. it could make sense for it to start a completely new browser based on WebKit

      Indeed. I've argued for a while that WebKit is an excellent basis for a new, community-developed browser, both strategically and technically. That is, it makes sense to start a browser project based on WebKit even now, whether Hachette exists or not.

    2. The repository software has to be released under a libre license.

      An alternative, don't create a service. Let the Web be the platform.

      If I'm a contributor to the project, then the extension's about.html (or whatever) should both credit me and at my option include a link to the place where I publish or link to useful scripts. This goes for all contributors. The way to share would be to (a) publish their scripts somewhere online and (b) contribute to the project and add their entry to about.html. Alternatively, for people not interested in contributing (and to maintain code quality and avoid poorly aligned incentive structures), the project itself can keep a simple static page up to date, linking to various others' pages as the requests to do so come in.

  6. Jun 2021
    1. They are artifacts of a very particular circumstance, and it’s unlikely that in an alternate timeline they would have been designed the same way.

      I've mentioned before that the era we're currently living in is incredibly different from the era of just 10–15 years ago. I've called the era of yesterdecade (where the author of this piece appeared on Colbert a week or so after Firefox 3 was released and implored the audience to go download it and start using it) the "Shirky era", since Shirky's Here Comes Everybody really captures the spirit of the times.

      The current era of Twitter-and-GitHub has a distinct feel. At least, I can certainly feel it, as someone who's opted to remain an outsider to the T and G spheres. There's some evidence that those who haven't aren't really able to see the distinction, being too close to the problem. Young people, of course, who don't really have any memories of the era to draw upon, probably aren't able to perceive the distinction as a rule.

      I've also been listening to a lot of "old" podcasts—those of the Shirky era. If ever there were a question of whether the perceived distinction is real or imagined, these podcasts—particularly shows Jon Udell was involved with, which I have been enjoying immensely—eliminate any doubts about its existence. There's an identifiable feel when I go back and listen to these shows or watch technical talks from the same time period. We're definitely experiencing a low point in technical visions. As I alluded to earlier, I think this has to do with a technofetishistic focus on certain development practices and software stacks that are popular right now—"the way" that you do things. Wikis have largely fallen by the wayside, bugtrackers are disused, and people are pursuing busywork on GitHub and self-promoting on social media to the detriment of the things envisioned in the Shirky era.

    1. The Future's Here, But Unevenly Distributed

      The title here is a reference to the William Gibson quote "The future is already here — it's just not very evenly distributed."

    1. This piece is syndicated here from its original appearance as a post in Wired Insights https://www.wired.com/insights/2013/02/rebooting-web-comments-wire-them-to-personal-clouds/.

      Jon's followup is here http://jonudell.net/tpc/28.html.

    2. Some of the best customers of such a service will be academics.

      Indeed. Web literacy among the masses is pitifully low. Browsermakers are certainly to blame for being poor stewards. Hot Valley startups are responsible as well. (See https://quoteinvestigator.com/2017/11/30/salary/.)

    3. But here's the twist. That edit window is wired to your personal cloud. That's where your words land. Then you syndicate your words back to the site you're posting to.

      This is more or less how linked data notifications work. (And Solid, of course, goes beyond that.)

    4. If they did I think there would actually be some quality of discussion, and it might be useful

      I used to think this. (That isn't to say I've changed my mind. I'm just not convinced one way or the other.)

      Another foreseeable outcome, relative to the time when the friend here was making the comment, is that it would lead to people being nastier in real life. Whether that's true or not (and I think that it might be), Twitter has turned out to be a cesspool, and it has shown us that people are willing to engage in all sorts of nastiness under their real name.

    1. For some reason, this page doesn't link back to the item's overview/collection entry. It can be found here:

      https://archive.org/details/conversationsnetwork_org