2,446 Matching Annotations
  1. Last 7 days
    1. The only thing that might be unfamiliar is the local keyword, which is used to declare variables. Variables are global by default, so local is used to declare a variable as local to the current scope.

      awful

    1. EscapeHtml(str) → str Escapes HTML entities: The set of entities is &><"' which become &amp;&gt;&lt;&quot;&#39;.

      Terrible name choice! EscapeText (or EscapeTextForHTML if you want to be really verbose) is the most correct (and less footgun-y) way to refer to this.

      I wonder how many security problems the name EscapeHtml will lead to...
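
      For illustration, a minimal sketch of the function under the proposed name (JavaScript; behavior mirrors the description quoted above):

        // Escapes plain text for safe inclusion in HTML. Note & must be replaced first.
        function escapeTextForHTML(str) {
          return str.replace(/&/g, "&amp;")
                    .replace(/>/g, "&gt;")
                    .replace(/</g, "&lt;")
                    .replace(/"/g, "&quot;")
                    .replace(/'/g, "&#39;");
        }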

    2. your private version of the Internet

      The Web, you mean.

  2. www.iastatedigitalpress.com
    1. Another Microsoft related problem with WebDAV and Windows 2000 involves an “unchecked buffer” that handles the DAV protocol.

      This is an ordinary bug and not worthy of note in a document about the relative strengths and weaknesses of WebDAV (or any other protocol); it's not an inherent consequence of WebDAV as designed...

    2. 1=23189,00.asp

      should have been "[...]a=23189,00.asp"

    3. www.pcmag.com/print_article/0,3048,1=23189,00.asp
    4. IETC

      should be "IETF"

    5. IETP

      should be "IETF"

    6. This page is dense with references. It's an excellent candidate for marking up and connecting to the other documents it references, Xanadu-style

    7. authoring-suitable representation

      what the FSF (GPL) calls "preferred form for modification"

    8. WC3

      should be "W3C"

    9. accomplish things the original authors of HTTP never imagined at the time of its invention” (www.swdi.com/WebDAV-Report.pdf)

      except TBL did imagine them -- browsermakers (outside CERN) didn't

    10. he adds, “simplicity also leads to limitations.”

      that's a feature (not a bug)

    11. Floyd calls HTML “an extremely simple protocol”

      should be "HTTP"

    12. 1=23189,00.asp

      should have been "[...]a=23189,00.asp"

    13. IETP

      should be "IETF"

    1. Files which render themselves when published (e.g. templates or other scripts) will be rendered when accessed from a mounted WebDAV volume. This is because WebDAV clients issue a GET (it's an extension of HTTP, after all) to hand you your data. You can't simply mount a WebDAV share and start editing PHP files, for example. Until a data type is provided for source code-based documents, this will remain a problem.

      This is a node-/organization-level information architecture problem.

      If /foo.php is a script that generates a Web page, then separate identifiers need to be assigned for each resource (one for the document itself, and one for the script that generates it). It is a failure of the node not to distinguish between the two. A separate content type would not solve this problem—it would just appear to cover it up (as well as create new ones).

    2. Locking: Locking protects web team members from overwriting each other’s changes. When two or more people are working on the same file, WebDAV ensures that they compare or merge changes before writing to the file.

      Git solved this problem in a much cleaner and respectable way: it's an append-only log. The only thing left is to "flatten" the two namespaces into a single one. On first inspection, this seems like it would re-introduce the need for rewrites (and locking?), but this can be cleverly solved by preregistration of forward references. Locating the current "head" can be resolved in log2(n) time wrt number of "revisions" (and can also be avoided in many cases, besides).

    3. How does WebDAV work? Each of your colleagues — whether they’re in Bangalore or Bangor — mounts a WebDAV volume located on the shared web server to his or her desktop. They can then access its files as they would any other networked volume.

      This is a pretty unsatisfying description of what WebDAV (and the underlying spirit) actually offers.

      Mounting a remote host like a local disk is not without its (limited) use and convenience, but it's neither the intent nor the extent behind the spirit of WebDAV.

    1. A good example for the motivation of this work is the website of the Hypertext Conference in 2008. The original URI http://ht2008.org is not accessible anymore and returns a 404 error today.
    1. we show how we have used these protocols in the design of a new Resource Migration Protocol (RMP), which enables transparent resource migration across standard web servers. The RMP works with a new resource migration mechanism we have developed called the Resource Locator Service (RLS), and is fully backwards compatible with the web’s architecture, enabling all web servers and all web content to be involved in the migration process

  3. May 2023
    1. They assume that it's worth someone's time to keep up with updates every month or two. But situated software doesn't have those kinds of surpluses. It requires programs to just work once built, with little maintenance

      I think there's a category error here.

    2. Here is a study
    3. Here is a talk
    4. This is a headline
    5. There are several points where there is a gesture/reference to something on the screen ("This is a headline", "Here is a talk"...) where what's shown on the screen is doing the heavy lifting.

      Don't just say, "here [it is]", though. Mention the presentation and author by name when speaking. The screen is a complement.

      Consider: there's no reason in principle why this entire presentation couldn't be a "blogcast" (a blog post in audio form).

    6. incompetence

      This accounts for ~all programmers. It's not just a matter of people presenting themselves as having credentials that are just backed up by monkey-see-monkey-do IT cargo cultism. Programmers write bugs.

      It's not really clear what this section has to do with the overall talk. How is this (specifically the part about all programmers creating bugs) mitigated or addressed by the contents of this talk?

    7. What if there was zero accidental complexity to modifying an app, and all you had to focus on was understanding how it worked?

      There's a hard transition after this point, but it comes in such a way that makes it seem like what's next is something that will expound in some detail upon this point.

    8. Deciding when and how an organization should break compatibility is a thorny problem. But fortunately we're not concerned with organizations here. When we're just individuals

      This def. needs to be more fully developed.

      Maintaining access to past works due to any number of reasons including file format lock-in (even when the software in question is open source) is actually a huge deal.

    9. I remember when I started out programming, I wanted my programs to look “professional,” like the programs other “real” programmers made.

      (It is my contention that this accounts majorly for a bunch of the problems in e.g. the NPM ecosystem.)

    10. When I show it to other programmers, often the first question I get is about what the file format for drawings is, and how to get other editors to support that file format. But does it make sense to so privilege interoperability? You have a tool to draw pictures as you write. There are situations where it can be useful, even if nobody else can read it. Trying to stay close to other people and tools makes things more complex and less supple. A tiny, open-source tool that is easy to run and works with a simple file format allows me (and you) to focus on the writing experience without precluding others from taking on additional concerns.

      This whole thing could stand to be more fully developed. I don't have any specific advice; it just seems unclear and feels like a non sequitur upon reaching the end and moving on to the next topic.

    11. These languages foster a culture of depending on lots of libraries

      I have a lot to say about this, but I'll be succinct and just say, "don't conflate a language with a particular community that uses that language—even if that community has achieved cultural dominance".

    12. it's easy to build

      What does this mean, though? Concretely, I mean. (Could stand to qualify/quantify this.)

    13. This is the LÖVE game engine. It's based on Lua

      I was surprised to find, about a month ago, that LÖVE is written in C++. Its scripting support is Lua-based.

    14. As a result, it doesn't tend to get used in products of conquest that try to be all things to all people.

      Not sure that this is a good argument. The fact that someone is using $LANG in such a way (for "conquest") doesn't have any bearing on how you have to use the language.

      Coming from a place of evaluating an argument on soundness and rationality, the conclusion/advice given (use Lua) is only about half a step removed from PL hipsterism—at least when evaluating the reasons actually stated. (There may be a stronger argument here, but it's obscured by this one.)

    15. that spawns lots of forks

      Maybe not necessary for the intended audience, but might benefit from distinguishing between bona fide forks vs what GitHub has done to the word.

    16. I'm always looking for ways to make them useful to others.

      Isn't that a contradiction?

    1. If you doubt my claim that internet is broad but not deep, try this experiment. Pick any firm with a presence on the web. Measure the depth of the web at that point by simply counting the bytes in their web. Contrast this measurement with a back of the envelope estimate of the depth of information in the real firm. Include the information in their products, manuals, file cabinets, address books, notepads, databases, and in each employee's head.
    1. to understand its construction and intentions and to change it comfortably

      Note that these two things are at odds. Literate programming (to a high standard—higher than Knuth's standards, viz [1]) is probably the single best thing that enables the former, but it works against the latter.

      1. http://akkartik.name/post/literate-programming
    1. people who are functioning in an underground manner in plain sight with knowing intention of being watched and findable, just doing so in a double-switchback

      huh?

    1. No representation is made about the persistence policies for any other information on the site.

      lame

    2. Should the W3C be disbanded, then any Web site will be granted the right to make a copy (at a different URI) of all public persistent resources so long as they are not modified and are preserved in their entirety and made available free of charge, and provided the same persistence policy is applied to these "historical mirrors." In such event, the original https://www.w3.org web site will be handed over for management to another organization only if that organization pledges to this policy or one considered more persistent.
    1. I think that TANGLE-style reordering is a lot less important with modern programming languages: they don't do one-pass compilation and so can deal with forward references. Note that most of the cross-references in Knuth's program could be replaced with function calls or constant names
    1. rsync.net would be really great for repo hosting if you could trivially pair it with something else to get public repos. You can't, though.

    1. It also includes some advice that might be obvious to professional programmers, but that might not be obvious to researchers or others who are just starting out
    1. in literate programming, the problem and its should be described in the manner best suited to the problem and the intended readership. That means non-linear, in a compiler's sense. Fragments of procedures presented out of order, procedures presented in any order, data structures introduced in any order. Perhaps objects give us a handle on the non-linearization, because they can be created in any order, and methods are very short. Webbed descriptions could also be fine, but of course people do read one sentence at a time, and when you convert to paper there is an order to the paper. But that order should be ordered to the reader, not the compiler (until it is time to compile)!
    2. I felt he had just written the program where the default was comment without delimiters (i.e. most just changed the syntax for the compiler).
    3. I second everything Kent said. Perhaps my misunderstanding about Knuth's writings, but the literate programs of his I read looked like the program was still sequenced for his compiler, with lots of English written around it. That meant the English read to me like it was sequenced for his compiler. I should like to see an example sequenced for me, with the pre-compiler so adapted as to straighten the code out for the compiler.

      This is pretty much the basis for Kartik's criticism in Literate Programming: Knuth is doing it wrong.

    4. I can feed my Literate Programs (and I do virtually everything significant that way, since it helps me think better about the code) to hypertext-style index generators (a.k.a. "documentation generators", which I think is a dangerously misleading term).

      Thought experiment: what if you elevated the documentation to first-class status rather than as low-stakes generated artifacts that can be blown away and regenerated? What if you modified your compiler to consume the documentation and produce the same binary as the one produced by what you now consider to be your source code?

    1. “One thing that could be challenging is being able to tell how many API calls are being made - since many APIs are charged by the number of API calls”
    1. Web sites often design their APIs to optimize performance for common cases. Their main object-reading methods may return only certain “basic” properties of objects, with other methods available for fetching other properties. ShapirJS hides this performance optimization complexity from the user.

      In other words, it risks undermining the intent of the API design.

    1. Cox published a copy of this text on his homepage:

      https://web.archive.org/web/20021109120553/http://www.virtualschool.edu/cox/pub/92ByteWhatIfSilverBullet/index.html

      Confusingly, he labels it there "What if there's a Silver Bullet... And the Competition Gets it First?"—which seems to be the title of an ~~entirely different article (... and which itself is confusingly named 92ByteWhatIfSilverBullet, even though it is described as "An editorial for the Journal of Object-oriented Programming in June 1992. Republished in Dr. Dobb's Journal in Oct 1992.")~~

  4. robotsinplainenglish.com
    1. In the officers' mess they had arranged yellow flowers on the tables for the cadets to eat, which he discovered were boiled cut eggs.

      Huh?

    1. Both systematic and natural “soldiering” were identified as sources of inefficiencies in the worker

      One of the recurring things that needed to be pointed out in the fab (still does) is that Samsung is not in the business of selling workers who will suck it up and show that they're willing to ignore all the broken things around them while doing a bunch of dumb day-in-and-day-out stuff undeterred in order to not look lazy* and plausibly prove that they're badass. It's in the business of fabbing chips.

      * Side note: everyone who needed this pointed out was, in reality, lazy; the entire mode of attack came from obvious self-loathing.

    1. Google, who primarily makes its money advertising on the web, is incentivised to make websites more like apps

      More accurately: incentivized to make the browser runtime accommodate apps that assume a Web platform rich in APIs not dissimilar to mobile app platforms

    1. The paper had the "Artifacts available" badge in the ACM Digital Library, highlighting the research in the paper as reproducible. Yet, the instructions to get the dataset required several steps rather than just a link: log in, find the paper, click on a tab, scroll, get to the dataset.
    1. Mastodon, currently seeing a large influx of users in the wake of Musk’s Twitter takeover, creates a different set of challenges because different users can select the same handle on different instances.

      Except not, because the domain after the @ sign is part of your handle, and it's not "creating" any challenges that weren't already a thing with email...

    1. the dreaded PDF favored by academics

      This definitely needs to be corrected.

    2. The ephemeral and non-standardized way that individuals operate their own blogs and social media means that not only might something move or cease to exist (a findability problem) but there is also an honesty problem when contents change or update without record

      Ibid. There is nothing about HTTP which makes your URIs unstable. It is your organization.

    3. While blogging and tweeting is cheap and fast and encourages ideas to be shared, these aren’t trustworthy archives.

      There's nothing in principle that makes blogging untrustworthy. It comes down to, as Elavsky says just a bit later, "higher [or lower] standards" for longevity. But there's a sleight of hand here. The people who are receptive to the proposal in this paper are already almost by definition selected for those who have high standards. So this is not a robust argument that there's anything superior to this approach vs "Just put it on your blog and take the same amount of care that you would when following the archival advice in this paper."

    1. results of database queries using POST (rather than GET) are not addressable

      This is just a misuse of HTTP.
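
      Compare (hypothetical URL): the same query issued with GET carries its parameters in the URL, so the result has a stable, linkable address:

        // Addressable: the whole query lives in the URL and can be linked, bookmarked, cached.
        fetch("https://example.org/search?q=webdav&sort=date")
          .then((res) => res.text())
          .then((body) => console.log(body));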

    2. idea: a link type where a document says "I call myself X". In combination with a back-link service, it's a nifty URN idea.

      Prior art to the (surprisingly late entry) rel=canonical link introduced by Google.
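
      For reference, the shape that idea eventually took in HTML (URL is illustrative):

        <link rel="canonical" href="https://example.org/the-one-true-uri">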

    1. Annotate

      See also: Linked Data Notifications

    2. In a drag-and drop world, every window should have an icon for the document it holds which can be dragged to make a link. (Later versions of NeXTStep had this with alt/click on the titlebar).

      It's embarrassing that this isn't supported by freedesktop.org-affiliated environments.

    3. Somewhere near the "draft" end of the scale is its use a hypertext communal or personal notebook which is very close to a major original use of the Web in 1990. In this mode I can browse over notes made by people in my group, and rapidly contribute new ideas.

      Related: w2g/graph.global

    4. If you have had to switch to edit mode, and think of a local filename in which to save the file, then you have lost doubly, If you have had to answer lots of difficult questions about where to save absolute or relative links, you have lost yet again and probably messed up the file already! You should not have to think about "where" things are.

      And if you have to look up API docs to write a plugin to publish to a given service, then you have lost, too.

      Things like Neocities's out-of-band WebDAV gateway are some of the most pointless things in the world. WebDAV is HTTP! Just allow a PUT to the intended URL!
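
      To illustrate (hypothetical URL; authentication omitted), publishing would be nothing more than a PUT to the page's own address:

        // WebDAV is HTTP: authoring a page is a PUT to the URL it will live at.
        fetch("https://example.neocities.org/notes/page.html", {
          method: "PUT",
          headers: { "Content-Type": "text/html" },
          body: "<!doctype html><title>Notes</title><p>Hello.",
        }).then((res) => console.log(res.status));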

    1. Most of the installed base of web client software allows users to view link address. But ironically, that option is not available for printing in many cases. It would be a straightforward enhancement, well within the bounds of the existing architecture.

      There's more going on in the quoted requirements than these comments suggest.

    2. View control can be achieved on the Web by custom processing by the information provider. A number of views can be provided, and the consumer can express their choice via links or HTML forms. For example, gateways to database query systems and fulltext search systems are commonplace. Another technique is to provide multiple views of an SGML document repository[dynatext, I4I]. Another approach to view control is client-side processing

      Server-side processing for view control is gross—it's a conceptual abomination.

    1. each manageable in isolation

      Consider the phenomenon of people screwing others over as a matter of course in what they consider ways small enough to be acceptable—on the assumption that the other person seems to be positioned in such a way that they can absorb the shock.

    1. This is: Berners-Lee, Tim. “World-Wide Computer.” Communications of the ACM 40, no. 2 (February 1997): 57–58. https://doi.org/10.1145/253671.253704

    2. World-Wide Brain

      See also: the brains in Futurama's Infosphere.

    1. The Web does not yet meet its design goal as being a pool of knowledge that is as easy to update as to read. That level of immediacy of knowledge sharing waits for easy-to-use hypertext editors to be generally available on most platforms. Most information has in fact passed through publishers or system managers of one sort or another.

    2. Apart from being a place of communication and learning, and a new market place, the Web is a show ground for new developments in information technology.

      The eternal tyranny of the milieu of the Web

    1. a really good Windows no-code is still very important, though, because three-quarters of all PCs still run Windows

      It is important that it work on Windows, but three-quarters of all PCs do not run Windows. Three-quarters of all traditional laptops and desktops? Sure, but most personal computers these days are mobile phones.

    1. @17:11

      The idea of portability is that you take a fully running system that is compliant with the expectations of that host system, pick it up, put it on the other platform, and it takes care of all the problems associated with living on that new platform and just works.

      Presages containerization in the 2010s.

    2. @17:03

      The idea of portability is not that you take your C code and recompile it and hope it compiles and hope the compilers have the same bugs in them.

    1. A contrasting experience was to learn how to use the tools to turn my programs into executable. It was a painfully slow and deeply unpleasant process where knowledge was gathered here and there after trial, errors, and a lot of time spent on search engines.
    1. Shepard writes to Boring (yes, Boring again) at this point that his “only real source of anxiety now is the realization that much of my life would be lost if I don’t get my maze results published.”

      Echoes of Darwin.

    1. @54:06:

      Host: So in a way it's a regulation to drive change, or...

      Anderson: Or, regulation to stop change that would upset existing safety standards social expectations, social norms.

    1. almost all beginners to RDF go through a sort of "identity crisis" phase, where they confuse people with their names, and documents with their titles. For example, it is common to see statements such as:- <http://example.org/> dc:creator "Bob" . However, Bob is just a literal string, so how can a literal string write a document?

      This could be trivially solved by extending the syntax to include some notation that has the semantics of a well-defined reference but the ergonomics of a quoted string. So if the notation used the sigil ~ (for example), then ~"Bob" could denote an implicitly defined entity that is, through some type-/class-specific mechanism, associated with the string "Bob".
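
      A sketch of what that sugar could desugar to in standard Turtle (the ~ notation is this annotation's proposal; the choice of foaf:name as the associating property is an assumption):

        @prefix dc:   <http://purl.org/dc/elements/1.1/> .
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .

        # Sugared:   <http://example.org/> dc:creator ~"Bob" .
        # Desugared: a well-defined node, tied to the string by a naming property.
        <http://example.org/> dc:creator _:bob .
        _:bob foaf:name "Bob" .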

    1. configuring xterm

      Ugh. This is the real problem, I'd wager. Nobody wants to derp around configuring the Xresources to make xterm behave acceptably—i.e. just to reach parity with Gnome Terminal—if they can literally just open up Gnome Terminal and use that.

      I say this as a Vim user. (Who doesn't do any of the suping/ricing that is commonly associated with Vim.)

      It is worth considering an interesting idea, though: what if someone wrote a separate xterm configuration utility? Suppose it started out producing, by default, the settings that would most closely match the vanilla Gnome Terminal (or some other contemporary desktop default) experience, but showed you the exact same set of knobs that xterm's modern counterpart gives you (by way of its settings dialog) to tweak this behavior? And then beyond that you could fiddle with the "advanced" settings to exercise the full breadth of the sort of control that xterm gives you? Think Firefox preferences/settings/options vs. dropping down to about:config for your odd idiosyncrasy.

      Since this is just an Xresources file, it would be straightforward to build this sort of frontend as an in-browser utility... (or a triple script, even).
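
      For flavor, the kind of Xresources lines such a utility would spit out (values are illustrative, aiming at stock-desktop-terminal parity):

        ! TrueType font and clipboard behavior roughly matching a default desktop terminal.
        XTerm*faceName: DejaVu Sans Mono
        XTerm*faceSize: 11
        XTerm*selectToClipboard: true
        XTerm*scrollBar: false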

    1. @~8:00 one quote says:

      With web articles, I think the biggest problem is that it is hard to create interactive elements in it. Some people are able to make very fancy web articles. But it takes efforts you know.

      Related to ho-hoism? Or at least the Torvaldsian sentiment about not being "big and professional like gnu[sic]".

    1. This is:

      Torvalds, Linus torvalds@klaava.helsinki.fi. Reply to "What would you like to see most in minix?"; Google Groups 2005 November edition. Message-ID 1991Aug26.110602.19446@klaava.Helsinki.FI. comp.os.minix, Usenet. 1991 August 26.

    2. (just a hobby, won't be big and professional like gnu)

      Is this (self-directed/-inflicted) ho-hoism? (Maybe even a consequence of Ra?)

  5. Apr 2023
    1. The word “entered” should not be used on the line above the Judge’s signature to show the date on which a judgment or order is signed.

      Huh? Hard to understand the wording here.

    1. Wow, this is me. A friend once analogized it to being like a light source. I am a laser, deeply penetrating a narrow spot, but leaving the larger field in the dark while I do so. Other people are like a floodlight, illuminating a large area, but not deeply penetrating any particular portion of it.

      This way of thinking should be treated with care (caution, even), lest it end up undergirding a belief in a false dichotomy.

      That can be a sort of "attractive people are shallow and dumb and unattractive people are intelligent and deep"-style mindtrap.

    2. I can't seem to code and engage in an ongoing human interaction at the same time. It has to be one or the other. I also really hate having someone looking over my shoulder while I'm typing.

      This doesn't sound to me like they have actually been doing pair programming as I have always understood it. Neither participant needs to "engage" in those (admittedly distracting) things—least of all the person at the keyboard.

      In pair programming as I have had it laid out—and not as a consequence of hearing "pair programming" and extrapolating or assuming what it involves—one person is writing the code just like when they're alone, except they're not actually controlling the computer. That's the other person's job. The first person is controlling the person who is controlling the computer. Part of the job of the second person involves shutting the fuck up and just following what the other person is saying to do. This pattern only ever breaks when the pair decides to switch places or the person dictating runs into an issue, at which point the person at the keyboard (who has been thinking all the while as an observer of what the two have been producing and is expected to know what the problem is, having already recognized the problem the first time around) should speak up. When switching roles or after reaching milestones, the two can confer about high-level concerns, immediate and distant plans to deal with things overlooked or set aside in the last round, etc.

      I am aware that "two people working at a single computer" is how most people understand pair programming (and that there seems to be academic work covering the topic which lays it out in a way that contradicts what I've described here), but I regard that as wrong—for all the obvious reasons, including and especially the ones described by the commenter here...

    1. One of them has a meeting? Sorry can't do any work.

      One of them has a meeting? They both have a meeting—this one.

      To put it another way: what if you received an email informing you of meeting A and then were later informed of meeting B? How would you ordinarily handle this conflict?

    2. forcing two people to do a specific thing at the same time

      Being able to show up to work on time is a basic requirement for people who are, culturally, widely viewed as being not very responsible—grocery store workers, gas station attendants, etc. Being able to satisfy a similar expectation should not be difficult, then, for well-educated and well-paid folks on a regular basis.

    1. It sounds like the non-enthusiast “reimplement everything in my favorite language” answer is that Go’s FFI is a pain, even for C.

      Relative to the experience that Golang developers are used to, yes, it's a pain.

      But that isn't to say it's any more or less painful on an absolute scale, esp. wrt what comprises typical experiences in other ecosystems.

    1. consumes more CPU and memory to simplify the logic and improve reliability.

      Candid! I propose that this interpretation of "Modern" receive widespread recognition.

    1. These systems provide quite powerful tools for automatic reasoning, but encoding many kinds of knowledge using their rigid formal representations requires significant--and often completely infeasible--amounts of effort.
    2. This is:

      Malone, Thomas W., Keh-Chiang Yu, and Jintae Lee. 1989. “What Good Are Semistructured Objects? : Adding Semiformal Structure to Hypertext.” Working Paper. Cambridge, Mass. : Sloan School of Management, Massachusetts Institute of Technology. https://dspace.mit.edu/handle/1721.1/49393

    1. Stevens, W. Richard 1994. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley, Reading, Massachusetts
    2. Ondaatje, Michael 1992. The English Patient. Vintage International, New York
    3. Mockapetris, P.V. 1987b. "Domain Names: Implementation and Specification," RFC 1035
    4. Mockapetris, P.V. 1987a. "Domain Names: Concepts and Facilities," RFC 1034
    5. Malone, Thomas W., Grant, Kenneth R., Lai, Kum-Yew, Rao, Ramana, and Rosenblitt, David 1987. "Semistructured Messages are Surprisingly Useful for Computer-Supported Coordination." ACM Transactions on Office Information Systems, 5, 2, pp. 115-131.
    6. Malone, Thomas W., Yu, Keh-Chiang, Lee, Jintae 1989. What Good are Semistructured Objects? Adding Semiformal Structure to Hypertext. Center for Coordination Science Technical Report #102. M.I.T. Sloan School of Management, Cambridge, MA
    7. Gilbert, Martin 1991. Churchill: A Life. Henry Holt & Company, New York, page 595
    8. There are a few obvious objections to this mechanism. The most serious objection is that duplicate information must be maintained consistently in two places. For example, if the conference organizers decide to change the abstracts deadline from 10 August to 15 August, they'll have to make that change both in the META element in the HEAD and in some human-readable area of the BODY.

      Microdata addresses this.
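
      A sketch of the fix (the microdata vocabulary here is hypothetical): mark up the single human-readable occurrence so machines read the same text, leaving nothing to maintain in two places:

        <p itemscope itemtype="https://example.org/vocab/CFP">
          Abstracts are due <time itemprop="deadline" datetime="08-15">15 August</time>.
        </p>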

    1. With the Richer Install UI web developers have a new opportunity to give their users specific context about their app at install time. This UI is available for mobile from Chrome 94 and for desktop from Chrome 108. While Chrome will continue to offer the simple install dialogs for installable apps, this bigger UI gives developers space to highlight their web app.

      K, but the installation comes from a context that the app vendor already controls—the entire surface of their own site—so...

      Even so, do what you want with your browser UI, but like... there are crazy levels of navel-gazing here.

    1. not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage.

      This turn of phrase always struck me as confusing. (Still does, actually.) Maybe I lack the cultural context where "free [as in] beer" is illuminating rather than confounding. It raises questions, though—what cultural context is the one where this is a logical and widely understood sequence of words?

      Complimentary beer nuts, sure, but the beer isn't free—that's why the nuts are: because they're trying to sell more beer.

      When and where was "free [as in] beer" ever a thing?

    1. This is:

      Caplan, Priscilla. Support for Digital Formats. Library Technology Reports 44, 19–21 (2008). https://journals.ala.org/index.php/ltr/article/view/4227

    2. Adrian White

      Did Adrian White become Adrian Brown? That's what the byline in DPTP-01 actually says.

    1. The present leadership, particularly from RMS, creates an exclusionary environment in a place where inclusion and representation are important for the success of the movement.

      Does this mean that Drew is going to step back, then? He is after all yet another white guy himself who has few (or none) of the characteristics described, so...

      The term "virtue signalling" was probably played out 5 years ago, but geez—it's hard to see this as anything except an opportunistic version of exactly that. Like greenwashing being wielded by people with ulterior motives, this seems like a straightforward case of an intellectually bankrupt attempt to conspicuously dress up one's argument in the most politically unimpeachable cause and claim that it comes from a place of wanting the best for society when ultimately a self-serving motive underlies the thing. This is very cheap and very tacky.

    2. His polemeic rhetoric rivals even my own, and the demographics he represents – to the exclusion of all others – is becoming a minority within the free software movement. We need more leaders of color, women, LGBTQ representation, and others besides. The present leadership, particularly from RMS, creates an exclusionary environment in a place where inclusion and representation are important for the success of the movement.

      I'm not a vanguard for the FSF per se, but when I think about the community norms and attitudes that are most exclusionary and turn people away, it's the sort of stuff that Drew and his fans are most often associated with. Stallman at least e.g. views Emacs as something that "secretaries" can be taught. Drew's circle tends to come across as having superiority complexes and holding strong opinions about computing that stop them just short of calling you a little bitch for not being as hardcore as they are...

    3. hip new software isn’t using copyleft: over 1 million npm packages use a permissive license while fewer than 20,000 use the GPL

      I didn't realize Mr. DeVault was such an admirer of NPM.

    4. Many people assume that the MIT license is not free software because it’s not viral.

      Their fault, really. What does the FSF have to do, and how much and how often do they have to do it, to make clear that this isn't their position? The culprit is right there in the sentence: the word "assume". It's not unforgivable to not be certain, but the number of people I've interacted with who insist that this is the FSF's position are exasperating.

    1. As far as I can tell, Google Takeout lists every Google service that stores data of some kind

      Not Google Podcasts.

    1. const EACH$ = ((x) => (this.each(x))); const SAFE$ = ((x) => (this.escape(x))); const HTML$ = ((x) => (x));

      In my port of judell's slideshow tool, I made these built-ins. (They're bindings that are created in the ContentStereotype implementation.)

      In that app, the stereotype body is just a return statement. Perhaps the ContentStereotype implementation should introspect on the source parameter and check if it's an expression or a statement sequence. The rule should be that iff the first character is an open paren, then it's an expression—so there is no need for an explicit return, nor the escaped backtick...

      This still gives the flexibility of introducing other bindings—like the ones for _CSSH_ and _CSSI_ here—but doesn't penalize people who don't need it.
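
      A minimal sketch of that rule (names follow the annotation; the use of Function to compile the body is an assumption about the implementation):

        // If the stereotype source leads with "(", it's a bare expression: supply the
        // return implicitly. Otherwise treat it as a statement sequence, as today.
        function compileStereotypeBody(source) {
          const body = source.trimStart().startsWith("(")
            ? `return ${source};`
            : source;
          return new Function("EACH$", "SAFE$", "HTML$", body);
        }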

    1. Identifiers are an area where the needs of libraries and publishing are not well supported by the commercial development
    2. Handles have one serious disadvantage. Effective use requires the user’s Web browser to incorporate special software. CNRI provides this software, but digital libraries have been reluctant to require their users to install it.
    1. amd [sic.]

      I'm having trouble determining the source of this purported error. This PDF appears to have copied the content from the version published on kurzweilai.net, which includes the same "erratum". Meanwhile, however, this document which looks like it could plausibly be a scan of the original contains no such error: https://documents.theblackvault.com/documents/dod/readingroom/16a/977.pdf

      I wonder if someone transcribed the memo with this "amd" error and that copy was widely distributed (e.g. during the BBS era?) and then someone came across that copy and inserted the "[sic]" adornments.

    1. The system also includes a searchable online database that will give a buyer instant information. "DOI will also provide a national directory of who owns what online," said Burns. "The system will give permissions, list rights fees and provide other articles by the same author and instantly put the buyer in contact with the publisher."

      A subset of Ted Nelson's envisioned transcopyright system

    1. we reported evidence indicating that static type sys-tems acted as a form of implicit documentation

      wat

      There's nothing implicit about it. Type annotations are explicit.

    1. Would I want to keep URLs of such draft/work-in-progress files stable, shall they be first-class citizens of the site, should they be indexed, how would I indicate freshness/state etc.?
    2. I've started thinking in the direction of serving on-going writing in a separate folder as raw plain text. That would be quite frictionless
    1. The end-user thinks, "Ah, it was only a dollar, I got my money's worth," but the publisher has basically paid nothing for the work, adds a few hours of digital typesetting, and then makes 100% profit on the sale.

      I have real trouble seeing that as saddening.

      It isn't as if anyone is going around making the free version arbitrarily defective. The reseller is putting in work to add value and getting paid a buck for it (literally).

      It would perhaps be upsetting, too, if they were going after folks somehow. But I don't see this in the rendition above.

      (In reality, a buyer would probably be fine if they took the Project Gutenberg version, bought the reseller's digitally reformatted one, extracted the TOC data and error corrections, and then slapped that onto the free version and sent it back upstream to Project Gutenberg or someone else who is distributing free copies. They would be legally in the clear, so the reseller, then, stands to make as little as $1 for their investment in that work, so in that case it seems eminently fairly priced.)

    1. Some filesystems (like ext2 specifically) complain if you have more than ~65k subdirectories in a directory, so my original plan of having tweets live at /{username}/status/{id}/index.html (and resolved to /{username}/status/{id}/) doesn't work on those filesystems. Instead all the files live at /{username}/status/{id}.html

      I'm not sure how this solves the problem specifically, since there will still be thousands of entries (one for each tweet) in the status/ directory...

      (Unless I'm not grasping something and the problem truly is the matter of having 2^16 subdirectories in particular—without similar concerns for ordinary files.)

      That does raise questions about how someone would run into the original problem in the first place; vanilla ZIP has a fundamental limitation of 2^16 - 1 total files. Is Twitter using the ZIP64 extension?

    2. This won't work if your archive is "too big". This varies by browser, but if your zip file is over 2GB it might not work. If you are having trouble with this (it gets stuck on the "starting..." message), you could consider: unzipping locally, moving the /data/tweets_media directory somewhere else, rezipping (making sure that /data directory is on the zip root), putting the new, smaller zip into this thing, getting the resulting zip, and then re-adding the /data/tweets_media directory (it needs to live at "[username]/tweets_media" in the resulting archive). Unfortunately, this will include media for your retweets (but nothing private) so it'll take up a ton of disk space. I am sorry this isn't easier, it's a browser limitation on file sizes.

      Contra [1], the ZIP format was brilliantly designed and natively supports a solution to this; ZIP was conceived with the goal of operating under the constraint that an archive might need to span multiple volumes. So just use that.

      1. https://games.greggman.com/game/zip-rant/
    1. a printed book containing the 10000 best internet URLs

      The book is: Der große Report - Die besten Internetadressen. 2000. Data Becker.

    2. A few of the entries are pretty straightforward because I'm sure they'll be around for a long time and they're obviously important: Wikipedia and the Internet Archive.

      The context is 10,000 URLs, not "sites". The URL for "Wikipedia" leads to a document that is on its own not entirely interesting. It would be the URLs for individual articles that should make the cut, unless "URL" is being used as a euphemism here.

    3. something so ephemeral as a URL

      Well, they're not supposed to be ephemeral. They're supposed to be as durable as the title of whatever book you're talking about.

    1. The homepage is the most recent post which means you don't have to figure out if I posted something new since the last time you visited and I truly believe that is how a personal blog is supposed to be.

      That goes against the design of URLs and also confused/annoyed me when I first landed on this blog, so...

    1. I am extremely gentle by nature. In high school, a teacher didn’t believe I’d read a book because it looked so new. The binding was still tight.

      I see this a lot—and it seems like it's a lot more prevalent than it used to be—reasoning from a proxy. Like trying to suss out how competent someone is in your shared field by looking at their GitHub profile, instead of just asking them questions about it (e.g. the JVM). If X is the thing you want to know about, then don't look at Y and draw conclusions that way. (See also: the X/Y problem.) There's no need to approach things in a roundabout, inefficient, error-prone manner, so don't bother trying unless you have to.

    1. Apple pointed out that this is apparently allowed by the spec, and that it was faulty feature detection on our part. Looking at the relevant spec, I still can't say, as a web developer rather than a browser maker, that it's obvious that it's allowed.

      C'mon. It's right there:

      Follow the instructions given in the WebGL specifications' Context Creation sections to obtain a WebGLRenderingContext, WebGL2RenderingContext, or null; if the returned value is null, then return null;

      (Not that it should even be necessary to resort to checking the spec—relying on an assumption of a non-null return value here should raise the commonsense suspicions of anyone.)

    2. In the end, they added a special browser quirk that detects our engine and disables OffscreenCanvas. This does avoid the compatibility disaster for us. But tough luck to anyone else

      I agree that this approach is bad. I hate that this exists. The differences between doctype-triggered standards and quirks mode was bad enough. This is so much worse—and impacts you even when you're in ostensible standards mode.

    3. I tried my best to persuade Apple to delay it, but I only got still-fairly-vague wording around it being likely to ship as it was.

      Huh? Why? Why even waste the time? Just go fix your code.

    4. preserves web compatibility

      "... you keep using that word"

    5. Safari is shipping OffscreenCanvas 4 years and 6 months after Chrome shipped full support for it, but in a way that breaks lots of content

      I don't think that has been shown here? The zip.js stuff breaking is one thing, but the poor error detection regarding off-screen canvas doesn't ipso facto look like part of a larger pattern.

    6. doesn't Apple care about web compatibility? Why not delay OffscreenCanvas

      Answer: because they care about Web compatibility. If they delay X because Y is not ready, then that's ΔT where their browser remains incompatible with the rest of the world, even though it doesn't have to be.

    7. Firstly my understanding of the purpose of specs was to preserve web compatibility - indeed the HTML Design Principles say Support Existing Content. For example when the new Array flatten method name was found to break websites, the spec was changed to rename it to flat so it didn't break things. That demonstrates how the spec reflects the reality of the web, rather than being a justification to break it. So my preferred solution here would be to update the spec to state that HTML canvas and OffscreenCanvas should support the same contexts. It avoids the web compatibility problem we faced (and possibly faced by others), and also seems more consistent anyway. Safari should then delay shipping OffscreenCanvas until it supported WebGL, and then all the affected web content keeps working.

      This is a huge reach.

      Although it's debatable whether having mismatched support is a good idea for a vendor, arguing that it breaks the commitment to compatibility is off. Construct broke not because something was removed, but because something was added and your code did not handle that well.

    8. MDN documentation mentioned nothing about inconsistent availability of contexts

      Two things:

      * Why would it have mentioned anything? It wouldn't have. It hadn't shipped yet.
      * MDN is not prescriptive; it's written by volunteers

    9. typeof OffscreenCanvas !== "undefined"

      The second = sign is completely superfluous here. Only one is necessary: typeof always yields a string, so the loose != comparison behaves identically to strict !== in this position.

    10. Construct requires WebGL for rendering. So it was seeing OffscreenCanvas apparently supported, creating a worker, creating OffscreenCanvas, then getting null for a WebGL context, at which point it fails and the user is left with a blank screen. This is in fact the biggest problem of all.

      Well, the biggest problem is that anything can ever lead to a blank screen because Construct isn't doing simple error detection.
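
      A sketch of the missing check (the API names are real; what to do on failure is the engine's call):

        // The OffscreenCanvas constructor existing does not imply a WebGL context
        // will be available; probe for one and fall back before rendering blank.
        function offscreenWebGLSupported() {
          if (typeof OffscreenCanvas == "undefined") return false;
          const gl = new OffscreenCanvas(1, 1).getContext("webgl");
          return gl != null;
        }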

    1. In a resume-first hiring process, your resume is at best a raffle ticket that might pay off and grant you admission to the actual hiring process. That’s it. That’s all.

      This is why I don't get people who bristle at the thought of writing a cover letter. What the fuck. Just write a paragraph or two saying why you want this job, specifically—why you think you'll find it rewarding and how it fits with your professional interests. Is it that hard? I'll take this a thousand times over vs pruning, rearranging, and emphasizing line-item crap from my employment history, awards I was given 20 years ago, etc.

      We should be starting with the cover letter and handing that over to someone who's competent to review it (not a generic, know-nothing human pattern matcher) and then move on to hardcore testing for aptitude in a testing environment that matches as closely as possible the actual work environment where you're going to be expected to get things done on a day-to-day basis.

    2. I’m as convinced as a person can be that the resume-first hiring processes are just marginally worse than doing nothing at all. I spent 15 years tweaking resumes, writing cover letters, and generally taking all the very good advice I got only to have it never turn a cent of profit for me. What finally got me out of that pattern was a really odd situation where one of my articles got just enough heat on it that I was allowed to circumvent the middle part of the interview process and go straight to hiring manager interviews. And it was a whole different ballgame because I was now talking to someone who had both the power and desire to hire someone for a position, as opposed to someone whose biggest goal was keeping sufficient people away from that stage to keep them out of trouble.
    3. this isn’t supposed to be me calling out hiring managers and bosses everywhere

      Why not? Do it. It is literally their job.

    4. until someone invents an alternative, what’s to be done?

      The alternative: "smart" resumes that are something like contact cards plus an agreement from employers to put way less stock into resumes and less organizational infrastructure towards keeping classic HR droid positions filled with people that ultimately themselves don't do very much for the company.

      So from the applicant's perspective, you don't worry about creating a resume for this job. It feels more like handing out a business card with your contact info to someone who needs it, except in this case instead of it being contact info (or rather, in addition to the contact info), it contains other stuff, too.

    1. try to apply study into day-to-day, try to set a high personal bar so that even "easy" tasks are challenging

      Ugh.

    1. GIS files can be huge. Travis County's parcel file is 187M

      Surely that's meant to say "GB"?

    1. finding a way to do a "git pull" without having to write a commit message (does --rebase do that?) would help in a huge way

      It might "help" but it defeats the entire purpose of the recordkeeping endeavor.

      If you don't care about the recordkeeping aspect and are just using Git to sync stuff between machines, then you're not really using Git and should stop trying to use it and use something else. (A better option, of course, is to think about it long enough to understand why recordkeeping is good and then take the time to write commit messages that don't suck and not treat it as an arbitrary and pointless hurdle. It's not pointless; there's a reason it was put there, after all.)

    2. I tried adding some stuff to ".gitignore", but it did no good

      This is why git add is git add. The students should have been told not to add anything to the repo except for the source files they're actually changing. A good rule of thumb: if the change was made by a human, and the human was you, then you can commit it; if the change was made by a machine, then don't.
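
      As a sketch, that rule of thumb as a .gitignore for a class project (entries are illustrative):

        # Machine-made artifacts: never commit these.
        *.o
        *.class
        build/
        .DS_Store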

    3. Git made it easy to move students to a different computer, because their code was already there, but the git config for name and email remained that of the computer's previous resident.

      This is only a problem if they were doing git config --global. Considering these were shared machines, then they shouldn't have been.
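
      On shared machines, setting a per-repository identity (no --global) sidesteps this; run inside the repo, with placeholder values:

        git config user.name "Student Name"
        git config user.email "student@example.edu"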

    4. The class was sharp and realized there had to be a better way. I said git worked better if everyone took their turn and did check-ins one-at-a-time.

      Except, of course, branching and merging mean that this hurdle isn't a necessary one. Git was designed from the beginning so that this would be a non-issue (or at least not as bad as what this class experienced); that's where the D in DVCS comes from, after all...

      (And I thought that's where this was going—! Rather than just giving people the solution—in this case branches/remotes—and telling them to use it, then what you do is you let them experience the problem firsthand and then can appreciate the solution and why it's there. Really surprised that's not where this ended up.)

    5. The college we were at had locked down the networks crazy tight. Machines could not communicate with each other.
    6. I had a fever dream* in 2020 or 2021 that involved an epiphany for a clear way to integrate Git's data model into mass market computing systems (a la Mac OS and its Finder) in a way that was digestible to normal people. I've basically forgotten it. I think it was something like:

      1. use heuristics to figure out when someone is using the "[...]_draft", "[...]_final", etc, ad hoc versioning antipattern
      2. offer to make the directory a versioned one

      On systems like Mac OS where everything is tightly integrated, you wouldn't need to limit yourself to offering this in the Finder. Any time someone used the system-wide standard file save dialog in a way that exhibits the thing described in #1, the system could use the desktop notification subsystem to get the user's attention and offer to upgrade their experience. No interaction with the Git porcelain (as we know it) necessary.

      I fear that MS might do this first but bungle it (i.e. unthoughtfully) and also promote/upsell GitHub to you during the ride.

      * not really

    1. That means there can be any random data between records

      Yes, of course. That's another intentional feature.

    2. If you want to support reading from the front it seems required to state that the self extracting portion can't appear to have any records.

      Well, since you don't want to support it, then you aren't required to do that. (And good thing, because that would limit the format severely.)

    3. Does it mean the first time you see that scanning from the back you don't try to find a second match?

      It means you don't need to! (And why would you try? You have already found one, and you know there is only one, so to try to find more is to try to do something that you know is impossible.)

    4. But what does that mean?

      It means if you have a ZIP file (something that you know is a valid ZIP file) and you have found more than one end of central directory record, then there's something wrong with the method you used to find them (because there can by definition be only one).

    5. A forward scanner might fail to read these.

      Okay, fine. Don't use them (don't use broken software, generally—unless you're comfortable getting broken results).

    6. that contradicts 4.1.9 that says zip files may be streamed

      I don't take the spec to mean that you can reliably stream any arbitrary ZIP bytestream. If you are the producer and the consumer, though, then you can bend the format to your will to enable streaming.

      See Firefox's JAR handling for an example.

    7. Justine Tunney covers the genius of the ZIP format in her Redbean talk (@55:31) https://youtu.be/1ZTRb-2DZGs?t=3331

    1. If the data stream encodes values with byte order B, then the algorithm to decode the value on computer with byte order C should be about B, not about the relationship between B and C.

      See also: the brokenness of most schemes to cross-compile applications (including producing cross compilers).

      Rob's clear thinking here definitely had an influence on why Go's compiler is one of the few to have a sane cross-compilation story.
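
      A sketch of the principle (hypothetical helper): the decoder is written in terms of B, the stream's declared order, and C never appears:

        // Little-endian u32 from a byte array; behaves identically on any host.
        function readU32LE(bytes, off) {
          return (bytes[off] |
                  (bytes[off + 1] << 8) |
                  (bytes[off + 2] << 16)) +
                 bytes[off + 3] * 0x1000000;  // * instead of <<24 avoids the sign bit
        }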

    1. This is:

      S. Mirhosseini and C. Parnin. “Docable: Evaluating the Executability of Software Tutorials”. 2020. https://chrisparnin.me/pdf/docable_FSE_20.pdf

    2. software decay

      See Hinsen, "software rot".

    3. Pimental et al. found that only 24% of Jupyter notebooks could be executed

      This is the second time this appears in this paper.

      Previously: https://hypothes.is/a/Mm9whNQFEe2J6Y97btVQBQ

    4. The ambiguity (i.e. non-machine-readability) of tutorials described in this paper is a good example to demonstrate both what it means for something to be an algorithm and what it means to "code" something.

    5. Once I was *attempting* (I gave up) to install an application and the first tutorial allowed me a choice of 6 ways to install something and none worked.
    6. Our informants recognized this as a general problem with tutorials: “There’s an implicit assumption about the environment” (I5) and “many tutorials assume you have things like a working database” (I4). If tutorials “were all written with *less* assumptions and were more comprehensive that would be great
    7. Pimentel et al. [28] found that only 24% of Jupyter notebooks could be executed without exceptions
    1. it's better than RSS but RSS just seems a better brand-name

      Isn't that pretty interesting? You'd think it would be the other way around.

      In fact, what if it is the other way around? What if the failure of classic/legacy Web feeds has to do with power users' insistence on calling it "RSS"?

    1. this post reminds me of the initial comments to "Show HN: Dropbox"

      What? That's an insane comparison. This is like the total opposite of that comment; ActivityPub is super complicated.

    1. But for better or worse, ActivityPub requires a live server running custom software.

      This is bad protocol design. It violates (a variation of) the argument for the Principle of Least Power.

    1. This type of complexity has nothing to do with complexity theory

      Also not to be confused with the notion from the area of information theory of Kolmogorov complexity. (At least not directly—but that isn't to say there is no relation there.)

    1. Pretty nuts that Safari isn't open source. I thought for sure that Edge was going to be fully open source, both before and after the Blink conversion. Why even build closed source browsers in 2023?