today many “programs” are really just small parts of a greater, “living” network of programs and services
- Oct 2022
-
shalabh.com shalabh.com
-
-
www.se-radio.net www.se-radio.net
-
@1:10:20
With HTML you have, broadly speaking, an experience and you have content and CSS and a browser and a server and it all comes together at a particular moment in time, and the end user sitting at a desktop or holding their phone they get to see something. That includes dynamic content, or an ad was served, or whatever it is—it's an experience. PDF on the other hand is a record. It persists, and I can share it with you. I can deliver it to you [...]
NB: I agree with the distinction being made here, but I disagree that the former description is inherent to HTML. It's not inherent to anything, really, so much as it is emergent—the result of people acting as if they're dealing in live systems when they shouldn't.
-
@48:20
I should actually add that the PDF specification only specifies the file format and very few what we call process requirements on software, so a lot of those sort of experiential things are actually not defined in the PDF spec.
-
-
www.colbyrussell.com www.colbyrussell.com
-
See also:
Alas, many things really must be experienced to be understood. We didn’t have much of an experience to deliver to them though — after all, the whole point of all this evangelizing was to get people to give us money to pay for developing the software in the first place! [...] When people ask me about my life’s ambitions, I often joke that my goal is to become independently wealthy so that I can afford to get some work done. Mainly that’s about being able to do things without having to explain them first, so that the finished product can be the explanation. I think this will be a major labor saving improvement.
From http://habitatchronicles.com/2004/04/you-cant-tell-people-anything/
-
-
habitatchronicles.com habitatchronicles.com
-
When people ask me about my life’s ambitions, I often joke that my goal is to become independently wealthy so that I can afford to get some work done. Mainly that’s about being able to do things without having to explain them first, so that the finished product can be the explanation. I think this will be a major labor saving improvement.
-
Alas, many things really must be experienced to be understood. We didn’t have much of an experience to deliver to them though — after all, the whole point of all this evangelizing was to get people to give us money to pay for developing the software in the first place!
-
-
archive.org archive.org
-
The metadata for this collection are not so great. Have a look here, instead: https://archive.org/details/pub_whole-earth
-
-
www.zoon.cc www.zoon.cc
-
High fidelity scans are available at the Internet Archive: https://archive.org/details/sim_whole-earth_spring-1987_54/page/n3/mode/1up
-
-
www.cage.ngo www.cage.ngo
-
‘future dangerousness’
This quote is introduced without a clear antecedent. What is it a reference to?
-
-
technium.transistor.fm technium.transistor.fm
-
Sri: [...] you can think about the possibility that we're actually going to do this with structured data but then properly incentivizing people in order to actually moderate and curate the set of facts about the world—
Will: Yeah, so I was gonna mention that, and I'm glad we're on the same wavelength here. What are the economic incentives that would help encourage the adding of correct, factual data to this knowledge graph and dissuade, I guess, spammers? [...]
Sri: Yeah, I think that there needs to be some compelling reason for people to want to add data to the knowledge graph. [...] I think that, "Can we get a knowledge graph that is expansive—as expansive as Wikipedia—that, you know, says all kinds of facts about the entire world?" Yeah, maybe[...]
Will: There are parts of the Web where people do that without financial incentives. I mean people list like every episode of, I dunno, Game of Thrones and annotate every time that people get killed or [...] all sorts of stuff. Fandom is like [a] huge thing and they just put out these... or like the—if you ever played Minecraft and looked at the Minecraft wiki, it's just so (chuckles) so detailed. Like, "Who spends all their time...?" [...]
Sri: The idea of fandom actually is very relevant here, because [...] I have so far been thinking about the idea that the incentives have to be backed by some type of economic value—
Will: Yeah, for a certain class of things [...] There are some things that are very well-tuned to economic incentives and the other stuff is well-tuned to fandom, right?
-
Video available here: https://www.youtube.com/watch?v=bjn5jSemPws
-
-
jackevansevo.github.io jackevansevo.github.io
-
This shifts the responsibility of checking which posts are new/updated onto the parser
For checking which posts are new/updated, this is always the case. The only thing the HTTP cache-related headers can tell you is whether the feed itself has changed.
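A minimal sketch of that per-entry bookkeeping (the "id"/"updated" field names are assumptions about the parsed feed, not anything from the post):

```python
# Regardless of HTTP caching, the reader itself has to work out which
# entries are new or updated. Keyed on entry id; field names are assumed.
def diff_feed(entries, last_seen):
    # last_seen maps entry id -> the "updated" value we last stored for it
    changed = [e for e in entries
               if e["id"] not in last_seen or last_seen[e["id"]] != e["updated"]]
    for e in changed:
        last_seen[e["id"]] = e["updated"]
    return changed

state = {}
first = diff_feed([{"id": "a", "updated": "2022-10-01"}], state)
again = diff_feed([{"id": "a", "updated": "2022-10-01"}], state)
print(len(first), len(again))  # 1 0
```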
-
a link to the RSS feed in the site meta
Huh?
-
by CTRL+F searching for different patterns.
Viewing the page source to find RSS links
So far I’ve come across the following common patterns:
example.com/rss.xml
example.com/index.xml
example.com/feed.xml
example.com/atom.xml
example.com/feed
example.com/rss
Bluh? This is exactly what looking for link[rel=alternate] is for—these "patterns" listed are arbitrary URLs...
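For what it's worth, feed autodiscovery via link[rel=alternate] fits in a few lines of stdlib Python (a sketch, not production code):

```python
# Feed autodiscovery: scan <link rel="alternate" type="application/rss+xml">
# (or atom+xml) elements instead of guessing at URL patterns.
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        rels = (a.get("rel") or "").lower().split()
        if "alternate" in rels and (a.get("type") or "").lower() in FEED_TYPES:
            self.feeds.append(a.get("href"))

def find_feeds(html):
    finder = FeedLinkFinder()
    finder.feed(html)
    return finder.feeds

page = '<html><head><link rel="alternate" type="application/rss+xml" href="/feed.xml"></head></html>'
print(find_feeds(page))  # ['/feed.xml']
```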
-
-
vis.social vis.social
-
This costs about $650 USD to operate
Crazy! This underscores how badly Mastodon—and ActivityPub, generally—need to be revved to enable network participation from low-cost (essentially free) static* sites.
* quasi-static, really—in the way that RSS-enabled blogs are generally considered static sites
-
-
pointersgonewild.files.wordpress.com pointersgonewild.files.wordpress.com
-
The Code Rot Problem
⬑ what khinsen calls software collapse
-
Could we design programs so they will run in 20, 30, 50 years? How?
- limit capabilities it depends on (POLA)
- target the World Wide Wruntime (i.e. the browser), which is the only reliable platform that exists and can be expected to exist in the future
-
IMO: one of the biggest problems in modern software development
• Code breaks constantly, even if it doesn’t change
• Huge cause of reliability issues and time wasted
• This is somehow accepted as normal
⬑ "The Code Rot Problem"
-
-
sinnbeck.dev sinnbeck.dev
-
protected static function resolveFacadeInstance($name)
This page has a neat effect, first apparent with this example, where a blur effect is used on most of the text in the code block, except for lines 11–13 which are shown in sharp focus. (You can mouse over the code block to eliminate the blur effect.)
.torchlight.has-focus-lines .line:not(.line-focus) {
    transition: filter 0.35s, opacity 0.35s;
    filter: blur(.095rem);
    opacity: .65;
}
Each line is dumped into a div and the line-focus class set on those which are supposed to be unblurred.
(For ordinary code blocks without any blur/focus effect, the has-focus-lines class is simply not used.)
-
-
-
relaunched time and time again
This doesn't need to happen, and wouldn't happen if people treated blog posts for what they are—artifacts—instead of conceptualizing them as live systems.
-
-
gitlab.com gitlab.com
-
This is the big hurdle; to leap over it you have to be able to create the program text somewhere, compile it successfully, load it, run it, and find out where your output went.
-
Hmm. We’re having trouble finding that site. We can’t connect to the server at research-compendia.org
That's not good.
-
-
blog.khinsen.net blog.khinsen.net
-
optional dependencies
The phrase "optional dependency" is an oxymoron.
-
-
blog.khinsen.net blog.khinsen.net
-
What is also sorely missing is a straightforward way to package an application program with all its dependencies in such a way that it can be installed with reasonable effort on all common platforms
The answer is ZIP and wget/curl and forgetting about sharing dependencies. Disk space is way cheaper than the time spent (often in frustration) trying to get something to work.
Not angling towards that sort of future is a lot like people from the punch card and paper tape era refusing to let computers do the things computers do better than humans, even once it had become cheap enough to let them.
-
-
news.ycombinator.com news.ycombinator.com
-
Nobody ever saw a clean office and came to the conclusion, "Our cleaning staff must be lazing about because someone else is cleaning for them, we should fire them all!"
Good response overall, but with respect to this remark, I have been in different but not dissimilar situations where something like this does happen.
E.g. person notices that every time they enter a room (kitchen, let's say) everything looks just like it did the last time they saw it, and they aren't ever interrupted e.g. by my trying to use the space at the same time that they are. They then (incorrectly) conclude that it looks that way because I just never use that space (and it's okay for them to make a mess, or not worry about being mindful of how much time they're using the space for)—rather than registering the thought, "Gee, he really picks up after himself and tries to stay out of the way—a real life example of 'you won't even notice I'm here'. Maybe I should be that considerate."
-
-
www.omnibusproject.com www.omnibusproject.com
-
"This is how the working classes are robbed. Although their incomes are the lowest, they're compelled to buy the most expensive articles[...] the lowest priced articles. Everybody knows that good clothes, boots, or furniture are really the cheapest in the end, although they cost more money at first; but the working classes can seldom or never afford to buy good things. They have to buy cheap rubbish which is dear at any price."
-
@31:25 Robert Noonan's story.
-
-
www.paulgraham.com www.paulgraham.com
-
The most dangerous form of procrastination is unacknowledged type-B procrastination, because it doesn't feel like procrastination. You're "getting things done." Just the wrong things.
Type-B procrastination accounts for a lot of the junk I see on people's GitHub timelines—and that type of social network-backed gamified gratification is why I've adopted a stance where I impose a huge entry fee on any workflow that routes itself through GitHub's servers.
-
Good procrastination is avoiding errands to do real work.
Good in a sense, at least. The people who want you to do the errands won't think it's good. But you probably have to annoy them if you want to get anything done.
Every time Bill Maher goes off on a tear about how American society should venerate old people the same way it happens in other countries, I can't help but think, "What 20 year old was Bill trying to fuck this week that led to these hurt feelings?" I'm picking up hints of that here, too.
Is this Paul's way of getting out of responsibilities? "No, I don't need to do that. See? I wrote a piece about it!"
Tags
Annotators
URL
-
-
technium.transistor.fm technium.transistor.fm
-
@55:39
I would feel a little bit sad, but at the same time, this is a pattern I've seen happen over and over again. And especially with these big ideas about computing and self-expression and things like that [...] people start with these huge, huge ideas, but I think that they are so different--the diff is so much between where society is now and what some of these people are thinking about--that the most that can be absorbed at one time is like one unit--one meme--from that whole big vision. So, we're just gonna take like one small step at a time, and [...] you kind of just have to accept that, you know, that's success.
-
-
bi-gale-com.atxlibrary.idm.oclc.org bi-gale-com.atxlibrary.idm.oclc.org
-
Garner, Bryan A. "Celebrating Plain English in Michigan." ABA Journal 107.5 (2021): 36. Business Insights: Global. Web. 7 Oct. 2022. URL http://bi.gale.com.atxlibrary.idm.oclc.org/global/article/GALE|A690034782/c13f1855872f0a231a7139cf729c45b6?u=txshrpub100020
Document Number: GALE|A690034782
-
That's an interesting point about empirical testing. If you just ask lawyers and judges in the abstract whether they'd like citations up in the body or down in footnotes, they'll vote for the former. But if you show them actual examples of well-written opinions in which the citations are subordinated, the results are very different.
-
-
dl.acm.org dl.acm.org
-
we must acknowledge the root of the scientific-repeatability problem is sociological, not technological
-
In the past when we attempted to share it, we found ourselves spending more time getting outsiders up to speed than on our own research. So I finally had to establish the policy that we will not provide the source code outside the group
-
We next made two attempts to build each system. This often required editing makefiles and finding and installing specific operating system and compiler versions, and external libraries.
-
Several hurdles must be cleared to replicate computer systems research. Correct versions of source code, input data, operating systems, compilers, and libraries must be available, and the code itself must build
-
-
jacksongl.github.io jacksongl.github.io
-
this level of time commitment is likely to prevent a potential user with a passing interest from trying a programming language or tool
Tags
Annotators
URL
-
-
www.researchgate.net www.researchgate.net
-
Feature location (FL) is the task of finding the source code that implements a specific, user-observable functionality in a software system.
-
-
www.runmycode.org www.runmycode.org
-
404 Page Not Found
This does not bode well for the scope of the project.
-
-
withthewoodruffs.com withthewoodruffs.com
-
If you live in Texas then you understand and have experienced the mega H-E-B hype.
Stupid comment. HEB is a South and Central Texas thing.
-
-
debugagent.com debugagent.com
-
How does one make money off an open source compiler? Notice that this was a decade ago and there weren’t any precedents like Zig that we could look at as a template.
What about GCC? https://minnie.tuhs.org/pipermail/tuhs/2020-May/021225.html
-
-
ghuntley.com ghuntley.com
-
spamming open-source as a growth strategy is a super bad idea
Tags
Annotators
URL
-
-
ianleslie.substack.com ianleslie.substack.com
-
Boy, this was hard to read. I've noticed a lot of Substack pieces that I've come across are written like this—tenuous, self-contradictory, and written in this voice. Very weird.
-
-
catgirl.ai catgirl.ai
-
before that the support for parsing JSON in C was essential for using LSP servers
NB: the requirement wasn't actually "parsing JSON in C"; it's that, for the JSON parsing, the machine ultimately executes the same (or similar) instructions as it does when the JSON parsing is written in C and that C is compiled with GCC.
-
-
www.shopify.com www.shopify.com
-
you have only three digits for product numbers, which works out to be over 1,000 possible product numbers
-
-
simonwillison.net simonwillison.net
-
My tool needed a UI. To keep things as simple as possible, i didn’t want to host anything outside of GitHub itself. So I turned to GitHub Issues to provide the interface layer.
Lame. Esp. since GitHub Pages is a thing.
-
- Sep 2022
-
news.ycombinator.com news.ycombinator.com
-
It's wild that you have to set up Docker to contribute to 600 characters of JavaScript.
Current revision of README: https://github.com/t-mart/kill-sticky/blob/124a31434fba1d083c9bede8977643b90ad6e75b/README.md
We're creating a bookmarklet, so our code needs to be minified and URL encoded.
Run the following the project root directory:
$ docker build . -t kill-sticky && docker run --rm -it -v $(pwd):/kill-sticky kill-sticky
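For comparison, the entire minify-and-URL-encode step fits in a few lines of stdlib Python. A sketch: the sample script below is a stand-in, not the actual kill-sticky source, and the whitespace-collapsing "minifier" is deliberately naive (it assumes no // comments), which is fine for a ~600-character script.

```python
# Turn a short JS file into a bookmarklet: collapse whitespace, URL-encode,
# prepend the javascript: scheme. No Docker required.
from urllib.parse import quote

def to_bookmarklet(js_source):
    minified = " ".join(js_source.split())  # naive: collapse all whitespace runs
    return "javascript:" + quote(minified, safe="(){};=',.*!+")

js = """
document.querySelectorAll('*').forEach(function (node) {
  if (getComputedStyle(node).position === 'fixed') node.remove();
});
"""
print(to_bookmarklet(js))
```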
Tags
Annotators
URL
-
-
photomatt.tumblr.com photomatt.tumblr.com
-
you’d need to be web-only on iOS and side load on Android
Disclaimer: I don't give two shits about the topic that is the subject of this post. However...
It would be feasible to get around this by:
1. Separating your existing mobile app cleanly between client and content
2. Converting your client into a general purpose Web browser... that Tumblr (let's say) happens to work really, really well with
(This concludes this special bonus episode of Nathan For You.)
More seriously...
Frankly, we need a lot more opinionated, intelligent user agents that are thoughtfully designed to act on the content in a way that fits the user's desires—rather than trying to conform to what other Web browsers feel like today.
-
-
www.gotostage.com www.gotostage.com
-
It looks like gotostage.com keeps the Wayback Machine from getting a copy as a consequence of two things:
The server sends no real hypertext—it's all part of a JS bundle that builds the (scan) content in-place.
What the server does send is an invisible link to https://www.gotostage.com/honeypot, which the Wayback Machine's crawler will follow. Presumably, this flags it as a bot.
-
Mar 26, 2019. Tricky Issues - Civil Cases
To download the handouts, visit: https://bit.ly/3an6j92
To obtain credit once you have finished the webinar, visit: https://bit.ly/34DsPJB
This webinar qualifies for 1.5 hours of judicial education credit, including 1.5 civil hours.
(The Wayback Machine has a copy.)
-
-
gitlab.com gitlab.com
-
news.ycombinator.com news.ycombinator.com
-
let's be honest, many people who create sites for money will not necessarily coach the business to keep it simple, since they will earn less money from it
Tags
Annotators
URL
-
-
monoskop.org monoskop.org
-
because it is necessary to examine changes and new arrangements before deciding to use or keep them, the system must not commit the user to a new version until he is ready. Indeed, the system would have to provide spin-off facilities, allowing a draft of a work to be preserved while its successor was created. Consequently the system must be able to hold several—in fact, many—different versions of the same sets of materials. Moreover, these alternate versions would remain indexed to one another, so that however he might have changed their sequences, the user could compare their equivalent parts.
Three particular features, then, would be specially adapted to useful change. The system would be able to sustain changes in the bulk and block arrangements of its contents. It would permit dynamic outlining. And it would permit the spin-off of many different drafts, either successors or variants, all to remain within the file for comparison or use as long as needed
Presaging version control systems.
-
Unfortunately, there are no ascertainable statistics on the amount of time we waste fussing among papers and mislaying things
-
-
hypothes.is hypothes.is
-
Typo in the tag here; "transculsion" should be "transclusion".
-
-
news.ycombinator.com news.ycombinator.com
-
a complete lack of, "personal" bookmarks. The idea was that you might keep track of interesting links by keeping an index of them on your own personal site
Well, the whole point of the Web was that everything would be given a (world-wide) identifier. Your current list of bookmarks has an identifier, but it's a local one. Once you have a world-wide identifier, it's only a short jump to making it resolvable so that your bookmarks list has a URL, and browsing your bookmarks would be as simple as visiting that list.
-
-
www.w3.org www.w3.org
-
Buy: from Amazon.com (paperback), Barnes & Noble (paperback), Booksamillion (paperback), Borders (paperback), Powells (paperback), or Wordsworth (paperback).
Or read it here: https://archive.org/details/weavingweb00timb/page/n9/mode/2up
-
-
archive.org archive.org
-
isbn_9780062515872
Pssh. 9-780062-515872 isn't the ISBN (which is 006251587X). It's the EAN-13.
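The relationship, for the record: the "Bookland" EAN-13 is "978" plus the first nine ISBN-10 digits plus a recomputed check digit, so 006251587X and 9780062515872 name the same book. A quick sketch:

```python
# ISBN-10 vs. EAN-13 ("Bookland") check digits, from the standard formulas.
def isbn10_check(digits9):
    # ISBN-10 checksum: first nine digits weighted 10..2; the check digit
    # makes the total divisible by 11 ('X' stands for 10).
    total = sum((10 - i) * int(d) for i, d in enumerate(digits9))
    c = (11 - total % 11) % 11
    return "X" if c == 10 else str(c)

def ean13_from_isbn10(isbn10):
    body = "978" + isbn10[:9]
    # EAN-13 checksum: alternating weights 1 and 3, check makes total % 10 == 0.
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(body))
    return body + str((10 - total % 10) % 10)

print(isbn10_check("006251587"))        # 'X'
print(ean13_from_isbn10("006251587X"))  # '9780062515872'
```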
-
-
sourceacademy.org sourceacademy.org
-
it is the ability of browsers to execute JavaScript programs that makes it an ideal language for an online version of a book on computer programs
No way. HTML is way better suited for it!
-
-
sourceacademy.org sourceacademy.org
-
The fact that this book is an SPA—instead of just, you know, a bunch of web pages—is very annoying.
Totally screws up my ability to middle click the "links" in the TOC.
It also messes up the browser scroll position when clicking back/forward.
Tags
Annotators
URL
-
-
www.science.org www.science.org
-
You're looking for https://www.science.org/doi/10.1126/science.98.2557.580.b, probably.
(This DOI, i.e. 10.1126/science.98.2557.580-a, is linked to from https://pubmed.ncbi.nlm.nih.gov/17835862/).
-
-
christianheilmann.com christianheilmann.com
-
Ever tried to look up some news from 12 years ago? Back in library days you were able to do that. On news portals, most articles are deleted after a year, and on newspaper web sites you hardly ever get access to the archives – even with a subscription.
This is a massive failure of infrastructure (and of education/"professionalism": by and large, most people whose careers are in operating or maintaining Web infrastructure have never been inculcated into, or adopted on their own, the sort of "code of ethics" that sees this as a failure).
The answer might be for something like the Internet Archive to get into training, or into selling professional services for handling companies' "Web presence, done the right way". (This would definitely take some organizational restructuring, however.) I'd like to see, for example, IA-certified partner organizations that uphold the principles described here and the original vision for the Web, and professional associations that work hard at making sure the status quo improves a lot over what's common today (and doesn't slide back).
-
-
news.ycombinator.com news.ycombinator.com
-
you can’t release a $300m AAA blockbuster movie directly on YouTube because you will never make your money back
Hmm. I'm skeptical of the certainty with which this is said.
Given a series of trials, the claim here is that if you took a blockbuster and released it for "free" (supported by ads) on YouTube, the ad revenue, even multiplied across the far larger audience, would not merely fall short of the ticket sales you'd get from the subset of those viewers willing to pay; it wouldn't even cover the production budget. That's a strong claim, and one I'm not sure is correct. For comparison, Netflix (and even ad-supported streaming services, albeit ones with lower budgets) seems to do pretty well on just a fraction of the <$10-per-viewer take that makes up monthly revenue.
-
-
www-jstor-org.atxlibrary.idm.oclc.org www-jstor-org.atxlibrary.idm.oclc.org
-
Those images are yours. They belong to you and to you alone, and they are infinitely better for you than those wished on you by others
dubious
-
while we can't avoid using energy, there is no value in using more than we must
principle of least (literal) power
-
-
-
If I had just read enough or watched enough talks my life would start to get better
The notion of "productive consumption" has, for some people, an almost irresistible appeal.
-
-
news.ycombinator.com news.ycombinator.com
-
it means that even when I do say hello or hold a door or whatever I don't get a response
There's a presumption here (and in the linked article) that those people want to talk to you—that they're just quietly suffering their desire to have some interaction with you, if only it were the case that you'd allow it. This is way more condescending than the thing that the article seeks to correct.
-
-
news.ycombinator.com news.ycombinator.com
-
often the love for open source often only goes as far as to say thanks when they're creating a bug report
Thankses simply do not belong in a bug report. It's an almost surefire indicator that there's something wrong culturally and that your bugtracker isn't so much being used for bug reports as it is filled with support requests.
-
-
mikemcquaid.com mikemcquaid.com
-
No to new features. No to breaking changes. No to working on holiday. No to fixing issues or merging pull requests from people who are being unpleasant. No to demands that something has to be fixed right now.
In other words, no to the rotten cultural expectations that are by far what you're most likely to encounter on GitHub. I promise—things really were so much better before it came along to try to be Facebook-for-software-development.
-
The general state of the open source ecosystem is that most maintainers are building software they want other people to use and find useful.
I think the default assumption that this is what's going on is a huge part of the problem. I see a similar thing happen on GitHub constantly, where project maintainers try to "upperhand" contributors, because they see the contribution as something deliberately undertaken to benefit the person who is e.g. submitting a bug report. This is a massive shift away from the spirit of the mid-to-late 2000s, characterized by initiatives like Wikipedia (and wikis generally) and essays by Shirky on the adhocracy around the new digital commons.
-
Bob works for TechCorp and discovered a few years ago that using a tool installed from Homebrew results in a 90% speedup on an otherwise boring, manual task he has to perform regularly. This tool is increasingly integrated into tooling, documentation and process at TechCorp and everyone is happy, particularly Bob. Bob receives a good performance review
Directly related to a question I posed a few years ago about who should really be funding open source. My conclusion: professional developers who are most directly involved with how the source is put to work, and who benefit from it (in the form of increased stature, high salaries and bonuses, etc., compared to the case where the FOSS solution hadn't been available). This runs counter to the popular narrative, which frames the employer as a "leech" while staying silent on the social and moral obligations of the employee who successfully captured value for personal gain.
It's like this: the company has some goal (or "roadmap") that involves moving from point A to point B. The company really only cares about arriving at the desired destination B. It negotiates with a developer, possibly one who has already signed an employment contract, but someone who is made aware of the task at hand nonetheless. The developer agrees to do the work meant to advance the company towards its goals, which potentially involves doing everything manually, handling all the work themselves. They notice, though, that there is some open source software that can be used as a shortcut, which means they won't have to do all the work. So they use that shortcut, and in the end the company is happy with them, and they're rewarded as agreed (not necessarily at the end: with regular paychecks, and possibly a bonus), and they advance in their career. Who's extracted value from the work of the open source creator/maintainer here? Is it really just the company?
McQuaid seems to agree with my view, going by the way he (later) identifies both Bob and TechCorp as benefitting from Reem's work; cf https://hypothes.is/a/MBN0aDnuEe2aF8s2kWTPrg
-
Bob and TechCorp are benefitting from the work done in Reem’s free time
-
-
news.ycombinator.com news.ycombinator.com
-
junior but very promising profiles
Tags
Annotators
URL
-
-
news.ycombinator.com news.ycombinator.com
-
You can check at uspto.gov. (The search engine is terrible, by the way.)
-
-
html.energy html.energy
-
I'll explain what it was inspired by in a second...
-
(I feel like I tweeted about this and/or saw it somewhere, but can't find the link)
visible-web-page looks to have been published and/or written on 2022 June 26.
I emailed Omar a few weeks earlier (on 2022 June 7) with a link to plain.txt.htm, i.e., an assembler (for Wirth's RISC machine/.rsc object format) written as a text file that happens to also allow you to run it if you're viewing the text file in your browser.
(The context of the email was that I'd read an @rsnous tweet(?) that "stuff for humans should be the default context, and the highly constrained stuff parsed by the computer should be an exceptional mod within that", and I recognized this as the same principle that Raskin had espoused across two pieces in ACM Queue: The Woes of IDEs and Comments Are More Important Than Code. Spurred by Omar's comments on Twitter, I sent him a link to the latter article and plain.txt.htm, and then (the next day) the former article, since I'd forgotten to include it in the original email.)
-
-
alexn.org alexn.org
-
Java is good by modern standards, from a technical perspective, the platform having received a lot of improvements from Java 8 to 17. Unfortunately, it still stinks, and the problem is its "enterprise" culture.
JS engines are good from a technical perspective. The problem with JS is the Node/NPM culture.
-
-
www.deseret.com www.deseret.com
-
he would do chores for the neighbors
Again: mostly enabled by the fact that he was young.
-
A neighbor came to Billy asking for help with a valve job on another truck.
Another key ingredient. Society is participatory.
-
and some mentoring by kindly contractors
This is not to be discounted. Mentoring is a big deal generally, but a lot of what made this possible likely came from the novelty of dealing with a child. If you replace Kevin with, say, a 25 year old who is no more or less capable or committed, what changes is others' behavior in their interactions.
-
-
www.w3.org www.w3.org
-
in which generality and portability are more important than fancy graphics techniques and complex extra facilities
This design constraint is exactly what people are so bothered about 30 years later. Generality! Portability! That's why you don't get to exercise full control of the sort that your "native" stack would give you. It's also why the Web has not only endured, but has attained a level of ubiquity that is not matched by any other "platform".
-
Compound Document Architecture
- https://foldoc.org/Compound+Document+Architecture
- https://en.wikipedia.org/wiki/Open_Document_Architecture
- https://www.osti.gov/servlets/purl/5671048
- http://bitsavers.informatik.uni-stuttgart.de/pdf/dec/vax/vms/vax_document/AA-PB6KA-TE_DEC_ODA_Compound_Document_Architecture_Gateway_Users_Guide_199104.pdf
- VMS Compound Document Architecture Manual, Digital Equipment Corporation, Maynard, MA., December 1988
-
the information was not naturally organised into a tree
This touches on the subtle, underrated brilliance of Wikipedia's (mostly) flat namespace.
Intertwingularity is inescapable.
-
When two years is a typical length of stay, information is constantly being lost.
Thirty years on, we're still losing stuff. (You could even argue that the Web—as it has been put in practice, at least—has exacerbated the problem.)
Tags
Annotators
URL
-
-
web.hypothes.is web.hypothes.is
-
anyone could publish anything
Lots of the problems that Hypothesis runs into (incl. those described here in this post!) could be attributed to this. They could probably be neatly described in a volume titled "The Perils of Self-Publishing", in a section dedicated to the consequences of non-uniform practices.
-
Here's the Anno pitch deck: https://docdrop.org/pdf/Hypothesis_deck_22Aug-4b7ys.pdf/
-
-
hypothes.is hypothes.is
-
website obesity crisis
Taken from Maciej Cegłowski https://idlewords.com/talks/website_obesity.htm.
-
-
portal.mozz.us portal.mozz.us
-
in spite of all the amazing innovations of the Oberon environment (everything is a command and such)
common misapprehension; only command procedures are commands (not "everything")
-
-
www.open.ac.uk www.open.ac.uk
-
note that this will show the historical content within the current template – so not necessarily exactly the same as the original page
-
-
helena-lang.org helena-lang.org
-
helena-lang.org should host a demo in the form of a copy of the "meat" of the project that can run directly in the browser from the open web (ideally launched via a single button here on the homepage), without having to install an extension (or at least nothing more complicated than a bookmarklet)
-
-
oclc-research.github.io oclc-research.github.io
-
for both access and persistent identity
This highlights a problem with the Web: the fusion of document identifiers and the channel through which the (or "an") information/service provider makes it available.
Suppose stakeholder S wants to mirror a document D (published in an HTML-based format) originally distributed by authority A. This will probably involve having to touch D in order to rewrite links to point at S's copies of auxiliary resources instead of A's originals. Simply copying the representation's bytestream is not sufficient.
-
-
www.ibiblio.org www.ibiblio.org
-
a URI is like a proper name and baptism is given by the registration of the domain name, which gives a legally binding owner to a URI
"[... temporarily]"
-
-
geohot.github.io geohot.github.io
-
Before asking the question, how do I build AGI, you first must ask the question, what would I recognize as AGI?
Wrong. Here, I'll show you:
Forget about AGI for a moment. Instead, pretend we're talking about pecan pie. Most people probably can't answer the question, "What would I recognize as a pecan pie?" (or a sandwich). And yet thousands of people are able to make (and thousands more people are able to enjoy) pecan pie all the time—probably every day.
-
-
steve-yegge.blogspot.com steve-yegge.blogspot.com
-
If you're not 100% sure whether you know how compilers work, then you don't know how they work.
-
-
www.rfc-editor.org www.rfc-editor.org
-
This RFC lacks any acknowledgement of the text/uri-list media type.
-
In some cases, it is not straightforward to write links to the HTTP "Link" header field from an application. This can, for example, be the case because not all required link information is available to the application or because the application does not have the capability to directly write HTTP fields.
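For reference, text/uri-list (RFC 2483) is about as simple as a media type gets: one URI per line, CRLF line endings, with #-prefixed comment lines. A minimal sketch (the to_uri_list helper is invented for illustration, not from any library):

```python
def to_uri_list(uris, comment=None):
    """Serialize URIs as a text/uri-list body (RFC 2483):
    one URI per line; lines beginning with '#' are comments."""
    lines = []
    if comment is not None:
        lines.append("# " + comment)
    lines.extend(uris)
    return "\r\n".join(lines) + "\r\n"

body = to_uri_list(
    ["https://example.org/a", "https://example.org/b"],
    comment="two related resources",
)
```

Contrast that with the RFC's JSON-based linkset format, whose whole pitch is carrying link relations that don't fit in an HTTP "Link" field.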
-
-
kevquirk.com kevquirk.com
-
The answer is simple – stop using these services and look for privacy respecting alternatives where possible.
Also: just don't take photos/videos and reduce your social media "footprint" generally.
Privacy-respecting alternatives don't address the issue raised earlier with the comparison to the celebrity iCloud leaks. Because, to reiterate, privacy and security are different things.
-
If someone came up to you in the street, said they’re from an online service provider and requested you store all of the above data about you with them, I imagine for many the answer would be a resounding NO!
See: Surveillance Camera Man * https://boingboing.net/2012/11/02/surveillance-camera-man-wants.html * https://www.youtube.com/watch?v=X9sVqKFkjiY
-
-
macwright.com macwright.com
-
I’ll read through a dependency, start refactoring, and realize that it’s going to be simpler to write it myself
-
a bigger source tree
Someone is going to need to eventually explain their convoluted reasons for labeling this a downside.
Sure, strictly speaking, a bigger source tree is bad, but delegating to package.json and npm install doesn't actually change anything here. It's the same amount of code whether you eagerly fetch all of it at the same time or whether you resort to late fetching.
Almost none of the hypothetical benefits apply to the way development is handled in practice. There was one arguably nice benefit, but it was a benefit for the application author's repo host (not the application author), and the argument in favor of it evaporated when GitHub acquired NPM, Inc.
-
Vendoring means that you aren’t going to get automatic bugfixes, or new bugs, from dependencies
No, those are orthogonal. Whether you obtain the code for your dependency* at the same time you clone your repo or whether you get it by binding by name and then late-fetching it after the clone finishes, neither approach has any irreversible, distinct consequences re bugs/fixes.
* and it still is a dependency, even when it's "vendored"...
-
-
www.artima.com www.artima.com
-
If the program provides accurate real-time feedback about how user manipulations affect program state, it reduces user errors and surprises. A good GUI provides this service.
-
-
news.ycombinator.com news.ycombinator.com
-
The LISP part, though, is not going well. Porting clever 1970s Stanford AI Lab macros written on the original SAIL machine to modern Common LISP is hard. Anybody with a knowledge of MACLISP want to help?
-
-
jacksongl.github.io jacksongl.github.io
-
Performance. Running a research project in a browser is slower than running it natively. Fortunately, performance is not generally critical for evaluating usability. Thus we prioritize compatibility and ease-of-use over performance.
Additionally, as more time passes, it will become less and less of a problem.
-
There are several issues to consider when translating research projects into JavaScript and running in a browser
It's a little bit of a misapprehension/mistake to describe this as translation into JS.
-
We propose building an infrastructure that makes it easy to compile existing projects to JavaScript and optionally collect usage data. We plan to leverage existing tools to translate programs into JavaScript. One such tool is Emscripten [15], which compiles C/C++ code and LLVM [14] bitcode to JavaScript.
It only occurred to me reading this now, and musing about the code size and relative performance of simple programs written first in a to-be-compiled-to-JS language and then in JS, that the folks associated with these pushes are taking the sufficiently smart compiler position. Connected to superinferiority?
-
the tools can be distributed within static web-pages, which can easily be hosted on any number of external services, so researchers need not run servers themselves
-
We propose helping researchers compile their tools to JavaScript
This is probably too ambitious, impractical. For most cases, it would suffice to make sure that the tools used to produce "traditional" release packages can run in the browser. (I.e., the release itself need not be able to run in the browser—though it doesn't hurt if it can.)
-
not designed to be secure
"secure" is used in this context to mean, roughly, "work with adversarial input"
-
Many research projects are publicly available but rarely used due to the difficulty of building and installing them
-
-
schasins.com schasins.com
-
Ringer is a low-level programming language for web automation tasks. Many statements in the high-level Helena language are implemented with Ringer. Ringer comes with a record and replay tool; when a user demonstrates how to complete an interaction in a normal browser, the tool writes a straight-line Ringer program that completes the same interaction on the same pages. GitHub
I also found https://github.com/sbarman/Ringer
-
-
dspace.mit.edu dspace.mit.edu
-
we have found instances of people using relatively heavy-weight Javascript frameworks like Exhibit [11] just for the comparatively minuscule feature of sortable HTML tables
-
-
www.zylstra.org www.zylstra.org
-
Some other things:
-
If I visit https://www.zylstra.org/blog/ and content from e.g. this blog post https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/ has been inlined, Hypothesis doesn't know how to deal with this—it doesn't know that the annotations that apply to the latter can also be layered onto the inlined content at /blog/. This is a minor quibble insofar as you can measure it in terms of mere convenience. Less minor, however, is that if a user attaches their annotations to /blog/, then they will eventually be orphaned (as more posts are written and the now-current ones get bumped off the page), and will never appear in the correct place on the actual post.
-
When people annotate Wikipedia there's a high likelihood that the annotation will become orphaned as the content changes, even though the version that was actually annotated will usually be available from the article history.
-
-
Despite the social nature of h., discovery is very difficult. Purposefully ‘finding the others’ is mostly impossible.
This was Peter's motivation to create annotations.lindylearn.io.
-
H.’s marketing director has 1348 public annotations over almost 6 years, its founder 1200 in a decade.
Public ones, at least.
-
-
www.zylstra.org www.zylstra.org
-
If you need a site that’s just a single page I think I would use a word processor and do a “save as html”.
-
Over in the IndieWeb community we were having a conversation about how easy it should be for people to create their own websites (also for small local businesses etc.) Where making the site is basically the same as writing the text you want to put on it. Social media silos do that for you, but out on the open web?
-
-
files.abnormalsecurity.com files.abnormalsecurity.com
-
Begin premium services directly to students
What kind of premium services?
-
-
www.w3.org www.w3.org
-
Traveler:
-
-
www.w3.org www.w3.org
-
Traveler:
-
-
www.researchgate.net www.researchgate.net
-
The program is written in Java, but it is pretty clear that it was translated from a language like ML
This reminds me of "I did write some code in Java once, but the code was in C and Lisp (I simply happened to be in Java at the time ;-)".
-
When a person uses a tool to achieve some goal, they quickly learn how towork around the short-comings of the tool in order to get their job done. As a result, the personbecomes desensitized to problems in the tool.
-
-
stallman.org stallman.org
-
I find C++ quite ugly
-
-
news.ycombinator.com news.ycombinator.com
-
When I go to use some software it takes an inordinate amount of time to set things up.
-
Some of my ruby, nodejs and python projects no longer run due to dependencies not being pinned at creation time.
-
-
www.ics.uci.edu www.ics.uci.edu
-
why I am doing this
This link is a 404. Why did Roy start doing this?
-
-
mozilla.github.io mozilla.github.io
-
XBL is a proprietary technology developed by Mozilla
Example of when "proprietary" is not an antonym of "open source".
-
-
indieweb.org indieweb.org
-
use-cases such as representing a 410 Gone deleted resource on static hosting that often disallow setting HTTP headers directly
-
- Aug 2022
-
lists.w3.org lists.w3.org
-
1) get an extra 'search' attribute on to the <a> tag in HTML so that we have: e.g. <a href='...' search='...'>link text</a> 2) If there's take-up, then later on push for adding a date-time of creation attribute to <a>. This will add link history to the internet. The way (1) works is someone sticks the basic href to a page in the href attribute, and then sticks the text they want to link to in the search attr. The browser fetches the page, and as a secondary action (at user option) searches for the text.
Another approach, inspired by the <label> element, would be to encode these selectors as separate <link> elements in the head. You could write your links as normal, and then add these <link rel="selector" for="foo" href="XXX[...]X" /> to your document (where foo is the ID of the <a> element, and the href value is the selector).
-
-
lists.w3.org lists.w3.org
-
Documents may be retrieved by URL: by sending mail to listserv@info.cern.ch with a line containing the command SEND followed by the URL of the document
Neat.
-
-
www.w3.org www.w3.org
-
If you refer to a particular place in a node, how does one follow it in a new version, if that place ceases to exist?
The answer is: linking party pays. I.e., after 30-ish years of the Web put into practice, what has emerged is that it is the linking party's responsibility to ensure:
- that the material remains available (by keeping a copy around, or arranging for the same thing by delegating to a third-party), and
- there is some reliable way to establish provenance (which can be handled in tandem with delegation to the same third party, as with availability)
These are only necessary if these two (availability and reputable provenance) are actually desirable properties. But in the event that one or the other or both is desired, the rule is that linking party pays.
-
-
www.w3.org www.w3.org
-
Linking by context In this case, the annotater refers to a part of the document by the context: for example, he quotes the first and last pieces. See for, example, the section "In this case...". This method is clearly heuristic and prone to failures just as is the automatic generation of differences above..
This is the only realistic one—and the only way to reconcile the vast pre-Web corpus that is ostensibly meant to be ingested to occupy a first-class status on the Web.
It still ~requires some amount of versioning (though not necessarily with the cooperation of the publisher, from whom it would be prudent to assume harbors a mildly adversarial outlook; cf Wayback Machine, Perma.cc, etc).
-
The solution set is as follows: take your pick.
It's almost like a pick two/impossible trinity.
You can either have:
- a resource that stays up-to-date (e.g. by a convenient URL that always points to the most recent revision, i.e. documents are mutable), or
- links that don't break.
You can't have both.
If you choose the latter, your document can never change--not even to say the current version is out of date. This also means using unique ("versioned", you could say) URLs--one for every revision, with it only* being possible for later revisions to point to earlier ones, almost like a Git-like DAG.
If you choose the former, you're gonna be breaking your links (and those of others, too).
* The workaround would be to include in rev A (which predates rev B) a URL to a non-existent resource and preregister your commitment to deposit a document (rev B) there. Not possible in systems which rely on content hashes, but eminently doable under the norms of the way URLs get minted. (Still doesn't fix the problem of having a clean and up-to-date URL like /pricing or /about/team, though.)
-
-
www.w3.org www.w3.org
-
A disadvantage seemed to me to be that it is not obvious which words take one to particularly important new material, but I imagine that one could indicate that in the text.
I don't understand this.
-
A feature of Microcosm is that links are made using keywords in the following way. Within a certain region (for example, a set of documents), a keyword is connected to a particular destination. This means that a large number of links may be declared rapidly, and markup in the document itself is not needed.
Presaging lightweight wikilinks? (Or at least WikiWords?)
-
-
www.w3.org www.w3.org
-
The page https://www.w3.org/DesignIssues/Multiuser.html#3 links here (to #GenericLinking).
-
Since this page currently returns 404, try https://www.w3.org/History/19921103-hypertext/hypertext/Products/Microcosm/Microcosm.html.
-
-
www.w3.org www.w3.org
-
One might want to make a private annotation to something which is visible world-wide but unwritable. The annotation would be invisible to another reader
-
-
www.w3.org www.w3.org
-
In practice, a system in which different parts of the web have different capabilities cannot insist on bidirectional links. Imagine, for example the publisher of a large and famous book to which many people refer but who has no interest in maintaining his end of their links or indeed in knowing who has refered to the book.
Why it's pointless to insist that links should have been bidirectional: it's unenforceable.
-
-
gitlab.com gitlab.com
-
Try * http://www.html-tidy.org/ * https://github.com/htacg/tidy-html5
-
Obnoxious.
As someone recently pointed out on HN, it's very common nowadays to encounter the no-one-else-knows-what-they're-doing-here refrain as cover—I don't have to feel insecure about not understanding this because not only am I not alone, nobody else understands it either.
Secondly, if your code is hard to understand regarding its use of this, then your code is hard to understand. this isn't super easy, but it's also not hard. Your code (or the code you're being made to wade into) probably just sucks. The this confusion is making you confront it, though, instead of letting it otherwise fly under the radar.* So fix it and stop going in for the low-effort, this-centric clapter.
* Not claiming here that this is unique; there are allowed to be other things that work as the same sort of indicator.
-
And by the way, if you ever do do anything good with it, it'll just become a cesspool of people posting Donald Trump memes and complaining about what movie didn't have the right people in it on Twitter. That's what your inventions will be used for, right? Not advancing humankind and producing some kind of good future for everybody.
... says the guy who has dedicated his life to video games.
Previously: https://hypothes.is/a/tCHXEvj-Eey1N3dRiP4I4Q
-
-
www.w3.org www.w3.org
-
Best of all, there is no software to install, it just works from the Web!
Nice property. It would be nice if HTML Tidy could do the same.
-
-
-
This is the earliest known reference to HTML's class attribute that I've found in a public document so far: 1995 May 6 in draft spec 02 (i.e. the third revision) of the HTML 2.0 spec.
This postdates Håkon W Lie's original CSS proposal (from 1994, which doesn't mention classes) and predates the first CSS draft specification (dated to 1995 May 31 in one place, which mentions HTML's ability to subclass in order to "increase the granularity of vcontrol" [sic]).
-
-
www.w3.org www.w3.org
-
To increase the granularity of vcontrol, HTML elements can be subclasssed
The "subclassing" word choice here is interesting. It matches the model eventually suggested in HTML 3.x, but at the time of this draft, HTML was in a draft specification stage (third revision — "02") for HTML 2.0
I'm interested in the discussion that led to this.
-
-
-
It seems the Web is still in a primitive state of formation: loose bands of individuals hunting and gathering. I look forward to great nations in this new medium.
You sure about that, Dan-from-25-years-ago?
-
-
info.cern.ch info.cern.ch
-
You're looking for this, I think: https://www.w3.org/People/Connolly/9703-web-apps-essay.html
-
-
info.cern.ch info.cern.ch
-
If you manage to find a copy of this (or already have one) please let me know!
-
-
news.ycombinator.com news.ycombinator.com
-
michielbdejong.com michielbdejong.com
-
Especially if CouchDB were an integrated part of the browser
It's always funny/interesting to read these not-quite-projections about overhyped tech from a far enough vantage point, but this sounds a little like what I've advocated for in the past, which is resurrecting Google Gears.
-
-
scattered-thoughts.net scattered-thoughts.net
-
Of the two, the html version is the one where I've spent a bunch of time trying to understand and improve the performance. In the dear imgui version I've done the easiest possible thing at every step and the performance is still fine.
-
-
brent-noorda.com brent-noorda.com
-
So what’s wrong with all the different languages? Nothing, if you enjoy the mental exercise of creating and/or learning new languages. But usually that’s all they are: a mental exercise.
-
Editorial: The real reason I wanted Cmm to succeed: to democratize programming. It wouldn’t belong in any business plan, and I seldom mentioned to anyone, but the real reason I wanted Cmm to succeed was not about making money (although paying the mortgage was always important). The real reason was because of the feeling I had when I programmed a computer to perform work for me
-
Another thing I’ll do when I’m King of the World is declare that executable code will always be inseparable from it’s source code. Using a script language solves that, of course, because a script is it’s source. For compiled code a second-best solution is the executable always contain pointers to where the source originated, but since that source may disappear the better solution is for language compilers to simply tack the original source code (and related make files and resources) onto the end of the executable in a way that can be commonly extracted. The law will be known as the “All boots shall contain their own bootstraps” decree.
-
lack of a common powerful hi-level language available on every computer remains. You still cannot write a script file and send it to everyone and expect them to be able to run it without installing something first. The closest we probably have is HTML with JS embedded, since everyone has an HTML browser installed
-
-
martinfowler.com martinfowler.com
-
It's often a good idea to replace common primitives, such as strings, with appropriate value objects.
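As a sketch of what this kind of value object looks like (the CustomerId type and its "C-" prefix rule are invented for illustration), in Python:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerId:
    """A value object wrapping a bare string.

    Immutable, validated at construction, and compared by value --
    unlike a raw str, it can't be silently confused with, say, an
    order ID, and bad values are rejected at the boundary."""
    value: str

    def __post_init__(self):
        if not self.value.startswith("C-"):
            raise ValueError(f"not a customer id: {self.value!r}")

a = CustomerId("C-1001")
b = CustomerId("C-1001")
assert a == b  # value equality, like the primitive it replaces
# CustomerId("O-1001") would raise ValueError at construction
```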
-
-
news.ycombinator.com news.ycombinator.com
-
You can use terribly slow scripting and techniques and get something working, not because the tooling is genius, but because the hardware is so incredibly fast.
If the thesis is sound, then logically we should expect the instances where people have decided to "use terribly slow scripting and techniques" to produce better programs; the not-slow (non-"scripting") stuff to be worse.
You can only pick one:
- software is worse because hardware improvements mean that what were previously noticeable inefficiencies are no longer noticeable
- programs that are written in low-level languages and don't as much incur runtime overheads are better on average because they're written in low-level languages that aren't as wasteful
-
-
news.ycombinator.com news.ycombinator.com
-
I think we can define an "archival virtual machine" specification that is efficient enough to be usable but simple enough that it never needs to be updated and is easy to implement on any platform; then we can compile our explorable explanations into binaries for that machine. Thenceforth we only need to write new implementations of the archival virtual machine platform as new platforms come along
We have that. It's the Web platform. The hard part is getting people to admit this, and then getting them to actually stop acting counter to these interests. Sometimes that involves getting them to admit that their preferred software stack (and their devotion to it) is the problem, and it's not going to just fix itself.
See also: Lorie and the UVC
-
-
jvns.ca jvns.ca
-
browsers aren’t magic! All the information browsers send to your backend is just HTTP requests. So if I copy all of the HTTP headers that my browser is sending, I think there’s literally no way for the backend to tell that the request isn’t sent by my browser and is actually being sent by a random Python program
This calls for a perspective shift: that "random Python program" is your browser.
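A minimal sketch of the point being quoted, using only the standard library (the header values here are stand-ins, not real browser output):

```python
import urllib.request

# Headers copied out of the browser's devtools ("Copy as cURL" yields
# the same list); the values below are placeholders.
browser_headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.5",
    "Cookie": "session=...",
}

req = urllib.request.Request("https://example.com/", headers=browser_headers)
# urllib.request.urlopen(req) would now send a request that, at the
# HTTP level, the backend cannot distinguish from one sent by the
# browser itself.
```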
-
-
theinformed.life theinformed.life
-
And it’s like, no, no, you know? This is an adaptation thing. You know, computers are almost as old as television now, and we’re still treating them like, “ooh, mysterious technology thing.” And it’s like, no, no, no! Okay, we’re manipulating information. And everybody knows what information is. When you bleach out any technical stuff about computers, everybody understands the social dynamics of telling this person this and not telling that person that, and the kinds of decorum and how you comport yourself in public and so on and so forth. Everybody kind of understands how information works innately, but then you like you try it in the computer and they just go blank and you know, like 50 IQ points go out the window and they’re like, “doh, I don’t get it?” And it’s the same thing, it’s just mediated by a machine.
-
-
ganelson.github.io ganelson.github.io
-
:
Notice with the line about Tom and Huck being boys, the bottom-up description sounds unnatural.
-
or a not very significant change is made to an existing feature in a way which probably won’t cause anybody problems
not what semver is actually about
-
The git repositories for Inform
-
-
-
a MIT style license, which does not require the notices to be reproduced anywhere outside of the actual library code.
-
-
www.newyorker.com www.newyorker.com
-
his ostensible real life (which is actually staged)
You accidentally a word.
-
This piece would benefit from better editing—e.g. its use of pronouns with an unnatural (or too-far-away) antecedent, dubious phrasing like "all the more", etc.
-
-
blog.khinsen.net blog.khinsen.net
-
The more the reimplementation differs from the original authors’ code, the better it is as a verification aid
-
the published journal article, or some supplementary material to that article, contains a precise enough human-readable description of the algorithms that a scientist competent in the field can write a new implementation from scratch
-
-
blog.dshr.org blog.dshr.org
-
The lack of CPU power in those days also meant there was deep skepticism about the performance of interpreters in general, and in the user interface in particular. Mention "interpreted language" and what sprung to mind was BASIC or UCSD Pascal, neither regarded as fast.
still widely held today
-
symmetric
traditional compilation foils "symmetric authoring"
-
-
notes.alinpanaitiu.com notes.alinpanaitiu.com
-
I can't get behind the call to anger here, even if I don't approve of Apple's stance on being the gatekeeper for the software that runs on your phone.
Elsewhere (in the comments by the author on HN), he or she writes:
The biggest problem I try to convey is that you have no way of knowing you'll get the rejection
No, I think there were pretty good odds that before even submitting the first iteration it would have been rejected, based purely on the concept alone. This is not an app. It's a set of pages—only implemented with the iOS SDK (and without any of the affordances, therefore, that you'd get if you were visiting in a Web browser). For whatever reason, the author both thought this was a good idea and didn't review the App Store guidelines, and decided to proceed anyway.
Then comes the part where Apple sends the rejection and tells the author that it's no different from a Web page and doesn't belong on the App Store.
Here's where the problem lies: at the point where you're
- getting rejections, and then
- trying to add arbitrary complexity to the non-app for no reason other than to try to get around the rejection
... that's the point where you know you're wasting your time, if it wasn't already clear before—and, once again, it should have been. This is a series of Web pages. It belongs on the Web. (Or dumped into a ZIP and passed around via email.) It is not an app.
The author in the same HN comment says to another user:
So you, like me, wasted probably days (if not weeks) to create a fully functional app, spent much of that time on user-facing functions that you would have probably not needed
In other words, the author is solely responsible for wasting his or her own time.
To top it off, they finish their HN comment with this lament:
It's not like on Android where you can just share an APK with your friends.
Yeah. Know what else allows you to "just" share your work...? (No APK required, even!)
Suppose you were taking classes and wanted to know the rubric and midterm schedule. Only rather than pointing you to the appropriate course Web page or sharing a PDF or Word document with that information, the professor tells you to download an executable which you are expected to run on your computer and which will paint that information on the screen. You (and everyone else) would hate them—and you wouldn't be wrong to.
I'm actually baffled why an experienced iOS developer is surprised by any of the events that unfolded here.
-
-
news.ycombinator.com news.ycombinator.com
-
Not to dispute "deathanatos"'s (presumably) thorough refutation, and granting that the Wat behavior remains the default for compatibility reasons: most of these problems are fixed with a single pragma.
-
-
news.ycombinator.com news.ycombinator.com
-
The JS that is being run is essentially IE6 or IE5-level, so none of the modern features introduced in IE8
Odd way to describe it! You could easily have said "Chrome 1.0" or "Firefox 1.x-level" (or even "Firefox 2.x" or "Firefox 3"), since JS didn't really change between 2000 and 2010. (ECMAScript 5 was standardized in 2009, but nota bene: that's December 2009. IE 8 didn't introduce ES5, and it definitely wasn't available in IE at the tail end of the year. Full ES5 compliance didn't even really show up in browsers until 2012-ish, not even in Chrome—the one browser that had short release periods at the time.)
-
-
www.joelonsoftware.com www.joelonsoftware.com
-
Corollary: Spolsky (at least at the time of this article) didn't really understand types, having about the same grasp on computing fundamentals as your average C programmer.
-
-
wiki.arnaudcys.com wiki.arnaudcys.com
-
It should be dead simple to distribute content (eg. static content). It should be easy to build apps. It should be not too hard to build a platform.
The thing that gets me about takes like this is that it's all the stuff below the Web content layer that accounts for what makes all this stuff harder than it needs to be.
What's the hard part about distributing content (static or not, but let's go with static for simplicity)? It's registering a domain, updating DNS, and keeping a Web server up—all systems that have nothing to do with the "infernal" trio and are also generally programmed in what are typically described as saner languages and their traditions. It's either that, or it's relying on somebody else, like GitHub Pages, and integrating the implementation details/design decisions for their value-add into your workflow.
To "build a platform" is ambiguous, but it sounds a lot like "creating a server-side application to serve non-static content and handle associated requests". Is the infernal trio to blame for the difficulties of that, too?
-
-
news.ycombinator.com news.ycombinator.com
-
Dynamic typing makes that harder
So run a typechecker on the code to check your work if you want type checking. That is what TypeScript does, after all. And it's been around long enough that people shouldn't be making the mistake of thinking that a runtime that supports dynamic types means you can't use a static typechecker at "compile time" (i.e. while the code is still on the developer workstation).
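The same workflow exists outside TypeScript, too; here is a minimal Python sketch with mypy standing in for tsc (the function is invented for illustration):

```python
def total(prices: list[float]) -> float:
    """Annotations are plain metadata in Python: nothing enforces
    them at runtime, so the language stays fully dynamic."""
    return sum(prices)

# At runtime this call just works (dynamic typing)...
result = total([1.0, 2.5])

# ...but a static checker run over the same file (e.g. `mypy file.py`)
# would flag a call like total(["1.0"]) before the code ever runs:
# dynamic runtime, static check, exactly the TypeScript arrangement.
```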
-
The point is to write bug-free code.
With this comment, the anti-JS position is becoming increasingly untenable. The author earlier suggested C as an alternative. So their contention is that it's easier to write bug-free code in C than it is in JS. This is silly.
C hackers like Fabrice Bellard don't choose C for the things they do because it's easier to write bug-free code in C.
-
-
www.reddit.com www.reddit.com
-
Easier than C? Really? Since all that crap ends up calling C libraries anyway? This is about hard real time as it gets.
Perfect synopsis of /r/programming
-
-
www.theverge.com www.theverge.com
-
If you’re still worried, do note that the Space Telescope Science Institute’s document mentions that the script processor itself is written in C++, which is known for being... well, the type of language you’d want to use if you were programming a spacecraft.
The author might as well have written, "I have no idea what I'm talking about and rely on pattern matching based on social cues I have observed."
-
written in arcane code that you’d have to recompile
What is the relationship between "arcane code" and "code that you'd have to recompile"?
-
therefore likely less error-prone
-
what’s the point of going with web-like scripts instead of more traditional code
Where does it say they're "going with web-like scripts"? (And what is "more traditional code"?)
-
-
www.grunch.net www.grunch.net
-
Usually the philosophy is akin to turning an antique coffee-grinder into the base for a lamp: it's there, so why not find a way to put it to some use.
-
-
jimmyhmiller.github.io jimmyhmiller.github.io
-
This note (and most modern JS programmers' code) could benefit from the eradication of its aversion to this.
-
Most notably missing from the discussion is inheritance.
Nah. OOP is just "messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things". Inheritance not needed.
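A minimal sketch of that definition in JavaScript, with no inheritance anywhere: state is hidden in a closure, and everything goes through late-bound messages. (The names makeCounter and send are hypothetical, chosen for illustration.)

```javascript
// Messaging, hiding of state-process, and extreme late binding,
// per the Kay definition quoted above. No prototypes, no classes.

function makeCounter() {
  let count = 0; // local, hidden state; only messages can reach it
  return function receive(message) {
    switch (message) {
      case "increment": count += 1; return count;
      case "value": return count;
      default: throw new Error("does not understand: " + message);
    }
  };
}

// Dispatch is as late as it gets: the receiver decides at call time
// what, if anything, a message means.
const send = (object, message) => object(message);

const counter = makeCounter();
send(counter, "increment");
send(counter, "increment");
console.log(send(counter, "value")); // → 2
```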
-
if (property === 'firstName')
-
-
akkartik.name
-
Bob Nystrom used something like this while writing Crafting Interpreters. He wrote a post about its bespoke build system.
https://journal.stuffwithstuff.com/2020/04/05/crafting-crafting-interpreters/
-
-
github.com
-
blog.khinsen.net
-
This became a serious practical issue as version control was adopted as a tool for collaboration, with members of a team communicating through commit messages. Git therefore introduced the approach of “rewriting history”.
I'm not sure that the narrative implied by these statements is accurate.
-
this basic wisdom was lost with the adoption of computers. Computers make it very easy to modify information, to the point that version control had to be invented to prevent massive information loss by careless editing.
-
Since facts and narratives live in different universes, we should avoid mixing them carelessly. Crossing the boundary between the two universes should always be explicit. A narrative should not include copies of pieces of facts, but references to locations in a fact universe. And facts should not refer to narratives at all.
-
-
blog.khinsen.net
-
Reminiscent of Helping my students overcome command-line bullshittery
-
I decided to start with a fresh install of Python 2.7. First surprise: no C compiler.
-
-
roadtoreality.substack.com
-
I add mass to each of these… mental clusters? planetary bodies in the Mindscape? by hyperlinking the phrase as I type.
Nothing particular to what's described here, but this gives me an idea for a design of an efficient IME that doesn't require manually adding the brackets or even starting with an a priori intention of linking when you begin writing the to-be-linked phrase. The idea is that you start typing something, realize you want to link it, and then do so—from the keyboard, without having to go back and insert the open brackets—at least not with ordinary text editing commands. Here's how it goes:
Suppose you begin typing "I want to go to Mars someday", but after you type "Mars", you realize you want to link "go to Mars", as this example shows. The idea is that, with your cursor positioned just after "Mars", you invoke a key sequence, use Vim-inspired keys
b and w (or h and l for finer movements) to select the appropriate number of words back from your current position, and then complete the action by pressing Enter. This should work really well here and reasonably well in the freeform editor originally envisioned for w2g/graph.global.
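The selection step can be sketched as a small function: given the text, the cursor offset, and a count of word-back presses, find the start of the span to be wrapped in link brackets. (The function name wordsBack and the whole shape of the API are hypothetical; this only illustrates the cursor arithmetic, not a real editor integration.)

```javascript
// Move the selection start back `count` words from `cursor`,
// mimicking repeated presses of Vim's "b" motion.
function wordsBack(text, cursor, count) {
  let i = cursor;
  for (let n = 0; n < count; n++) {
    while (i > 0 && /\s/.test(text[i - 1])) i--; // skip whitespace
    while (i > 0 && !/\s/.test(text[i - 1])) i--; // skip one word
  }
  return i;
}

const text = "I want to go to Mars";
const cursor = text.length;               // just after "Mars"
const start = wordsBack(text, cursor, 3); // three presses of "b"
console.log(text.slice(start, cursor));   // → "go to Mars"
```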
-
-
blog.khinsen.net
-
The linear structure of a notebook that forces the narrative to follow the order of the computation.
I disagree that this is harmful. Striving to linearize things has payoffs. The resistance to doing so, on the other hand, is definitely what I would call harmful (to "communication", as the author put it, esp.).
Have you ever delivered (or witnessed) a lecture in non-linear time? You can't. Because you can't help but grapple with the inherent linearity arising from the base substrate of our existence, eventually.
-
-
khinsen.wordpress.com
-
It comes to the conclusion that the closest existing approximation to a good platform is the Java virtual machine.
That's probably not a bad conclusion—if you insert the phrase "one of" before "the closest". (And there's no bias fueling my agreement; I don't work with the JVM).
My bet, though, is still on the Web platform as the better option, and not just because of its current ubiquity, but because of the factors that have led to that ubiquity, including versatility and relative ease of implementation (cf. Kling et al).
-
we need to think about what the required platform is for each variety of canned computation
Yes! This is essential. Ask, "What does this probably actually require?"
-
The fundamental problem is that VirtualBox emulates arcane details of PC hardware in order to work with existing operating systems, and then the installed operating system recognizes that arcane hardware and installs drivers etc. that rely on it. That means we have to emulate the same arcane hardware 20 years from now just to be able to boot the old virtual system.
In other words, it captures too many irrelevant details whose effects sum to null (and which in the best case could be replaced by a NOP).
-
See also DSHR on preservation, along with Rothenberg and Lorie (independently).
-