10,000 Matching Annotations
  1. Last 7 days
    1. Most complex software ships with a few bugs. Obviously, we want to avoid them, but the more complex a feature is, the harder it is to cover all the use cases. As we get closer to our RC date, do we feel confident that what we're shipping has as few blocking bugs as possible? I would like to say we're close, but the truth is I have no idea. It feels like we'll have to keep trying the features for a bit until we don't run into anything - but we have less than 3 weeks before the RC ships. Here are a few surprising bugs that need to get fixed before I would feel comfortable shipping node12 in stable.
  2. Oct 2025
    1. In your example you would simply say approved. The addition of the prefix pre has no meaning for words such as approve. It implies something that is done before approval. Therefore, pre-approved means not yet approved. You do find meaningless phrases like pre-approved and pre-booked used by marketers and advertisers but they cannot be recommended in good English.

      While technically not correct according to the dictionary definition, this does at least raise good points about ambiguity/inconsistency in English:

      If it did not already have a pre-established meaning, then the pre- prefix here certainly could make the word mean "prior to approval", could it not? It's only the precedent set by those before us that makes it mean the other thing (that the dictionary says it actually means).

    1. Like the elliptic curve Diffie-Hellman (ECDH) protocol that Signal has used since its start, a KEM is a key encapsulation mechanism. Also known as a key agreement mechanism, it provides the means for two parties who have never met to securely agree on one or more shared secrets in the presence of an adversary who is monitoring the parties’ connection. RSA, ECDH, and other encapsulation algorithms have long been used to negotiate symmetric keys (almost always AES keys) in protocols including TLS, SSH, and IKE. Unlike ECDH and RSA, however, the much newer KEM is quantum-safe.
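      The key-agreement idea in this excerpt can be sketched with Node's built-in X25519 support. This is a sketch of the classical ECDH half only, not a post-quantum KEM; in DH-based KEMs (such as HPKE's DHKEM), the "ciphertext" is essentially an ephemeral public key.

      ```typescript
      // Two parties each generate a key pair; combining "my private key +
      // your public key" on both sides yields the same shared secret.
      import { generateKeyPairSync, diffieHellman } from "node:crypto";

      const alice = generateKeyPairSync("x25519");
      const bob = generateKeyPairSync("x25519");

      const aliceShared = diffieHellman({ privateKey: alice.privateKey, publicKey: bob.publicKey });
      const bobShared = diffieHellman({ privateKey: bob.privateKey, publicKey: alice.publicKey });

      console.log(aliceShared.equals(bobShared)); // both sides derived the same secret
      ```

      An eavesdropper sees only the two public keys, which is what makes this usable over a monitored connection.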
    1. All vi.mock calls are placed at the top of the file, and they're the first thing that gets called. To change the implementation for different tests, you can do:

      vi.spyOn(fs, 'existsSync').mockImplementation(() => { /* new implementation */ })
    1. I came here after getting this warning during the build process and was confused as to why. The MDN Web Docs specifically say "The autofocus attribute should be added to the element the user is expected to interact with immediately upon opening a modal dialog."
    2. Also, just because some popular websites do something doesn't mean you should too. WebAIM Million 2021 revealed that 97.4% of the top 1 million home pages had detectable WCAG 2 errors (not warnings). They found that 40% of home pages had skipped heading levels; developers aren't exactly great at picking the right tool for the job, not even the developers of the most popular sites.
    3. Warnings make sense in two cases: something should just not be used ever, as it has no legitimate uses; or, in the extremely rare case, there exists a strictly better alternative. In this case both are false. autofocus has many legitimate uses, including literally the Internet's most popular website; and the alternatives of hand-writing JS code to do the pre-focusing, or not pre-focusing at all, are both actively worse. Therefore this warning needs to go. (And so do all other warnings that don't fit into those two categories.)
  3. Sep 2025
    1. Developers can ramp up more quickly on new APIs, providing quicker feedback to the platform while the APIs are still the most malleable. Mistakes in APIs can be corrected quickly by the developers who use them, and library authors who serve them, providing high-fidelity, critical feedback to browser vendors and platform designers.
  4. Aug 2025
    1. You need to claim ownership on Visual Studio Code's installation directory, by running these commands:

      sudo chown -R $(whoami) "$(which code)"
      sudo chown -R $(whoami) /usr/share/code

      "claim ownership"

  5. Jul 2025
    1. Because Read Committed mode starts each command with a new snapshot that includes all transactions committed up to that instant, subsequent commands in the same transaction will see the effects of the committed concurrent transaction in any case. The point at issue above is whether or not a single command sees an absolutely consistent view of the database.
    1. When you open this in two browsers and refresh a few times, one browser after the other, you’ll see the count go up and up (when looking at the page source), proving that the state is shared between both browsers (well, not really, it’s shared on the server, and used by both users). This will have serious consequences if you go this route: if user A is logged in and you’d write the user object to the shared state, and user B is not logged in, they’d still see a flash of user A’s username appear in the navigation bar, until the shared state is overwritten by the undefined user object.
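      The pitfall quoted above can be modeled in a few lines. This is a toy sketch; handleRequest and the user names are invented for illustration, not SvelteKit API. Module-level state on the server is created once per process, not once per request, so every visitor shares it.

      ```typescript
      // Module scope = shared across ALL requests handled by this server process.
      let sharedUser: string | undefined;

      function handleRequest(loggedInUser?: string): string | undefined {
        if (loggedInUser) sharedUser = loggedInUser; // user A's request writes the shared state
        return sharedUser; // user B's request reads whatever is there right now
      }

      handleRequest("alice");       // user A logs in
      console.log(handleRequest()); // user B's request briefly sees "alice" — the leak
      ```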
    2. One pattern that I love to use in my SvelteKit projects is returning writable stores from the layout’s load function. This makes it possible to fetch data from the server (for example the user object for the logged in user), and then you make this object available as a writable reactive store throughout the whole application. So when the user updates their username or avatar, you do the PUT request to the server and you get the updated user object back from the server as the response, you can simply update the $user writable store value and every place in your app where you show the user object gets updated immediately.
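      A minimal sketch of that pattern: the writable implementation below is a toy stand-in for svelte/store's writable, and the user-update flow is condensed (no real load function or PUT request); names are illustrative.

      ```typescript
      type Subscriber<T> = (value: T) => void;

      // Toy writable store: subscribers are called immediately and on every set().
      function writable<T>(value: T) {
        const subscribers = new Set<Subscriber<T>>();
        return {
          subscribe(fn: Subscriber<T>) {
            fn(value); // Svelte stores push the current value on subscribe
            subscribers.add(fn);
            return () => subscribers.delete(fn);
          },
          set(next: T) {
            value = next;
            subscribers.forEach((fn) => fn(next));
          },
        };
      }

      // Hypothetical layout load result: the fetched user, wrapped in a store.
      const user = writable({ username: "ada" });

      // A component subscribes (roughly what $user does under the hood).
      const seen: string[] = [];
      user.subscribe((u) => seen.push(u.username));

      // After a successful PUT, setting the store updates every subscriber at once.
      user.set({ username: "lovelace" });
      console.log(seen.join(",")); // ada,lovelace
      ```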
    1. For example, let’s say you fetch a list of books on the /books page and a list of albums on /albums page, so both those pages have a LayoutLoad method where the fetches are made. With SvelteKit this will cause the fetch to happen every time the user switches between these two pages, and the writable store will also be recreated every time. If your goal is to prevent refetching content you’ve already fetched before, then the global store is still your best bet.
    2. conditionally returned either a readable store (from SSR) or a global writable store (from CSR) to make things like real time updates via WebSockets possible - although that solution does have one big advantage: the global store is always there.
    3. But what if you want to update this user instance? For example on your website you have a form where the user can change their name, username, or avatar. When the form is submitted this gets stored on the server, but the site still shows the old user information, for example it still shows the old avatar of the user in the top menu. The user variable isn’t writable, so how do you overwrite this?
    1. The Proposal: Possible upstreaming into GNOME
      The Problem: Why we need this in GNOME
      Installation: For those wanting to install this on their distribution
      The Solution:
        Shared Features: Behaviors shared between stacking and auto-tiling modes
        Floating Mode: Behaviors specific to the floating mode
        Tiling Mode: Behaviors specific to the auto-tiling mode
      Developers: Guide for getting started with development
  6. main.vitest.dev
    1. If you use a random URL then option b won’t work because you can’t invalidate a random URL using that method.

      I think you mean option a (invalidating fetch url) won't work. Option b (depends('posts')) should work fine, because it's a static string that's easy to invalidate.
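      A toy model of why the static key works where the random URL doesn't (illustrative only; register stands in for SvelteKit's depends, whose real implementation lives in the framework runtime):

      ```typescript
      // A load registers dependency keys; invalidate(key) reruns every load
      // that registered that key.
      type LoadFn = () => void;
      const registry = new Map<string, Set<LoadFn>>();

      function register(key: string, load: LoadFn): void {
        if (!registry.has(key)) registry.set(key, new Set());
        registry.get(key)!.add(load);
      }

      function invalidate(key: string): void {
        registry.get(key)?.forEach((load) => load());
      }

      let runs = 0;
      const load: LoadFn = () => {
        runs += 1;
        register("posts", load); // option b: a static key you can name again later
        // option a would key on the fetch URL; a random URL can never be named again
      };

      load();              // initial run registers "posts"
      invalidate("posts"); // reruns the load, because the key is a known static string
      console.log(runs);
      ```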

  7. Jun 2025
    1. You can use args in your stories to configure the component's appearance, similar to what you would do in an application. For example, here's how you could use a footer arg to populate a child component:

      In other words, args aren't necessarily only for passing straight through to the component. I agree with this idea in theory.

      I had trouble getting this to work with TypeScript in practice. It fought against me pretty hard and told me the args/argTypes were invalid if they didn't match the props from the component. I'm hoping that's solvable, but I haven't managed to so far.

      Also, in this example, we should really destructure args so that we only pass the Page props to Page:

      template({ footer, ...props })
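      That destructuring can be sketched like so (the arg names are invented for illustration; this isn't Storybook API, just the object-splitting idea):

      ```typescript
      // Story args carry both child-component inputs (footer) and Page props.
      type StoryArgs = { footer: string; title: string };

      function template(args: StoryArgs) {
        // Peel footer off for the child; forward only the rest as Page props.
        const { footer, ...pageProps } = args;
        return { footer, pageProps };
      }

      const result = template({ footer: "built with love", title: "Home" });
      console.log(JSON.stringify(result.pageProps)); // {"title":"Home"}
      ```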

    1. Don't ask to be assigned a feature. There's no need to be assigned a feature or to reserve it. If you want to contribute, just ask questions, open issues or pull requests. History shows that contributors who are assigned tasks often don't finish them, but still block other contributors from picking them up.
  8. May 2025
    1. For larger projects with multiple interconnected components, monorepos can be a game-changer, providing efficient dependency management, atomic commits, simplified code sharing, and an improved developer experience.
    1. To dig deep on this though, .gitignore isn't a standard. It's a well documented and familiar syntax from a specific, widely adopted, tool. Maybe we can even pretend the git implementation is a reference implementation too. There's no spec though and, importantly, it isn't considered a standard by the git maintainers themselves. That's why I kept calling it a quasi-standard in my original post.
    1. There has been an attempt to systematize exit status numbers (see /usr/include/sysexits.h), but this is intended for C and C++ programmers. A similar standard for scripting might be appropriate. The author of this document proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0, for success), to conform with the C/C++ standard.

      It sounds like he's proposing aligning with the sysexits.h standard?

      But I'm not clear why he refers to "exit codes to the range 64 - 113 (in addition to 0, for success)" as user-defined. To me, these seem the complete opposite: those are reserved for pre-defined, standard exit codes — with 0 for success being the most standard (and least user-defined) of all!

      Why not use exit codes 1-63 for user-defined errors??
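      For reference, the allocations sysexits.h actually makes (values and meanings from the BSD sysexits(3) man page), sketched here as a map:

      ```typescript
      // 0 for success, then 64-78 for the named EX_* codes; 1-63 are unallocated there.
      const SYSEXITS: Record<string, number> = {
        EX_OK: 0,
        EX_USAGE: 64,       // command line usage error
        EX_DATAERR: 65,     // data format error
        EX_NOINPUT: 66,     // cannot open input
        EX_NOUSER: 67,      // addressee unknown
        EX_NOHOST: 68,      // host name unknown
        EX_UNAVAILABLE: 69, // service unavailable
        EX_SOFTWARE: 70,    // internal software error
        EX_OSERR: 71,       // system error (e.g., can't fork)
        EX_OSFILE: 72,      // critical OS file missing
        EX_CANTCREAT: 73,   // can't create (user) output file
        EX_IOERR: 74,       // input/output error
        EX_TEMPFAIL: 75,    // temporary failure; user is invited to retry
        EX_PROTOCOL: 76,    // remote error in protocol
        EX_NOPERM: 77,      // permission denied
        EX_CONFIG: 78,      // configuration error
      };
      console.log(Math.min(...Object.values(SYSEXITS).filter((c) => c > 0))); // 64
      ```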

    2. An update of /usr/include/sysexits.h allocates previously unused exit codes from 64 - 78. It may be anticipated that the range of unallotted exit codes will be further restricted in the future. The author of this document will not do fixups on the scripting examples to conform to the changing standard. This should not cause any problems, since there is no overlap or conflict in usage of exit codes between compiled C/C++ binaries and shell scripts.

      Eh, 0 and 64 - 78 are the only codes it defines. So if it had different codes defined before, what on earth were those codes before? Was only 0 "used"/defined here before? Nothing defined from 1-128? Or were the codes defined there different ones, like 20-42 and then they arbitrarily shifted these up to 64-78 one day? This is very unclear to me.

      Also unclear whether this is saying it won't update for any future changes after this, or if he hasn't even updated to align with this supposed "change". (Unclear because I can't figure out whether his "proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0, for success), to conform with the C/C++ standard" statement is actually conforming or rejecting the sysexits.h standard.)

      It seems that he's overreacting a bit here. It's hard to imagine there have been or will be any major changes to sysexits.h. I would only imagine additions, not changes, because backwards compatibility would be of utmost concern.

    1. BSD-derived OS's have defined an extensive set of preferred interpretations: Meanings for 15 status codes 64 through 78 are defined in sysexits.h.[15] These historically derive from sendmail and other message transfer agents, but they have since found use in many other programs.[16] It has been deprecated and its use is discouraged.

      [duplicate of https://hyp.is/12j9KjELEfCQc79IbTwQnQ/man.freebsd.org/cgi/man.cgi?query=sysexits&sektion=3 ]

      Why is this deprecated and what should be used instead?? Standardizing this stuff would be good, and this de facto standard seems as good as any!!

    1. root@51a758d136a2:~/test/test-project# npx prisma migrate diff --from-empty --to-schema-datamodel prisma/schema.prisma --script > migration.sql
      root@51a758d136a2:~/test/test-project# cat migration.sql
      -- CreateTable
      CREATE TABLE "test" (
          "id" SERIAL NOT NULL,
          "val" INTEGER,
          CONSTRAINT "test_pkey" PRIMARY KEY ("id")
      );
      root@51a758d136a2:~/test/test-project# mkdir -p prisma/migrations/initial
      root@51a758d136a2:~/test/test-project# mv migration.sql prisma/migrations/initial/
    1. While that change fixes the issue, there’s a production outage waiting to happen. When the schema change is applied, the existing GetUserActions query will begin to fail. The correct way to fix this is to deploy the updated query before applying the schema migration. sqlc verify was designed to catch these types of problems. It ensures migrations are safe to deploy by sending your current schema and queries to sqlc cloud. There, we run your existing queries against your new schema changes to find any issues.
    1. It isn't strictly necessary, but set -euxo pipefail turns on a few useful features that make bash shebang recipes behave more like normal, linewise just recipes:
      set -e makes bash exit if a command fails.
      set -u makes bash exit if a variable is undefined.
      set -x makes bash print each script line before it's run.
      set -o pipefail makes bash exit if a command in a pipeline fails. This is bash-specific, so isn't turned on in normal linewise just recipes.
    1. So what I've been doing is using buildx to build images for multiple architectures; then you can pull those images with docker compose.

      # docker-bake.hcl
      variable "platforms" {
        default = ["linux/amd64", "linux/arm64"]
      }
      group "default" {
        targets = ["my_image"]
      }
      target "my_image" {
        dockerfile = "myimage.Dockerfile"
        tags      = ["myrepo/myimage:latest"]
        platforms = platforms
      }

      # Command
      docker buildx bake --push
    1. #!/usr/bin/env npx ts-node
      // TypeScript code

      Whether this always works in macOS is unknown. There could be some magic with node installing a shell command shim (thanks to @DaMaxContext for commenting about this). This doesn't work in Linux because Linux distros treat all the characters after env as a single command name, instead of treating spaces as delimiting separate arguments. (Or it doesn't work in Linux when the node command shim isn't present; not confirmed that's how it works, but in my testing it doesn't work in Linux Docker containers.) This means that npx ts-node is treated as a single executable name that has a space in it, which obviously won't work, as that's not an executable.
    1. Use the syntax parser directive to declare the Dockerfile syntax version to use for the build. If unspecified, BuildKit uses a bundled version of the Dockerfile frontend. Declaring a syntax version lets you automatically use the latest Dockerfile version without having to upgrade BuildKit or Docker Engine, or even use a custom Dockerfile implementation.
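      A minimal sketch of the directive (the base image below is an arbitrary example; only the first line is the parser directive itself):

      ```dockerfile
      # syntax=docker/dockerfile:1
      # The line above must be the very first line of the Dockerfile. It pins the
      # Dockerfile frontend independently of the Docker Engine / BuildKit version.
      FROM alpine:3.20
      RUN echo "built with the docker/dockerfile:1 frontend"
      ```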
  9. Apr 2025