10,000 Matching Annotations
  1. Last 7 days
    1. You need to claim ownership on Visual Studio Code's installation directory, by running these commands:

       sudo chown -R $(whoami) "$(which code)"
       sudo chown -R $(whoami) /usr/share/code

      "claim ownership"

  2. Aug 2025
  3. Jul 2025
    1. Because Read Committed mode starts each command with a new snapshot that includes all transactions committed up to that instant, subsequent commands in the same transaction will see the effects of the committed concurrent transaction in any case. The point at issue above is whether or not a single command sees an absolutely consistent view of the database.
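
      A minimal sketch of the behavior being described, using the node-postgres (pg) client; the items table and connection settings are my own stand-ins, not from the docs. Each statement in transaction A takes a fresh snapshot, so the second SELECT already sees B's committed insert:

        // read-committed.ts (illustrative only; assumes an existing "items" table)
        import { Client } from 'pg';

        async function demo(): Promise<void> {
          const a = new Client(); // connection details come from PG* env vars
          const b = new Client();
          await a.connect();
          await b.connect();

          await a.query('BEGIN'); // READ COMMITTED is PostgreSQL's default isolation level
          const before = await a.query('SELECT count(*) FROM items');

          // B commits a row while A's transaction is still open (autocommit).
          await b.query('INSERT INTO items DEFAULT VALUES');

          // A new command in the same transaction takes a new snapshot,
          // so it already sees B's committed insert.
          const after = await a.query('SELECT count(*) FROM items');
          console.log(before.rows[0].count, '->', after.rows[0].count);

          await a.query('COMMIT');
          await a.end();
          await b.end();
        }

        demo().catch(console.error);
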
    1. When you open this in two browsers and refresh a few times, one browser after the other, you’ll see the count go up and up (when looking at the page source), proving that the state is shared between both browsers (well, not really, it’s shared on the server, and used by both users). This will have serious consequences if you go this route: if user A is logged in and you’d write the user object to the shared state, and user B is not logged in, they’d still see a flash of user A’s username appear in the navigation bar, until the shared state is overwritten by the undefined user object.
    2. One pattern that I love to use in my SvelteKit projects is returning writable stores from the layout’s load function. This makes it possible to fetch data from the server (for example the user object for the logged in user), and then you make this object available as a writable reactive store throughout the whole application. So when the user updates their username or avatar, you do the PUT request to the server and you get the updated user object back from the server as the response, you can simply update the $user writable store value and every place in your app where you show the user object gets updated immediately.
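
      Roughly the shape of that pattern, as a sketch only; the route, /api/me endpoint, and User type are my own stand-ins, not the article's exact code:

        // src/routes/+layout.ts (sketch)
        import { writable } from 'svelte/store';
        import type { LayoutLoad } from './$types';

        export type User = { username: string; avatar: string } | undefined;

        export const load: LayoutLoad = async ({ fetch }) => {
          const res = await fetch('/api/me'); // hypothetical endpoint
          const user = writable<User>(res.ok ? await res.json() : undefined);
          return { user }; // available to every layout/page as data.user
        };

      In a component you can then do $: ({ user } = data) and render {$user?.username}; because the store itself is shared through the load result, setting it updates every subscriber.
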
    1. For example, let’s say you fetch a list of books on the /books page and a list of albums on the /albums page, so both those pages have a LayoutLoad method where the fetches are made. With SvelteKit this will cause the fetch to happen every time the user switches between these two pages, and the writable store will also be recreated every time. If your goal is to prevent refetching content you’ve already fetched before, then the global store is still your best bet.
    2. conditionally returned either a readable store (from SSR) or a global writable store (from CSR) to make things like real time updates via WebSockets possible - although that solution does have one big advantage: the global store is always there.
    3. But what if you want to update this user instance? For example on your website you have a form where the user can change their name, username, or avatar. When the form is submitted this gets stored on the server, but the site still shows the old user information, for example it still shows the old avatar of the user in the top menu. The user variable isn’t writable, so how do you overwrite this?
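
      Continuing the sketch above (endpoint and field names are still stand-ins): after the PUT succeeds, writing the response into the store updates every subscriber, including the avatar in the top menu:

        // In +page.svelte's <script lang="ts"> (sketch; builds on the layout sketch above)
        import type { PageData } from './$types';

        export let data: PageData; // includes the writable `user` store returned by the layout load

        async function save(update: { username: string; avatar: string }) {
          const res = await fetch('/api/me', {
            method: 'PUT',
            headers: { 'content-type': 'application/json' },
            body: JSON.stringify(update),
          });
          if (res.ok) {
            // The server responds with the updated user; setting the store
            // updates every place that renders $user (top menu, avatar, ...).
            data.user.set(await res.json());
          }
        }
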
    1. The Proposal: Possible upstreaming into GNOME
       The Problem: Why we need this in GNOME
       Installation: For those wanting to install this on their distribution
       The Solution:
         Shared Features: Behaviors shared between stacking and auto-tiling modes
         Floating Mode: Behaviors specific to the floating mode
         Tiling Mode: Behaviors specific to the auto-tiling mode
       Developers: Guide for getting started with development

  4. main.vitest.dev
    1. If you use a random URL then option b won’t work because you can’t invalidate a random URL using that method.

      I think you mean option a (invalidating fetch url) won't work. Option b (depends('posts')) should work fine, because it's a static string that's easy to invalidate.
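
      A quick sketch of what I mean (the route, endpoint, and app:posts key are made up; note that SvelteKit wants custom dependency keys to look like a URI, hence the app: prefix):

        // src/routes/posts/+page.ts (sketch)
        import type { PageLoad } from './$types';

        export const load: PageLoad = async ({ fetch, depends }) => {
          depends('app:posts'); // option b: a static key that is easy to invalidate later
          // option a: a cache-busting (random) URL cannot be targeted by invalidate(url)
          const res = await fetch(`/api/posts?t=${Date.now()}`);
          return { posts: await res.json() };
        };

        // elsewhere, e.g. after creating a post:
        // import { invalidate } from '$app/navigation';
        // await invalidate('app:posts'); // reruns the load above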

  5. Jun 2025
    1. You can use args in your stories to configure the component's appearance, similar to what you would do in an application. For example, here's how you could use a footer arg to populate a child component:

      In other words, args aren't necessarily only for passing straight through to the component. I agree with this idea in theory.

      I had trouble getting this to work with TypeScript in practice. It fought me pretty hard and told me the args/argTypes were invalid if they didn't match the component's props. I'm hoping that's solvable, but I haven't managed it so far.

      Also, in this example, we should really destructure args so that we only pass the Page props to Page:

      template({ footer, ...props })
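
      Roughly what I was after, as a React-flavored CSF3 sketch (Page, PageProps, and the footer arg are made-up names, and I'm assuming Page accepts children); widening the story's args type is what I was hoping would satisfy TypeScript:

        // Page.stories.tsx (sketch, not verified against a real project)
        import type { Meta, StoryObj } from '@storybook/react';
        import { Page, type PageProps } from './Page';

        // Widen the args so `footer` is allowed even though Page has no such prop.
        type PageStoryArgs = PageProps & { footer?: string };

        const meta: Meta<PageStoryArgs> = { component: Page };
        export default meta;

        export const WithFooter: StoryObj<PageStoryArgs> = {
          args: { footer: 'Built with Storybook' },
          // Destructure so only real Page props are spread onto Page;
          // the footer arg becomes a child instead of an unknown prop.
          render: ({ footer, ...props }) => (
            <Page {...props}>
              <footer>{footer}</footer>
            </Page>
          ),
        };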

    1. Don't ask to be assigned a feature. There's no need to be assigned a feature or to reserve it. If you want to contribute, just ask questions, open issues or pull requests. History shows that contributors who get assigned to tasks often don't finish them, but the assignment still blocks other contributors from picking them up.
  6. May 2025
    1. For larger projects with multiple interconnected components, monorepos can be a game-changer, providing efficient dependency management, atomic commits, simplified code sharing, and an improved developer experience.
    1. To dig deep on this though, .gitignore isn't a standard. It's a well documented and familiar syntax from a specific, widely adopted, tool. Maybe we can even pretend the git implementation is a reference implementation too. There's no spec though and, importantly, it isn't considered a standard by the git maintainers themselves. That's why I kept calling it a quasi-standard in my original post.
    1. There has been an attempt to systematize exit status numbers (see /usr/include/sysexits.h), but this is intended for C and C++ programmers. A similar standard for scripting might be appropriate. The author of this document proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0, for success), to conform with the C/C++ standard.

      It sounds like he's proposing aligning with the sysexits.h standard?

      But I'm not clear why he refers to "exit codes to the range 64 - 113 (in addition to 0, for success)" as user-defined. To me, these seem the complete opposite: those are reserved for pre-defined, standard exit codes — with 0 for success being the most standard (and least user-defined) of all!

      Why not use exit codes 1-63 for user-defined errors instead??

    2. An update of /usr/include/sysexits.h allocates previously unused exit codes from 64 - 78. It may be anticipated that the range of unallotted exit codes will be further restricted in the future. The author of this document will not do fixups on the scripting examples to conform to the changing standard. This should not cause any problems, since there is no overlap or conflict in usage of exit codes between compiled C/C++ binaries and shell scripts.

      Eh, 0 and 64 - 78 are the only codes it defines. So if it had different codes defined before, what on earth were those codes before? Was only 0 "used"/defined here before? Nothing defined from 1-128? Or were the codes defined there different ones, like 20-42 and then they arbitrarily shifted these up to 64-78 one day? This is very unclear to me.

      Also unclear whether he's saying he won't update for any future changes after this, or whether he hasn't even updated to align with this supposed "change". (Unclear because I can't figure out whether his statement that he "proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0, for success), to conform with the C/C++ standard" is actually conforming to or rejecting the sysexits.h standard.)

      It seems he's overreacting a bit here. It's hard to imagine there have been, or will be, any major changes to sysexits.h. I would only expect additions, not changes, since backwards compatibility would be of utmost concern.
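
      If I were picking a convention today, it would look something like this Node/TypeScript sketch, with constants mirroring the sysexits.h values (the script itself is made up):

        // check-file.ts (illustrative sketch; exit codes mirror <sysexits.h>)
        import { accessSync } from 'node:fs';

        const EX_OK = 0;          // success
        const EX_USAGE = 64;      // command was used incorrectly
        const EX_NOINPUT = 66;    // input file did not exist or was not readable

        const file = process.argv[2];
        if (!file) {
          console.error('usage: check-file <path>');
          process.exit(EX_USAGE);
        }

        try {
          accessSync(file);
          process.exit(EX_OK);
        } catch {
          console.error(`cannot read ${file}`);
          process.exit(EX_NOINPUT);
        }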

    1. BSD-derived OS's have defined an extensive set of preferred interpretations: Meanings for 15 status codes 64 through 78 are defined in sysexits.h.[15] These historically derive from sendmail and other message transfer agents, but they have since found use in many other programs.[16] It has been deprecated and its use is discouraged.

      [duplicate of https://hyp.is/12j9KjELEfCQc79IbTwQnQ/man.freebsd.org/cgi/man.cgi?query=sysexits&sektion=3 ]

      Why is this deprecated and what should be used instead?? Standardizing this stuff would be good, and this de facto standard seems as good as any!!

    1. root@51a758d136a2:~/test/test-project# npx prisma migrate diff --from-empty --to-schema-datamodel prisma/schema.prisma --script > migration.sql
       root@51a758d136a2:~/test/test-project# cat migration.sql
       -- CreateTable
       CREATE TABLE "test" (
           "id" SERIAL NOT NULL,
           "val" INTEGER,
           CONSTRAINT "test_pkey" PRIMARY KEY ("id")
       );
       root@51a758d136a2:~/test/test-project# mkdir -p prisma/migrations/initial
       root@51a758d136a2:~/test/test-project# mv migration.sql prisma/migrations/initial/

    1. While that change fixes the issue, there’s a production outage waiting to happen. When the schema change is applied, the existing GetUserActions query will begin to fail. The correct way to fix this is to deploy the updated query before applying the schema migration. sqlc verify was designed to catch these types of problems. It ensures migrations are safe to deploy by sending your current schema and queries to sqlc cloud. There, we run your existing queries against your new schema changes to find any issues.
    1. It isn't strictly necessary, but set -euxo pipefail turns on a few useful features that make bash shebang recipes behave more like normal, linewise just recipes:
       set -e makes bash exit if a command fails.
       set -u makes bash exit if a variable is undefined.
       set -x makes bash print each script line before it's run.
       set -o pipefail makes bash exit if a command in a pipeline fails. This is bash-specific, so isn't turned on in normal linewise just recipes.

    1. So what I've been doing is using buildx to build images for multiple architectures, then you can pull those images with docker compose.

       # docker-bake.hcl
       variable "platforms" {
         default = ["linux/amd64", "linux/arm64"]
       }

       group "default" {
         targets = [
           "my_image",
         ]
       }

       target "my_image" {
         dockerfile = "myimage.Dockerfile"
         tags = ["myrepo/myimage:latest"]
         platforms = platforms
       }

       # Command
       docker buildx bake --push

    1. #!/usr/bin/env npx ts-node
       // TypeScript code

       Whether this always works in macOS is unknown. There could be some magic with node installing a shell command shim (thanks to @DaMaxContext for commenting about this). This doesn't work in Linux because Linux distros treat all the characters after env as the command, instead of considering spaces as delimiting separate arguments. Or it doesn't work in Linux if the node command shim isn't present (not confirmed that's how it works, but in any case, in my testing, it doesn't work in Linux Docker containers). This means that npx ts-node will be treated as a single executable name that has a space in it, which obviously won't work, as that's not an executable.
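
      One workaround I know of (not from the quoted thread): GNU coreutils env 8.30+ (and the BSD env that macOS ships) supports -S, which splits the rest of the shebang line into separate arguments, so the header of the TypeScript file can be:

        #!/usr/bin/env -S npx ts-node
        // TypeScript code
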
    1. Use the syntax parser directive to declare the Dockerfile syntax version to use for the build. If unspecified, BuildKit uses a bundled version of the Dockerfile frontend. Declaring a syntax version lets you automatically use the latest Dockerfile version without having to upgrade BuildKit or Docker Engine, or even use a custom Dockerfile implementation.
  7. Apr 2025
    1. Annotated tags point to a tag object in the object database.

       git tag -as -m msg annot
       cat .git/refs/tags/annot

       contains the SHA of the annotated tag object: c1d7720e99f9dd1d1c8aee625fd6ce09b3a81fef, and then we can get its content with:

       git cat-file -p c1d7720e99f9dd1d1c8aee625fd6ce09b3a81fef

    1. I would argue that "whole tree" thinking is enhanced by --follow being the default. What I mean is when I want to see the history of the code within a file, I really don't usually care whether the file was renamed or not, I just want to see the history of the code, regardless of renames. So in my opinion it makes sense for --follow to be the default because I don't care about individual files; --follow helps me to ignore individual file renames, which are usually pretty inconsequential.
    1. You (and your collaborators) need to re-generate hooks every time there’s a change in .huskyrc.js. Re-generation could be bound to some events, but there’s no reliable way to cover all possible cases and unexpected behaviors would appear.

      Seems like you could just use a git hook (or several) to trigger the sync from js to .git/hooks?

    1. I would be very careful with the "common usage" argument. For example: the use of sign up and sign in has a very pleasant symmetry which doubtless appeals to many people. Unfortunately, this symmetry reduces the difference by which the user recognizes the button she needs to just two letters. It's very easy to click sign up when you meant sign in.
    2. "Log in" is a valid verb where "Login" is a valid noun. "Signin", however, isn't a valid noun. On the other hand, "Signup" and "Sign up" have the same relationship, and if you use "Log in", you'll probably use "Register" as opposed to "Sign up". Then there's also "Log on" and "Logon", and of course "Log off" or "Log out".
    1. Ask yourself what is the main purpose of storing this data? Do you intend to actually send mail to the person at the address? Track demographics, populations? Be able to ask callers for their correct address as part of some basic authentication/verification? All of the above? None of the above? Depending on your actual need, you will determine either a) it doesn't really matter, and you can go for a free-text approach, or b) structured/specific fields for all countries, or c) country specific architecture.
    2. Locality can be unclear, particularly the distinction between map locality and postal-locality. The postal locality is the one deemed by a postal authority which may sometimes be a nearby large town. However, the postcode will usually resolve any problems or discrepancies there, to allow correct delivery even if the official post-locality is not used.
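
      If it helps, here is the free-text-vs-structured split sketched as TypeScript types (field names are my own, not from the thread):

        // address-types.ts (illustrative sketch)

        // Option a: free text, good enough if you only ever print the address back out.
        type FreeTextAddress = { country: string; lines: string };

        // Option b/c: structured fields, with room for country-specific quirks.
        type StructuredAddress = {
          country: string;          // ISO 3166-1 alpha-2, e.g. "GB"
          postalCode?: string;      // often enough to resolve locality discrepancies
          locality?: string;        // postal locality, which may differ from the map locality
          region?: string;          // state / province / county, where applicable
          addressLines: string[];   // street-level detail varies too much to over-structure
        };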