19,686 Matching Annotations
  1. Jun 2024
    1. How can I wait for container X before starting Y? This is a common problem, and in earlier versions of docker-compose it required additional tools and scripts such as wait-for-it and dockerize. With the healthcheck parameter, these additional tools and scripts are often no longer necessary.
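
      A minimal Compose sketch of this pattern (service names and the pg_isready probe are illustrative, assuming Compose v2-era syntax):

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    image: my-app:latest          # hypothetical application image
    depends_on:
      db:
        condition: service_healthy   # wait until db's healthcheck passes
```
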
    1. Locking the conversation in this issue for the reason @stevvoe mentioned above; comments on closed issues and PRs easily go unnoticed, so I'm locking the conversation to prevent that from happening.
    2. docker inspect --format='{{.State.Health.Status}}'
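
      The command above can be wrapped in a small polling loop — a sketch, assuming the container defines a HEALTHCHECK and you know its name:

```shell
wait_healthy() {
  # Poll a container's health status until it reports "healthy",
  # giving up after $2 attempts (default 30), one second apart.
  local container=$1 attempts=${2:-30} status i
  for ((i = 0; i < attempts; i++)); do
    status=$(docker inspect --format='{{.State.Health.Status}}' "$container")
    [ "$status" = healthy ] && return 0
    sleep 1
  done
  echo "timed out waiting for $container to become healthy" >&2
  return 1
}
```

      Usage: `wait_healthy my-db 60` before starting anything that depends on the container.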
    1. created against https://github.com/docker-library/official-images (which is the source-of-truth for the official images program as a whole)
    2. On Windows, that interface doesn't really exist (and is really difficult to emulate properly)
    3. we leave it up to each image maintainer to make the appropriate judgement on what's going to be the best representation / most supported solution for the upstream project they're representing
    4. Explicit health checks are not added to official images for a number of reasons, some of which include:
    1. Rootless mode executes the Docker daemon and containers inside a user namespace. This is very similar to userns-remap mode, except that with userns-remap mode, the daemon itself is running with root privileges, whereas in rootless mode, both the daemon and the container are running without root privileges.
    1. Running Docker inside Docker lets you build images and start containers within an already containerized environment.
    2. If your use case means you absolutely require dind, there is a safer way to deploy it. The modern Sysbox project is a dedicated container runtime that can nest other runtimes without using privileged mode. Sysbox containers become VM-like so they're able to support software that's usually run bare-metal on a physical or virtual machine. This includes Docker and Kubernetes without any special configuration.
    3. Bind mounting your host's daemon socket is safer, more flexible, and just as feature-complete as starting a dind container.
    4. Docker-in-Docker via dind has historically been widely used in CI environments. It means the "inner" containers have a layer of isolation from the host. A single CI runner container supports every pipeline container without polluting the host's Docker daemon.
    5. While it often works, this is fraught with side effects and not the intended use case for dind. It was added to ease the development of Docker itself, not provide end user support for nested Docker installations.
    6. This means containers created by the inner Docker will reside on your host system, alongside the Docker container itself. All containers will exist as siblings, even if it feels like the nested Docker is a child of the parent.
    1. Root privileges: As a container runtime, Sysbox requires root privileges to operate. As a result, the Sysbox-In-Docker container must be launched in "privileged" mode.
    2. Note that, for the general use-case, Sysbox is expected to operate in a regular (non-containerized) environment (i.e., host installation).
    3. As its name implies, Sysbox-In-Docker aims to provide a containerized environment in which to execute the Sysbox runtime.
    1. Isn't a simple go get github.com/mayflower/docker-ls/cli/... sufficient, you ask? Indeed it is, but including the generate step detailed above will encode verbose version information in the binaries.
  2. May 2024
    1. Please note that '+' characters are frequently used as part of an email address to indicate a subaddress, as for example in <bill+ietf@example.org>.

      Nice of them to point out that this is a common scenario, not just a hypothetical one.

    1. Choosing names from a list is a lot more user-friendly and less error-prone than asking users to blindly type in an e-mail address and hope that it is correct and matches an existing user (or at least a real e-mail account that can then be sent an invitation to register). In my opinion, this is a big reason why Facebook became so popular — because it let you see your list of friends and send messages to people by their names instead of having to already know/remember/ask for their e-mail address.
    1. You cannot sell or distribute Content (either in digital or physical form) on a Standalone basis. Standalone means where no creative effort has been applied to the Content and it remains in substantially the same form as it exists on our website.

      That seems fair enough...

    1. This is probably confusing because the "host" in --network=host does not mean host as in the underlying runner host / 'baremetal' system. To understand what is happening here, we must first understand how the docker:dind service works. When you use the service docker:dind to power docker commands from your build job, you are running containers 'on' the docker:dind service; it is the docker daemon. When you provide the --network=host option to docker run, it refers to the host network of the daemon, i.e., the docker:dind container, not the underlying system host.
    2. When you specify FF_NETWORK_PER_BUILD, that specifies the Docker network for the build job and its service containers, which encapsulates all of your job's containers.
    1. return &container.HostConfig{
           DNS:           e.Config.Docker.DNS,
           DNSSearch:     e.Config.Docker.DNSSearch,
           RestartPolicy: neverRestartPolicy,
           ExtraHosts:    e.Config.Docker.ExtraHosts,
           Privileged:    e.Config.Docker.Privileged,
           NetworkMode:   e.networkMode,
           Binds:         e.volumesManager.Binds(),
           ShmSize:       e.Config.Docker.ShmSize,
           Tmpfs:         e.Config.Docker.ServicesTmpfs,
           LogConfig: container.LogConfig{
               Type: "json-file",
           },
    1. For Linux systems, you can – starting from version 20.10 of the Docker Engine – now also communicate with the host via host.docker.internal. This won't work automatically; you need to provide the following run flag: --add-host=host.docker.internal:host-gateway
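
      As a sketch, the flag sits on the docker run command line like this (the wrapper name, image, and port are illustrative):

```shell
run_with_host_gateway() {
  # Map host.docker.internal to the host's gateway IP inside the
  # container (Docker Engine 20.10+ on Linux), then run the given image.
  docker run --rm --add-host=host.docker.internal:host-gateway "$@"
}

# e.g. hit a service listening on port 8000 of the host:
# run_with_host_gateway curlimages/curl curl -s http://host.docker.internal:8000/
```
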
    1. While the RSpec team now officially recommends system specs instead, feature specs are still fully supported, look basically identical, and work on older versions of Rails.

      Whose recommendation should one follow?

      RSpec team's recommendation seems to conflict with this project's: https://rspec.info/features/6-0/rspec-rails/request-specs/request-spec/:

      Capybara is not supported in request specs. The recommended way to use Capybara is with feature specs.

    2. RSpec Rails defines ten different types of specs for testing different parts of a typical Rails application. Each one inherits from one of Rails’ built-in TestCase classes.
    1. I want RSpec Rails development to be fast and lightweight, much like it was when I joined the RSpec project.
    2. As of right now the full build takes over an hour to run, and this makes cycling for PRs and quick iterative development very difficult.
    3. If we do this, it will become deeply unsustainable for us to maintain RSpec Rails in the future. We have too many Rails versions today, and we expect the rate of Rails releases to increase as time goes on.
    4. this has now become unsustainable and we want to take this tradeoff to best serve the needs of the community
    5. This makes ongoing maintenance difficult, as it requires that RSpec Rails' maintainers be conscious of every Rails version that might be loaded.
    6. Our need is therefore best characterised by cost of maintenance. Having to maintain several versions of Rails and Ruby costs us a lot. It makes our development slower, and forces us to write against Rails versions that most people no longer use.
    1. SO MOVED! This is a common statement which means nothing. One must state the actual motion so as to avoid confusion in the audience. Everyone has the right to know exactly what is being moved and discussed. "So moved!" is vague and pointless. Do not allow your club members to be vague and pointless.
    1. Strictly speaking, a cell (cellular) phone is a mobile phone, but a mobile phone may not necessarily be a cell phone. "Cellular" refers to the network technology
    2. The cell phone providers usually call them "mobile" phones, which is more precise, since "cell" refers to a kind of technology.
    3. However, it is increasingly becoming just a "phone", as landlines continue to disappear from households.
    4. In Australia, it has traditionally been a "mobile" - never a "cell" (unless you are deliberately trying to sound American!).

      Regional differences.

    5. The one clarifying term might be "my phone" - this would guarantee it to be a mobile phone, rather than a landline.
    1. Most people are focused on attribution (and rightfully so), but it seems that not much attention is being paid to the share alike part of the CC license. In AI contexts, copyright law is still being tested in court and many things are uncertain. There is a very real risk that training an AI on this site's data will not necessarily be considered "fair use" (it fails the "serves as a substitute for the original" test, among other things), which means there's a risk that the trained model will be considered a derivative work and thus required to carry a license similar to CC-BY-SA 4.0.
    2. One of the key elements was "attribution is non-negotiable". OpenAI, historically, has done a poor job of attributing parts of a response to the content that the response was based on.
    3. We contributed free work to the company because the content is under a CC BY-SA license. It is fine to make money off our content as long as they adhere to the license. This forbids selling the content to OpenAI, though, since they do not provide attribution or release their derivative works under a compatible license.
    4. One way to look at it is that corporations are never your friend. They love talking about building communities and ecosystems, but eventually they need to monetize user-generated content and change licensing to make your content their property. When their policies and promises change 180° overnight, all you get is "we are sorry you feel that way", "our hopes and prayers" and "that was a deliberate business decision we had to make with a heavy heart". And then they laugh all the way to the bank.
    5. Doing free work for a company to make THEIR place a better one, only because you were gamed into doing that. The solution is never contribute to anything that is controlled by private company.
    6. Humans are meant to exploit machines, not the other way round. Exploiting us, who helped make the world a little bit better, in this way, is a turning point. It makes the world for us worse instead of better.
    7. I feel violated, cheated upon, betrayed, and exploited.
    8. I wouldn't focus too much on "posted only after human review" - it's worth noting that that's worth nothing. We literally just saw a case of obviously ridiculous AI images in a scientific paper breezing through peer review with no one caring, so quality will necessarily go down, because Brandolini's law combined with AI is a death sentence for communities like SE, and I doubt they'll employ people to review content from the money they'll make.
    9. "that post is written in a very indirect and unclear way" -- that is intentional, no? The company has been communicating in this style for quite some time now. Lots of grandiose phrases to bamboozle the audience while very little is actually being said. It's infuriating.
    10. On the surface, this is a very nice sentiment - one that we can all get behind.
    11. What could possibly go wrong? Dear Stack Overflow denizens, thanks for helping train OpenAI's billion-dollar LLMs. Seems that many have been drinking the AI koolaid or mixing psychedelics into their happy tea. So much for being part of a "community", seems that was just happy talk for "being exploited to generate LLM training data..." The corrupting influence of the profit-motive is never far away.
    12. If you ask ChatGPT to cite it will provide random citations. That's different from actually training a model to cite (e.g. use supervised finetuning on citations with human raters checking whether sources match, which would also allow you to verify how accurately a model cites). This is something OpenAI could do, it just doesn't.
    13. There are plenty of cases where genAI cites stuff incorrectly, that says something different, or citations that simply do not exist at all. Guaranteeing citations are included is easy, but guaranteeing correctness is an unsolved problem
    14. GenAIs are not capable of citing stuff. Even if it did, there's no guarantee that the source either has anything to do with the topic in question, nor that it states the same as the generated content. Citing stuff is trivial if you don't have to care if the citation is relevant to the content, or if it says the same as you.
    15. LLMs, by their very nature, don't have a concept of "source". Attribution is pretty much impossible. Attribution only really works if you use language models as "search engine". The moment you start generating output, the source is lost.
    1. Podman provides some extra features that help developers and operators in Kubernetes environments. There are extra commands provided by Podman that are not available in Docker.
    2. This is because Podman’s local repository is in /var/lib/containers instead of /var/lib/docker.  This isn’t an arbitrary change; this new storage structure is based on the Open Containers Initiative (OCI) standards.
    3. Podman commands are the same as Docker's. When building Podman, the goal was to make sure that Docker users could easily adapt. So all the commands you are familiar with also exist with Podman. In fact, the claim is made that if you have existing scripts that run Docker, you can create a docker alias for podman and all your scripts should work (alias docker=podman). Try it.
    4. This article does not get into the detailed pros and cons of the Docker daemon process.  There is much to be said in favor of this approach and I can see why, in the early days of Docker, it made a lot of sense.  Suffice it to say that there were several reasons why Docker users were concerned about this approach as usage went up. To list a few: A single process could be a single point of failure. This process owned all the child processes (the running containers). If a failure occurred, then there were orphaned processes. Building containers led to security vulnerabilities. All Docker operations had to be conducted by a user (or users) with the same full root authority.
    1. AI-powered code generation tools like GitHub Copilot make it easier to write boilerplate code, but they don’t eliminate the need to consult with your organization’s domain experts to work through logic, debugging, and other complex problems. Stack Overflow for Teams is a knowledge-sharing platform that transfers contextual knowledge validated by your domain experts to other employees. It can even foster a code generation community of practice that champions early adopters and scales their learnings. OverflowAI makes this trusted internal knowledge—along with knowledge validated by the global Stack Overflow community—instantly accessible in places like your IDE so it can be used alongside code generation tools. As a result, your teams learn more about your codebase, rework code less often, and speed up your time-to-production.
    1. When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. With needs you can only download artifacts from the jobs listed in the needs configuration.
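
      A hypothetical .gitlab-ci.yml fragment illustrating this behavior (job names and scripts are made up):

```yaml
build:
  stage: build
  script: [make]
  artifacts:
    paths: [dist/]

docs:
  stage: build
  script: [make docs]
  artifacts:
    paths: [site/]

test:
  stage: test
  needs: [build]        # starts as soon as `build` finishes,
  script: [make test]   # and downloads artifacts from `build` only, not `docs`
```
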
    1. The asset pipeline is a collection of components that work together. Here's a list of what they might be: concatenation for merging many files into one big file; minification for compressing the contents of a file to make it smaller in size; pre-compilation for using your language of choice to write CSS or JavaScript; fingerprinting to force reloading of asset changes (i.e., cache busting).
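
      Fingerprinting in particular is easy to sketch: embed a digest of the file's contents in its name, so any edit changes the URL and busts caches. A minimal sketch (the helper name is made up, not a Rails API):

```ruby
require "digest"

# Return a fingerprinted filename, e.g. "app.js" -> "app-<md5>.js".
def fingerprint(path)
  digest = Digest::MD5.file(path).hexdigest
  ext = File.extname(path)
  "#{path.delete_suffix(ext)}-#{digest}#{ext}"
end
```
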
    1. It’s generally good from a separation-of-concerns point of view and to reduce risk – migrations are scary enough as it is!
    2. Personally I’m not a fan of running migrations in an ENTRYPOINT script. I think it’s best suited to run this separately as part of your deploy process.
    3. COPY --chown=ruby:ruby
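
      In context, that flag usually appears in a Dockerfile alongside a non-root user — a hypothetical fragment (user name, base image, and paths are illustrative):

```dockerfile
FROM ruby:3.2-slim
RUN useradd --create-home ruby
WORKDIR /app
# Copy the app in owned by the unprivileged user, not root:
COPY --chown=ruby:ruby . .
USER ruby
```
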
    4. Debian Slim is a variant of Debian that’s optimized for running in containers. It removes a ton of libraries and tools that are normally included with Debian.
    5. I know Alpine is also an option but in my opinion it’s not worth it. Yes, you’ll end up with a slightly smaller image in the end, but it comes at the cost of using musl instead of glibc. That’s too much of a side topic for this post, but I’ve been burned in the past a few times when trying to switch to Alpine – such as network instability and poor run-time performance when connecting to Postgres. I’m very happy sticking with Debian.
    1. If you are okay with the user appending arbitrary query params without enforcing an allow-list, you can bypass the strong params requirement by using request.params directly:
    2. Performing a redirect by constructing a URL based on user input is inherently risky, and is a well-documented security vulnerability. This is essentially what you are doing when you call redirect_to params.merge(...), because params can contain arbitrary data the user has appended to the URL.
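
      One hedged alternative to passing request.params straight through is to filter against an explicit allow-list first (the names here are illustrative helpers, not a Rails API):

```ruby
# Keep only query params we explicitly trust before building a redirect URL.
ALLOWED_REDIRECT_PARAMS = %w[page sort].freeze

def safe_redirect_params(params)
  params.select { |key, _| ALLOWED_REDIRECT_PARAMS.include?(key.to_s) }
end
```

      In a controller this might then be used as something like redirect_to some_path(safe_redirect_params(request.query_parameters)).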
    1. This is essentially what --update-refs does, but it makes things a lot simpler; it rebases a branch, "remembers" where all the existing (local) branches point, and then resets them to the correct point afterwards.
    2. An alternative approach would be to rebase the "top" of the stack, part-3 on top of dev. We could then reset each of the branches to the "correct" commit in the newly-rebased branch, something like this:
    3. Don't think that I just naturally perfectly segment these commits when creating the feature. I heavily rebase and edit the commits before creating a PR.
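
      The workflow described above can be sketched as follows (branch names are illustrative; --update-refs requires Git 2.38+):

```shell
rebase_stack() {
  # Rebase the top branch of a stack onto $2 and let Git move the
  # intermediate local branch refs along with it (Git 2.38+).
  local top=$1 onto=$2
  git checkout "$top" && git rebase --update-refs "$onto"
}

# e.g. rebase part-3 (and with it part-1 and part-2) onto dev:
# rebase_stack part-3 dev
```
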
    1. We train our models using:
    2. We exclude sources we know to have paywalls, primarily aggregate personally identifiable information, have content that violates our policies, or have opted-out.
    3. We recently improved source links in ChatGPT(opens in a new window) to give users better context and web publishers new ways to connect with our audiences. 
    4. Our models are designed to help us generate new content and ideas – not to repeat or “regurgitate” content. AI models can state facts, which are in the public domain.
    5. When we train language models, we take trillions of words, and ask a computer to come up with an equation that best describes the relationship among the words and the underlying process that produced them.
    1. Wilderness permits are required for entry into all Gifford Pinchot National Forest Wildernesses. The self-issuing permits are free and are available at all trailheads leading into these Wildernesses, and at Forest Service Ranger Stations.
    1. Walk into a store to buy bread, get to the till; no, you MUST buy 1 kilogram of fillet steak as well so that the average price of the goods you buy is more than what the bread costs, which is the only thing you need. Leave the bread, walk out. It does not matter how well you explain it, buyers do not understand how you can have an item on the shelf you are not willing to sell for the price you are advertising it at, or for which you need a degree in mathematics to work out how many you must put in a cart before you can, well, pay for it at checkout. I would rather BL take this away altogether. You already have minimum buys to avoid small orders, and you can already set a minimum lot quantity for purchase. Why give the impression that an item can ship by itself when you as the seller are not willing to sell it like that? It confuses buyers when sellers willfully show prices for goods they are not willing to sell at. Rather, I suggest, if sellers really want to use this, that the quantities the buyer wants cannot be added to the cart unless the minimum average is met automatically. That way the cart is managed for the buyer and nobody has to know the whys and wherefores of why an item cannot be bought for the price it is listed at.
    1. # This is a manual way to describe complex parameters
       parameter :one_level_array, type: :array, items: {type: :string, enum: ['string1', 'string2']}, default: ['string1']
       parameter :two_level_array, type: :array, items: {type: :array, items: {type: :string}}
       let(:one_level_array) { ['string1', 'string2'] }
       let(:two_level_array) { [['123', '234'], ['111']] }
    1. This intuition is generally more useful, but is more difficult to explain, precisely because it is so general.
  3. Apr 2024
    1. Isn't it possible that every message I send or forward is just placed in the outbox and sent by Thunderbird in the background? I really hate that every message sends itself away, running on top of all other windows, and makes me wait until it has been sent before I can read my other messages...
    1. An alternative way to remove the All Mail folder would be to login into Gmail webmail using a browser, left click on the gear icon in the upper right corner and select Settings, select the Labels tab, find All Mail, click on Hide and uncheck "Show in IMAP". Logout and delete "All Mail." and "All Mail.msf" in the Gmail accounts local directory in the Thunderbird profile.

      How did I not know about this before?

    2. Do NOT try to delete the All Mail folder by deleting its contents. That will delete all of the messages for the account when Thunderbird syncs the folder.
    3. The "All Mail" folder in a Gmail IMAP account has a copy of all messages for that account, doubling the number of messages downloaded for offline folders. Thunderbird tries to download only one copy of a message from a Gmail IMAP account and have the folders point to that copy. However, that doesn't help if the message was created using Thunderbird. [1] If you decide to keep offline folders enabled and have a Gmail IMAP account, uncheck "All Mail" in Tools -> Account Settings -> Account Name -> Synchronization & Storage -> Advanced. As a precaution right click on the Gmail account name in the folder pane, select subscribe in the context menu, expand the folder listing and verify the All Mail folder is not subscribed. Disabling it from being synced should have unsubscribed it. Exit Thunderbird and delete "All Mail." and "All Mail.msf" in the accounts local directory.
    4. If you sometimes want to use some of the disabled features when using a broadband connection consider using two profiles which use common directories outside of the profile to store the messages. One profile would disable features as described below. The other could keep them enabled. That way depending upon which Thunderbird shortcut you use you can easily switch configurations with minimal side effects.
    1. However, I don't want or need any email from the account other than when I check it manually.
    2. From here, you can configure the FIRST SYNCHRONIZATION OPTION (Important! If you mess with the lower options you might delete copies on the mail server).
    1. Compacting folders does nothing for me -- I don't know why. I compacted them today for the first time in about a year, and the folder size remained unchanged. I don't generally delete emails, so that's likely why, but that doesn't mean I need to keep local copies of 12.5GB of emails.
    1. And SHAW should absolutely be helping, it's as simple as providing the correct server details that you enter into your email client software/app. They don't have to support the software or tell you how to do it, but at the very least should inform their customers this is the likely problem and then provide the link to their help page.
    1. I got no actual help from my long Verizon Support chat session. I kept asking if there is a block list they use that they could check my IP against (or a whitelist I could be added to... but fat chance), since that is clearly what the error is calling out, but they never acknowledged that particular part of my questions, just ignored it.
    2. I found that there was an entry for our external IP, which may well be the problem. I thankfully had the ability to change the external IP our internal postfix server NATs to, and voila! The messages go through just fine. I know not everyone has the flexibility to select another IP.
    1. Unlike traditional search engines that rely on keywords, Perplexity AI focuses on understanding your intent. It analyzes your query, the context of your previous interactions, and your overall knowledge base to determine what you're truly seeking. 
    1. I ran across an AI tool that cites its sources if anyone's interested (and heard of it yet): https://www.perplexity.ai/

      One of the things I dislike most about ChatGPT is that it just synthesizes/paraphrases the information but doesn't let me quickly and easily check the original sources so that I can verify the information for myself (and learn more about the topic by doing further reading). Without access to primary sources, it often feels no better than a rumor — a retelling of what someone somewhere allegedly, purportedly, ostensibly found to be true — can I really trust what ChatGPT claims? (No...)

    1. Perplexity AI's biggest strength over ChatGPT 3.5 is its ability to link to actual sources of information. Where ChatGPT might only recommend what to search for online, Perplexity doesn't require that back-and-forth fiddling.
    1. As it competes with generative AI search features from established tech titans like Google and Microsoft, Perplexity has another factor working in its favor: novelty, Friedman said. “I think many people are rooting for Perplexity because they represent the new player, the new paradigm, the new product,” he told Forbes. And if its quick growth and popularity among some of tech's highest profile people indicates anything, it looks like that novelty has some staying power.
    1. Strong organization sets a great example for your team at work and shows that you mean business. Keeping things in order ensures less stress, a greater sense of control, and sets you up for success. 
    2. Asking questions ensures they fully understand whatever it is they’re doing. They don’t go into projects blindly or assume anything. They ask probing questions to gain a complete understanding of what it is they’re trying to accomplish, why they’re working towards that goal, and everything else in between. Having an analytical mind ensures that they don’t let any details slip through the cracks.
    3. Some may mistake their numerous, detailed questions as a trait of a perfectionist, which can be the case, but not always. Accuracy can be misinterpreted as perfection. If you’re detail-oriented, don’t let the fear of appearing as a perfectionist keep you from doing quality work.
    4. Then, they reread it and check it for typos and grammatical errors, put it down, check it for context and completion, and repeat. They may do this again and again until they arrive at a product they feel good about. 
    1. Why do they follow these nouns? Sometimes it is imperative for them to follow the nouns they modify. For example, there's a difference between "proper reptiles" and "reptiles proper"
    1. It's definition 6 from Merriam-Webster: 6 : strictly limited to a specified thing, place, or idea

      Thanks for pointing to this! There are so many different meanings/senses of "proper". That's the one!

    2. It means the booth specifically, without any extra bits. By way of example: "Times Square" might often be used to refer to the area around Times Square, but may include things which are not actually part of the Square. To narrow such a usage, one might say "I mean only the actual Times Square" or "I mean Times Square proper."
    1. Unfortunately, regex syntax is not really standardized... there are many variants, which differ, among other things, in which "special characters" need \ and which do not. In some it's even configurable or depends on switches (as in GNU grep, which lets you switch between three different regex dialects).
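
      GNU grep's dialects (-G basic, -E extended, -P Perl-compatible) illustrate the point: the same "one or more" quantifier needs a backslash in BRE but not in ERE. A quick sketch, assuming GNU grep:

```shell
# BRE (-G): + is literal unless escaped, so \+ means "one or more".
printf 'aab\n' | grep -G 'a\+b'
# ERE (-E): + is special without the backslash.
printf 'aab\n' | grep -E 'a+b'
```
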
    1. Your concern over whether or not I care about a couple of characters is weird: I observed that the sed expression worked both with and without /P2/q on my system; that's it. I was curious about something and wanted to share what I found.