set( produce((state) => { state.lush.forest.contains = null }) ),
- Aug 2022
-
-
-
-
Don't disregard it because it's cute.
-
-
gist.github.com gist.github.com
-
stackoverflow.com stackoverflow.com
-
A related technique is git submodules, but they come with annoying caveats (for example people who clone your repository won't clone the submodules unless they call git clone --recursive),
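That caveat can be sketched end-to-end with throwaway local repositories (paths here are temp directories; `protocol.file.allow=always` is needed on newer Git to use local-path submodules at all):

```shell
# Build a tiny superproject with one submodule, then compare clones.
tmp=$(mktemp -d)

git init -q "$tmp/sub"
git -C "$tmp/sub" -c user.email=a@b -c user.name=a commit -q --allow-empty -m sub

git init -q "$tmp/super"
git -C "$tmp/super" -c protocol.file.allow=always submodule --quiet add "$tmp/sub" sub
git -C "$tmp/super" -c user.email=a@b -c user.name=a commit -q -m "add submodule"

# A plain clone records the submodule but leaves its directory empty:
git clone -q "$tmp/super" "$tmp/plain"
ls -A "$tmp/plain/sub"      # no output: nothing was checked out

# --recursive (or `git submodule update --init` afterwards) populates it:
git -c protocol.file.allow=always clone -q --recursive "$tmp/super" "$tmp/full"
ls -A "$tmp/full/sub"       # the .git gitlink file appears
```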
-
git-subtrac (from the author of the earlier git-subtree) seems to solve some of the problems with git submodules.
-
# Do this the first time:
$ git remote add -f -t master --no-tags gitgit https://github.com/git/git.git
$ git subtree add --squash --prefix=third_party/git gitgit/master

# In future, you can merge in additional changes as follows:
$ git subtree pull --squash --prefix=third_party/git gitgit/master

# And you can push changes back upstream as follows:
$ git subtree push --prefix=third_party/git gitgit/master

# Or possibly (not sure what the difference is):
$ git subtree push --squash --prefix=third_party/git gitgit/master
-
-
github.com github.com
-
I intend to keep it around and maybe fix up minor things here and there if needed, but don't really have any plans for new features at this point. I think it's great to give people the option to choose the Go port if the advanced features are what they're after.
Tags
Annotators
URL
-
-
-
You're set up!
-
-
tar -C /usr/local -xzf go1.19.linux-amd64.tar.gz
-
-
github.com github.com
-
Why another tool?: At the moment of writing there exists no proper platform-independent GUI dialog tool which is bomb-proof in its output and exit code behavior
-
-
the breaking change documentation
-
-
github.com github.com
-
nodemon will automatically know how to run the script even though out of the box support for processing scripts
even though out of the box support for processing scripts ?
-
-
github.com github.com
-
doesn't know if .env is a hidden file with no extension or a *.env without a filename.
-
-
nodemon.io nodemon.io
-
depended on about 3 million projects
depended on by about 3 million projects
-
-
stackoverflow.com stackoverflow.com
-
The vendor prefix (vnd.) indicates that it is custom for this vendor.
-
The +json indicates that it can be parsed as JSON, but the media type should define further semantics on top of JSON.
-
-
github.com github.com
-
then two different listeners/renderers switching magically between each other based on the header being present or not, without the end user being informed or clear about this
-
Thus my docs recommendation of

public function beforeFilter(Event $event)
{
    // do not render out the now inconsistent one for is(json)
    if (!$this->request->is('jsonapi')) {
        throw new NotFoundException('Invalid access, use application/vnd.api+json for Content-Type and Accept.');
    }
}

to specifically only whitelist the desired jsonapi for the general use case.
-
A default baked app has all those included. That's why I am saying this - it is by default an issue we should and need to address :)
-
-
cheatsheetseries.owasp.org cheatsheetseries.owasp.org
-
cheatsheetseries.owasp.org cheatsheetseries.owasp.org
-
If you're using JavaScript for writing to a HTML Attribute, look at the .setAttribute and [attribute] methods which will automatically HTML Attribute Encode. Those are Safe Sinks as long as the attribute name is hardcoded and innocuous, like id or class.
-
If you're using JavaScript for writing to HTML, look at the .textContent attribute as it is a Safe Sink and will automatically HTML Entity Encode.
-
-
-
www.oauth.com www.oauth.com
-
In a clickjacking attack, the attacker creates a malicious website in which it loads the authorization server URL in a transparent iframe above the attacker’s web page. The attacker’s web page is stacked below the iframe, and has some innocuous-looking buttons or links, placed very carefully to be directly under the authorization server’s confirmation button. When the user clicks the misleading visible button, they are actually clicking the invisible button on the authorization page, thereby granting access to the attacker’s application. This allows the attacker to trick the user into granting access without their knowledge.
Maybe browsers should prevent transparent iframes?! Most people would never suspect this is even possible.
-
-
-
store.steampowered.com store.steampowered.com
-
it's also one of the smartest games I've ever played and I can't recommend it enough if you enjoy system-driven narrative, which it handles exquisitely.
.
-
The Quiet Sleep has 'cult classic' written all over it. It uses strategy, management, and tower defence mechanics to take you inside someone's head in a way that I don't think has ever been done before. It's really a bold experiment, and you'll be glad you played it.
.
-
-
techcommunity.microsoft.com techcommunity.microsoft.com
-
Well, I would like to express my huge concern regarding the withdrawal of support for the SMB 1.0 network protocol in Windows 11 and future versions of the Microsoft OS, as there are many, many users who need this communication protocol, especially home users: there are hundreds of thousands of products running embedded Linux that still use the SMB 1.0 protocol, and many devices, such as media players and NAS, have been discontinued and their manufacturers no longer update their firmware.
-
-
-
With Windows 10 version 1511, support for SMBv1 and thus NetBIOS device discovery was disabled by default. Depending on the actual edition, later versions of Windows starting from version 1709 ("Fall Creators Update") do not allow the installation of the SMBv1 client anymore. This causes hosts running Samba not to be listed in the Explorer's "Network (Neighborhood)" views.
.
-
Since NetBIOS discovery is not supported by Windows anymore, wsdd makes hosts appear in Windows again using the Web Service Discovery method.
.
-
-
askubuntu.com askubuntu.com
-
Windows 10 if configured the way Microsoft wants you to configure it by default will never be able to "discover" your Ubuntu samba shares.
.
-
-
www.linuxquestions.org www.linuxquestions.org
-
to see the changes the commands make. Among the commands, I'd like to use useradd, userdel, usermod, groupadd, groupmod, & groupdel. And, as I'm guessing you are understanding, these are just the ones I've read about today. If I can get away without modifying any files directly, I'd rather be able to do that because it means I'll have a strong grasp of the commands, and I'd be able to learn the editing of smb.conf (& the other files) by seeing how it/they change as I use the commands.
.
-
I'm trying to learn enough about Samba that I'm able to do complete administration from the command line. That's a big task, I know, like learning DOS when all I know is French (I know far more DOS than French, but that's the idea).
.
-
I have definitely looked at some of the Samba.org instructions. The problem is mine - I'm either too busy dealing with the kids in the morning, or too tired in the evenings, to be able to - within my realm of patience - find what I need, implement it, test it, and confirm that it works or try something else. Finding it, and recognizing that I've found it, is usually the hard part. That's why a book does me worlds of good - I can read it during the work day when I'm taking a few minutes break, and it's uninterrupted concentration time.
.
-
-
-
sudo usermod -aG sambashare $USER
-
-
github.com github.com
-
Extensions from Ruby are noted in the following list.
-
-
-
github.com github.com
-
Component is not maintained anymore.
Tags
Annotators
URL
-
-
code.visualstudio.com code.visualstudio.com
-
The custom title bar has been a success on Windows, but the customer response on Linux suggests otherwise. Based on feedback, we have decided to make this setting opt-in on Linux and leave the native title bar as the default. The custom title bar provides many benefits including great theming support and better accessibility through keyboard navigation and screen readers. Unfortunately, these benefits do not translate as well to the Linux platform. Linux has a variety of desktop environments and window managers that can make the VS Code theming look foreign to users.
Tags
Annotators
URL
-
-
www.reddit.com www.reddit.com
-
convert to URL query parameters with the qs library
-
-
stackoverflow.com stackoverflow.com
-
If you insist on having the user id in the version table, you can do this:

ActiveRecord::Base.transaction do
  @user.save!
  @user.versions.last.update_attributes!(:whodunnit => @user.id)
end
Not ideal... but we can't set it any earlier because we don't know the id until after the save
-
-
www.roboleary.net www.roboleary.net
-
dev.to dev.to
-
Wouldn't it be easier to do a squash merge instead? git merge --squash [branch]
-
It would, if the assumption that every commit in the chain is what you want holds; rebase keeps the power available if you want to cherry-pick commits or use any of the other crazy features it seems to let you use.
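The squash-merge alternative mentioned here can be sketched in a throwaway repository (branch name `feature` is just an example):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "init"

# A feature branch with two work-in-progress commits:
git checkout -q -b feature
echo one  > f.txt && git add f.txt
git -c user.email=a@b -c user.name=a commit -q -m "wip 1"
echo two >> f.txt && git add f.txt
git -c user.email=a@b -c user.name=a commit -q -m "wip 2"

# --squash stages the combined diff without committing or recording a merge...
git checkout -q -
git merge --squash -q feature
# ...so the whole branch lands as one commit:
git -c user.email=a@b -c user.name=a commit -q -m "feature, squashed"
n=$(git rev-list --count HEAD)
echo "$n"   # 2 (init + the squashed commit)
```

The trade-off, as the reply notes, is that the individual `wip` commits are gone from `main`'s history.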
-
-
stackoverflow.com stackoverflow.com
-
NOTE: Setting both is not necessarily needed, but some programs may not use the more-correct VISUAL. See VISUAL vs. EDITOR.
.
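The usual shell-profile incantation, for reference (`vim` is just a placeholder; substitute your editor):

```shell
# Set the more-correct VISUAL, and mirror it into EDITOR for
# programs that only check the older variable.
export VISUAL=vim
export EDITOR="$VISUAL"
```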
-
-
store.steampowered.com store.steampowered.com
-
Would be more of a neutral rating for me but seeing that I have only two options (or no review at all), I'll go with the upvote for encouragement as they do appear to be putting some effort into the game.
.
-
-
stackoverflow.com stackoverflow.com
-
This is actually the most correct answer, because it explains why people (like me) are suddenly seeing this warning after nearly a decade of using git. However, it would be useful if some guidance were given on the options offered. For example, pointing out that setting pull.ff to "only" doesn't prevent you doing a "pull --rebase" to override it.
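A quick sketch of that guidance (an isolated throwaway HOME is used here so the `--global` write doesn't touch real config):

```shell
# Throwaway HOME so the global config change is disposable
export HOME=$(mktemp -d)

# Declare intent up front, which silences the divergent-branches warning:
git config --global pull.ff only

# A plain `git pull` will now refuse to create a merge commit when the
# branches have diverged, but an explicit `git pull --rebase` still
# overrides pull.ff=only for that invocation.
v=$(git config --global pull.ff)
echo "$v"   # only
```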
-
I appreciate the time and effort you put into your answer, but frankly this is still completely incomprehensible to me.
-
one should not upgrade a production environment without extensive testing. I prefer to not upgrade prod at all. Instead, I create a new instance with latest everything, host my apps there, test everything out, and then make it production.
Tags
- surprising behavior
- confusing for newcomers
- clarification
- despite attempting careful/detailed explanation, audience still finds it too incomprehensible/confusing
- testing
- appreciation
- deployment
- detailed explanation
- migration path
- development vs. production
- don't want to be surprised
- cautious
Annotators
URL
-
-
github.com github.com
-
Beyond memory leaks, it's also really useful to be able to re-run a test many times to help with tracking down intermittent failures and race conditions.
-
I don't understand the hesitation here to accept a really useful addition to rspec.
- Maintenance burden.
- Foreseen internal changes required to do it.
- Unforeseen internal changes required to do it.
- Formatter changes to handle a new output status for a spec that passed and failed.
- It's simply not a previously designed use case of RSpec. It will be hacky to implement.
-
We already have a very wide configuration API. The further we expand it the more unwieldy it becomes for users. At this point we generally require new features to be implemented first as extension gems, and then to see support, before considering including them in core.
-
I created a gem called rspec_n that installs an executable that will do this. It will re-run the test suite N times by default. You can make it stop as soon as it hits a failing iteration via the -s cli option. It will display other stats about the iterations as well.
-
-
github.com github.com
-
You can pass any options to puma via the server setting Capybara.server = :puma, { queue_requests: true }
-
This very much appears to be a bug or design flaw in puma - The fact that a persistent connection ties up a thread on the chance a request might come over that connection seems like not great behavior. This would really only be an issue when puma is run with no workers (which wouldn't be done in production) but it still seems a little nuts.
-
-
www.statesmanjournal.com www.statesmanjournal.com
-
"It's difficult because we can't tell people exactly what's allowed and not allowed," said Chris Castelli, a manager for the Department of State Lands. "It's even tougher for law enforcement that gets called out to very heated disputes and doesn't have strict laws they can apply."
-
-
www.ncsl.org www.ncsl.org
-
the declaration is statutory
what does this mean here? what is being clarified or contrasted here? statutory as opposed to what?
-
The extent of public use varies, with Montana affording the greatest access. Rafters can float and fishermen can wade in rivers that flow through private land so long as they enter from public property. They can even leave the river and walk up to the high-water mark.
-
-
stackoverflow.com stackoverflow.com
-
It sounds like the OP's needs have been met, but for future explorers, here's some tools to tell if something is clickable.
-
-
github.com github.com
-
I understand that you are bound to the specification. And also understand that it could take months to decide whether the specification should be changed.
.
-
-
stackoverflow.com stackoverflow.com
-
stackoverflow.com stackoverflow.com
-
I thought something like git rev-parse --abbrev-ref origin/HEAD would work, but that just seems to show what the default branch was of the repo it was cloned from, at the time of cloning, provided that the remote we cloned from was named origin.
good enough for my purposes (local git scripts/aliases)!
⟫ cat .git/refs/remotes/origin/HEAD
ref: refs/remotes/origin/main
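The cached-vs-fresh distinction discussed below can be seen with two throwaway local repos (`main` is set explicitly so the output is predictable on any Git version):

```shell
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" symbolic-ref HEAD refs/heads/main   # fix the default branch name
git -C "$tmp/origin" -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git clone -q "$tmp/origin" "$tmp/clone"
cd "$tmp/clone"

# Cached at clone time (the same info as .git/refs/remotes/origin/HEAD):
cached=$(git rev-parse --abbrev-ref origin/HEAD)
echo "$cached"   # origin/main

# Fresh answer straight from the remote (reflects any later rename upstream):
fresh=$(git ls-remote --symref origin HEAD | awk 'NR==1 {print $2}')
echo "$fresh"    # refs/heads/main
```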
-
This is a terrific answer! Without something like locks or transactions, we indeed will only ever be able to get an updated-as-of-when-the-repository-just-told-us point of accuracy that gets stale if changed in the time since then
-
It's a great way to test various limits. When you think about this even more, it's a little mind-bending, as we're trying to impose a global clock ("who is the most up to date") on a system that inherently doesn't have a global clock. When we scale time down to nanoseconds, this affects us in the real world of today: a light-nanosecond is not very far.
-
Which of these to use depends on the result you want. Note that by the time you get the answer, it may be incorrect (out of date). There is no way to fix this locally. Using some ESP,2 imagine the remote you're contacting is in orbit around Saturn. It takes light about 8 minutes to travel from the sun to Earth, and about 80 to travel from the sun to Saturn, so depending on where we are orbitally, they're 72 to 88 minutes away. Any answer you get back from them will necessarily be over an hour out of date.
-
When we have our git rev-parse examine our Git repository to view our origin/HEAD, what we see is whatever we have stored in this origin/HEAD. That need not match what is in their HEAD at this time. It might match! It might not.
-
There are many questions we can ask and answer about branch names. Each one is specific to one particular repository because all branch names are local to that particular repository. Any changes anyone makes in that repository affect only that one repository, at least at the time they make them.
which assumption? well, people make the assumption that our local repo should know some fact about the remote repo, like its default branch, without actually asking the remote about itself
-
The main problem here is that the problem itself is a little bit poorly defined.
-
Exaggeration of System Parameters
-
Using git remote set-head has the advantage of updating a cached answer, which you can then use for some set period. A direct query with git ls-remote has the advantage of getting a fresh answer and being fairly robust. The git remote show method seems like a compromise between these two that has most of their disadvantages with few of their advantages, so that's the one I would avoid.
Tags
- good point
- do pros outweigh/cover cons?
- testing
- making too many assumptions
- seemed like a simple question at first
- caching
- may be out of sync
- caveat
- git
- good enough
- challenging one's assumptions
- not necessarily the case
- considering the extreme case
- taking things to extremes
- may be stale
- interesting way of thinking about it
- considering the extreme case: long times
- interesting idea
- in sync
Annotators
URL
-
-
stackoverflow.com stackoverflow.com
-
You can use the lsblk command. If the disk is already unlocked, it will display two lines: the device and the mapped device, where the mapped device should be of type crypt.

# lsblk -l -n /dev/sdaX
sdaX              253:11 0 2G 0 part
sdaX_crypt (dm-6) 253:11 0 2G 0 crypt

If the disk is not yet unlocked, it will only show the device.

# lsblk -l -n /dev/sdaX
sdaX              253:11 0 2G 0 part
-
-
askubuntu.com askubuntu.com
-
Bear in mind that lsof doesn't seem to present an easy solution because, once the device is disconnected, the associated names provided by lsof no longer include the name of the disconnected device.
-
-
askubuntu.com askubuntu.com
-
Yes, this happens when luks encrypted device was not cleanly deactivated with cryptsetup close. You can try to remove the mapping using dmsetup remove /dev/mapper/luks-... if you want to avoid rebooting.
-
-
cooking.stackexchange.com cooking.stackexchange.com
-
It is a good thing that you wrote up your assumptions, this helps greatly with explanations. To look at each:
-
-
www.baeldung.com www.baeldung.com
-
We can use the readlink command to resolve relative paths, including symlinks. It uses the -f flag to print the full path:
-
-
stackoverflow.com stackoverflow.com
-
$0 would be OK in most cases, some exceptions are, for instance, when the script you're executing is aliased (through alias in .bash_profile). You should really use $BASH_SOURCE variable, instead of $0.
-
Using $0 does not work when the script is run using source script or . script; the name of the script is not available.
-
MY_PATH=$(cd "$MY_PATH" && pwd) # absolutized and normalized
scripting: finding absolute path
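Putting the pieces together, a sketch of the usual pattern (the temp file below exists only to exercise it):

```shell
# A script that prints its own absolute, normalized directory
cat > /tmp/where_demo.sh <<'EOF'
MY_PATH="$(dirname -- "${BASH_SOURCE[0]:-$0}")"   # BASH_SOURCE survives sourcing
MY_PATH="$(cd -- "$MY_PATH" && pwd)"              # absolutized and normalized
echo "$MY_PATH"
EOF

demo_out=$(bash /tmp/where_demo.sh)
echo "$demo_out"   # /tmp
```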
-
-
bbs.archlinux.org bbs.archlinux.org
-
make it "strace -f chroot …" to cover child processes and skip the arch-chroot noise.
Tags
Annotators
URL
-
-
stackoverflow.com stackoverflow.com
-
you can also replicate the bind:this syntax if you please: Wrapper.svelte <script> let root export { root as this } </script> <div bind:this={root} />
This lets the caller use it like this:
<Wrapper bind:this={root} />
in the same way we can already do this with elements:
<div bind:this=
-
- Jul 2022
-
store.steampowered.com store.steampowered.com
-
Patrician IV is an overhauling upgrade to Patrician III; so if you have not played the previous games in the Patrician series, starting with IV is really all you need. Also, the game of Patrician is very straightforward and addicting, so playing previous versions won't offer you anything unseen in Patrician IV.
-
Onto the game itself.
onto
-
-
catamphetamine.github.io catamphetamine.github.io
-
Windows 10 currently (01.01.2020) doesn't support Unicode country flags, and displays two-letter country codes instead of emoji flag images.
-
-
yarnpkg.com yarnpkg.com
-
Don't worry if your project isn't quite ready for Plug'n'Play just yet! This guide will let you migrate without losing your node_modules folder. Only in a later optional section we will cover how to enable PnP support, and this part will only be recommended, not mandatory. Baby steps!
-
-
mywiki.wooledge.org mywiki.wooledge.org
-
shopt -s lastpipe
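For context, `lastpipe` makes the final command of a pipeline run in the current shell, so variables it sets survive the pipeline. This is a bash-only sketch; in interactive shells job control must be off for it to take effect:

```shell
shopt -s lastpipe   # bash >= 4.2; no effect while job control is active

count=0
printf 'a\nb\nc\n' | while read -r _; do
  count=$((count + 1))
done
# Without lastpipe, the while loop runs in a subshell and count stays 0.
echo "$count"   # 3
```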
-
-
stackoverflow.com stackoverflow.com
-
Process Substitution is something everyone should be using regularly! It is super useful. I do something like vimdiff <(grep WARN log.1 | sort | uniq) <(grep WARN log.2 | sort | uniq) every day.
underused
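A self-contained flavor of the same trick: `<(...)` exposes each command's output as a readable pseudo-file, so tools that want two file arguments can compare command outputs directly (bash; `comm` needs sorted input, which these literal lists already are):

```shell
# Lines common to both "files", without any temp files:
common=$(comm -12 <(printf 'a\nb\nc\n') <(printf 'b\nc\nd\n'))
echo "$common"   # b and c are in both inputs
```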
-
-
stackoverflow.com stackoverflow.com
-
Always use a while read construct:

find . -name "*.txt" -print0 | while read -d $'\0' file
do
    …code using "$file"
done

The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.
-
Whatever you do, don't use a for loop:

# Don't do this
for file in $(find . -name "*.txt")
do
    …code using "$file"
done

Three reasons:
- For the for loop to even start, the find must run to completion.
- If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.
- Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.
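A bash variant of the safe loop that additionally keeps the loop body in the current shell (so variables it sets survive), using process substitution and null-delimited reads; the directory and files here are throwaway:

```shell
tmp=$(mktemp -d)
touch "$tmp/plain.txt" "$tmp/with space.txt"

count=0
while IFS= read -r -d '' file; do
  count=$((count + 1))        # "$file" is safe even with whitespace in it
done < <(find "$tmp" -name '*.txt' -print0)
echo "$count"   # 2
```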
-
-
stackoverflow.com stackoverflow.com
-
$0 can be set to an arbitrary value by the caller. On the flip side, $BASH_SOURCE can be empty, if no named file is involved; e.g.: echo 'echo "[$BASH_SOURCE]"' | bash
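The sourcing case in one runnable sketch (the file path is a throwaway):

```shell
cat > /tmp/bs_demo.sh <<'EOF'
echo "dollar0=$0"
echo "source=${BASH_SOURCE[0]}"
EOF

# When sourced, $0 stays the invoking shell's name,
# while BASH_SOURCE names the sourced file itself.
bs_out=$(source /tmp/bs_demo.sh)
echo "$bs_out"
```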
-
While this warning is helpful, it could be more precise, because you won't necessarily get the first element: it is specifically the element at index 0 that is returned, so if the first element has a higher index (which is possible in Bash) you'll get the empty string; try a[1]=hi; echo "$a".
-
-
www.matteomattei.com www.matteomattei.com
Tags
Annotators
URL
-
-
docs.npmjs.com docs.npmjs.com
-
Pre and post commands with matching names will be run for those as well (e.g. premyscript, myscript, postmyscript)
Could potentially be confusing behavior if running a script does something extra and you don't know why. They might look at the definition of myscript and not see the additional commands, and wonder how/why they are running. The premyscript might be lost in a long unsorted script list.
-
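A sketch of the implicit hook pairing (script names here are hypothetical):

```json
{
  "scripts": {
    "premyscript": "echo runs-before",
    "myscript": "echo runs",
    "postmyscript": "echo runs-after"
  }
}
```

Running `npm run myscript` executes all three in order, even though nothing in `myscript` itself references the other two.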
Since npm@1.1.71, the npm CLI has run the prepublish script for both npm publish and npm install, because it's a convenient way to prepare a package for use (some common use cases are described in the section below). It has also turned out to be, in practice, very confusing. As of npm@4.0.0, a new event has been introduced, prepare, that preserves this existing behavior. A new event, prepublishOnly has been added as a transitional strategy to allow users to avoid the confusing behavior of existing npm versions and only run on npm publish (for instance, running the tests one last time to ensure they're in good shape).
Tags
Annotators
URL
-
-
www.keithcirkel.co.uk www.keithcirkel.co.uk
-
Should I ever change my stance on this, I will immediately update this post.
-
-
www.hardscrabble.net www.hardscrabble.net
-
This option wasn’t offered by the library, but that doesn’t have to stop us. Isn’t that fun?
-
Here’s a quick blog post about a specific thing (making FactoryBot.lint more verbose) but actually, secretly, about a more general thing (taking advantage of Ruby’s flexibility to bend the universe to your will). Let’s start with the specific thing and then come back around to the general thing.
-
-
writingexplained.org writingexplained.org
-
Steer, of course, can also be a noun that refers to male cattle. This meaning is unrelated to the expression steer clear.
-
-
github.com github.com
-
As this stands, the specs could pass w/o the formatter.output == new_formatter.output check.
-
-
www.imdb.com www.imdb.com
-
Brilliant. Ignore Critics. Do watch it!
-
-
-
Oh I see what's happening: we actually have specs for this but they're not correct
-
-
stackoverflow.com stackoverflow.com
-
The amount of time wasted on this is ridiculous. Thanks. This is about the only thing that worked. Why in the world this wouldn't "just work" by defining the default url options in Rails config/environments/test.rb is beyond me.
-
-
github.com github.com
-
The goal of this project is to have a single gem that contains all the helper methods needed to resize and process images. Currently, existing attachment gems (like Paperclip, CarrierWave, Refile, Dragonfly, ActiveStorage, and others) implement their own custom image helper methods. But why? That's not very DRY, is it? Let's be honest. Image processing is a dark, mysterious art. So we want to combine every great idea from all of these separate gems into a single awesome library that is constantly updated with best-practice thinking about how to resize and process images.
-
-
stackoverflow.com stackoverflow.com
-
It is sublimely annoying to have to configure the exact same parameters in config/environments, spec/spec_helper.rb and again here... all in marginally different ways (with 'http://' or without, with port number or port specified separately). Even Capybara.configure syntax can't seem to stay consistent to itself between versions...
-
-
www.reddit.com www.reddit.com
-
It really only takes one head scratching issue to suck up all the time it saves you over a year, and in my experience these head scratchers happen much more often than once a year. So in that sense it's not worth it, and the first time I run into an issue with it, I disable it completely.
-
It feels like « removing spring » is one of those unchallenged truths like « always remove Turbolinks » or « never use fixtures ». It also feels like a confirmation bias when it goes wrong.
"unchallenged truths" is not really accurate. More like unchallenged assumption.
-
I may have had to turn it off and on again a few times as a debugging technique when I had no other ideas on what to do.
-
-
github.com github.com
-
Thanks for making your first contribution to Cucumber, and welcome to the Cucumber committers team! You can now push directly to this repo and all other repos under the cucumber organization! In return for this generous offer we hope you will:
- Continue to use branches and pull requests. When someone on the core team approves a pull request (yours or someone else's), you're welcome to merge it yourself.
- Commit to setting a good example by following and upholding our code of conduct in your interactions with other collaborators and users.
- Join the community Slack channel to meet the rest of the team and make yourself at home.
- Don't feel obliged to help, just do what you can if you have the time and the energy.
- Ask if you need anything. We're looking for feedback about how to make the project more welcoming, so please tell us!
-
-
stackoverflow.com stackoverflow.com
-
A more conservative workaround is find the gems that are causing issues and list them on the top of your Gemfile.
good solution ... except that it didn't help/work
-
A good way to debug what is causing these is put this at the top of your Gemfile:
-
-
graceful.dev graceful.dev
-
to deepen and mature your coding practice
-
-
-
-
delegate :name, :to => :department, :prefix => true, :allow_nil => true
-
-
-
avdi.codes avdi.codes
-
How much do we really need constants, anyway?
-
-
-
nginx.org nginx.org
-
These directives are inherited from the previous configuration level if and only if there are no proxy_set_header directives defined on the current level.
This conditional rule for inheritance is different than most other apps/contexts. Usually it just always inherits, and any local config at the current level gets merged with or overrides what is inherited.
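A sketch of the pitfall (header names and the upstream are made up for illustration):

```nginx
server {
    proxy_set_header X-Request-Id $request_id;

    location /api/ {
        # Defining ANY proxy_set_header here stops inheritance entirely:
        proxy_set_header Host $host;
        # so X-Request-Id is no longer sent unless re-declared at this level:
        proxy_set_header X-Request-Id $request_id;
        proxy_pass http://backend;
    }
}
```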
-
-
github.com github.com
-
By default, this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d.
-
-
stackoverflow.com stackoverflow.com
-
It is "guaranteed" as long as you are on the default network. 172.17.0.1 is no magic trick, but simply the gateway of the network bridge, which happens to be the host. All containers will be connected to bridge unless specified otherwise.
-
For example I use docker on windows, using docker-toolbox (OG) so that it has less conflicts with the rest of my setup and I don't need HyperV.
-
-
www.overcoming.software www.overcoming.software
-
Even with OverloadedRecordDot, Haskell’s records are still bad, they’re just not awful.
-
-
www.freecodecamp.org www.freecodecamp.org
-
You are context switching between new features and old commits that still need polishing.
-
If the code review process is not planned right, it could have more cost than value.
-
-
smartbear.com smartbear.com
-
Defects found in peer review are not an acceptable rubric by which to evaluate team members. Reports pulled from peer code reviews should never be used in performance reports. If personal metrics become a basis for compensation or promotion, developers will become hostile toward the process and naturally focus on improving personal metrics rather than writing better overall code.
-
-
www.codependentcodr.com www.codependentcodr.com
-
github.com github.com
-
raise StandardError.new "No authentication is configured for ActiveStorage"
forces the issue by requiring end-dev to edit/override this method to avoid getting this error
-
# ActiveStorage defaults to security via obscurity approach to serving links
# If this is acceptable for your use case then this authenticable test can be
# removed. If not then code should be added to only serve files appropriately.
# https://edgeguides.rubyonrails.org/active_storage_overview.html#proxy-mode
def authenticated?
  raise StandardError.new "No authentication is configured for ActiveStorage"
end
-
-
github.com github.com
-
I'm thinking it might be worth a separate issue. I'm not sure though, so for now mentioning it here.
-
Interestingly, Rails doesn't see this in their test suite because they set this value during setup:
Tags
- testing: avoid testing implementation details
- testing: levels of tests: higher level better than stubbing a lot of internals
- testing: avoid unnecessarily testing things in too much isolation, in a different way than the code is actually used (should match production)
- whether to create a separate issue
Annotators
URL
-
-
github.com github.com
-
Stop autoclosing of PRs

While the idea of cleaning up the the PRs list by nudging reviewers with the stale message and closing PRs that didn't got a review in time cloud work for the maintainers, in practice it discourages contributors to submit contributions. Keeping PRs open and not providing feedback also doesn't help with contributors motivation, so while I'm disabling this feature of the bot we still need to come up with a process that will help us to keep the number of PRs in check, but celebrate the work contributors already did instead of ignoring it, or dismissing in the form of a "stale" alerts, and automatically closing PRs.
Yes!! Thank you!!
typo: cloud work -> could work
-
-
github.com github.com
-
I don't understand why it should be so hard to keep issues open / reopen them. That's just going to cause people to open a duplicate issue/PR — or (if they notice in time) cause people to add extra "not stale" noise when the bot warns it's about to be closed. Wouldn't it be preferable to keep the discussion together in one place instead of spreading across duplicate issues? (Similarly, moving the meta conversation about an issue out to a completely separate system (Discord) seems like the wrong direction, because it wouldn't be visible to/discoverable by those arriving at the closed issue.) I get how it's useful to have stale issues not cluttering the list. But if interest/activity later picks up again, then "stale" is no longer accurate and its status should be automatically updated to reflect its newfound freshness... like it did back here:
-
ActiveSupport.on_load :active_storage_blob do
  def accessible_to?(accessor)
    attachments.includes(:record).any? { |attachment| attachment.accessible_to?(accessor) } || attachments.none?
  end
end

ActiveSupport.on_load :active_storage_attachment do
  def accessible_to?(accessor)
    record.try(:accessible_to?, accessor)
  end
end

ActiveSupport.on_load :action_text_rich_text do
  def accessible_to?(accessor)
    record.try(:accessible_to?, accessor)
  end
end

module ActiveStorage::Authorize
  extend ActiveSupport::Concern

  included do
    before_action :require_authorization
  end

  private

  def require_authorization
    head :forbidden unless authorized?
  end

  def authorized?
    @blob.accessible_to?(Current.identity)
  end
end

Rails.application.config.to_prepare do
  ActiveStorage::Blobs::RedirectController.include ActiveStorage::Authorize
  ActiveStorage::Blobs::ProxyController.include ActiveStorage::Authorize
  ActiveStorage::Representations::RedirectController.include ActiveStorage::Authorize
  ActiveStorage::Representations::ProxyController.include ActiveStorage::Authorize
end
Interesting, rather clean approach, I think
-
I'm partial to the solution originally proposed. It follows a pattern already established in Rails. For example, using an application-specific ApplicationStorageController which inherits from ActiveStorage::BaseController is very similar to the ApplicationRecord which inherits from ActiveRecord::Base or ApplicationJob which inherits from ActiveJob::Base.
-
I think this is important, and I'd love to help making ActiveStorage a more secure place.
-
it should be normal for production apps to add authentication and authorization to their ActiveStorage controllers. Unfortunately, there are 2 possible ways to achieve it currently: Not drawing ActiveStorage routes and do everything by yourself Override/monkey patch ActiveStorage controllers None of them is ideal because in the end you can't benefit from Rails upgrades (bug fixes, etc) so the intention of this PR is to let people define a parent controller (inspired by Devise, maybe @carlosantoniodasilva can tell us his experience on this feature) so that people can add authentication and authorization in a single place and still benefit from the default controllers.
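The Devise-inspired idea is to resolve the engine's base controller from a configurable name at load time, so apps can splice their own authenticated controller into the inheritance chain. A minimal plain-Ruby sketch of that mechanism — `Engine`, `parent_controller`, and the class names here are all hypothetical, not the actual API proposed in the PR:

```ruby
# Hypothetical engine namespace with a configurable parent controller name.
module Engine
  class << self
    attr_accessor :parent_controller
  end
  self.parent_controller = "BaseController"
end

class BaseController; end

# The app defines its own controller with authentication concerns...
class AuthenticatedController < BaseController
  def authorized?
    true
  end
end

# ...and points the engine at it.
Engine.parent_controller = "AuthenticatedController"

# The engine resolves the constant lazily, when it defines its controllers,
# so the app's class ends up in the ancestor chain.
BlobsController = Class.new(Object.const_get(Engine.parent_controller)) do
  def show
    authorized? ? :ok : :forbidden
  end
end

BlobsController.new.show # => :ok
```

Because the constant is looked up by name only when the engine's controllers are defined, the app can set the parent in an initializer without monkey patching anything.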
-
-
stackoverflow.com
-
Create a new controller to override the original: app/controllers/active_storage/blobs_controller.rb
Original comment:
I've never seen monkey patching done quite like this.
Usually you can't just "override" a class. You can only reopen it. You can't change its superclass. (If you needed to, you'd have to remove the old constant first.)
Rails has already defined ActiveStorage::BlobsController!
I believe the only reason this works:
class ActiveStorage::BlobsController < ActiveStorage::BaseController
is because it's reopening the existing class. We don't even need to specify the
< Base
class. (We can't change it, in any case.) They do the same thing here: https://github.com/ackama/rails-template/pull/284/files#diff-2688f6f31a499b82cb87617d6643a0a5277dc14f35f15535fd27ef80a68da520
Correction: I guess this doesn't actually monkey patch it. I guess it really does override the original from activestorage gem and prevent it from getting loaded. How does it do that? I'm guessing it's because activestorage relies on autoloading constants, and when the constant
ActiveStorage::BlobsController
is first encountered/referenced, autoloading looks in paths in a certain order, and finds the version in the app's app/controllers/active_storage/blobs_controller.rb
before it ever gets a chance to look in the gem's paths for that same path/file. If instead of using autoloading, it had used
require_relative
(or even require
?? but that might have still found the app-defined version earlier in the load path), then it would have loaded the controller from activestorage first, and then (possibly) loaded the controller from our app, which (probably) would have reopened it, as I originally commented.
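The reopening behaviour described above is easy to demonstrate in plain Ruby: a second `class` statement for the same constant with a compatible superclass reopens the class, while a mismatched superclass raises:

```ruby
class Base; end

class Widget < Base
  def a; "a"; end
end

# Same constant, same superclass: this REOPENS Widget rather than replacing it.
class Widget < Base
  def b; "b"; end
end

w = Widget.new
w.a # => "a"  (original method still present)
w.b # => "b"  (new method added by reopening)

# Trying to "override" with a different superclass is an error:
begin
  class Widget < String; end
rescue TypeError => e
  e.message # => "superclass mismatch for class Widget"
end
```

This is why the app-level `ActiveStorage::BlobsController` only "wins" through autoload ordering, not by redefining the class.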
-
-
github.com
-
discuss.rubyonrails.org
-
Overriding the ActiveStorage controllers to add authentication or customize behavior is a bit tedious because it requires either: using custom routes, which means losing the nice url helpers provided by active storage copy pasting the routes in the application routes.rb, which is not very DRY.
-
-
github.com
-
-
This was a surprise to me, since we generally authenticate the record quite well, but then go on to do something like record.file.url in our view, generating a URL that is permanent and unauthenticated.
-
-
github.com
-
Compared to https://github.com/aki77/activestorage-validator, I slightly prefer this because:
- it has more users and has been battle-tested more
- it is more flexible: can specify `exclude` as well as `allow`
- it has more expansive Readme documentation
- it is mentioned by https://github.com/thoughtbot/paperclip/blob/master/MIGRATING.md#migrating-from-paperclip-to-activestorage
- it mentions security: whether or not it's needed, at least it makes an extra attempt to be secure by using an external tool to check content_type; https://github.com/aki77/activestorage-validator/blob/master/lib/activestorage/validator/blob.rb just uses `blob.content_type`, which I guess just trusts whatever ActiveStorage gives us (which seems fair too: perhaps this should be kicked up to them to be their concern)

In fact, it looks like ActiveStorage does do some kind of mime type checking...
activestorage-6.1.6/app/models/active_storage/blob/identifiable.rb
```
def identify_without_saving
  unless identified?
    self.content_type = identify_content_type
    self.identified = true
  end
end

def identify_content_type
  Marcel::MimeType.for download_identifiable_chunk, name: filename.to_s, declared_type: content_type
end
```
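The essence of what Marcel does there — checking the file's leading bytes and falling back to the declared type — can be sketched in a few lines. This is a toy covering just two signatures, not Marcel's actual tables or API:

```ruby
# Map a few well-known magic numbers to MIME types (illustrative subset).
MAGIC = {
  "\xFF\xD8\xFF".b => "image/jpeg",
  "\x89PNG".b      => "image/png",
}

# Sniff the actual type from leading bytes; fall back to the declared type
# when no signature matches (roughly what declared_type: does above).
def sniff(data, declared_type)
  MAGIC.each do |magic, type|
    return type if data.b.start_with?(magic)
  end
  declared_type
end

sniff("\xFF\xD8\xFF\xE0rest-of-jpeg".b, "application/octet-stream") # => "image/jpeg"
sniff("plain text".b, "text/plain")                                 # => "text/plain"
```

The point is that the sniffed type wins over whatever content type the uploader declared, which is exactly the property a security-minded validator wants.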
-
-
stackoverflow.com
-
Overall, there appears to be no MIME type image/jpg. Yet, in practice, nearly all software handles image files named "*.jpg" just fine.
Extension != MIME type
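The distinction is concrete in code: an extension must be mapped to a MIME type, and non-standard aliases like `image/jpg` normalized to the registered `image/jpeg`. A small illustrative sketch (the tables are a tiny subset, and the helper names are mine):

```ruby
# Extension -> canonical MIME type (illustrative subset, not a full registry).
EXT_TO_MIME = {
  ".jpg"  => "image/jpeg",
  ".jpeg" => "image/jpeg",
  ".png"  => "image/png",
}

# Non-standard aliases seen in the wild, normalized to registered types.
MIME_ALIASES = { "image/jpg" => "image/jpeg" }

def mime_for(filename)
  EXT_TO_MIME.fetch(File.extname(filename).downcase, "application/octet-stream")
end

def normalize(mime)
  MIME_ALIASES.fetch(mime, mime)
end

mime_for("photo.JPG")  # => "image/jpeg"
normalize("image/jpg") # => "image/jpeg"
```

Both `.jpg` and `.jpeg` map to the single registered type, which is why software that accepts "*.jpg" files works fine despite `image/jpg` not being a real MIME type.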
-
-
-
-
-
disqus.com
-
It really slows down your test suite accessing the disk.

So yes, in principle it slows down your tests. There is a "school of testing" where the developer should isolate the layer responsible for retrieving state, just set some state in memory, and test functionality (as with the Repository pattern). The thing is, Rails is tightly coupled with the implementation logic of state retrieval at the core level, and prefers the "school of testing" in which you couple logic with state retrieval to some degree.

A good example of this is how models are tested in Rails. You could build an entire test suite calling `FactoryBot.build`, never use `FactoryBot.create`, stub methods all around, and your tests would be lightning fast (like 5s to run the entire suite). This is highly unproductive to achieve, and I failed many times trying, because I was spending more time maintaining my tests than writing something productive for the business. Or you can take the more pragmatic route and save database records where it is too difficult to just 'build' the factory (e.g. controller tests, association tests, etc.).

Same, I would say, for saving the file to disk. Yes, you could just "not save the file to disk" and save a few milliseconds. But you will in future stumble upon scenarios where your tests are not passing because the file is not there (e.g. file-processing validations). Is it really worth it? I never worked on a project where saving a file to disk slowed down tests significantly enough to be an issue (and I work for a company whose core business is file uploading). Especially now that we have SSD drives in every laptop/server, it's blazing fast, so at best you would save one second for the entire test suite (given you call FactoryBot traits to set/store the file where it makes sense, not every time you build an object).
-
-
github.com
-
# Internal: This is how much Honeybadger cares about Rails developers. :)
:)
-
# Some Rails projects include ActionDispatch::TestProcess globally for the # use of `fixture_file_upload` in tests. This is a bad practice because it # includes other methods -- such as #session -- which override existing # methods on *all objects*.
-
-
github.com
-
# This ensures that the pid namespace is shared between the host # and the container. It's not necessary to be able to run spring # commands, but it is necessary for "spring status" and "spring stop" # to work properly. pid: host
-
-
unix.stackexchange.com
-
If you don't use an intermediate variable, you need to protect the / characters in the directory to remove so that they aren't treated as the end of the search text.
-
If the path in question is at the beginning of the PATH variable, you need to match the colon at the end. This is an annoying caveat which complicates easy generic manipulations of PATH variables.
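Both caveats — escaping the `/` characters and special-casing the colon at the start or end — disappear if you split on the separator instead of doing textual substitution. A sketch in Ruby (the helper name is mine), operating on a PATH-style string:

```ruby
# Remove a directory from a PATH-style string by splitting on ":",
# so "/" needs no escaping and the first/last entry needs no special-casing.
def remove_from_path(path, dir)
  path.split(":").reject { |entry| entry == dir }.join(":")
end

path = "/usr/local/bin:/opt/foo/bin:/usr/bin"
remove_from_path(path, "/opt/foo/bin")   # => "/usr/local/bin:/usr/bin"
remove_from_path(path, "/usr/local/bin") # => "/opt/foo/bin:/usr/bin"  (head entry: no stray colon)
```

Working on the list of entries rather than the raw string is the same trick regardless of language; in shell it usually means a loop over `IFS=:` fields instead of a single substitution.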
-
-
edgeguides.rubyonrails.org
-
I am Scared
Don't be :).
-
-
security.stackexchange.com
-
I'm fully serious: If your accounts and data are important, then just don't make such mistakes. Being careful is completely possible.
Being careful is completely possible.
-
-
www.reddit.com
-
I can't reverse it, but maybe somebody who understands how Chrome does the decryption can. The ability is there; it's not that Chrome can't decrypt them, it is that Chrome won't decrypt them due to false "security". And if Chrome actually, genuinely can no longer decrypt passwords after they have been restored from backup, then that is a shockingly bad bug in their password manager.
-
If your security locks you out of your own home just because you changed your trousers, that would be shockingly bad security. If your security permanently locks you out of your accounts because you restored your Chrome settings from backup, how is that any better?
-
-
stackoverflow.com
-
So the correct command to use is findmnt, which is itself part of the util-linux package and, according to the manual: is able to search in /etc/fstab, /etc/mtab or /proc/self/mountinfo
-
-
stackoverflow.com
-
Rails 3 seems is ignoring my rescue_from handler so I cannot test my redirect below.
I have a similar problem too:
404 errors raise
ActiveRecord::RecordNotFound
in the test
-
-
bdunagan.com
-
All seem focused on rendering the 404 page manually. However, I wanted to make rescue_from work. My solution is the catch-all route and raising the exception manually.
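`rescue_from` is, at bottom, a class-level map from exception classes to handlers. A toy re-implementation in plain Ruby (hypothetical `MiniController`, not the Rails internals) shows why a catch-all route plus a manually raised exception is enough to exercise it:

```ruby
# A toy re-implementation of rescue_from's class-level handler registry,
# loosely modeled on what ActiveSupport::Rescuable does.
class MiniController
  def self.rescue_handlers
    @rescue_handlers ||= {}
  end

  def self.rescue_from(klass, &handler)
    rescue_handlers[klass] = handler
  end

  # Run an action; on error, find a registered handler whose exception
  # class matches (most recently registered first, as in Rails).
  def process
    yield
  rescue => e
    klass = self.class.rescue_handlers.keys.reverse.find { |k| e.is_a?(k) }
    raise unless klass  # no handler: re-raise, like an unrescued exception
    instance_exec(e, &self.class.rescue_handlers[klass])
  end
end

class RecordNotFound < StandardError; end

class PostsController < MiniController
  rescue_from(RecordNotFound) { |_e| :redirected_to_404 }
end

PostsController.new.process { raise RecordNotFound } # => :redirected_to_404
```

The dispatch only fires when an exception actually propagates out of the action, which is why the catch-all route (to reach a real action) and the manual `raise` are both needed in the test.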
-
- Jun 2022
-
www.iubenda.com
-
Data protection authorities have found that the U.S. legal system does not guarantee the same standards of protection as the EU. The situation stems from a set of U.S. laws that allow government organizations to request access to consumers’ personal data from US-based services, regardless of where the data centers or servers are located. In light of this, NOYB filed 101 complaints with European DPAs to find that transferring European users’ data to the U.S. was unlawful. The decisions, which have noted the illegitimacy of the transfers, focus on the analysis of additional technical, contractual and organizational measures.
-
-
answers.microsoft.com
-
This thread is locked.
locked but never resolved?! why lock??
-
-
towardsdatascience.com
-
www.quora.com
-
gitlab.nadadventist.org
-
Users often forget to save their recovery codes when enabling 2FA. If you added an SSH key to your GitLab account, you can generate a new set of recovery codes with SSH:
-
-
github.com
-
A custom component might be interesting for you if your views look something like this:

```
<%= simple_form_for @blog do |f| %>
  <div class="row">
    <div class="span1 number">1</div>
    <div class="span8"><%= f.input :title %></div>
  </div>
  <div class="row">
    <div class="span1 number">2</div>
    <div class="span8"><%= f.input :body, as: :text %></div>
  </div>
<% end %>
```

A cleaner method to create your views would be:

```
<%= simple_form_for @blog, wrapper: :with_numbers do |f| %>
  <%= f.input :title, number: 1 %>
  <%= f.input :body, as: :text, number: 2 %>
<% end %>
```
-
-
joshkerr.com
-
www.reddit.com
-
The problem isn’t Linux, it’s the defective by design DRM. The studios demand ridiculous DRM that does nothing to actually stop piracy.
-
Valve long ago proved that piracy is a service issue. Make it more convenient to pay for something, and people pay. Just look at what they did to bring AAA games to Linux! Apple, Amazon, and others proved it as well when they removed DRM (or never had it in the first place) on digital music purchases! People still paid for music downloads! They figured out how to keep people paying by making subscriptions to pretty much all music cheap and convenient. The service is more convenient than piracy, and you have a useful option for anything you want more permanent than a subscription.
-
Linux users flood developers on projects on GitHub — on open-source projects where you can actually somehow talk to the developers as an end user. Or maybe on Twitter, if a developer of proprietary software is somehow known and you can contact him on social media. But developers don't talk to first-level customer support of proprietary software like Adobe InDesign or a service like Netflix. Yet this is where these companies get their data, and they base their decisions on this data.
.
-
Linux users flood developers with bugs and requests because we actually know how to debug our systems. The creators then tend to get annoyed at the flood, because even if they resolved them all, it would be spending a lot of energy for less than 1% of their userbase.
.
-
there should be more Linux desktop community solidarity
-
The main problem of the Linux community is that it is divided. I know this division represents freedom of choice, but when your rivals are successful, you must inspect them carefully. And both rivals here (macOS and Windows) get their power from the "less is more" approach.

This division in Linux communities makes people turn to their communities when they have problems and never be heard as a big, unified voice. When something goes wrong with other OSes, people start complaining in many forums and support sites, some of them writing to multiple places and others supporting them by saying "yeah, I have that problem, too". In the Linux world, the answers to such forums come as "don't use that shitty distro" or "use that command and circumvent the problem".

Long story short: the average Linux user doesn't know that they are still customers, with all the rights to demand from companies, and that they can get together and act up louder.

Imagine such organizing: most Linux users manage to get together and write to Netflix. Maybe not all of them use Netflix, but the number of Linux users is greater than the number of Netflix members. What a domination it would be! But instead we turn to our communities and act like a survival tribe who has to solve all their problems themselves.
-
Big software companies like Adobe or Netflix do two things that are relevant for us and currently go wrong:

They analyse the systems their customers use. They don't see their Linux users, because we tend either not to use the product at all under Linux (just boot Windows, just use a Fire TV stick, and so on) or to use emulators or other tools that basically hide that we actually run Linux. --> The result is that they don't know how many we actually are. They think we are irrelevant because that's what the statistics tell them (they are completely driven by numbers).

They analyse the feature requests and complaints they get from their customers. The problem is: Linux users don't complain that much or try to request better Linux support. We usually somehow work around the issues. --> The result is that these companies get neither feature requests for better Linux support nor bug reports from Linux users (because it's not expected to work anyway).
-
-
www.reddit.com
-
It simplifies things a lot. Valve needs to constantly push updates, so it makes perfect sense to pick Arch.
-
Mind you, at the time Valve was trying to get developers to make Linux ports of their games, so targeting Debian made some sense in terms of platform stability; this didn't work out well and developers did no such thing. Valve then moved to making WINE work better, spending dev time adding patches and building the Proton layer on top of it. Valve likely moved to an Arch base to get bleeding-edge support for new hardware and for the performance enhancements that come along with it, as they were no longer shackled to trying to get developers to make native Linux ports.
-
manjaro maintaining a slightly different update cycle and overall behavior than upstream arch (I know this is a point of contention, but that's not the point here)
-
Compare that to bugfixes coming to an Ubuntu LTS or 6-month release, where you might not get the fix before the version is End of Life, making collaborating difficult and fruitless. Arch is where developers are, so it makes sense given the massive array of software available in the AUR and repos too. It's like a software flea market: occasionally AUR software isn't up to the bar, or theoretically there COULD be a bad actor once every few years, but otherwise it's something truly special.
-
Bug triage is so much easier & faster on Arch. Everyone is on the same latest version and engaging developers usually lead to fixes that users can consume right away or within a week.
-
-
-
Other distros require glibc 2.28+ (ldd --version to check)
-
-
microg.org
-
The linux-based open-source mobile operating system Android is not only the most popular mobile operating system in the world, it’s also on the way to becoming a proprietary operating system. How is that?
-
A free-as-in-freedom re-implementation of Google’s proprietary Android user space apps and libraries.
.
-
-
privatephoneshop.com
-
Additionally, GrapheneOS has only been developed for Google’s Pixel line of phones. Some people are a little hesitant to use a Google phone to de-google their lives.
-
The main flaw with Lineage is the phone’s bootloader must remain unlocked
-
-
grapheneos.org
-
Our Camera app provides the system media intents used by other apps to capture images / record videos via the OS provided camera implementation. These intents can only be provided by a system app since Android 11, so the quality of the system camera is quite important.
.
-
-
grapheneos.org
-
No, GrapheneOS will remain a non-profit open source project / organization. It will remain an independent organization not strongly associated with any specific company. We partner with a variety of companies and other organizations, and we're interested in more partnerships in the future. Keeping it as an non-profit avoids the conflicts of interest created by a profit-based model. It allows us to focus on improving privacy/security without struggling to build a viable business model that's not in conflict with the success of the open source project.
.
-
Using the network-provided DNS servers is the best way to blend in with other users. Network and web sites can fingerprint and track users based on a non-default DNS configuration.
-
Network and web sites can fingerprint and track users based on a non-default DNS configuration.
how?
-