Rails' inability to automatically route my link_to and form_for in STI subclasses to the superclass is a constant source of frustration to me. +1 for fixing this bug.
I've had to work around this by doing record.as(BaseClass)
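A minimal sketch of the usual workaround, assuming a hypothetical Vehicle/Car STI pair routed only as resources :vehicles (the as helper quoted above is presumably custom; Rails' built-in equivalent is becomes):

```ruby
class Vehicle < ActiveRecord::Base; end
class Car < Vehicle; end # STI subclass with no routes of its own

car = Car.first

# In a view: route the Car through the superclass's named routes.
link_to "Edit", edit_vehicle_path(car.becomes(Vehicle))
form_for(car.becomes(Vehicle)) {} # submits to /vehicles/:id
```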
You can do this elegantly with throw/catch, like this:
In most languages there is no clean way to break out of a deeply recursive algorithm. In Ruby, though, there is!
It's much faster, too: no stack trace has to be carried along with the thrown symbol, and no exception object is created. Lightweight non-linear flow control.
throw is a more elegant way to use an exception-like mechanism for control flow.
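A minimal sketch of the idiom, assuming a hypothetical tree search that should stop as soon as it finds a match:

```ruby
# throw/catch as a lightweight non-local exit: unwinds straight out of the
# recursion without building an exception object or a backtrace.
Node = Struct.new(:value, :children)

def search(node, target)
  throw :found, node if node.value == target
  node.children.each { |child| search(child, target) }
  nil
end

tree = Node.new(1, [Node.new(2, []), Node.new(3, [Node.new(4, [])])])

result = catch(:found) { search(tree, 3) } # the throw lands here
result&.value # => 3 (nil if nothing matched)
```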
The most important part of this query is the WITH block. It's a powerful feature, but for this example you can think of it as a "way to store a variable": the path of the contact you need to update, which will be dynamic depending on the record.
It just builds the path as '{1, value}', but we need to cast it to text[] because that's the type the path argument of jsonb_set expects.
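A sketch of the whole query under an assumed schema (table and column names are hypothetical: users(id, contacts jsonb), with contacts holding an array of {"type": ..., "value": ...} objects):

```sql
WITH contact_path AS (
  SELECT ('{' || (idx - 1) || ',value}')::text[] AS path -- e.g. '{1,value}'
  FROM users,
       jsonb_array_elements(users.contacts) WITH ORDINALITY AS elems(contact, idx)
  WHERE users.id = 1
    AND elems.contact ->> 'type' = 'email'
  ORDER BY idx
  LIMIT 1
)
UPDATE users
SET contacts = jsonb_set(contacts, contact_path.path, '"new@example.com"')
FROM contact_path
WHERE users.id = 1;
```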
For user-contributed data that's freeform and unstructured, use jsonb. It should perform as well as hstore, but it's more flexible and easier to work with.
SET object = object - 'b' || '{"a":1,"d":4}';
The functions above work like a so-called UPSERT (updating a field if it exists, inserting it if it does not exist).
(The null result should not be confused with a SQL NULL; see the examples.)
The shell is responsible for expanding variables.
The title doesn't fit the question: it should be "Show lines between two patterns". Anyone who wants lines from the first line onward won't find their answer here.
+1 to counter the drive-by downvote. I'd still use sed for this, unless you need the power of Perl regular expressions to select the delimiting lines
sed appears to be able to do this much more efficiently if a large number of files are involved. awk may be easier to remember, but sed seems to be worth a sticky note in my brain.
Any tips on how to make it exclusive (for any generic situation, not just OP's)?
This is one of the more-satisfying ruby expressions I've seen in a long time. I can't say that it also has prosaic transparency, but I think seeing it teaches important things.
x = -3
"++-"[x <=> 0] # => "-"
x = 0
"++-"[x <=> 0] # => "+"
x = 3
"++-"[x <=> 0] # => "+"
I think it's nonsense not to have a method that just gives -1 or +1. Even BASIC has such a function, SGN(n). Why should we have to deal with strings when it's numbers we want to work with? But that's just my humble opinion.
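A minimal sketch of the numeric-only helper the commenter is asking for; sign is a hypothetical name (core Ruby has no built-in SGN equivalent, but <=> already returns the number we want):

```ruby
def sign(n)
  n <=> 0 # -1, 0, or 1: no string indexing required
end

sign(-3) # => -1
sign(0)  # => 0
sign(3)  # => 1
```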
As for why - a GET can be cached and in a browser, refreshed. Over and over and over. This means that if you make the same GET again, you will insert into your database again. Consider what this may mean if the GET becomes a link and it gets crawled by a search engine. You will have your database full of duplicate data.
This is not advice. A GET is defined in this way in the HTTP protocol. It is supposed to be idempotent and safe.
The difference between PUT and POST is that PUT is idempotent: calling it once or several times successively has the same effect (that is, no side effect), whereas successive identical POST requests may have additional effects, akin to placing an order several times.
Why is there a reservation fee? The main reason for reservations is to ensure an orderly and fair ordering process for customers when Steam Deck inventory becomes available. The additional fee gives us a clearer signal of intent to purchase, which gives us better data to balance supply chain, inventory, and regional distribution leading up to launch.
Steam Deck is a PC so you can install third party software and operating systems.
Induction does not pander, but gives you the satisfaction of mastering an imaginary yet honest set of physical laws.
Across more than 50 meticulously designed puzzles
hand-crafted
you must explore the counter-intuitive possibilities time travel permits. You will learn to choreograph your actions across multiple timelines, and to construct seemingly impossible solutions, such as paradoxical time loops, where the future depends on the past and the past depends on the future.
> I should have used "side-effect-free" instead of "idempotent" in my tweets

The HTTP term is "safe method".
whereas now, they know that user@domain.com was subscribed to xyz.net at some point and is unsubscribing. Information is gold. Replace user@domain with abcd@senate and xyz.net with warezxxx.net and you've got tabloid gold.
While Microsoft is entirely in the right in reminding people of the terms they agreed to, many users are taking issue with the fact that they hadn't been warned about the limit in the eight years it's been in place, and many are now being told they are over the limit after years of exceeding it.
A password reset on another account goes into Hotmail, where it is read and stolen by the hackers, who then gain control of that other account.
Sending a body/payload in a GET request may cause some existing implementations to reject the request; while not prohibited by the specification, the semantics are undefined. It is better to just avoid sending payloads in GET requests.
Requests using GET should only be used to request data (they shouldn't include data).
So long as the filters are only using GET requests to pull down links, there’s nothing fundamentally wrong with them. It’s a basic (though oft-ignored) tenet of web development that GET requests should be idempotent; that is, they shouldn’t somehow change anything important on the server. That’s what POST is for. A lot of people ignore this for convenience’s sake, but this is just one way that you can get bitten. Anyone remember the Google Web Accelerator that came out a while ago, then promptly disappeared? It’d pre-fetch links on a page to speed up things if you clicked them later on. And if one of those links happened to delete something from a blog, or log you out… well, then you begin to see why GET shouldn’t change things. So yes, the perfect solution to this is a 2-step unsubscribe link: the first step takes to you a page with a form on it, and that form then POSTs something back that finalizes the unsubscribe request.
Two-step unsubscribe, where the link in the email goes to a webpage with a prominent "click here to unsubscribe" button, is often a good thing for unsubscription. It also gives people an option not to unsubscribe when they click on the wrong link in a mail, or inadvertently hit "return" with the wrong link focused, which isn't that unusual in link-laden emails.
Idempotent just means that following a link twice has exactly the same effect on persistent state as clicking it once. It does not mean that following the link must not change state, just that after following it once, following it again must not change state further. There are good reasons to avoid GET requests for changing state, but that’s not what idempotent means.
https://hyp.is/JTNJ6uaLEeuFtzvtkXWaeA/developer.mozilla.org/en-US/docs/Glossary/Safe/HTTP confirms this claim and states it even more clearly.
Arguably any link that performs such an action via GET is fundamentally broken. A proper unsubscribe should direct to a page with a form that requires a POST submission. (Of course, in the real world, few things are proper.)
Assuming that people trust your site, abusing redirections like this can help avoid spam filters or other automated filtering on forums/comment forms/etc. by appearing to link to pages on your site. Very few people will click on a link to https://evilphishingsite.example.com, but they might click on https://catphotos.example.com?redirect=https://evilphishingsite.example.com, especially if it was formatted as https://catphotos.example.com to hide the redirection from casual inspection - even if you look in the status bar while hovering over that, it starts with a reasonable looking string.
An HTTP method is safe if it doesn't alter the state of the server. In other words, a method is safe if it leads to a read-only operation.
All safe methods are also idempotent, but not all idempotent methods are safe. For example, PUT and DELETE are both idempotent but unsafe.
Each of them implements a different semantic, but some common features are shared by a group of them: e.g. a request method can be safe, idempotent, or cacheable.
Which ones are in each group?
Never mind. The answer is in the pages that are being linked to. (For the record: GET, HEAD, OPTIONS, and TRACE are safe; those plus PUT and DELETE are idempotent; GET and HEAD (and, in limited cases, POST) are cacheable.)
I don't like that I can't really use head? to know it's a HEAD request, but I (think I) understand the reasoning
Testing at GitLab is a first class citizen, not an afterthought. It’s important we consider the design of our tests as we do the design of our features.
That's it! Just replace let! with let_it_be. That's equivalent to the before_all approach but requires less refactoring.
That technique works pretty well but requires us to use instance variables and define everything at once. Thus it's not easy to refactor existing tests that use let/let! instead.
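A minimal before/after sketch, assuming the test-prof gem (which provides let_it_be) and factory_bot:

```ruby
require "test_prof/recipes/rspec/let_it_be"

RSpec.describe User do
  # Before: let!(:user) { create(:user) } -- re-created for every example.
  # After: created once per example group instead.
  let_it_be(:user) { create(:user) }

  it { expect(user).to be_valid }
end
```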
(Not a Boolean attribute!)
A big gotcha needs to be mentioned: when testing transactions, you need to turn off transactional_fixtures. This is because the test framework (e.g., RSpec) wraps each test case in a transaction block. The after_commit callback is never called because nothing is really committed. Expecting a rollback inside a transaction doesn't work either, even if you use :requires_new => true; instead, the transaction gets rolled back after the test runs.
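One common way to handle this, sketched below assuming rspec-rails (which supports a per-group opt-out) and a hypothetical Order model with an after_commit callback:

```ruby
RSpec.describe Order, type: :model do
  # Opt this group out of the wrapping transaction so COMMIT really happens
  # and after_commit callbacks fire.
  self.use_transactional_tests = false

  it "runs after_commit callbacks" do
    order = Order.create!
    expect(order.confirmation_sent?).to be(true) # hypothetical callback flag
  end
end
```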
urql stays true to server data and doesn’t provide functions to manage local state like Apollo Client does. In my opinion, this is perfectly fine as full-on libraries to manage local state in React are becoming less needed. Mixing server-side state and local state seems ideal at first (one place for all states) but can lead to problems when you need to figure out which data is fresh versus which is stale and when to update it.
Looking deeper, you can see a large number of open issues, bugs taking months to fix, and pull requests from outside contributors that never seem to get merged. Apollo seems unfocused on building the great client package the community wants.
This sort of behaviour indicates to me that Apollo is using open source merely for marketing and not to make their product better. The company wants you to get familiar with Apollo Client and then buy into their products; it's not truly open-source software, in my opinion. This is one of the negatives of the open-core business model.
This “bloat,” along with recently seeing how mismanaged the open-source community is, was the final straw for me. I realized that I needed to look elsewhere for a GraphQL client library.
Sometimes libraries can be too opinionated and offer too much “magic”. I’ve been using Apollo Client for quite some time and have become frustrated with its caching and local state mechanisms.
Because GraphQL is an opinionated API spec where both the server and client buy into a schema format and querying format. Based on this, they can provide multiple advanced features, such as utilities for caching data, auto-generation of React Hooks based on operations, and optimistic mutations.
This happens with getClient and setClient because they rely on a Svelte context, which is only available at component initialization (construction) and cannot be accessed in an event handler.
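A sketch of the resulting pattern, assuming @urql/svelte's getClient: capture the client during component initialization, then close over it in handlers:

```svelte
<script>
  import { getClient } from '@urql/svelte';

  // OK: runs during component initialization, where context is available.
  const client = getClient();

  function handleClick() {
    // Use the captured client here; calling getClient() inside this handler
    // would fail, because context lookup only works during initialization.
    client.query(/* ... */);
  }
</script>
```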
A better place to ask would be on the new (since 2010) coreutils user mailing list.
IMO: alias cp="rsync -avz" (cp is outdated)
In the examples above, the number 42 on the left-hand side isn't a variable being assigned. It is a value used to check that the element at that index on the right-hand side matches.
The proposed syntax is much harder to implement than it looks. It conflicts with Hash literals. As a result, humans can be confused as well.
harder than it looks
You'll note that it doesn't offer the possibility of mapping the key to a different variable. Indeed, I don't think that would be useful, and I would rather encourage Rubyists to use meaningful option and variable names.
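For comparison, a minimal sketch using the pattern matching that did ship in Ruby (2.7+), where the same literal-versus-binding distinction applies:

```ruby
case { name: "Alice", age: 42 }
in { name: String => name, age: 42 } # 42 is a value to match, not a binding
  puts name # => "Alice"
end
```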
It's fun but when would we ever use things like this in actual code?
When it's well tested, commented, documented, and becomes an understood idiom of your code base. We focus so much on black magic and avoiding it that we rarely have a chance to enjoy any of the benefits. When used responsibly and when necessary, it gives a lot of power and expressiveness.
If your inquiry was not fully resolved, please post a new question so we may continue assisting you. This case will now be closed and locked.
Sure, the slow way is always "good enough", until you learn a better way of doing things. By your logic, then, we shouldn't have the option of including "Move to" in our context menus either, because any move operation could be performed using the cut and paste operations instead?

The method you proposed is 6-7 steps long, with step 4 being the most onerous when you're in a hurry:
1. Select files
2. "Cut"
3. "Create New Folder"
4. Think of a name for the new folder. Manually type in that name, without any help from the tool. (We can't even use copy and paste to copy some part of one of the file names, for example, because the clipboard buffer is already being used for the file selection.)
5. Press Enter
6. Press Enter again to enter the new folder (or use "Paste Into Folder")
7. "Paste"

The method that Nautilus (and apparently Mac's Finder) provides (which I and others love) is much more efficient, especially because it makes step 4 above optional by providing a default name based on the selection, coming in at 4-5 steps (it would be 3 steps if we could assign a keyboard shortcut to this command, like Mac apparently has):
1. Select files
2. Bring up the context menu (a direct shortcut key would make this even sweeter)
3. Choose "New Folder With Selection"
4. Either accept the default name or choose a different name (optional)
5. Press Enter

Assuming the "Sort folders before files" option is unchecked, you can continue working/sorting in this outer folder, right where you left off. Can you see how this method might be preferable when you have a folder with 100s or 1000s of files you want to organize into subfolders? Especially when there is already a common filename prefix (such as a date) that you can use to group related files together. And since Nemo kindly allows us to choose which commands to include in our context menu, those who don't use/like this workflow are free to exclude it from their menus... Having more than one way to accomplish something isn't necessarily a bad thing.
Has the Linux Mint team decided whether it might please add this sorely missed feature? I'm keeping Nautilus around in addition to Nemo just for this one feature. This feature is so much more efficient than other methods when you have a giant folder of many files and want to organize it into subfolders (which you can then easily move or rename afterwards — but at least this helps with the first step, which is to get the correct files into a folder together). P.S. This was also requested in #560.
Lazy == Efficient, so no judgements. :)
Wow, Aaron himself just answered it!
Cheers, I never add random PPAs, but I just thought there was one from whoever made Nemo.
There is no such thing as a "correct PPA". A PPA is a personal package archive created by some user. You install software from PPAs at your own risk.
"correct"
I mean, that's what a review is generally.
WARNING: I suspect fake or "purchased" positive reviews, as there is at least one "positive" review that already shows almost 100 hours of game time... and, well, this game is nothing but a mess of cobbled-together assets from the Unreal asset marketplace.
Just another copy of a game which already exists on Steam; you just need to find it.
ORANGE SWAN uses brand-new mechanics that offer the right balance between historical flavor, ease of play, and replay value.
One can also use sophisticated statistics software, such as R (free, but not that easy to use; overkill).
Known Limitations
Epics can contain both issues and epics as children
Transition teams from Mailchimp to Marketo
Have your wife stand on the handles of a baker's rolling pin, while you push and pull her around!
makeshift
If the screen doesn't progress from "Activating...", either try another web browser or try the current browser after clearing cookies, then contact us if the issue persists.
This new edition is based on an exhaustive two-year study by the Designer of the records that have come to light since the fall of the Berlin Wall. The game combines highly accurate information on the forces the Warsaw Pact actually had with now de-classified reports from the CIA and the Defense Intelligence Agency regarding what satellite surveillance and HUMINT revealed about their actual plans.
A board game that explores: what if the Soviets had attacked first in 1941?
Closed issues are locked after 30 days of inactivity. This helps our team focus on active issues. If you have found a problem that seems similar to this, please open a new issue.
I will not be using BackerKit or GameFound or another third party pledge taker. I will just be using Kickstarter. I have found that some people have trouble with third party software.
Okay... What kind of trouble?
This Kickstarter was made to be run during WellyCon, New Zealand's board game convention (which carefully and successfully hosted the world's biggest live board game con in 2020!)
Hello, maksimets: code blocks using triple backticks (```) don't work on all versions of Reddit! Some users see this / this instead.
This cache has a small trade-off! If we request a list of data, and the API returns an empty list, then the cache won't be able to see the __typename of said list and invalidate it.
That's one big caveat!
In 2.8 you can use conditional types to achieve a similar effect
type CReturn<C, K extends keyof C> = C extends Array<any> ? C[number] : C[K];
Prior to 2.9, keyof only returned string indexes; in 2.9 this will include numeric and symbol keys.
const test = new Person(TestPerson).at("name").at("name")
type FooType = { // interfaces or classes of course also possible
  bar: string;
}
type BarType = FooType['bar']; // BarType is a string now
You would get a return value of the type "string" | "number" | "boolean" | "symbol" | "undefined" | "object" | "function", because you're using the JavaScript typeof operator at runtime, which returns a string like "object", not the compile-time type seen by TypeScript.
You can use this format to get the member type: type InputType = typeof input[number];
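A small sketch pulling these pieces together (the input array is hypothetical):

```ts
const input = [1, "two", true];

// Indexed access with `number` yields the element-type union:
type InputType = typeof input[number]; // number | string | boolean

// The conditional type from above: C[number] for arrays, C[K] otherwise.
type CReturn<C, K extends keyof C> = C extends Array<any> ? C[number] : C[K];

type A = CReturn<string[], number>;       // string (array branch)
type B = CReturn<{ bar: string }, "bar">; // string (property branch)
```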
(This, incidentally, is why the current 'zero-config' marketing fad is such nonsense: it really means 'abdicate the responsibility for config'. Instead of a single place where you can view all the build config in a structured, coherent form, you have the exact same amount of config but scattered around your project in lots of annoying files that are harder to understand.)
if you're using the near-operation-file preset
Bash is a wonderful and terrible language. It can provide extremely elegant solutions to common text processing and system management tasks, but it can also drag you into the depths of convoluted workarounds to accomplish menial jobs.
if x.strip('%').isnumeric():
    return float(x.strip('%')) / 100
.isnumeric() matches 430 Unicode codepoints in the BMP that float() won't accept, and there are codepoints that .isdigit() returns true for that are also not convertible.
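A safer sketch: let float() itself decide instead of pre-checking, since the is* predicates disagree with what float() actually accepts (e.g. "½".isnumeric() is True but float("½") raises ValueError). parse_percent is a hypothetical name:

```python
def parse_percent(x):
    try:
        return float(x.strip('%')) / 100
    except ValueError:
        return None

parse_percent('43%')   # => 0.43
parse_percent('half')  # => None
```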
This is especially nice for opening Vim from other tools, as this call can be done on the command-line: "+call cursor($LINE,$COLUMN)"
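For example (file name, line, and column are placeholders):

```sh
vim "+call cursor(10, 5)" myfile.txt
```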
Be aware, for general usage, that this is the screen column, not the real column. This means that <Tab> characters will give different results. If such characters are present, you will instead want |30lh or |29l or 029l or something like that.
@DavidPope: note that in this case "screen columns" means that it's still relative to the start of the line. g0 achieves "start of current screen line".
The vim documentation is hilarious: "Ceci n'est pas une pipe" :-)
git diff-index --name-status --relative --cached @ might be a bit easier to parse (and only includes staged files so you don't have to do an extra step to filter them). Also, I couldn't use git status --porcelain because my Rails app is in a sub-folder so I needed the list of files to be relative to the Rails root instead of relative to the git repo root (although git status in general seems to respect the --relative option, git status --porcelain seems to not).
Note that you could skip the https:// if you want a shorter command and you’re feeling adventurous with your HTTP MITM concerns, plus you can use the direct GitHub link as well if you don’t trust my redirect pointing there.
We also get a hook to alter commit messages so that they include a common suffix. We can then use this to set up a server-side hook that refuses changes that don’t have this in their messages.
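A hypothetical sketch of such a commit-msg hook (the suffix string is made up); git passes the path of the message file as the hook's first argument:

```sh
#!/bin/sh
# .git/hooks/commit-msg -- append the required suffix if it's missing.
SUFFIX="[team-x]"
grep -qF "$SUFFIX" "$1" || printf '\n%s\n' "$SUFFIX" >> "$1"
```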
This compatibility simply means that you can have a .githooks folder at the root of your project, where you can organize your individual hooks into folders.
https://github.com/rycus86/githooks is a really good option for managing hooks. It is...
- safe (it uses an opt-in model, where it will ask for confirmation whether new or changed scripts should be run or not (or disabled))
- configurable
- handles a lot of the details for you
- lets you keep your hooks nicely organized. For example:
And from a security standpoint, that'd be really kind of scary - no one should have the ability to force me to execute certain scripts whenever I run certain git commands
Luckily there is no way to force hooks on people upon clone. If there were, you could write a post-receive hook with rm -rf / in it and wipe people's hard disks on pull.
If you want, you can try out what the script would do first, without changing anything. $ sh -c "$(curl -fsSL https://r.viktoradam.net/githooks)" -- --dry-run
To try and make things a little bit more secure, Githooks checks whether any new hooks were added that we haven't run before, or whether any of the existing ones have changed.
git diff --cached --diff-filter=ACMR --name-only
gitree works very similarly to tree but only lists files related to the current git repository.
What?
I'm using this to run against staged files only
but note that the value will be nil when using the attributes_for strategy.
Apologies for digging up a closed thread, but it already contains some monorepo examples, so it feels like the best place to do it.
These days, monorepos and TypeScript are very popular, but configuring the development environment to work with both is still a fairly complex task.
problem: low-resolution sourcemaps
interesting wording: "low-res" here
OK, I'll reopen until the culprit is found.
Today, Sass uses complex heuristics to figure out whether a / should be treated as division or a separator. Even then, as a separator it just produces an unquoted string that’s difficult to inspect from within Sass.
Sass currently treats / as a division operation in some contexts and a separator in others. This makes it difficult for Sass users to tell what any given / will mean, and makes it hard to work with new CSS features that use / as a separator.
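A sketch of the direction Sass took, assuming Dart Sass with the sass:math module: division becomes explicit, and a plain / is left to act as a separator:

```scss
@use "sass:math";

.item {
  width: math.div(100%, 3); // explicit division
  grid-row: 1 / 3;          // plain slash stays a separator
}
```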
Of course, I don't doubt your report and of course, I want this to work. Before I invest a lot of effort again into supporting Node.js's exotic "support" for ESM, though: would you mind trying whether upgrading to Node.js 14.17 solves the problem for you?
Another important thing to remember is: don’t run npm install inside a sub-project. npm isn’t smart enough to figure out it’s inside a workspace and will assume it’s a normal project, create a local node_modules directory inside the sub-project, etc. I hope this changes soon and npm can detect the root package.json and perform the install up at the root.
1) all dependencies of the root package + sub-packages are installed into a single node_modules folder at the root and 2) sub-packages are symlinked into node_modules during npm install.
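For context, a minimal sketch of the root package.json behind this behavior (all names are hypothetical); with this in place, npm install hoists shared dependencies to the root and symlinks each sub-package into node_modules:

```json
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["packages/app", "packages/utils"]
}
```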
We’ve broken our project up into three different types of packages: apps which are preact apps intended to be bundled and deployed somewhere, modules which are plain npm packages for node/browsers and do not bundle their dependencies, and workers which are either Worker or ServiceWorker scripts entirely bundled up with no imports or exports. We don’t have to keep these three types of packages separated, but it helps us navigate around.
But if you're working on a bigger project, with multiple packages and a complex dependency tree, you might want to combine npm with a tool like Lerna.
Yarn has stated before that the goal of Yarn Workspaces is to provide low-level primitives for tools such as Lerna to use, not to compete with them.
Yarn is constantly cited as prior art in the RFCs. I would be surprised to see big disparities between both CLIs.
You can run all the "test" scripts at once by adding the --workspaces (plural) flag to your npm run command:

# Run "test" script on all packages
npm run test --workspaces

# Tip - this also works:
npm run test -ws
Your packages (the ones you created)
Dependencies are hoisted, meaning they get installed in the root node_modules folder. This is done for performance reasons: if a dependency is shared by multiple packages, it gets saved only once in the root.
Other package managers such as Yarn and pnpm have already shipped with workspaces for quite a while now.
In fact, npm is not trying to reinvent the wheel. You can find similarities between all three Workspace implementations.
This demonstrates how the nature of node_modules resolution lets workspaces enable a portable workflow: each workspace can be required in a way that also makes it easy to publish these nested workspaces for consumption elsewhere.
Please make sure that the file(s) referenced in bin start with #!/usr/bin/env node; otherwise the scripts are started without the node executable!
Monorepo use cases
Apart from that it’s just more convenient to have all your source files opened in a single IDE instance. You can jump from project to project without switching windows on your desktop.
Why is it big news? Because the main advantage of npm over other package managers like yarn or pnpm is that it comes bundled with NodeJS.
I've copied his response here as this question ranks very high in web search results.
A really good question. Sad to realise that there is no package.json equivalent of what we have in Gemfiles.
npm install <folder>: Install the package in the directory as a symlink in the current project. Its dependencies will be installed before it's linked. If <folder> sits inside the root of your project, its dependencies may be hoisted to the top-level node_modules as they would for other types of dependencies.
The local package will be copied to the prefix (./node_modules).
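For example (hypothetical path):

```sh
# Links ../my-local-lib into ./node_modules as a file: dependency.
npm install ../my-local-lib
```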
Yay for linking to relevant PR!
The answer for me is @whitecolor's yalc.
But this solution has technical complications, and the npm and yarn implementations give people trouble (as of this writing there are about 40 open npm link issues and over 150 open yarn link issues). If you have tried to use symlinked dependencies while developing a package, you've probably run into a stumbling block, whether simply an unexpected unlink behavior, trouble with peer dependencies, or something bigger.
Use with Yarn/Pnpm workspaces
Selected state should be applied on the .mdc-list-item when it is likely to frequently change due to user choice. E.g., selecting one or more photos to share in Google Photos.
Activated state is more permanent than selected state, and will NOT change soon relative to the lifetime of the page. Common examples are navigation components such as the list within a navigation drawer.
In Material Design, the selected and activated states apply in different, mutually-exclusive situations:
Do not use the aria-orientation attribute for a standard list (i.e., role="list"); use the component's vertical property to set the orientation to vertical.
(write-only)
When dealing with the verb, the issue of how to treat the past participle is a contentious one, with much blood being shed on both sides. Some people feel that the past participle of input should be input, not inputted, based on the reasoning that the word comes from put, and we don’t say “he putted the papers on the shelf.” A similar line of reasoning has caused many people to aver that words such as broadcast should never be written as broadcasted, since the cast portion of the word remains unchanged with tense.
but we think that it’s simpler, as well as easier to write and to maintain, to go with the single actor model of Docker.
Docker returns: Client sent an HTTP request to an HTTPS server.
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
Integrated access to the pdb debugger and the Python profiler.
${0%/*} removes everything including and after the last / in the filename
${0##*/} removes everything before and including the last / in the filename
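The two expansions in action (the path is a stand-in for $0):

```bash
#!/bin/bash
script="/usr/local/bin/myscript.sh"
echo "${script%/*}"   # /usr/local/bin -- like dirname
echo "${script##*/}"  # myscript.sh    -- like basename
```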
Since looping over the positional parameters is such a common thing to do in scripts, for arg defaults to for arg in "$@". The double-quoted "$@" is special magic that causes each parameter to be used as a single word (or a single loop iteration). It's what you should be using at least 99% of the time.
Bash (like all Bourne shells) has a special syntax for referring to the list of positional parameters one at a time, and $* isn't it. Neither is $@. Both of those expand to the list of words in your script's parameters, not to each parameter as a separate word.
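A small sketch of the difference; run it as, say, ./demo.sh "one word" two:

```bash
#!/bin/bash
for arg in "$@"; do printf '<%s>\n' "$arg"; done  # <one word> <two>
for arg in $*;   do printf '<%s>\n' "$arg"; done  # <one> <word> <two>
```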
Instead of using a for loop, which will fail on spaces unless you redefine the IFS variable, I would recommend using a while loop combined with find.
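A sketch of that pattern, NUL-delimited so filenames with spaces (or even newlines) survive intact:

```bash
find . -name '*.txt' -print0 |
while IFS= read -r -d '' file; do
  printf 'processing: %s\n' "$file"
done
```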