This thing has struck a nerve here. Literally overnight millions of people are talking about real things again. But in such an abrupt way, it's odd, like they were suddenly turned on after having been flash frozen for the last two years.
.
I want to filter emails to exclude starred message threads. Even a simple "-is:starred" or "-has:yellow-star" does not work.
limitation
dmsetup remove /dev/dm-5
.
udisksctl unlock -b /dev/sdg1
.
class Note < ApplicationRecord
  delegated_type :authorable, types: %w[ Customer Employee ]
end
Check out this PR to learn more about this feature.
typo
Polymorphic associations do not allow the database to enforce referential integrity, however, because no foreign keys can be defined.
Good point. In my example, cardinalities would be fundamentally different: an Entry could have_many :messages and have_many :comments. In the original example, a Message could have_many :entries, etc. In either case, there's no way to enforce the cardinalities at the database level (not that I'm aware of).
@entry = Entry.create! entryable: Spot.new(params.require(:spot).permit(:address))
redirect_to @entry # Redirects to e.g. /spots/47, with 47 being the newly created Entry id.
Delegated types, newly introduced here, look like Class Table Inheritance (CTI).
I did a spike to come up with a PoC for introducing this into the codebase of a product that I'm working on (matteeyah/respondo#225) by monkey-patching ActiveRecord with delegated types. It's amazing how a small code change in ActiveRecord can facilitate a big change in the domain model.
I just thought that if there was any time to improve the naming it would be now, before rolling it out to thousands of devs/projects. I don't think of that as bikeshedding, personally.
I think I might inadvertently have shared plans for a bike shed and opened the floor to which color it should be painted. My sincerest apologies.
That was my initial reaction too. I think because we are used to talking about delegating behavior, whereas this is delegating subtyping. Or in other words, delegating the ability to be extended with specialized behavior.
we're used to "delegating" meaning...
Is the name "delegated type" up for review? I don't see any delegation happening in the code. It looks more like a "subtype", or "secondary type", or something like that.
From the text as it is currently written, though, it is not entirely clear what the advantage would be of this new technique vs. using plain composition.
That's not clear to me either
A very visible aspect of the object-relational mismatch is the fact that relational databases don't support inheritance. You want database structures that map clearly to the objects and allow links anywhere in the inheritance structure. Class Table Inheritance supports this by using one database table per class in the inheritance structure.
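As a rough sketch of how CTI maps to a schema (table and column names here are hypothetical, not from the text): each class in the hierarchy gets its own table, and each subtype table references the base table's key.

create_table :entries do |t|          # base class: common columns
  t.references :account, null: false
  t.timestamps
end

create_table :messages do |t|         # subclass: message-specific columns
  t.references :entry, null: false, foreign_key: true
  t.string :subject
end

create_table :comments do |t|         # subclass: comment-specific columns
  t.references :entry, null: false, foreign_key: true
  t.text :content
end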
Another strategy is reinforcement learning (aka. constraint learning), as used in some AI systems.
has the operator return its first defined argument, then pass over the next defined one in case of a dead-end, in a depth-first selection algorithm.
The evaluation may result in the discovery of dead ends, in which case it must "switch" to a previous branching point and start over with a different alternative.
They allow for writing nondeterministic programs which contain various alternatives for the program flow.
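A minimal Ruby sketch of such an amb operator, built on the (deprecated) callcc discussed below; the helper names are made up for illustration:

require 'continuation'

$backtrack = []  # stack of saved decision points

# Return the first choice; on a dead end, resume here and try the next one.
def amb(*choices)
  choices.each do |choice|
    callcc { |cc| $backtrack.push(cc); return choice }
  end
  amb_fail
end

# Dead end: jump back to the most recent decision point.
def amb_fail
  raise "no solution" if $backtrack.empty?
  $backtrack.pop.call
end

x = amb(1, 2, 3, 4)
y = amb(1, 2, 3, 4)
amb_fail unless x * y == 12  # reject this branch and backtrack
p [x, y]                     # => [3, 4], found depth-first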
Ambiguous functions
You may want to jump straight to the Examples section if formal stuff annoys you.
formal stuff annoys you
prefer practical vs. prefer theoretical/academic
A continuation is like a savepoint, representing "what's left to run" at a given time.
I tested it, and it indeed works, but I don't want to depend on a to-be-removed feature.
I am using them in a real life application. I am calculating the available tables for a full calendar with many time slots and with respect to many configurable business rules for restaurants. Using callcc this feature got blazingly fast and very nicely readable. Also we use it to optimise table arrangements with respect to complex restaurant business rules (even something like: Guest A doesn't like to sit near Guest B). Please just have a look at these resources: https://github.com/chikamichi/amb/tree/master/examples http://web.archive.org/web/20151116124853/http://liufengyun.chaos-lab.com/prog/2013/10/23/continuation-in-ruby.html Please help me to keep Guest A away from Guest B. Bad things might happen.
"Context" manipulation is one of big topic and there are many related terminologies (academic, language/implementation specific, promotion terminologies). In fact, there is confusing. In few minutes I remember the following related words and it is good CS exam to describe each :p Thread (Ruby) Green thread (CS terminology) Native thread (CS terminology) Non-preemptive thread (CS terminology) Preemptive thread (CS terminology) Fiber (Ruby/using resume/yield) Fiber (Ruby/using transfer) Fiber (Win32API) Generator (Python/JavaScript) Generator (Ruby) Continuation (CS terminology/Ruby, Scheme, ...) Partial continuation (CS terminology/ functional lang.) Exception handling (many languages) Coroutine (CS terminology/ALGOL) Semi-coroutine (CS terminology) Process (Unix/Ruby) Process (Erlang/Elixir) setjmp/longjmp (C) makecontext/swapcontext (POSIX) Task (...)
Using callcc this feature got blazingly fast and very nicely readable.
This (somewhat contrived) example allows the inner loop to abandon processing early:

callcc {|cont|
  for i in 0..4
    print "\n#{i}: "
    for j in i*5...(i+1)*5
      cont.call() if j == 17
      printf "%3d", j
    end
  end
}
For example, did you know React has nothing to do with reactive programming?
Ruby should not completely ignore blocks.

const_set :Example, Class.new do
  p "Hello, world"
end
# Doesn't print anything, nor generate any warning or error.

To minimize any impact, Ruby should issue a warning, and a future version could even raise an error. Even unused variables produce warnings in verbose mode, and they have their uses. I can't think of a case where passing a block to a builtin method that doesn't accept a block is not a programming error, though.
But since it can't be fixed generally, just add a check to each core method that doesn't accept a block: update its definition to include the check.
Where I've been bitten by this was some Enumerable method that I assumed took a block. I think it was first { cond }
, and I assumed it worked the same as detect { cond }
The remaining problem is how to declare Ruby-defined methods to be 'non-block-taking'. Under the current language spec, the absence of an '&' argument may or may not mean that the method takes a block.
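A sketch of what such a declaration amounts to in today's Ruby (a purely illustrative runtime guard, not a language feature):

# A method can only declare itself 'non-block-taking' by checking at runtime:
def no_block_method
  raise ArgumentError, "this method does not take a block" if block_given?
  :ok
end

no_block_method { p "silently ignored today" } # would raise instead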
This solution can hide a bad user experience. We’re not making any DOM changes on AJAX success, meaning Capybara can’t automatically detect when the AJAX completes. If Capybara can’t see it, neither can our users. Depending on your application, this might be OK.
.
As of Rails 7.0+, Active Record has an option for handling associations that would perform a join across multiple databases.
impressive
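For instance (model names are illustrative, and this assumes the disable_joins option added in Rails 7), a has_many :through association can run two queries instead of a cross-database JOIN:

class Dog < AnimalsRecord
  # Performs one query for the join keys and a second for the treats,
  # instead of a single JOIN across databases.
  has_many :treats, through: :humans, disable_joins: true
end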
You can also use silence_redefinition_of_method if you need to define the replacement method yourself (because you're using delegate, for example).
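A small sketch of that situation (class and method names made up):

require "active_support/core_ext/module/redefine_method"
require "active_support/core_ext/module/delegation"

class Project
  attr_reader :report

  def status
    "legacy status"
  end

  # Suppress the "method redefined; discarding old status" warning
  # before delegate overwrites the method above.
  silence_redefinition_of_method :status
  delegate :status, to: :report
end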
You need to balance several factors: the need for new features, the increasing difficulty of finding support for old code, and your available time and skills, to name a few.
I am open to discussion but I don't want to jump to a conclusion.
The biggest reason is that we still have several options, so I didn't want to restrict the future possibility.
Shouldn't the #descendants method be the reverse of #ancestors?
Personally, I think this is useful when you have objects which are not stored in the database as such, e.g. temperature, GPS location, balance, etc. You might then ask why those are not stored in the database? In the database we only store a value, but if we want to attach useful, relevant methods to that value,
composed_of attr,
  :class_name => 'AddressableRecord::Address',
  :converter => :convert,
  :allow_nil => true,
In computer science, a value object is a small object that represents a simple entity whose equality is not based on identity: i.e. two value objects are equal when they have the same value, not necessarily being the same object.
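A minimal Ruby sketch of that definition (a hypothetical Money class): equality compares values, not identity.

class Money
  attr_reader :amount, :currency

  def initialize(amount, currency)
    @amount, @currency = amount, currency
    freeze # value objects are typically immutable
  end

  def ==(other)
    other.is_a?(Money) && amount == other.amount && currency == other.currency
  end
  alias eql? ==

  def hash
    [amount, currency].hash
  end
end

Money.new(5, "EUR") == Money.new(5, "EUR") # => true, despite being distinct objects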
There is nothing stopping you from creating store objects which scrape XE for the current rates or just return rand(2):
For example, the German city Munich is "München" in German. Both save a city model with a name translated into all app locales.
doesn't seem all that useful if that's all it does
most place names would be the same in any language
belongs_to :city
belongs_to :zipcode
Country
State (belongs to Country)
City (belongs to State)
Neighborhood (belongs to City)
Address (Belongs to Neighborhood and City, because neighborhood is not required)
biggs is a small ruby gem/rails plugin for formatting postal addresses from over 60 countries.
By default the wizard will render a view with the same name as the step. So for our controller AfterSignupController with a view path of /views/after_signup/, if we call the :confirm_password step, our wizard will render /views/after_signup/confirm_password.html.erb
To send someone to the first step in this wizard we can direct them to after_signup_path(:confirm_password)
steps :confirm_password, :confirm_profile, :find_friends
Note that render_wizard does attempt to save the passed object. This means that in the above example, the object will be saved twice. This will cause any callbacks to run twice also. If this is undesirable for your use case, then calling assign_attributes (which does not save the object) instead of update might work better.
acceptable
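A sketch of the assign_attributes variant mentioned in that note (controller, step, and params helper names are hypothetical): assign_attributes stages the changes without saving, so render_wizard performs the only save.

class PetStepsController < ApplicationController
  include Wicked::Wizard
  steps :identity, :medical_history

  def update
    @pet = Pet.find(params[:pet_id])
    @pet.assign_attributes(pet_params) # stage changes, no save yet
    render_wizard @pet                 # render_wizard performs the single save
  end
end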
Notice: Another method for partial validations, which might be considered more flexible by some users (allowing for easy validation testing inside model tests), was described by Josh McArthur here.
I liked the linked-to solution
The best way to build an object incrementally with validations is to save the state of our product in the database and use conditional validation. To do this we're going to add a status field to our Product class.
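A sketch of that approach (status values and attributes are illustrative):

class Product < ApplicationRecord
  STATUSES = %w[name_entered price_entered complete].freeze

  validates :status, inclusion: { in: STATUSES }
  validates :name, presence: true
  # Later-step validations apply only once the record has reached that step:
  validates :price, presence: true, if: -> { status != "name_entered" }
end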
.
people want to have an object, let's call it a Product, that they want to create in several different steps
This action will work a little differently from a normal create action that you might be used to, as it doesn't strictly need a new action - we won't be saving this Pet model with any data - just putting it in the database so that our StepsController can access it.
Remember, our wizard controller is responsible for showing and updating steps, but our top-level controller is still responsible for managing our Pet models.
and calls .unarchived and .archived appropriately when passed an ActiveRecord relation.
acts_as_tokened: Quickly adds Rails 5's has_secure_token to your model, along with some Post.find() enhancements to work with tokens instead of IDs.
include Effective::CrudController
# All queries and objects will be built with this scope
resource_scope -> { current_user.posts }

# Similar to above, with block syntax
resource_scope do
  Post.active.where(user: current_user)
end
Loads an appropriate @posts or @post type instance variable.
Replaces your Rails controllers, views and forms with meta programming. Considers routes.rb, ability.rb, current_user and does the right thing.
The goal of this gem is to reduce the amount of code that needs to be written when developing a ruby on rails website.
enumerate_by :alpha_2_code
Note that this is a reference implementation and, most likely, should be modified for your own usage.
[:state, :zip]
presumably this groups them together more indivisibly, perhaps so they'll show up as a single line, for example?
alias_method :normalize_whitespace_with_warning, :normalize_whitespace
def normalize_whitespace(*args)
  silence_warnings do
    normalize_whitespace_with_warning(*args)
  end
end
suppress warnings
Since factory_bot_rails automatically loads factory definitions as your application loads, writing a definition like this would cause another Daniel to get added to your database every time you start the server or open a console. I like Daniels and all, but there is a limit.
As a workaround, you can use setters in every affected reactive block instead of direct assignment.

let localForm = {};
const setLocalForm = (data) => { localForm = data; };

$: setLocalForm({...$formData});
Even though not all code smells indicate real problems (think fluent interfaces)
At this point I would call into question the job of Event to both be responsible for managing what gets charged and how something should be charged. I would probably investigate moving those to external service classes to keep charging responsibilities out of a simple event object.
however, I prefer to take it as an indication that a pretty smart group of people didn't think there was a particularly strong reason to use a different term.
seems reasonable
Unfortunately, I think a lot of the answers here are perpetuating or advancing the idea that there's some complex, meaningful difference. Really - there isn't all that much to it, just different words for the same thing.
'method' is the object-oriented word for 'function'. That's pretty much all there is to it (ie., no real difference).
a function is a mathematical construct. I would say all methods are functions but not all functions are methods
theory
Coming from a functional programming background, I feel there is a profound distinction between function and method. Mainly, methods have side effects, while functions should be pure, giving a rather nice property of referential transparency.
I agree it might be nice if "function" and "method" meant what you wanted them to, but your definitions do not reflect some very common uses of those terms.
Unfortunately, if the number of command-line arguments argc is 0 – which means if the argument list argv that we pass to execve() is empty, i.e. {NULL} – then argv[0] is NULL. This is the argument list’s terminator.
subtle bug
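A tiny C illustration of how a child process ends up with argc == 0 (the target path is hypothetical):

#include <unistd.h>

int main(void) {
    char *argv[] = { NULL };  /* empty argument list: argv[0] is already the terminator */
    char *envp[] = { NULL };
    execve("/path/to/target", argv, envp);
    return 1;                 /* only reached if execve() fails */
}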
Ruby 2.6 introduces an initial implementation of a JIT (Just-In-Time) compiler. The JIT compiler aims to improve the performance of Ruby programs. Unlike traditional JIT compilers which operate in-process, Ruby’s JIT compiler writes out C code to disk and spawns a common C compiler to generate native code. For more details about it, see the MJIT organization by Vladimir Makarov.
For example, if you pre-build a swordsman, a spearman and a horseman in 4 cities, you can produce a total of 12 units in 3 turns. This saves you a lot of gold in unit maintenance for a good number of turns.
I just didn't realize the harbor needed to be in the capital as well since it says that it creates a city connection with the capital, and doesn't mention that your capital needs one too. I just figured I could get away without one
tr '\n' '\\n' would change newlines to backslashes (and then there's an extra n in the second set). sed 's/\n/\\n/g' won't work because sed doesn't load the line-terminating newline into the buffer, but handles it internally.
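Two workarounds that do handle the newline (both assume GNU tools):

# GNU sed: read NUL-delimited, so newlines stay in the pattern space.
printf 'a\nb\n' | sed -z 's/\n/\\n/g'

# awk: re-emit each line with a literal \n appended.
printf 'a\nb\n' | awk '{printf "%s\\n", $0}'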
This proposal is deeply flawed and would have far-reaching consequences if implemented. In #31148 I proposed a strategy to address the same pain points in a correct, more generic way. Regardless of whether my approach is taken or not, async handling of promises is a core feature that simply cannot be deprecated, and we should remove the erroneous deprecation accordingly.
NODE_OPTIONS=--unhandled-rejections=none node
There are other pjax implementations floating around, but most of them are jQuery-based or overengineered. Hence simple-pjax.
As said in the chapter, there’s an "implicit try..catch" around the function code. So all synchronous errors are handled. But here the error is generated not while the executor is running, but later. So the promise can’t handle it.
new Promise(function(resolve, reject) {
  setTimeout(() => {
    throw new Error("Whoops!");
  }, 1000);
}).catch(alert);
we should have the unhandledrejection event handler (for browsers, and analogs for other environments) to track unhandled errors and inform the user (and probably our server) about them, so that our app never “just dies”.
What happens when a regular error occurs and is not caught by try..catch? The script dies with a message in the console. A similar thing happens with unhandled promise rejections.
You cannot try/catch the reactive statement ($: has no effect outside of the top-level)
window.onerror does not catch it
window.onunhandledrejection does not catch it
.
The best you can do is try/catch inside a function that is reactively called, but my goal is to have a global exception handler to handle all exceptions that I did not expect...
It's vanilla JS and doesn't bind you to specific syntax; that's the main reason why I like Svelte: it doesn't try to sandbox you into framework constraints.
const originalUnhandledRejection = window.onunhandledrejection;
window.onunhandledrejection = (e) => {
  console.log('we got exception, but the app has crashed', e);
  // or do Sentry.captureException(e);
  originalUnhandledRejection(e);
}
If even one component has the smallest unhandled error, the whole app will crash; users will not know what to do, and developers will not know such an error occurred.
When an error is thrown inside a Promise it looks like Firefox still calls window.onerror, but Chrome swallows the error silently.
Maybe once the core onError lifecycle is implemented (if maintainers decide to go that way) everyone will discover you're right and an implementation will be built in. I think that's probably what's going to happen. But until real life has proved it, it's usually best to go for the smallest most broadly applicable solution. I can definitely imagine <svelte:error> eventually being a thing, but it's a pretty dramatic change compared to an added importable function.
Having a consistent and predictable pattern is key to the elegance.
I think the issue is that it's not totally perfect. It doesn't define what should happen in parent components, it's not as flexible as onError and it doesn't allow you (for instance) to nest a svelte:head inside, or decide what to do with the rest of the rendering. What do you do with <div>My component</div> in your example? What about changing the <title>? I assume you can inspect the error...does <svelte:error> allow specifying which error types to expect?
That's like the example I wrote, and I think it's very ugly. It's annoying when frameworks are very elegant in demos and presentations but then that elegance disappears when you have to write real-world code.
An annoying thing about frameworks is when they get too opinionated, which is, in my view, a problem React has.
const unsubscribe = errorStore.subscribe(value => {
  if (!value) { return }
  error = value
  errorStore.set()
})
Boilerplate is only boilerplate if it's the same everywhere, which it shouldn't be.
Additionally, if you're writing a notification display in every single component, wrapped in a <svelte:error> tag, that's the very definition of boilerplate.
In other words, adding a svelte:error tag wouldn't help much.
and if I think this is too boilerplatey, I can export a handler from some .js file and pass the error to that:

<script>
  import { onError } from 'svelte'
  import { genericHandler } from '../my-error-handler.js'

  onError(genericHandler(e => {
    // code which is called first to try to handle this locally
    return true // we've handled it here, don't do anything else.
  }))
</script>
If a developer wants to handle the error inside the component and also wants to have it bubble up, they can use this.fire('error', e) within onerror.
So it is safe to use an async function as the callback argument to setTimeout and setInterval. You just need to be sure to wrap any operations that can throw an exception in a try/catch block, and be aware that the timer will not await on the promise returned by your async function.
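In other words, something like this sketch (doSomethingAsync is a stand-in for your own function):

setTimeout(async () => {
  try {
    await doSomethingAsync(); // anything in here that throws or rejects...
  } catch (err) {
    console.error("timer callback failed:", err); // ...is caught locally
  }
}, 1000);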
My gut told me calling an async function from the setTimeout callback was a bad thing. Since the setTimeout machinery ignores the return value of the function, there is no way it was awaiting on it. This means that there will be an unhandled promise. An unhandled promise could mean problems if the function called in the callback takes a long time to complete or throws an error.
The callback executed by setTimeout is not expected to return anything, it just ignores the returned value. Since once you enter the promise/async world in JavaScript you cannot escape, I was left to wonder what happens when the setTimeout callback returns a promise?
test2 being marked async does wrap your return value in a new promise:
const rejectedP = Promise.reject('-');
const finallyP = rejectedP.finally();
const result1 = rejectedP;

const result2 = new Promise(resolve => {
  const rejectedP = Promise.reject('-');
  const finallyP = rejectedP.finally();
  resolve(rejectedP);
});

we can see that the first snippet creates two promises (result1 and rejectedP being the same) while the second snippet creates three promises. All of these promises are rejected, but the rejectedP rejection is handled by the callbacks attached to it, both through ….finally() and resolve(…) (which internally does ….then(resolve, reject)). finallyP is the promise whose rejection is not handled in both examples. In the second example, result2 is a promise distinct from rejectedP that is also not handled, causing the second event.
You basically did

var a = promise.then(…);
var b = promise.catch(…);

creating a branch in the chain. If promise is getting rejected now, the catch callback will be called and b will be a fulfilled promise just fine, but the a promise is getting rejected too and nobody handles that. Instead, you should use both arguments of then and write

Requirement.create({id: id, data: req.body.data, deleted: false})
  .then(requirement => {
    res.json(requirement);
  }, reason => {
    let err = {'error': reason};
    res.json(err);
  });
Updates/edits based on comments should preferably be reflected in the question itself. This way other readers don't have to weed out the whole comment section. You find the edit option under the question.
Just "ignoring" a Promise result if it is not longer needed is an antipattern in my opinion.
Moving forward I'd rather see {#await} being removed than adding more {#await}. But that's just from my experience and I'm sure there are use-cases for it.
I personally abstract everything away into stores. Stores are amazing. By everything I mean things like fetch, Worker or WebSocket.
Another limitation is that you are forced into the syntax of the {#await} block. What I mean by that is that for example you can't add a loading class to a parent. You can only render stuff for the loading state in the given block. Nowhere else.
export const fibonacci = function (n, initialData) {
  return readable(
    {
      loading: true,
      error: null,
      data: initialData,
    },
    (set) => {
      let controller = new AbortController();

      (async () => {
        try {
          let result = await fibonacciWorker.calculate(n, { signal: controller.signal });
          set({
            loading: false,
            error: null,
            data: result,
          });
        } catch (err) {
          // Ignore AbortErrors, they're not unexpected but a feature.
          // In case of abortion we just keep the loading state because
          // another request is on its way anyway.
          if (err.name !== 'AbortError') {
            set({
              loading: false,
              error: err,
              data: initialData,
            });
          }
        }
      })();

      return () => {
        controller.abort();
      };
    }
  );
};
<script>
  import { fibonacci } from './math.js';

  let n = 1;

  $: result = fibonacci(n, 0);
</script>

<input type=number bind:value={n}>

<p>The {n}th Fibonacci number is {$result.data}</p>

{#if $result.loading}
  <p>Show a spinner, add class or whatever you need.</p>
  <p>You are not limited to the syntax of an #await block. You are free to do whatever you want.</p>
{/if}
Once you've written the imperative library/util code once, your components are super slim and completely reactive/declarative. Wow.
Yes I love stores.
No need to debounce fetch (terrible UX), just fire them away
export const load: Load = async ({ page, session }) => {
  if (!isPublic(page.path) && !isAuthenticated(session)) {
    console.log('Unauthorized access to private page');
    return { redirect: '/', status: 302 };
  } else {
    console.log('Auth OK');
  }
  return {};
};
That whole setup works. If the user logs out, I can just write an empty JWT cookie and clear the $session.jwt value, redirect back to the home page, done.
In hooks.js I have a handle function that basically does request.locals.jwt = cookies.jwt, and then a getSession function that returns { jwt: locals.jwt }
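A sketch of what that hooks.js might look like (under the SvelteKit API of that era, using the cookie package; details may differ):

import * as cookie from 'cookie';

export async function handle({ request, resolve }) {
  const cookies = cookie.parse(request.headers.cookie || '');
  request.locals.jwt = cookies.jwt;
  return resolve(request);
}

export function getSession({ locals }) {
  return { jwt: locals.jwt };
}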
Persisting across apps Your notifications can persist across multiple apps / page reloads, as long as they use this library. This is useful for a scenario where you show a notification and then redirect the browser to a different application, or trigger a full reload of the page. This is completely automatic and uses session storage.
I have an idea: trigger a BSOD on unhandled promise rejection... just to teach users
I invite even the most cantankerous among you to review it.
To be perfectly frank, this proposal seems far more about creating the appearance of safety than addressing an actual deficit in application correctness. I'm not questioning the value in detecting unhandled promises (resolved OR rejected) as a development tool for calling attention to a potentially undesired flow... but just like other lint rules, this belongs in tooling and NOT the execution environment.
Fundamentally, I think promise rejection is substantially different than "throwing" under normal synchronous flow.
but has a critical difference: the expression console.log("before 2"); does not and cannot depend on the resolved value result. The throw propagates through all chained promises, and when it stops, there is no remaining undefined behavior! No piece of code is left in an unclear state, and therefore there is no reason to crash.
If this proposal is fully implemented, we will end up with this garbage everywhere.
I value this pattern because it allows concise concurrency.
Node is entirely at liberty to limit the design the same way we crash the process on errors (which browsers do not).
const promise = Promise.reject(new Error("Something happened!"));

setTimeout(async () => {
  // You want to process the result here...
  try {
    const result = await promise;
    console.log(`Hello, ${result.toUpperCase()}`)
  }
  // ...and handle any error here.
  catch (err) {
    console.error("There was an error:", err.message);
  }
}, 100);
// Prevent errors crashing Node.js, see:
// https://github.com/nodejs/node/issues/20392
this.promise.catch(() => {})
The yieldable objects currently supported are:

promises
thunks (functions)
array (parallel execution)
objects (parallel execution)
generators (delegation)
generator functions (delegation)

Nested yieldable objects are supported, meaning you can nest promises within objects within arrays, and so on!
co(function* () {
  var result = yield Promise.resolve(true);
  return result;
}).then(function (value) {
  console.log(value);
}, function (err) {
  console.error(err.stack);
});
It is a stepping stone towards ES7 async/await.
Generator based control flow goodness for nodejs and the browser, using promises, letting you write non-blocking code in a nice-ish way.
The power of await is that it lets you write asynchronous code using synchronous language constructs.
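For comparison, the co example above rewritten with async/await (a sketch of the equivalent flow):

async function main() {
  const result = await Promise.resolve(true);
  return result;
}

main().then(function (value) {
  console.log(value);
}, function (err) {
  console.error(err.stack);
});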
because it is in a central location and contributed to by many people, problems are found quickly, and fixes are for everyone—not just one specific template.
Its goal is to solve the problems with downloading templates to start your app from:
There are a lot of nasty gotchas with unhandled rejections. That's why Node.js gives you a mechanism for globally handling unhandled rejections.
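That mechanism is the process-level hook; for example:

// Global handler for promise rejections nothing else has handled:
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled rejection:', reason);
});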
Seems easy, right? How about the below code, what will it print?

new Promise((_, reject) => reject(new Error('woops'))).
  catch(error => {
    console.log('caught', err.message);
  });

It'll print out an unhandled rejection warning. Notice that err is not defined!
Some argue that throwing an exception in the executor function is bad practice. I strongly disagree.
just a nit... make the values lower-case. They tend to be easier for users. --unhandled-rejections=error_on_gc
Point being (again), definitions seem to differ, and what you call "full stack" is what I call "batteries-included framework". Full stack simply means (for me) that it gives you a way of building frontend and backend code, but implies nothing about what functionality is included in either part.
Nothing in "full-stack" requires having a validation library in order for it to be full-stack, that would more be leaning towards the "batteries included" approach to a framework instead of strictly being about "full-stack".
Yes, precisely because I've been involved in maintaining codebases built without real full stack frameworks is why I say what I said. The problem we have in this industry is that somebody reads these blog posts, and the next day at work they ditch the "legacy Rails" and start rewriting the monolith in sveltekit/nextjs/whatever, because that's what they have been told is the modern way to do full stack. No need to say those engineers will quit 1 year later, after they realize the mess they've created with their lightweight and simple modern framework. I've seen this too many times already. It is not about gatekeeping. It is about engineers being humble and assuming that their code is very unlikely to be better tested, documented, cohesive and maintained than what you're given in the real full stack frameworks. Of course you can build anything, even in assembler if you want. The question is whether that's the most useful thing to do with your company's money.
Tauri
Vue+Vuetify was like writing binary by hand instead of using an expressive modern language that abstracts away 99% of plumbing.
Vuetify
It has many advantages but the main reason for me is that it simplifies your front end code. It's not perfect by any means, but overall its cons are worth it IMO.
As someone who is a future-ex React developer who uses Svelte in a personal project
"future-ex"
but you want to style your components yourself and not be constrained by existing design systems like Material UI
This is an unofficial, complete Svelte port of the Headless UI component library (https://headlessui.dev/)
Instead of render props, we use Svelte's slot props:

// React version
<Listbox.Button>
  {({open, disabled}) => /* Something using open and disabled */}
</Listbox.Button>

<!--- Svelte version --->
<ListboxButton let:open let:disabled>
  <!--- Something using open and disabled --->
</ListboxButton>
You want to use the commercial Tailwind UI component library (https://tailwindui.com/) in your Svelte project, and want a drop-in replacement for the React components which power Tailwind UI.
compatibility with any front-end framework means you don't have to change your stack
Brownfield
Hi, it seems as though you have multiple questions: you should separate these into multiple posts.
A multi-question post would be perfectly appropriate in a forum or mailing list. Seems a bit too strict to not allow something like this, where one has multiple related questions.
SSR is used for pages as well, but prerendering means that rendering happens at build time instead of when a visitor visits the page.
The most obvious time you'd encounter a 401 error, on the other hand, is when you have not logged in at all, or have provided the incorrect password.
As mentioned in the previous article, the 403 error can result when a user has logged in but they don't have sufficient privileges to access the requested resource. For example, a generic user may be attempting to load an 'admin' route.
The server generating a 401 response MUST send a WWW-Authenticate header field (Section 4.1) containing at least one challenge applicable to the target resource.
Meaning that 99% of the people who use it are using it "wrong", because they're not using it for HTTP authentication and don't send a WWW-Authenticate header field with their 401 response?
Hmm. That's a tough one. On the one hand, the spec does say they must send it.
Initial opinion
But on the other hand, one could argue that that requirement only applies if using 401 for HTTP authentication. And that saying it's wrong to do so (as they claim at https://stackoverflow.com/questions/3297048/403-forbidden-vs-401-unauthorized-http-responses/14713094#14713094 and https://hyp.is/JA45zHotEeybDdM_In4frQ/stackoverflow.com/questions/3297048/403-forbidden-vs-401-unauthorized-http-responses) is having a too strict/narrow/literal interpretation.
HTTP is meant to be used widely in many very different uses and contexts, most of which do not use this very specific HTTP authentication scheme; my opinion is that they shouldn't be denied from using it just because they don't have anything useful to put in the WWW-Authenticate header field. (Or (which is also fine with me), just put something "emptyish" in the field, like "Unused". Unless that would trigger a Basic auth modal in the browser, in which case we shouldn't, for practical reasons.)
Why shouldn't we be able to repurpose this same status code for uses that are still authentication, but just not HTTP authentication per se?
Is it really wrong to repurpose this useful status code for other contexts, like cookie-based app-defined authentication systems?
I say that it's okay to repurpose/reuse 401 for any authentication system (that uses HTTP as a part of it, even though not using HTTP's own authentication system), as long as we try to maintain the same semantic as originally intended/described here. I think it's okay to use 401 as a response to a XHR request, and then have the client redirect to a login page, which provides a way to authenticate again (reattempt the authentication challenge), analogous to how it works for HTTP authentication.
Revised opinion
https://stackoverflow.com/questions/3297048/403-forbidden-vs-401-unauthorized-http-responses/14713094#14713094 has made me change my mind and convinced me that...
Authentication by schemes outside of (not defined by) RFC7235: Hypertext Transfer Protocol (HTTP/1.1): Authentication should not use HTTP status 401, because 401 Unauthorized is only defined (by current RFCs) by RFC7235: Hypertext Transfer Protocol (HTTP/1.1): Authentication, and has semantics and requirements (such as the requirement that "A server generating a 401 (Unauthorized) response MUST send a WWW-Authenticate header field containing at least one challenge.") that simply don't make sense or cannot be fulfilled if using a non-HTTP authentication scheme.
403 Forbidden, on the other hand, is defined by the broader HTTP standard, in RFC7231: Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content and RFC7235: Hypertext Transfer Protocol (HTTP/1.1): Authentication.
In conclusion, if you have your own roll-your-own login process and never use HTTP Authentication, 403 is always the proper response and 401 should never be used.
Couldn't a custom auth system use WWW-Authenticate header?
The question was asked:
Doesn't RFC7235 provide for "roll-your-own" or alternate auth challenges? Why can't my app's login flow present its challenge in the form of a WWW-Authenticate header? Even if a browser doesn't support it, my React app can...
And I would say sure, if you want (and if the browser doesn't automatically show a Basic auth modal in this case and thwart your plans).
They might be on to something here with that question!
But that should probably be the test of whether you can/should use 401: are you actually using WWW-Authenticate header?
Indeed I found an example where it is used for OAuth2.
For example, suppose your API returns a 401 Unauthorized status code with an error description like The access token is expired. In this case, it gives information about the token itself to a potential attacker. The same happens when your API responds with a 403 Forbidden status code and reports the missing scope or privilege.
An access token is expired, revoked, malformed, or invalid for other reasons.
That comes in the form of the WWW-Authenticate header with the specific authentication scheme to use. For example, in the case of OAuth2, the response should look like the following:
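The response itself isn't captured in this note, but judging from RFC 6750 it presumably resembles:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="example",
                  error="invalid_token",
                  error_description="The access token expired"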
"The basic principle behind REST status code conventions is that a status code must make the client aware of what is going on and what the server expects the client to do next"
You can fulfill this principle by giving answers to the following questions:

Is there a problem or not?
If there is a problem, on which side is it? On the client or on the server side?
If there is a problem, what should the client do?
The difference is what the server expects the client to do next.
Authentication by schemes outside of RFC2617 is not supported in HTTP status codes and are not considered when deciding whether to use 401 or 403.
What does "are not considered when deciding whether to use 401 or 403" mean exactly? What exactly should not be considered, and what exactly should be considered instead? In other words, how did someone arrive at the conclusion that "if you have your own roll-your-own login process and never use HTTP Authentication, 403 is always the proper response and 401 should never be used."? Why is 403 okay to use for non-HTTP authentication, but not 401?
Oh, I think I understand the difference now.
They should have said:
Authentication by schemes outside of (not defined by) RFC7235: Hypertext Transfer Protocol (HTTP/1.1): Authentication should not use HTTP status 401, because 401 Unauthorized is only defined (by current RFCs) by RFC7235: Hypertext Transfer Protocol (HTTP/1.1): Authentication, and has semantics and requirements (such as the requirement that "A server generating a 401 (Unauthorized) response MUST send a WWW-Authenticate header field containing at least one challenge.") that simply don't make sense or cannot be fulfilled if using a non-HTTP authentication scheme.
403 Forbidden, on the other hand, is defined by the broader HTTP standard, in RFC7231: Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content and RFC7235: Hypertext Transfer Protocol (HTTP/1.1): Authentication.
In conclusion, if you have your own roll-your-own login process and never use HTTP Authentication, 403 is always the proper response and 401 should never be used.
See also my comments in https://hyp.is/p1iCnnowEeyUPl9PxO8BuQ/www.rfc-editor.org/rfc/rfc7235