997 Matching Annotations
  1. Mar 2021
    1. With all this “monetization” happening around Trailblazer, we will also make sure that all free and paid parts of the project mature and maintain an LTS - or long-term support - status. This is good news for all you users out there who have been scared to use gems from this project, not knowing whether they are being maintained, whether they will break your code in the future, or whether they will get your developers addicted and then cut off the supply chain. Trailblazer 2.1 onwards is LTS, and the last 1 ½ years of collaboration have proven that.
    1. seems like an interesting talk on k8s

      I've listened to half of it. The "Builders and Operators" here refers to ops/operations people, not the operator in a k8s controller. I'll come back to it when I get the chance.


      Pairing this with deploying a cluster myself using kubeadm might be worthwhile.

    1. If you want to compile it yourself you can pass --with-features=huge to the configure script. Note, however, this does not enable the different language bindings, because those are mostly optional, and the various GUIs also need to be enabled specifically, because you can have only one GUI.

      This explains why the standard vim package on Ubuntu doesn't have GUI support. (I was going to say it's because the package wouldn't know which GUI you needed, but it could probably tell from the Ubuntu variant: GNOME, KDE, etc. More likely it's because it wouldn't know whether you wanted GUI support at all.)

      Found the answer to that: https://hyp.is/NyJRxIgqEeuNmWuaScborw/askubuntu.com/questions/345593/how-to-build-vim-with-gui-option-from-sources

      so you have to install a different package with GUI support, like vim-gtk or vim-athena

    1. Very often in these monorepos, packages are so incredibly specific in functionality that the question becomes: why even have a separate package at all if it’s tightly coupled? Can you use these packages independently, or are they tied to specific versions of other packages in the monorepo? It’ll probably be easier to remove the mask and just work as a monolith.
    1. The number one problem that I see developers have when practicing test-first development that impedes them from refactoring their code is that they over-specify behavior in their tests. This leads developers to write more tests than are needed, which can become a burden when refactoring code.
    1. much software requires continuous changes to meet new requirements and correct bugs, and re-engineering software each time a change is made is rarely practical.
    1. Microlibraries are easier to understand, develop and test. They make it easier for new people to get involved and contribute. They reduce the distinction between a “core module” and a “plugin”, and increase the pace of development in D3 features.
    2. Small files are nice, but modularity is also about making D3 more fun.
    1. Unfortunately, given how widely used concat_javascript_sources is, this required changing a lot of tests. It would be nice if we could remove some of the duplication in these tests (so that similar changes would not require updating this many tests), but that can come in another PR.
    1. I don't understand why this isn't being considered a bigger deal by maintainers/the community. Don't most Rails developers use SCSS? It's included by default in a new Rails app, along with sprockets 4. I am mystified how anyone is managing to debug CSS in Rails at all these days; that this issue is being ignored makes sprockets seem like abandonware to me, or makes me wonder if nobody else is using sprockets 4, or what!
    2. Meh... as I said earlier, I think using Webpack is the recommended way now. Another issue is there is no way to generate source maps in production.
    3. sprockets 4 makes Chrome browser identification of SCSS css lines _worse_
    4. But maybe few are still using sprockets at all, for JS or (S)CSS anymore? Hard to say.
    5. If I can't do something to change the sprockets 4 debugging experience I am seeing, I am going to probably downgrade back to sprockets 3. I am finding it impossible to develop CSS the ways I am used to.
    6. Is there a PR to... something? sassc-rails? That would make the patch not necessary? (I don't know if there's any good way to monkey-patch that in, I think you have to fork? So some change seems required...) Should the defaults be different somehow? This is very difficult to figure out.
    7. Is there a PR to... something? sassc-rails?
    1. Any updates on this one? It makes debugging JS and CSS in the web inspector next to impossible when you can't get any help finding the offending code in your own source files.
    1. Want to know how to build a taxi app that will become the next Uber or Carb? It is a reasonable question considering how convenient and cost-effective it is to use a taxi instead of maintaining your own vehicle. The best way for a cab company to ensure this convenience for customers is to build a taxi booking app.
    1. The HTML5 form validation techniques in this post only work on the front end. Someone could turn off JavaScript and still submit jank data to a form with the tightest JS form validation. To be clear, you should still do validation on the server.
    1. Therefore client side validation should always be treated as a progressive enhancement to the user experience; all forms should be usable even if client side validation is not present.
    2. It's important to remember that even with these new APIs client side validation does not remove the need for server side validation. Malicious users can easily work around any client side constraints, and HTTP requests don't have to originate from a browser.
    3. Since you have to have server side validation anyway, if you simply have your server side code return reasonable error messages and display them to the end user, you have a built-in fallback for browsers that don't support any form of client side validation.
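
      A minimal Rails sketch of that server-side fallback (the User model and SignupsController here are hypothetical names, just for illustration): the model owns the validation rules, and the controller re-renders the form with the model's error messages, which works whether or not the browser did any client-side validation.

      class User < ApplicationRecord
        # Server-side rules; these run no matter what the client sent.
        validates :email, presence: true, format: { with: URI::MailTo::EMAIL_REGEXP }
        validates :age, numericality: { only_integer: true, greater_than_or_equal_to: 18 }
      end

      class SignupsController < ApplicationController
        def create
          @user = User.new(params.require(:user).permit(:email, :age))
          if @user.save
            redirect_to @user
          else
            # Fallback for browsers without client-side validation:
            # re-render the form and show @user.errors.full_messages.
            render :new, status: :unprocessable_entity
          end
        end
      end
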
  2. afarkas.github.io afarkas.github.io
    1. Webshim is also more than a polyfill, it has become a UI component and widget library. Webshim enables a developer to also enhance HTML5 capable browsers with more highly customizable, extensible and flexible UI components and widgets.

      And now that it's deprecated (presumably due to no longer needing these polyfills), not only do the polyfills go away (no longer maintained), but also these unrelated "extras" that some of us may have been depending on are now going away with no replacement ...

      If those were in a separate package, then there would have been some chance of the "extras" package being updated to work without the base webshims polyfills.

      In particular, I was using $.webshims.addCustomValidityRule, which adds something that you can't do in plain HTML5 (as far as I can tell), so it isn't a polyfill...

    1. Ci taatu guy googu la jigéeni Ajoor yi di jaaye sanqal.

      It is under this baobab that the women from Kayor sell millet semolina.

      ci -- close; at @, in, on, inside, to.

      taat+u (taat) wi -- base, bottom, foundation, buttocks.

      guy gi -- baobab. 🌴

      googu -- that (closeness).

      la -- (?).

      jigéen+i (jigéen) bi ji -- sister versus brother; woman as opposed to man. 👩🏽

      ajoor bi -- person from Kayor.

      yi -- the (plural).

      di -- be; mark of the imperfective affirmative not inactual.

      jaay+e (jaay) v. -- sell.

      sanqal si -- millet semolina. 🌾

    2. Peñe, kenn du ko able.

      A comb, no one lends it out.

      peñe bi -- (French) comb.

      kenn -- no one.

      du -- to be (negative). ➖

      ko -- it.

      able v. -- to lend.

    3. Sëriñ boobu aj na daaw, doomam a ko wuutu léegi.

      This marabout died last year; it is his son who is replacing him now.

      sëriñ bi -- marabout.

      boobu -- this.

      aj (Arabic: Hajj) v. -- make the pilgrimage to Mecca. 🕋; deceased ☠️ (for a religious personality).

      na -- he (?).

      daaw n. -- last year. 🗓

      doom+am (doom) ji -- child by descent 👶🏽; doll🪆; to have a child.

  3. Feb 2021
    1. Do you have collaborators who could have generated keys and sold them on their own? DIG's Steam keys and other stores' Steam keys must have some source, after all. Keys don't generate themselves, and only your accounts should be able to request them. This particular game was in Bunch Keys Indie Wizardry Bundle. I assume you had a proper contract for that. Maybe DIG or an intermediary bought 50-200 copies of it?
    2. It isn't stealing, because you or an associate must have generated and given them the keys in some way or another? Ideally you would ask a DIG bundle buyer to show you their key for your game, so you can figure out what key request batch it came from, and then you can scratch your head and wonder who you gave those keys to and what journey they took afterwards.
    1. For branching out a separate path in an activity, use the Path() macro. It’s a convenient, simple way to declare alternative routes

      Seems like this would be a very common need: once you switch to a custom failure track, you want it to stay on that track until the end!!!

      The problem is that in a Railway, everything automatically has 2 outputs. But we really only need one (which is exactly what Path gives us). And you end up fighting the defaults when there are the automatic 2 outputs, because you have to remember to explicitly/verbosely redirect all of those outputs or they may end up going somewhere you don't want them to go.

      The default behavior of everything going to the next defined step is not helpful for doing that, and in fact is quite frustrating because you don't want unrelated steps to accidentally end up on one of the tasks in your custom failure track.

      And you can't use fail for custom-track steps because that breaks magnetic_to for some reason.

      I was finding myself very in need of something like this, and was about to write my own DSL, but then I discovered this. I still think it needs a better DSL than this, but at least they provided a way to do this. Much needed.

      For this example, I might write something like this:

      step :decide_type, Output(Activity::Left, :credit_card) => Track(:with_credit_card)
      
      # Create the track, which would automatically create an implicit End with the same id.
      Track(:with_credit_card) do
          step :authorize
          step :charge
      end
      

      I guess that's not much different than theirs. The main improvement is that it avoids the ugly need to specify end_id/end_task.

      But that wouldn't actually be enough either in this example, because you would actually want to have a failure track there and a path doesn't have one ... so it sounds like Subprocess and a new self-contained ProcessCreditCard Railway would be the best solution for this particular example (Subprocess is the ultimate in flexibility and gives us everything we need; see the sketch at the end of this note).


      But what if you had a path that you needed to direct to from 2 different tasks' outputs?

      Example: I came up with this, but it takes a lot of effort to keep my custom path/track hidden/"isolated" and prevent other tasks from automatically/implicitly going into those steps:

      class Example::ValidationErrorTrack < Trailblazer::Activity::Railway
        step :validate_model, Output(:failure) => Track(:validation_error)
        step :save,           Output(:failure) => Track(:validation_error)
      
        # Can't use fail here or the magnetic_to won't work and  Track(:validation_error) won't work
        step :log_validation_error, magnetic_to: :validation_error,
          Output(:success) => End(:validation_error), 
          Output(:failure) => End(:validation_error) 
      end
      
      o = Example::ValidationErrorTrack; puts Trailblazer::Developer.render o
      Reloading...
      
      #<Start/:default>
       {Trailblazer::Activity::Right} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=validate_model>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=validate_model>
       {Trailblazer::Activity::Left} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=save>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=save>
       {Trailblazer::Activity::Left} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<End/:success>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Left} => #<End/:validation_error>
       {Trailblazer::Activity::Right} => #<End/:validation_error>
      #<End/:success>
      
      #<End/:validation_error>
      
      #<End/:failure>
      

      Now attempt to do it with Path... Does the Path() have an ID we can reference? Or maybe we just keep a reference to the object and use it directly in 2 different places?

      class Example::ValidationErrorTrack::VPathHelper1 < Trailblazer::Activity::Railway
         validation_error_path = Path(end_id: "End.validation_error", end_task: End(:validation_error)) do
          step :log_validation_error
        end
        step :validate_model, Output(:failure) => validation_error_path
        step :save,           Output(:failure) => validation_error_path
      end
      
      o=Example::ValidationErrorTrack::VPathHelper1; puts Trailblazer::Developer.render o
      Reloading...
      
      #<Start/:default>
       {Trailblazer::Activity::Right} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=validate_model>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=validate_model>
       {Trailblazer::Activity::Left} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=save>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<End/:validation_error>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=save>
       {Trailblazer::Activity::Left} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<End/:success>
      #<End/:success>
      
      #<End/:validation_error>
      
      #<End/:failure>
      

      It's just too bad that:

      • there's not a Railway helper in case you want multiple outputs, though we could probably create one pretty easily using Path as our template
      • we can't "inline" a separate Railway activity (Subprocess "nests" it rather than "inlines" it)
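
      Following up on the Subprocess idea above, here is a rough, unverified sketch of the self-contained ProcessCreditCard Railway (the wiring reuses the same step/Output/Track/magnetic_to options as the snippets above; treat the details as my guess rather than the documented solution):

      # Nested, self-contained Railway: it gets its own success and failure tracks.
      class ProcessCreditCard < Trailblazer::Activity::Railway
        step :authorize
        step :charge
      end

      class Payment < Trailblazer::Activity::Railway
        step :decide_type, Output(:failure) => Track(:credit_card)
        step :direct_debit
        # Subprocess "nests" the whole Railway as one task on the custom track.
        step Subprocess(ProcessCreditCard),
          magnetic_to: :credit_card,
          Output(:success) => End(:success),
          Output(:failure) => End(:failure)
      end
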
    2. step :direct_debit

      I don't think we would/should really want to make this the "success" (Right) path and :credit_card be the "failure" (Left) track.

      Maybe it's okay to repurpose Left and Right for something other than failure/success ... but only if we can actually change the default semantic of those signals/outputs. Is that possible? Maybe there's a way to override or delete the default outputs?

    1. Personally, I'm starting to think that the feature where it automatically adds xray.js to the document is more trouble than it's worth. I propose that we remove that automatic feature and just make it part of the install instructions that you need to add this line to your template/layout: <%= javascript_include_tag 'xray', nonce: true if Rails.env.development? %>
    1. now that I realize how easy it is to just manually include this in my app: <%= javascript_include_tag 'xray', nonce: true if Rails.env.development? %> I regret even wasting my time getting it to automatically look for and add a nonce to the auto-injected xray.js script
    2. This is failing CI because CI is testing against Rails < 6. I think the appropriate next steps are:

      • Open a separate PR to add Rails 6 to the CI matrix
      • Update this PR to only run CSP-related test code for Rails >= 6.0.0

      Can you help with either or both of those?
    1. At work, we often mention "throwing something over the fence" and "wrong rock" so there is (to us) a proverbial fence and a proverbial wrong rock.
    1. Keep in mind that third-party code with references to other files also processed by the asset pipeline (images, stylesheets, etc.) will need to be rewritten to use helpers like asset_path.
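
      For example (a hedged sketch; the file and image names are made up): a vendored stylesheet that references an image by a plain relative path would need to go through an ERB asset helper (or the Sass equivalents) so it picks up the fingerprinted path:

      /* vendor/assets/stylesheets/widget.css -- before */
      .widget { background-image: url(widget-bg.png); }

      /* app/assets/stylesheets/widget.css.erb -- after, using the asset_path helper */
      .widget { background-image: url(<%= asset_path('widget-bg.png') %>); }
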
    1. found that using only the Pascal-provided control structures, the correct solution was given by only 20% of the subjects, while no subject wrote incorrect code for this problem if allowed to write a return from the middle of a loop.
    2. computers theoretically need only one machine instruction (subtract one number from another and branch if the result is negative)
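
      To make the first highlight concrete, a toy Ruby example of my own (not from the study): find the first negative number using only structured control flow, versus returning from the middle of the loop.

      # Structured-control-only version: extra flag and index bookkeeping.
      def first_negative_structured(numbers)
        found = nil
        i = 0
        while i < numbers.length && found.nil?
          found = numbers[i] if numbers[i] < 0
          i += 1
        end
        found
      end

      # Early-return version: simply exit from the middle of the loop.
      def first_negative_early_return(numbers)
        numbers.each do |n|
          return n if n < 0
        end
        nil
      end
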
    1. This is a useful approach to error handling, but please don’t take it to extremes! See my post on “Against Railway-Oriented Programming”.
    1. provide interfaces so you don’t have to think about them

      Question to myself: Is not having to think about it actually a good goal to have? Is it at odds with making intentional/well-considered decisions? Obviously there are still many interesting decisions to make even when using a framework that provides conventions and standardization and makes some decisions for you...

    1. What this means is: I better refrain from writing a new book and we rather focus on more and better docs.

      I'm glad. I didn't like that the book (which is essentially a form of documentation/tutorial) was proprietary.

      I think it's better for documentation and tutorials to be community-driven, free content

    2. The new 2.1 version comes with a few necessary but reasonable changes in method signatures. As painful as that might sound to your Rails-spoiled ears, we preferred to fix design mistakes now before dragging them on forever.
    3. The new call API is much more consistent and takes away another thing we kept explaining to new users - an indicator for a flawed API.
    4. Also, the more I use Trailblazer in projects or even in Trailblazer itself, I feel how needed those new abstractions are.
    1. While Trailblazer offers you abstraction layers for all aspects of Ruby on Rails, it does not missionize you. Wherever you want, you may fall back to the "Rails Way" with fat models, monolithic controllers, global helpers, etc. This is not a bad thing: it allows you to introduce Trailblazer's encapsulation into your app step by step, without having to rewrite it.
    1. compose(Add, x: x, y: 3)

      How is this better than simply:

      Add.run(x: x, y: 3)
      

      ?

      I guess if we did that we would also have to remember to handle merging errors from that outcome into self...
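
      Roughly, the manual version I have in mind would look like this (a hedged sketch based on ActiveInteraction's run/valid?/errors.merge! API; AddThreeManually is a made-up name, Add is the interaction from the article):

      class AddThreeManually < ActiveInteraction::Base
        integer :x

        def execute
          outcome = Add.run(x: x, y: 3)
          # Without compose we have to merge the nested outcome's errors ourselves,
          # otherwise this interaction would look valid even though Add failed.
          unless outcome.valid?
            errors.merge!(outcome.errors)
            return
          end
          outcome.result
        end
      end
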

    2. Why is all this interaction code better? Two reasons: One, you can reuse the FindAccount interaction in other places, like your API controller or a Resque task. And two, if you want to change how accounts are found, you only have to change one place.

      Pretty weak arguments though...

      1. We could just as easily have used a plain object or module to extract this for easy reuse, keeping it in only one place (avoiding duplication).
    3. For this one we'll define a helper method to handle raising the correct errors. We have to do this because calling .run! would raise an ActiveInteraction::InvalidInteractionError instead of an ActiveRecord::RecordNotFound. That means Rails would render a 500 instead of a 404.

      True, but why couldn't it handle this for us?
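
      For reference, the helper the article describes is roughly this (a hedged reconstruction, not their exact code):

      def find_account!(id)
        outcome = FindAccount.run(id: id)
        return outcome.result if outcome.valid?

        # Translate the interaction failure into the exception Rails maps to a 404.
        raise ActiveRecord::RecordNotFound, outcome.errors.full_messages.to_sentence
      end
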

    1. No one has requested it before so it's certainly not something we're planning to add.
    2. I'm sure there will be a few other people out there who eventually want something like this, since Interactions are actually a great fit for enforcing consistency in data structures when working with a schemaless NoSQL store, but obviously it's still a bit of a niche audience.
    3. To give a little more context, structures like this often come up in my work when dealing with NoSQL datastores, especially ones that rely heavily on JSON, like Firebase, where a record's unique ID isn't part of the record itself, just a key that points to it. I think most Ruby/Rails projects tend towards use cases where these sorts of datastores aren't appropriate/necessary, so it makes sense that this wouldn't come up as quickly as other structures.
    1. Consequently, you act irresponsibly when you adopt any programming practice simply because "that's the way you're supposed to do things."
    2. My point is that you should not program blindly. You must understand the havoc a feature or idiom can wreak. In doing so, you're in a much better position to decide whether you should use that feature or idiom. Your choices should be both informed and pragmatic.
    3. And just because a feature or idiom is commonly used does not mean you should use it either.
    1. I do think it's a common pattern that should be solved, and I am probably going to try and solve it as a Gem as opposed to simply writing code that we use in our code base
    2. with ActiveForm-Rails, validation is the responsibility of the form and not of the models. There is no need to synchronize errors from the form to the models and vice versa.

      But if you intend to save to a model after the form validates, then you can't escape the models' validations:

      either you check that the models pass their own validations ahead of time (like I want to do, and I think @mattheworiordan was wanting to do), or you have to accept that one of the following outcomes is possible/inevitable if the models' own validations fail:

      1. if you use object.save then it may silently fail to save
      2. if you use object.save then it will fail to save and raise an error

      Are either of those outcomes acceptable to you? To me, they seem not to be. Hence we must also check for / handle the models' validations. Hence we need a way to aggregate errors from both the form object (context-specific validations) and from the models (unconditional/invariant validations that should always be checked by the model), and present them to the user.

      What do you guys find to be the best way to accomplish that?

      I am interested to know what best practices you use / still use today after all these years. I keep finding myself running into this same problem/need, which is how I ended up looking for what the current options are for form objects today...
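
      To illustrate the kind of aggregation I mean, a rough sketch using plain ActiveModel (the names are made up; this isn't from any of these gems): the form validates its own context-specific rules, then runs the model's validations and copies the model's errors onto itself so everything can be shown to the user at once.

      class RegistrationForm
        include ActiveModel::Model

        attr_accessor :user, :terms_accepted

        # Context-specific rule that belongs to the form, not the model.
        validates :terms_accepted, acceptance: true

        # Invariant rules stay in the model; we just surface them here.
        validate :user_must_be_valid

        def save
          return false unless valid?
          user.save
        end

        private

        def user_must_be_valid
          return if user.valid?
          user.errors.full_messages.each { |message| errors.add(:base, message) }
        end
      end
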

    3. DSLs can be problematic for the user since the user has to manage state (e.g. am I supposed to call valid? first or update_attributes?). This is exactly why the #validate is the only method to change state in Reform.
    4. Trust me, I thought a lot about #validate and its semantics, and I am gonna make it even more "SRP" by making Form#errors and #valid? semi-public. All that happens via #validate reducing the possible wrong usage for users.
    5. I apologize for the slow development of Reform after the "explosion" when I released it initially. The reason for this is I changed jobs and didn't use Reform (yet).
    1. Yes, you do face difficult (moral) choices, but you don't care about them. All you care about are the reputation bars. So... let's kill this guy; who cares if he is innocent, this faction needs it or I'm dead. Sounds great on paper but to be honest... you just sit there and do whatever for these reputation bars. If you don't, then you lose.
    1. It's difficult because it's a case-by-case basis - there is no one right answer so it falls into subjective arguments.
    2. Space: Suppose we had infinite memory; then we could cache all the data. But we don't, so we have to decide what is meaningful to cache for the cache to be worth implementing (is a ??K cache size enough for your use case? Should you add more?). It's a balance with the resources available.
    3. Time: Suppose all your data was immutable; then we could cache all the data indefinitely. But this isn't always the case, so you have to figure out what works for the given scenario (a person's mailing address doesn't change often, but their GPS position does).
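
      A toy Ruby sketch of both knobs (entirely my own illustration; fetch_address_from_db is a stand-in for whatever expensive lookup you're caching): space is bounded by max_size (evict the oldest entry), time by ttl (recompute stale entries).

      class TinyCache
        def initialize(max_size:, ttl:)
          @max_size = max_size # space: how many entries we can afford to keep
          @ttl      = ttl      # time: how stale an entry may get before recomputing
          @store    = {}       # key => [value, expires_at]
        end

        def fetch(key)
          value, expires_at = @store[key]
          return value if value && Time.now < expires_at

          value = yield # recompute on a miss or an expired entry
          @store.delete(@store.keys.first) if !@store.key?(key) && @store.size >= @max_size
          @store[key] = [value, Time.now + @ttl]
          value
        end
      end

      cache = TinyCache.new(max_size: 1_000, ttl: 300)
      address = cache.fetch("user:42:mailing_address") { fetch_address_from_db(42) }
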
    1. So the hard and unsolvable problem becomes: how up-to-date do you really need to be?
    2. After considering the value we place, and the tradeoffs we make, when it comes to knowing anything of significance, I think it becomes much easier to understand why cache invalidation is one of the hard problems in computer science

      the crux of the problem is: trade-offs

    3. The non-determinism is why cache invalidation — and that other hard problem, naming things — are uniquely and intractably hard problems in computer science. Computers can perfectly solve deterministic problems. But they can’t predict when to invalidate a cache because, ultimately, we, the humans who design and build computational processes, can’t agree on when a cache needs to be invalidated.
    1. Now if you think about it, PJAX sounds a lot like Turbolinks. They both use JS to fetch server-rendered HTML and put it into the DOM. They both do caching and manage the forward and back buttons. It's almost as if the Rails team took a technique developed elsewhere and just rebranded it.
    1. And honestly, most people prefer the no hassle, especially after wasting too much time dabbling with distros that are "for advanced users" troubleshooting all kinds of dumbass problems that just worked out of the box in many other distros.
    1. This nav bar by Chris Coyier is a great example of something that makes more sense as a flexbox than grid.
    2. Flexbox's strength is in its content-driven model. It doesn't need to know the content up-front. You can distribute items based on their content, allow boxes to wrap which is really handy for responsive design, you can even control the distribution of negative space separately to positive space.
    1. That's a point, but I would say the opposite: when entering credit card data I would prefer to be entirely on the Verified By Visa (PayPal) webpage (with the URL easily visible in the address bar) rather than entering my credit card data in an iframe on someone's website.
  4. Jan 2021
    1. Group Rules from the Admins
      1. NO POSTING LINKS INSIDE OF POST - FOR ANY REASON. We've seen way too many groups become a glorified classified ad & members don't like that. We don't want the quality of our group negatively impacted because of endless links everywhere. NO LINKS
      2. NO POST FROM FAN PAGES / ARTICLES / VIDEO LINKS. Our mission is to cultivate the highest quality content inside the group. If we allowed videos, fan page shares, & outside websites, our group would turn into spam fest. Original written content only
      3. NO SELF PROMOTION, RECRUITING, OR DM SPAMMING. Members love our group because it's SAFE. We are very strict on banning members who blatantly self promote their product or services in the group OR secretly private message members to recruit them.
      4. NO POSTING OR UPLOADING VIDEOS OF ANY KIND. To protect the quality of our group & prevent members from being solicited products & services - we don't allow any videos because we can't monitor what's being said word for word. Written post only.

      Wow, that's strict.

    1. We informed and documented. We made it easy for you to understand the problem and also to take action if you disagreed. I hope you didn’t read https://linuxmint-user-guide.readthedocs.io/en/latest/snap.html#how-to-install-the-snap-store-in-linux-mint-20. I can’t understand how it could be simpler.
    2. Is it harder to enable it in Mint than it is to disable it in Ubuntu? Not at all. Is how to enable it better documented in Mint than how to disable it in Ubuntu? Absolutely: https://linuxmint-user-guide.readthedocs.io/en/latest/snap.html.
    3. We don’t do politics, and we certainly don’t do religion. You’re bringing these here by using terms such as “politicians” or “evil”.

      Does "evil" refer to religion? Or perhaps they meant "evil" in a more general way, as a more extreme version of "bad".

    1. Ubuntu also supports ‘snap’ packages which are more suited for third-party applications and tools which evolve at their own speed, independently of Ubuntu. If you want to install a high-profile app like Skype or a toolchain like the latest version of Golang, you probably want the snap because it will give you fresher versions and more control of the specific major versions you want to track.
    1. This is open-source. You can always fork and maintain that fork yourself if you feel that's warranted. That's how this project started in the first place, so I know the feeling.
    1. Bordering an element with a single repeating image is something that seems like it should be easy with a property called border-image, but the process for actually doing that is somewhat counter-intuitive. Let’s say, for example, that you want to border an element with a repeating heart icon. You can’t do that with an image of a single heart. Instead, you have to make an image of a “frame” of hearts arranged as you’d like them to appear in the border, then slice that image. [Image: Eight hearts in a “frame” image, enlarged to show detail. The red lines indicate slices.] If you think that sounds preposterous, you’re in good company. There was a lengthy discussion of the subject on Eric Meyer’s blog a few years ago where many frontend development greats weighed in.
    1. CSS Grid Layout excels at dividing a page into major regions or defining the relationship in terms of size, position, and layer, between parts of a control built from HTML primitives.
    1. Besides running contrary to the principles that lead a lot of people to Linux systems (a closed store that you can't alter...automatic updates you have no control over....run by just the one company)
    2. If upstream code presumes things will work that don't in snap (e.g. accesses /tmp or /etc) the snap maintainer has to rewrite that code and maintain a fork. Pointless work. Packaging for .deb is a no-brainer.
    3. It's not too complicated but it is an annoyance. I want /etc/hosts, /etc/resolv.conf, /etc/nsswitch.conf, /etc/rc.local and all the standard stuff to work. The heavy lifting is done in the kernel. All they need to do is leave it alone. It's getting harder to make Ubuntu behave like Linux.
    4. if it's not broken, fix it until it is.
    1. If components gain the slot attribute, then it would be possible to implement the proposed behavior of <svelte:fragment /> by creating a component that has a default slot with out any wrappers. However, I think it's still a good idea to add <svelte:fragment /> so everyone who encounters this common use case doesn't have to come up with their own slightly different solutions.
    1. The CardLayout creates a store in context and the Card creates a standardized div container and registers it to the store so that the CardLayout has access to that DOM element. Then in afterUpdate you can move the DOM elements into columns and Svelte will not try to put them back where they go. It's a bit messy but it works.
    1. I don’t find the software slow, I find the startup time for snap packages slow when they start for the first time in a session, but that has been improved, and it’s public that the snapcraft team has been working hard to improve that.
    2. What’s the use of, e.g., snap libreoffice if it can’t access documents on a samba server in my workplace? Should I really re-organize years of storage and work in my office to be able to use snap? A too high price to pay, for the moment.
    3. I’m not a dev either, so no Ubuntu fork, but I will perhaps be forced to look at Debian testing, without some advantages of Ubuntu - but now that Unity is gone (and I deeply regret it), gap would not be so huge anymore…
    4. If folks want to get together and create a snap-free remix, you are welcome to do so. Ubuntu thrives on such contribution and leadership by community members. Do be aware that you will be retreading territory that Ubuntu developers trod in 2010-14, and that you will encounter some of the same issues that led them to embrace snap-based solutions. Perhaps your solutions will be different. .debs are not perfect, snaps are not perfect. Each have advantages and disadvantages. Ubuntu tries to use the strengths of both.
  5. Dec 2020
    1. The only solution that I can see is to ensure that each user gets their own set of stores for each server-rendered page. We can achieve this with the context API, and expose the stores like so: <script> import { stores } from '@sapper/app'; const { page, preloading, session } = stores(); </script> Calling stores() outside component initialisation would be an error.

      Good solution.

    1. This would be cumbersome, and would encourage developers to populate stores from inside components, which makes accidental data leakage significantly more likely.
    2. which makes it much harder to accidentally keep logged-in state visible after a client-side logout
    1. it focuses on compiling non-standard language extensions: JSX, TypeScript, and Flow. Because of this smaller scope, Sucrase can get away with an architecture that is much more performant but less extensible
    1. You can afford to make a proper PR to upstream.
    2. No more waiting around for pull requests to be merged and published. No more forking repos just to fix that one tiny thing preventing your app from working.

      This could be both good and bad.

      potential downside: If people only fix things locally, then they may be less inclined to actually (or also) submit a merge request, and therefore it may be less likely that this ever gets fixed upstream. Which is kind of ironic, considering the stated goal "No more waiting around for pull requests to be merged and published." But if this obviates the need to create a pull request (does it?), then this could backfire / work against that goal.

      Requiring someone to fork a repo and push up a fix commit -- although a little extra work compared to just fixing locally -- is actually a good thing overall, for the community/ecosystem.

      Ah, good, I see they touched on some of these points in the sections:

      • Benefits of patching over forking
      • When to fork instead
    1. locked and limited conversation to collaborators

      Why do they punish the rest of us (we can't even add a thumbs-up reaction) just because someone was "talking too much" or something on this issue?

  6. Nov 2020