19,785 Matching Annotations
  1. Aug 2022
    1. you can also replicate the bind:this syntax if you please: Wrapper.svelte <script> let root export { root as this } </script> <div bind:this={root} />

      This lets the caller use it like this: <Wrapper bind:this={root} />

      in the same way we can already do this with elements: <div bind:this=
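
      Putting the two quoted snippets together as a formatted sketch (Svelte; this just restates the quote, and whether `bind:this` on a component resolves to a prop exported as `this` may depend on the Svelte version — the `<slot />` is my addition so the wrapper wraps something):

      ```
      <!-- Wrapper.svelte -->
      <script>
        let root
        // expose the internal element reference under the prop name "this"
        export { root as this }
      </script>

      <div bind:this={root}>
        <slot />
      </div>

      <!-- Consumer.svelte (hypothetical) -->
      <script>
        import Wrapper from './Wrapper.svelte'
        let root
      </script>

      <Wrapper bind:this={root} />
      ```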

  2. Jul 2022
    1. Patrician IV is an overhauling upgrade to Patrician III; so if you have not played the previous games in the Patrician series, starting with IV is really all you need. Also, the game of Patrician is very straightforward and addicting, so playing previous versions won't offer you anything unseen in Patrician IV.
    2. Onto the game itself.

      onto

    1. Don't worry if your project isn't quite ready for Plug'n'Play just yet! This guide will let you migrate without losing your node_modules folder. Only in a later optional section we will cover how to enable PnP support, and this part will only be recommended, not mandatory. Baby steps!
    1. Process Substitution is something everyone should be using regularly! It is super useful. I do something like vimdiff <(grep WARN log.1 | sort | uniq) <(grep WARN log.2 | sort | uniq) every day.

      underused

    1. Always use a while read construct: find . -name "*.txt" -print0 | while read -d $'\0' file do …code using "$file" done The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.
    2. What ever you do, don't use a for loop: # Don't do this for file in $(find . -name "*.txt") do …code using "$file" done Three reasons: For the for loop to even start, the find must run to completion. If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names. Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.
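
      A runnable sketch of the recommended construct (the loop body is just an illustration; `IFS=` and `-r` are my additions to keep odd filenames intact, and `-d ''` is equivalent to the quoted `-d $'\0'`):

      ```
      find . -name "*.txt" -print0 |
      while IFS= read -r -d '' file; do
          # ...code using "$file"...
          printf 'found: %s\n' "$file"
      done
      ```

      One caveat: because the loop sits on the right-hand side of a pipe it runs in a subshell, so variables set inside it won't survive past the loop.
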
    1. $0 can be set to an arbitrary value by the caller. On the flip side, $BASH_SOURCE can be empty, if no named file is involved; e.g.: echo 'echo "[$BASH_SOURCE]"' | bash
    2. While this warning is helpful, it could be more precise, because you won't necessarily get the first element: It is specifically the element at index 0 that is returned, so if the first element has a higher index - which is possible in Bash - you'll get the empty string; try 'a[1]='hi'; echo "$a"'.
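
      A quick demonstration of the quoted point (bash):

      ```
      a[1]='hi'
      echo "${a[1]}"   # -> hi
      echo "$a"        # -> empty: bare $a means ${a[0]}, which was never set
      echo "${a[@]}"   # -> hi (all elements, regardless of index)
      ```
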
    1. Pre and post commands with matching names will be run for those as well (e.g. premyscript, myscript, postmyscript)

      This could be confusing behavior if running a script does something extra and you don't know why. Someone might look at the definition of myscript, not see the additional commands, and wonder how/why they are running. The premyscript might get lost in a long, unsorted script list.

    2. Since npm@1.1.71, the npm CLI has run the prepublish script for both npm publish and npm install, because it's a convenient way to prepare a package for use (some common use cases are described in the section below). It has also turned out to be, in practice, very confusing. As of npm@4.0.0, a new event has been introduced, prepare, that preserves this existing behavior. A new event, prepublishOnly has been added as a transitional strategy to allow users to avoid the confusing behavior of existing npm versions and only run on npm publish (for instance, running the tests one last time to ensure they're in good shape).
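
      A hypothetical package.json sketch of both quotes: pre/post scripts wrap the same-named script when you run `npm run myscript`, while prepare and prepublishOnly split the old prepublish behavior (prepare runs on a plain `npm install` and before `npm publish`; prepublishOnly runs only before `npm publish`):

      ```
      {
        "scripts": {
          "premyscript": "echo before myscript",
          "myscript": "echo myscript itself",
          "postmyscript": "echo after myscript",
          "prepare": "echo runs on install and publish",
          "prepublishOnly": "echo runs only on publish"
        }
      }
      ```
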
    1. This option wasn’t offered by the library, but that doesn’t have to stop us. Isn’t that fun?
    2. Here’s a quick blog post about a specific thing (making FactoryBot.lint more verbose) but actually, secretly, about a more general thing (taking advantage of Ruby’s flexibility to bend the universe to your will). Let’s start with the specific thing and then come back around to the general thing.
    1. Steer, of course, can also be a noun that refers to male cattle. This meaning is unrelated to the expression steer clear.
    1. The amount of time wasted on this is ridiculous. Thanks. This is about the only thing that worked. Why in the world this wouldn't "just work" by defining the default url options in Rails config/environments/test.rb is beyond me.
    1. The goal of this project is to have a single gem that contains all the helper methods needed to resize and process images. Currently, existing attachment gems (like Paperclip, CarrierWave, Refile, Dragonfly, ActiveStorage, and others) implement their own custom image helper methods. But why? That's not very DRY, is it? Let's be honest. Image processing is a dark, mysterious art. So we want to combine every great idea from all of these separate gems into a single awesome library that is constantly updated with best-practice thinking about how to resize and process images.
    1. It is sublimely annoying to have to configure the exact same parameters in config/environments, spec/spec_helper.rb and again here... all in marginally different ways (with 'http://' or without, with port number or port specified separately). Even Capybara.configure syntax can't seem to stay consistent to itself between versions...
    1. It really only takes one head scratching issue to suck up all the time it saves you over a year, and in my experience these head scratchers happen much more often than once a year. So in that sense it's not worth it, and the first time I run into an issue with it, I disable it completely.
    2. It feels like « removing spring » is one of those unchallenged truths like « always remove Turbolinks » or « never use fixtures ». It also feels like a confirmation bias when it goes wrong.

      "unchallenged truths" is not really accurate. More like unchallenged assumption.

    3. I may have had to turn it off and on again a few times as a debugging technique when I had no other ideas about what to do.
    1. Thanks for your making your first contribution to Cucumber, and welcome to the Cucumber committers team! You can now push directly to this repo and all other repos under the cucumber organization! In return for this generous offer we hope you will: Continue to use branches and pull requests. When someone on the core team approves a pull request (yours or someone else's), you're welcome to merge it yourself. Commit to setting a good example by following and upholding our code of conduct in your interactions with other collaborators and users. Join the community Slack channel to meet the rest of the team and make yourself at home. Don't feel obliged to help, just do what you can if you have the time and the energy. Ask if you need anything. We're looking for feedback about how to make the project more welcoming, so please tell us!
    1. A more conservative workaround is to find the gems that are causing issues and list them at the top of your Gemfile.

      good solution ... except that it didn't help/work

    2. A good way to debug what is causing these is to put this at the top of your Gemfile:
    1. These directives are inherited from the previous configuration level if and only if there are no proxy_set_header directives defined on the current level.

      This conditional rule for inheritance is different than most other apps/contexts. Usually it just always inherits, and any local config at the current level gets merged with or overrides what is inherited.
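
      A hypothetical nginx fragment illustrating the rule and the note above (`backend` is a made-up upstream):

      ```
      http {
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;

          server {
              location /inherits/ {
                  # no proxy_set_header here, so BOTH headers above are inherited
                  proxy_pass http://backend;
              }

              location /replaces/ {
                  # defining ANY proxy_set_header here means nothing is inherited
                  # from http{}; only this header (plus nginx's built-in defaults) is sent
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_pass http://backend;
              }
          }
      }
      ```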

    1. By default, this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d.

      '

    1. It is "guaranteed" as long as you are on the default network. 172.17.0.1 is no magic trick, but simply the gateway of the network bridge, which happens to be the host. All containers will be connected to bridge unless specified otherwise.
    2. For example I use docker on windows, using docker-toolbox (OG) so that it has less conflicts with the rest of my setup and I don't need HyperV.
    1. Even with OverloadedRecordDot, Haskell’s records are still bad, they’re just not awful.
    1. You are context switching between new features and old commits that still need polishing.
    2. If the code review process is not planned right, it could have more cost than value.
    1. Defects found in peer review are not an acceptable rubric by which to evaluate team members. Reports pulled from peer code reviews should never be used in performance reports. If personal metrics become a basis for compensation or promotion, developers will become hostile toward the process and naturally focus on improving personal metrics rather than writing better overall code.
    1. raise StandardError.new "No authentication is configured for ActiveStorage"

      forces the issue by requiring end-dev to edit/override this method to avoid getting this error

    2. # ActiveStorage defaults to security via obscurity approach to serving links # If this is acceptable for your use case then this authenticable test can be # removed. If not then code should be added to only serve files appropriately. # https://edgeguides.rubyonrails.org/active_storage_overview.html#proxy-mode def authenticated? raise StandardError.new "No authentication is configured for ActiveStorage" end
    1. Stop autoclosing of PRs While the idea of cleaning up the the PRs list by nudging reviewers with the stale message and closing PRs that didn't got a review in time cloud work for the maintainers, in practice it discourages contributors to submit contributions. Keeping PRs open and not providing feedback also doesn't help with contributors motivation, so while I'm disabling this feature of the bot we still need to come up with a process that will help us to keep the number of PRs in check, but celebrate the work contributors already did instead of ignoring it, or dismissing in the form of a "stale" alerts, and automatically closing PRs.

      Yes!! Thank you!!

      typo: cloud work -> could work

    1. I don't understand why it should be so hard to keep issues open / reopen them. That's just going to cause people to open a duplicate issue/PR — or (if they notice in time) cause people to add extra "not stale" noise when the bot warns it's about to be closed. Wouldn't it be preferable to keep the discussion together in one place instead of spreading across duplicate issues? (Similarly, moving the meta conversation about an issue out to a completely separate system (Discord) seems like the wrong direction, because it wouldn't be visible to/discoverable by those arriving at the closed issue.) I get how it's useful to have stale issues not cluttering the list. But if interest/activity later picks up again, then "stale" is no longer accurate and its status should be automatically updated to reflect its newfound freshness... like it did back here:
    2. ```
       ActiveSupport.on_load :active_storage_blob do
         def accessible_to?(accessor)
           attachments.includes(:record).any? { |attachment| attachment.accessible_to?(accessor) } || attachments.none?
         end
       end

       ActiveSupport.on_load :active_storage_attachment do
         def accessible_to?(accessor)
           record.try(:accessible_to?, accessor)
         end
       end

       ActiveSupport.on_load :action_text_rich_text do
         def accessible_to?(accessor)
           record.try(:accessible_to?, accessor)
         end
       end

       module ActiveStorage::Authorize
         extend ActiveSupport::Concern

         included do
           before_action :require_authorization
         end

         private

         def require_authorization
           head :forbidden unless authorized?
         end

         def authorized?
           @blob.accessible_to?(Current.identity)
         end
       end

       Rails.application.config.to_prepare do
         ActiveStorage::Blobs::RedirectController.include ActiveStorage::Authorize
         ActiveStorage::Blobs::ProxyController.include ActiveStorage::Authorize
         ActiveStorage::Representations::RedirectController.include ActiveStorage::Authorize
         ActiveStorage::Representations::ProxyController.include ActiveStorage::Authorize
       end
       ```

      Interesting, rather clean approach, I think

    3. I'm partial to the solution originally proposed. It follows a pattern already established in Rails. For example, using an application-specific ApplicationStorageController which inherits from ActiveStorage::BaseController is very similar to the ApplicationRecord which inherits from ActiveRecord::Base or ApplicationJob which inherits from ActiveJob::Base.
    4. I think this is important, and I'd love to help making ActiveStorage a more secure place.
    5. it should be normal for production apps to add authentication and authorization to their ActiveStorage controllers. Unfortunately, there are 2 possible ways to achieve it currently: Not drawing ActiveStorage routes and do everything by yourself Override/monkey patch ActiveStorage controllers None of them is ideal because in the end you can't benefit from Rails upgrades (bug fixes, etc) so the intention of this PR is to let people define a parent controller (inspired by Devise, maybe @carlosantoniodasilva can tell us his experience on this feature) so that people can add authentication and authorization in a single place and still benefit from the default controllers.
    1. Create a new controller to override the original: app/controllers/active_storage/blobs_controller.rb

      Original comment:

      I've never seen monkey patching done quite like this.

      Usually you can't just "override" a class. You can only reopen it. You can't change its superclass. (If you needed to, you'd have to remove the old constant first.)

      Rails has already defined ActiveStorage::BlobsController!

      I believe the only reason this works:

      class ActiveStorage::BlobsController < ActiveStorage::BaseController

      is because it's reopening the existing class. We don't even need to specify the < Base class. (We can't change it, in any case.)

      They do the same thing here: - https://github.com/ackama/rails-template/pull/284/files#diff-2688f6f31a499b82cb87617d6643a0a5277dc14f35f15535fd27ef80a68da520

      Correction: I guess this doesn't actually monkey patch it. I guess it really does override the original from activestorage gem and prevent it from getting loaded. How does it do that? I'm guessing it's because activestorage relies on autoloading constants, and when the constant ActiveStorage::BlobsController is first encountered/referenced, autoloading looks in paths in a certain order, and finds the version in the app's app/controllers/active_storage/blobs_controller.rb before it ever gets a chance to look in the gem's paths for that same path/file.

      If instead of using autoloading, it had used require_relative (or even require?? but that might have still found the app-defined version earlier in the load path), then it would have loaded the model from activestorage first, and then (possibly) loaded the model from our app, which (probably) would have reopened it, as I originally commented.
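
      A hypothetical sketch of what that shadowing file might look like (the `show` body approximates what the stock redirect controller does in recent Rails versions, and `blob_authorized?` is a made-up hook for the app's own check):

      ```
      # app/controllers/active_storage/blobs_controller.rb
      class ActiveStorage::BlobsController < ActiveStorage::BaseController
        include ActiveStorage::SetBlob

        def show
          return head(:forbidden) unless blob_authorized?

          expires_in ActiveStorage.service_urls_expire_in
          redirect_to @blob.url(disposition: params[:disposition])
        end

        private

        def blob_authorized?
          # application-specific authorization goes here
          true
        end
      end
      ```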

    1. This was a surprise to me, since we generally authenticate the record quite well, but then go on to do something like record.file.url in our view, generating a URL that is permanent and unauthenticated.
    1. meat: https://github.com/musaffa/file_validators/blob/master/lib/file_validators/validators/file_content_type_validator.rb

      Compared to https://github.com/aki77/activestorage-validator, I slightly prefer this because - it has more users and has been battle tested more - is more flexible: can specify exclude as well as allow - has more expansive Readme documentation - is mentioned by https://github.com/thoughtbot/paperclip/blob/master/MIGRATING.md#migrating-from-paperclip-to-activestorage - mentions security: whether or not it's needed, at least this makes extra attempt to be secure by using external tool to check content_type; https://github.com/aki77/activestorage-validator/blob/master/lib/activestorage/validator/blob.rb just uses blob.content_type, which I guess just trusts whatever ActiveStorage gives us (which seems fair too: perhaps this should be kicked up to them to be their concern)

      In fact, it looks like ActiveStorage does do some kind of mime type checking...

      activestorage-6.1.6/app/models/active_storage/blob/identifiable.rb

      ```
      def identify_without_saving
        unless identified?
          self.content_type = identify_content_type
          self.identified = true
        end
      end

      def identify_content_type
        Marcel::MimeType.for download_identifiable_chunk, name: filename.to_s, declared_type: content_type
      end
      ```

    1. Overall, there appears to be no MIME type image/jpg. Yet, in practice, nearly all software handles image files named "*.jpg" just fine.

      Extension != MIME type

    1. It really slows down your test suite accessing the disk. So yes, in principle it slows down your tests. There is a "school of testing" where the developer should isolate the layer responsible for retrieving state, just set some state in memory, and test functionality (as in the Repository pattern). The thing is, Rails is tightly coupled with the implementation logic of state retrieval at the core level and prefers a "school of testing" in which you couple logic with state retrieval to some degree. A good example of this is how models are tested in Rails. You could build an entire test suite calling `FactoryBot.build`, never use `FactoryBot.create`, stub methods all around, and your tests will be lightning fast (like 5s to run your entire test suite). This is highly unproductive to achieve, and I failed many times trying, because I was spending more time maintaining my tests than writing something productive for the business. Or you can take a more pragmatic route and save database records where it is too difficult to just 'build' the factory (e.g. controller tests, association tests, etc.). Same, I would say, for saving the file to disk. Yes, you are right, you could just "not save the file to disk" and save a few milliseconds. But at the same time you will in future stumble upon scenarios where your tests are not passing because the file is not there (e.g. file processing validations). Is it really worth it? I never worked on a project where saving a file to disk would slow down tests significantly enough to be an issue (and I work for a company whose core business is related to file uploading). Especially now that we have SSD drives in every laptop/server it's blazing fast, so at best you would save 1 second for the entire test suite (given you call FactoryBot traits to set/store the file where it makes sense, not every time you build an object).
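
      The trade-off in two lines (sketch, assuming a :user factory):

      ```
      user = FactoryBot.build(:user)   # in memory only: fast, but no id and no DB-backed behavior
      user = FactoryBot.create(:user)  # persisted: slower, needed when the test depends on stored state
      ```
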
    1. # Internal: This is how much Honeybadger cares about Rails developers. :)

      :)

    2. # Some Rails projects include ActionDispatch::TestProcess globally for the # use of `fixture_file_upload` in tests. This is a bad practice because it # includes other methods -- such as #session -- which override existing # methods on *all objects*.
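
      A sketch of the narrower alternative (Rails 6+ exposes fixture_file_upload via ActionDispatch::TestProcess::FixtureFile, so you can include just that module, and only for the spec types that need it):

      ```
      RSpec.configure do |config|
        config.include ActionDispatch::TestProcess::FixtureFile, type: :controller
      end
      ```
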
    1. # This ensures that the pid namespace is shared between the host # and the container. It's not necessary to be able to run spring # commands, but it is necessary for "spring status" and "spring stop" # to work properly. pid: host
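
      In context, this would be a docker-compose service entry along these lines (hypothetical sketch):

      ```
      services:
        web:
          build: .
          pid: host   # share the host PID namespace so `spring status` / `spring stop` work
      ```
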
    1. If you don't use an intermediate variable, you need to protect the / characters in the directory to remove so that they aren't treated as the end of the search text.
    2. If the path in question is at the beginning of the PATH variable, you need to match the colon at the end. This is an annoying caveat which complicates easy generic manipulations of PATH variables.
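
      A sketch of both caveats (the directory is hypothetical; keeping it in a variable and using bash parameter expansion sidesteps the slash-escaping problem you would hit with an inline sed expression such as `sed "s/$remove//"`):

      ```
      remove=/opt/foo/bin
      PATH=${PATH#"$remove:"}        # at the beginning: the colon to match is on the other side
      PATH=${PATH%":$remove"}        # at the end
      PATH=${PATH//":$remove:"/:}    # in the middle
      ```
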
    1. I'm fully serious: If your accounts and data are important, then just don't make such mistakes. Being careful is completely possible.

      Being careful is completely possible.

    1. I can't reverse it, but maybe somebody who understands how Chrome does the decryption can. The ability is there, it's not that Chrome can't decrypt them, it is that Chrome won't decrypt them due to false "security". And if Chrome actually, genuinely can no longer decrypt passwords after they have been restored from backup, then that is a shockingly bad bug in their password manager.
    2. If your security locks you out of your own home just because you changed your trousers, that would be shockingly bad security. If your security permanently locks you out of your accounts because you restored your Chrome settings from backup, how is that any better?
    1. So the correct command to use is findmnt, which is itself part of the util-linux package and, according to the manual: is able to search in /etc/fstab, /etc/mtab or /proc/self/mountinfo
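
      For example (sketch):

      ```
      findmnt --target /home            # which filesystem contains this path?
      findmnt /mnt/backup || echo "not a mount point"
      ```
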
    1. Rails 3 seems to be ignoring my rescue_from handler, so I cannot test my redirect below.

      I have a similar problem too

      404 errors raise ActiveRecord::RecordNotFound to the test

  3. Jun 2022
    1. Data protection authorities have found that the U.S. legal system does not guarantee the same standards of protection as the EU. The situation stems from a set of U.S. laws that allow government organizations to request access to consumers’ personal data from US-based services, regardless of where the data centers or servers are located. In light of this, NOYB filed 101 complaints with European DPAs to find that transferring European users’ data to the U.S. was unlawful. The decisions, which have noted the illegitimacy of the transfers, focus on the analysis of additional technical, contractual and organizational measures.
    1. Users often forget to save their recovery codes when enabling 2FA. If you added an SSH key to your GitLab account, you can generate a new set of recovery codes with SSH:
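
      The command the quote is referring to is, as far as I recall from GitLab's docs, of this form (substitute your own GitLab host):

      ```
      ssh git@gitlab.com 2fa_recovery_codes
      ```
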
    1. A custom component might be interesting for you if your views look something like this: <%= simple_form_for @blog do |f| %> <div class="row"> <div class="span1 number"> 1 </div> <div class="span8"> <%= f.input :title %> </div> </div> <div class="row"> <div class="span1 number"> 2 </div> <div class="span8"> <%= f.input :body, as: :text %> </div> </div> <% end %> A cleaner method to create your views would be: <%= simple_form_for @blog, wrapper: :with_numbers do |f| %> <%= f.input :title, number: 1 %> <%= f.input :body, as: :text, number: 2 %> <% end %>
    1. The problem isn’t Linux, it’s the defective-by-design DRM. The studios demand ridiculous DRM that does nothing to actually stop piracy.
    2. Valve long ago proved that piracy is a service issue. Make it more convenient to pay for something, and people pay. Just look at what they did to bring AAA games to Linux! Apple, Amazon, and others proved it as well when they removed DRM (or never had it in the first place) on digital music purchases! People still paid for music downloads! They figured out how to keep people paying by making subscriptions to pretty much all music cheap and convenient. The service is more convenient than piracy, and you have a useful option for anything you want more permanent than a subscription.
    3. Linux users flood developers on projects on GitHub. On open-source projects where you can actually somehow talk to the developers as an end user. Or maybe on Twitter, if a developer of proprietary software is somehow known and you can contact him on social media. But developers don't talk to first-level customer support of proprietary software like Adobe InDesign or a service like Netflix. But this is where these companies get their data. And they base their decisions on this data.

      .

    4. Linux users flood developers with bugs and requests because we actually know how to debug our systems. The creators then tend to get annoyed at the flood, because even if they resolved them all, it would be spending a lot of energy for less than 1% of their userbase.

      .

    5. there should be more Linux desktop community solidarity
    6. The main problem of the Linux community is that it is divided. I know this division represents freedom of choice, but when your rivals are successful, you must inspect them carefully. And both rivals here (macOS and Windows) get their power from the "less is more" approach. This division in Linux communities makes people turn to their communities when they have problems and never be heard as a big, unified voice. When something goes wrong with other OSes, people start complaining in many forums and support sites, some of them writing to multiple places and others supporting them by saying "yeah, I have that problem, too". In the Linux world, the answers in such forums come as "don't use that shitty distro" or "use that command and circumvent the problem". Long story short: the average Linux user doesn't know that they are still customers with every right to make demands of companies, and that they can get together and speak up louder. Imagine organizing so that most Linux users got together and wrote to Netflix. Maybe not all of them use Netflix, but the number of Linux users is greater than the number of Netflix members. What a domination it would be! But instead we turn to our communities and act like a survival tribe that has to solve all its problems itself.
    7. Big software companies like Adobe or Netflix do two things that are relevant for us and currently go wrong: They analyse the systems their customers use. They don't see their Linux users because we tend to either not use the product at all under Linux (just boot Windows, just use a Fire TV stick and so on) or we use emulators or other tools that basically hide that we actually run Linux. --> The result is that they don't know how many we actually are. They think we are irrelevant because that's what the statistics tell them (they are completely driven by numbers). They analyze the feature requests and complaints they get from their customers. The problem is: Linux users don't complain that much or try to request better Linux support. We usually somehow work around the issues. --> The result is that these companies neither get feature requests for better Linux support nor bug reports from Linux users (because it's not expected to work anyway).
    1. It simplifies things a lot. Valve needs to constantly push updates, so it makes perfect sense to pick Arch.
    2. Mind you, at the time Valve was trying to get developers to make Linux ports of their games, so targeting Debian made some sense in terms of platform stability; this didn't work out well and developers did no such thing. Valve then moved to making WINE work better by spending dev time adding patches and building the Proton layer on top of it. Valve likely moved to an Arch base to get bleeding-edge support for new hardware and for the performance enhancements that come along with it, as they were no longer shackled trying to get developers to make native Linux ports.
    3. manjaro maintaining a slightly different update cycle and overall behavior than upstream arch (I know this is a point of contention, but that's not the point here)
    4. Compare that to bugfixes coming to an Ubuntu LTS or a 6-month release: you might not get the fix before the version is End Of Life, making collaborating difficult & fruitless. Arch is where developers are, so it makes sense given the massive array of software available in the AUR & repos too. It's like a software flea market; occasionally AUR software isn't up to the bar, or theoretically there COULD be a bad actor once every few years, but otherwise it's something truly special.
    5. Bug triage is so much easier & faster on Arch. Everyone is on the same latest version and engaging developers usually lead to fixes that users can consume right away or within a week.
    1. The linux-based open-source mobile operating system Android is not only the most popular mobile operating system in the world, it’s also on the way to becoming a proprietary operating system. How is that?
    2. A free-as-in-freedom re-implementation of Google’s proprietary Android user space apps and libraries.

      .

    1. Additionally, GrapheneOS has only been developed for Google’s Pixel line of phones. Some people are a little hesitant to use a Google phone to de-google their lives.
    2. The main flaw with Lineage is the phone’s bootloader must remain unlocked
    1. Our Camera app provides the system media intents used by other apps to capture images / record videos via the OS provided camera implementation. These intents can only be provided by a system app since Android 11, so the quality of the system camera is quite important.

      .

    1. No, GrapheneOS will remain a non-profit open source project / organization. It will remain an independent organization not strongly associated with any specific company. We partner with a variety of companies and other organizations, and we're interested in more partnerships in the future. Keeping it as an non-profit avoids the conflicts of interest created by a profit-based model. It allows us to focus on improving privacy/security without struggling to build a viable business model that's not in conflict with the success of the open source project.

      .

    2. Using the network-provided DNS servers is the best way to blend in with other users. Network and web sites can fingerprint and track users based on a non-default DNS configuration.
    3. Network and web sites can fingerprint and track users based on a non-default DNS configuration.

      how?

    1. So why do we continue to perpetuate the myth of conventional current flow (CCF) when we have known for a century that current in most electrical and electronic circuits is electron flow (EF)?
    1. What's wrong with a simple, feel good movie?
    2. As has been mentioned this is a take on the Prince and the Pauper story that may not appeal to those who are into art films and like to sit around discussing and dissecting a film's philosophical nuances. If, on the other hand, you simply like a fun story, gorgeous sets, and yes, the occasional over-the-top scene, this can be a thoroughly enjoyable tale of a man who is willing to put the woman he loves ahead of himself.
    3. There are too many people giving this 7+ stars and even 10 stars (unusual to see on IMDb) to believe the legitimacy of the number of 1-star "worst movie ever" reviews. Such paradox is simply difficult to accept as valid.
    4. I almost didn't watch this movie due to the repetitive negative reviews here on IMDb. Usually I find reviews here fairly spot-on. But in this case I am convinced we are living in a generation of viewers who have been raised on so much schlock, sex, violence, blood and foul language that they wouldn't recognize a prime movie if it whacked them with a hammer. Either that or we have a set of the most bogus witch-hunt reviews ever.
    5. Please, feel free to consider the negative reviews as suspect (at the very least)-- and give this film a try. The best and most valid review is your own.
    1. ```
       function download(url, fileName) {
         const link = document.createElement("a");
         const xhr = new XMLHttpRequest();

         xhr.open('GET', url, true);
         xhr.responseType = 'blob';
         xhr.onload = function() {
           file = new Blob([xhr.response], { type: 'application/octet-stream' });
           link.href = window.URL.createObjectURL(file);
           link.download = fileName;
           document.body.appendChild(link);
           link.click();
           setTimeout(() => { link.parentNode.removeChild(link); }, 0);
         }
         xhr.send()
       }

       url = document.querySelector('#player').dataset.mediaUrl;
       fileName = url.replace(/.*(202\d)\/(\d\d)-(\d\d).*\/([^/]*).mp3/, '$1-$2-$3-$4') + '-' +
         document.querySelector('.epHeader h2').textContent + '--' +
         document.querySelector('.epDesc').textContent.replace(/^\s*/g, "").slice(0, 200) + '.mp3';

       download(url, fileName);
       ```

    2. document.querySelector('#player').dataset.mediaUrl.replace(/.*(202\d)\/(\d\d)-(\d\d).*\/([^/]*).mp3/, '$1-$2-$3-$4') + '-' + document.querySelector('.epHeader h2').textContent + '--' + document.querySelector('.epDesc').textContent.replace(/^\s*/g, "").slice(0, 200) + '.mp3'

    1. I know this bug is labeled invalid and I know the devs don't want to address the issue further...but if anyone reads this I'd really like to know if there is some way *advanced* FF users can enable cross-origin downloads. I mean, c'mon, the case against allowing cross-origin downloads is built on the premise that users could unknowingly download a file from a site containing their own personal information (e.g., gmail.com) and save it using a misleading name (e.g. "30off.coupon.txt") AND THEN proceed to another malicious page where they directly go and upload that same file they just downloaded. I mean c'mon. Seriously?? Anyone who's gonna fall for that deserves to lose their personal information. I'm all for browser security, but I think a simple preference in about:config to enable cross-origin a@download is in order. Please consider. Thank you.

      .

    1. The Bugzilla issues don't seem to rule-out the possibility of using CORS for cross-origin download attribute support in the future, but right now using CORS headers does not do anything for the download attribute. It's possible that if other browsers start supporting the attribute, a consensus may yet be reached.

      .

    1. This actually is possible with JavaScript, though browser support would be spotty. You can use XHR2 to download the file from the server to the browser as a Blob, create a URL to the Blob, create an anchor with its href property and set it to that URL, set the download property to whatever filename you want it to be, and then click the link. This works in Google Chrome, but I haven't verified support in other browsers. window.URL = window.URL || window.webkitURL; var xhr = new XMLHttpRequest(), a = document.createElement('a'), file; xhr.open('GET', 'someFile', true); xhr.responseType = 'blob'; xhr.onload = function () { file = new Blob([xhr.response], { type : 'application/octet-stream' }); a.href = window.URL.createObjectURL(file); a.download = 'someName.gif'; // Set to whatever file name you want // Now just click the link you created // Note that you may have to append the a element to the body somewhere // for this to work in Firefox a.click(); }; xhr.send();
    1. Context
    2. A development container allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or runtimes needed for working with a codebase, and to aid in continuous integration and testing.
    1. A GitHub Action and an Azure DevOps Task are available for running a repository's dev container in continuous integration (CI) builds. This allows you to reuse the same setup that you are using for local development to also build and test your code in CI.
    2. Our development container teams across Microsoft and GitHub continue active development on the new Dev Container Specification, and this iteration had several exciting highlights.
    1. Many believe that companies should give more time to employees to contribute to open source, with 79% agreeing or strongly agreeing that companies should give time during work hours to contribute.
    2. time is listed as the biggest barrier to contributing to open source projects

      .

    3. while just 20% have been paid for their contributions to open source, 53% agree or strongly agree that individuals should be paid for open source contributions
    1. At this point, you’ll want to mow your grass 3-5 times. This amounts to roughly once per week. Do this before walking on it.

      How do you mow a lawn without walking on it? :)

      I think they mean "optional" kinds of walking on it other than mowing, but it still seems contradictory.

      I think this one made more sense: https://hyp.is/Hyh4YuhXEeyNCrckBwtGgg/www.backyarddigs.com/lawn-care/how-long-after-planting-grass-seed-can-you-walk-on-it/

      Add in another two or so weeks for the grass to grow tall enough for its first mowing, at which point you have no choice but to walk over the area.

    1. Add in another two or so weeks for the grass to grow tall enough for its first mowing, at which point you have no choice but to walk over the area.

      have to do it

      no other reasonable choice/alternative

  4. www.postgresql.org
    1. all duplicate rows are removed from the result set (one row is kept from each group of duplicates)
    1. these two SELECT clauses are NOT equivalent, so be careful: SELECT DISTINCT(event_id, start_time) FROM ... SELECT DISTINCT event_id, start_time FROM ...
    2. Logically, if you just want a distinct list of event_id values, what order they occur in should be irrelevant. If order does matter, then you should add the start_time to the SELECT list so that there is context for the order.
    3. The ORDER BY clause can only be applied after the DISTINCT has been applied. Since only the fields in the SELECT statement are taken into consideration for the DISTINCT operations, those are the only fields that may be used in the ORDER BY.
    4. an alternative to Matthew's answer is using an aggregate function like MIN or MAX for the sorting: SELECT event_id FROM Rsvp GROUP BY event_id ORDER BY MIN(start_time)

      .
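
      Side by side (using the table and columns from the quotes; the first form is the one PostgreSQL rejects):

      ```
      -- ERROR: for SELECT DISTINCT, ORDER BY expressions must appear in select list
      SELECT DISTINCT event_id FROM Rsvp ORDER BY start_time;

      -- Works: one row per event_id, ordered by each group's earliest start_time
      SELECT event_id FROM Rsvp GROUP BY event_id ORDER BY MIN(start_time);
      ```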

    5. I just went through a small example in my head which helped me understand why Postgres has this seemingly odd restriction on SELECT DISTINCT / ORDER BY columns.

      .

    6. I know this is a rather old question, but
    1. Needing to use ruby2_keywords explicitly for delegation is unfortunate, I wish there would be a more natural way to express delegation in Ruby 2.7. Unfortunately there is not.
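
      The delegation boilerplate being discussed looks roughly like this on Ruby 2.7 (sketch; `target` is a hypothetical method being delegated to):

      ```
      # mark the method so keyword arguments survive the *args round trip
      ruby2_keywords def call_target(*args, &block)
        target(*args, &block)
      end
      ```
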
    2. I once proposed to enable ruby2_keywords by default to preserve compatibility, but this was rejected.

      rejected proposal

      Remove the commit from step 2. We will merge ignoring the failure. Remove the commit from the other, check it passes with the other commit now on main. Merge the other. We will trigger builds for the main branch of affected repositories to check if everything is in order. Steps 5-8 should happen continuously (e.g. one after another but within a short timespan) so that we don't leave a broken main around. It is important to triage that build process and revert if necessary.

      It is important to not leave a broken main around.

    1. With first-class keyword arguments in the language, we don’t have to write the boilerplate code to extract hash options. Unnecessary boilerplate code increases the opportunity for typos and bugs.
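
      The boilerplate in question versus real keyword arguments (sketch):

      ```
      # old style: extract options from a trailing hash
      def create_user(name, options = {})
        admin = options.fetch(:admin, false)
      end

      # first-class keyword arguments
      def create_user(name, admin: false)
      end
      ```
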
  5. May 2022
    1. Confusingly, if the police suspect you of a crime, you can be described as a “suspicious person” and if you constantly suspect others of crimes, you can also be called “suspicious.”
    2. It never makes sense to say “I am suspect that. . . .”
    1. It detects bots/spiders and serves them a clean page

      Seems like a vulnerability of some sort, though I'm not sure what sort...security/liability?

      A user could just set their user agent to be like a bot, and then it would skip the "protections" provided by the cookie consent code?

    1. The shared context worked though thanks! RSpec.shared_context "perform_enqueued_jobs" do around(:each) { |example| perform_enqueued_jobs { example.run } } end RSpec.configure do |config| config.include_context "perform_enqueued_jobs" end

      use case for around

    1. Where around hooks shine is when you want to run an example within a block. For instance, if your database library offers a transaction method that receives a block, you can use an around to cleanly open and close the transaction around the example.
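
      A sketch of that shape (`DB.transaction` stands in for whatever block-based API your database library provides):

      ```
      RSpec.configure do |config|
        config.around(:each, :db) do |example|
          DB.transaction { example.run }
        end
      end
      ```
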
    1. We document the order of hooks, but I don't think we document where in that order we integrate Rails helpers, which makes this confusing. I do sort of think this is a bug, but as we use RSpec to integrate Rails here, and RSpec Core has no distinction that matches before / after teardown, it's sort of luck of the draw. We could possibly use prepend_after for Rails integrations, which would sort of emulate these options.
    1. 1/ It fits into existing spec based testing infrastructure nicely, including running on travis, code coverage using SimpleCov, switching between generating a profile (RubyProf), a benchmark (Benchmark::IPS) or normal test run. 2/ Some of my benchmarks do have expect clauses to validate that things are working before invoking the benchmark.

      Answering the question:

      I don't understand the point of putting it in a spec. What does that gain you over using benchmark-ips the normal way?

    2. At the moment my open source time is much more limited than it used to be so I haven't gotten around to it yet.
    3. FWIW, I've changed my thinking on this a bit.
    4. I think RSpec should provide around(:context)/around(:all). Not because of any particular use case, but simply for API consistency. It's much simpler to tell users "there are 3 kinds of hooks (before, after and around) and each can be used with any of 3 scopes (example, context and suite)". Having some kinds of hooks work with only some kinds of scopes makes the API inconsistent and forces us to add special case code to emit warnings and also write extra documentation for this fact.
    5. That's cool, I get it, it's unpaid open source work :)
    6. I just wanted to mention there was, IMHO, a valid use case for this. It helps add to the validity of the ticket and the design of the feature.
    7. before(:all) do @fiber = Fiber.new do Benchmark.ips do |benchmark| @benchmark = benchmark Fiber.yield benchmark.compare! end end @fiber.resume end
    8. I've been thinking of looking into implementing this in rspec-core, primarily to make the API more consistent (e.g. so that you can combine any scope -- example/context/suite -- with any hook type before/after/around).
    9. In the meanwhile, my was born so I am not going to get back to this issue before a while :)
    10. It really looks like a few lines of code — https://github.com/seanwalbran/rspec_around_all/blob/master/lib/rspec_around_all.rb — which complete the DSL and make up for those 0.1% of the cases like mine.
    11. we routinely choose not to add or expand features we think are a bad idea, or simply that aren't in popular demand, not because we're "saving the dummies' asses" but because adding new features creates a maintenance burden upon ourselves which cannot easily be undone. Once a version of RSpec supports something its there until a next major version which could be a long time away, we have several features already that we don't recommend extensive use of (expect_any_instance_of for example, we'd recommend not using it but we know there is popular demand for it so we maintain despite the extra burden it causes) so we're understandably not keen to increase that number.
    12. Guys, I'm sorry to revive an old discussion, and if there's a new one, point me to it please.
    13. Now I'm puzzled by the apparently biggest obstacle to implementation of this feature: possible misuse. I love ruby community, but sometimes saving the dummies' asses goes a bit too far.
    14. I'm open to considering adding this to core but it's such a rare need (given that you're the first to ever ask for it, and I've never wanted or needed an around(:all) hook) I have a preference for keeping it in an external gem if we can do so w/o hooking into rspec's internals
    15. does the microgem I published work for your use case?
    16. We actually already use this patch: http://myronmars.to/n/dev-blog/2012/03/building-an-around-hook-using-fibers
    1. Sponsorship allows me to focus my efforts on open source software. I also provide professional consulting services.
    1. As for publishing this as an actual gem on rubygems.org... I have enough open source I'm involved in already (or too much, as my wife would probably say) and I'm not really interested in maintaining another gem.
    2. I’ve been looking everywhere for examples of how to use Fibers that are complicated enough to do something useful but simple enough to understand. For an older feature it’s one of the least documented.
    3. I haven't done a lot with Fibers, so having you point out a potential use for them and then walk through it was great.
    1. This is an excellent opportunity to mix compost into the topsoil before sowing the seeds across the yard.
    2. You can cover grass seed with compost, but using too much can block sunlight and oxygen from reaching the seeds during their critical growth period.
    3. Seeds might be able to grow in topsoil without compost, but seeds can’t grow in compost without topsoil.
    1. Giants that prefer the hyphenated spelling—Merriam-Webster, The Chicago Manual of Style, and The New Yorker—have a good reason for doing so. E-mail is a compound noun, made out of two words—“electronic” and “mail.” The e in e-mail is an abbreviation for “electronic,” and it’s used in a lot of other words as well—e-commerce, e-learning, and e-business, for example. There are also other compound nouns formed from an abbreviation and a noun, like the H-bomb, which is short for hydrogen bomb. The general rule of hyphenation in compound words that combine a single letter (or a number) and a word is to hyphenate them. So, based on tradition, e-mail is the correct way to do it.