10,000 Matching Annotations
  1. Aug 2023
    1. The point of acts_as_paranoid is keeping old versions around, not really destroying them, so you can look at past state, or roll back to a past version. Do you consider the attached file part of the state you should be able to look at? If you roll back to a past version, should it have its attachment there too, in the restored version?
    2. I think the problem with after_destroy is that it is triggered before the database commits. This means the change may not yet be seen by other processes querying the database; it also means the change could be rolled back, and never actually committed. Since Shrine deletes the attachment in this hook, it might delete the attachment prematurely, or even delete the attachment when the record never ends up destroyed in the database at all (in case of rollback), which would be bad. For Shrine's logic to work as expected here, it really does need to be triggered only after the DB commit in which the model destroy is committed.
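      As a minimal sketch of the timing-safe variant (the model and storage helpers here are invented, not Shrine's actual internals), the deletion moves to an after_commit hook:

        class Photo < ActiveRecord::Base
          # Runs only once the destroy has actually been committed, so a
          # rollback can no longer leave the record alive with its file gone,
          # and other processes already see the deletion.
          after_commit :delete_stored_file, on: :destroy

          private

          def delete_stored_file
            storage.delete(file_id) if file_id  # hypothetical storage helpers
          end
        end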
    1. Shrine was heavily inspired by Refile and Roda. From Refile it borrows the idea of "backends" (here named "storages"), attachment interface, and direct uploads. From Roda it borrows the implementation of an extensible plugin system.
    1. 4. Too much coupling. Another issue I had with this is the coupling that it might introduce in your code. I do understand that our community in general is not very much interested in dealing with coupling, and I am not an expert at this either, but I do get a feeling that this suppress method would lead to some very bad usages. For instance, why does the Copyable module need to know about a class named Notification?
    1. Michael Sorens is passionate about productivity, process, and quality. Besides working at a variety of companies from Fortune 500 firms to Silicon Valley startups, he enjoys spreading the seeds of good design wherever possible, having written over 100 articles, more than a dozen wallcharts, and posted in excess of 200 answers on StackOverflow.
    1. As you say, you can use after_add and after_remove callbacks. Additionally, set an after_commit filter for the association models and notify the "parent" about the change.

        class User < ActiveRecord::Base
          has_many :articles, :after_add => :read, :after_remove => :read

          def read(article)
            # ;-)
          end
        end

        class Article < ActiveRecord::Base
          belongs_to :user
          after_commit { user.read(self) }
        end
    1. It may seem like Testing is some sort of beta, unstable version, but that’s not entirely true. Debian Testing is the next Debian stable version. The actual development branch is Debian Unstable (also known as Sid). Debian Testing lies somewhere in between the unstable and stable branches, where it gets new features before the stable release.
    1. ActiveStorage has a different approach than what is suggested by @dhh here. The idea there seems to be to rule out a default and to explicitly set ActiveStorage::Current.url_options or to include ActiveStorage::SetCurrent. I don't understand why and asked about it in the Rails Forum. Maybe someone here can point out why we don't use a sensible default in ActiveStorage?
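      For reference, a sketch of the two options being compared (both APIs exist in recent Rails versions; the host value is made up):

        # Option 1: set the URL options explicitly, e.g. per job or script:
        ActiveStorage::Current.url_options = { host: "example.com", protocol: "https" }

        # Option 2: derive them from the incoming request in controllers:
        class ApplicationController < ActionController::Base
          include ActiveStorage::SetCurrent
        end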
    1. Although it is possible to manually craft all the necessary HTTP requests to get the signed keys and manually upload the file to the storage backend, thankfully we don’t have to. Rails already provides us with the @rails/active_storage package, which greatly simplifies the process.
    1. I ran into the same problem and never really found a good answer via the test objects. The only solution I saw was to actually update the session via a controller. I defined a new action in one of my controllers from within test_helper (so the action does not exist when actually running the application). I also had to create an entry in routes. Maybe there’s a better way to update routes while testing. So from my integration test I can do the following and verify:

        assert(session[:fake].nil?, "starts empty")
        v = 'Yuck'
        get '/user_session', :fake => v
        assert_equal(v, session[:fake], "value was set")
    1. TL;DR For classic Rails apps we have a built-in scope for preloading attachments (e.g. User.with_attached_avatar) or can generate the scope ourselves knowing the way Active Storage names internal associations.

      GraphQL makes preloading data a little bit trickier—we don’t know beforehand which data is needed by the client and cannot just add with_attached_<smth> to every Active Record collection (‘cause that would add an additional overhead when we don’t need this data).

      That’s why classic preloading approaches (includes, eager_load, etc.) are not very helpful for building GraphQL APIs. Instead, most of the applications use the batch loading technique.
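      To make the "generate the scope ourselves" part concrete: as far as I know, the built-in scope is a thin wrapper over Active Storage's internal <name>_attachment association, so the hand-rolled equivalent looks roughly like:

        # Preload the avatar attachment records and their blobs;
        # this is what User.with_attached_avatar expands to:
        User.includes(avatar_attachment: :blob)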
    1. To upload a file, a client must perform the following steps:

      1. Obtain the file metadata (filename, size, content type and checksum)
      2. Request direct upload credentials and a blob ID via API—createDirectUpload mutation
      3. Upload the file using the credentials (no GraphQL involved, HTTP PUT request).
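      Step 1 could look like the following Ruby sketch (the file path is invented; Active Storage expects the checksum as a base64-encoded MD5 digest):

        require "digest"

        # Collect the metadata that the createDirectUpload mutation needs.
        path = "avatar.jpg"
        metadata = {
          filename:     File.basename(path),
          byte_size:    File.size(path),
          content_type: "image/jpeg",
          checksum:     Digest::MD5.base64digest(File.binread(path))
        }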
    1. I make a file named: app/models/active_storage/attachment.rb. Because it's in your project it takes loading precedence over the Gem version. Then inside we load the Gem version, and then monkeypatch it using class_eval:

        active_storage_gem_path = Gem::Specification.find_by_name('activestorage').gem_dir
        require "#{active_storage_gem_path}/app/models/active_storage/attachment"

        ActiveStorage::Attachment.class_eval do
          acts_as_taggable on: :tags
        end

      The slightly nasty part is locating the original file, since we can't find it normally because our new file takes precedence. This is not necessary in production, so you could put an if Rails.env.production? check around it if you like, I think.
    2. I assume that the ActiveStorage::Attachment class gets reloaded dynamically and that the extensions are lost as they are only added during initialization. You’re correct. Use Rails.configuration.to_prepare to mix your module in after application boot and every time code is reloaded:

        Rails.configuration.to_prepare do
          ActiveStorage::Attachment.send :include, ::ActiveStorageAttachmentExtension
        end
    1. application/xml:

      - data-size: XML is very verbose, but this is usually not an issue when using compression and considering that the write-access case (e.g. through POST or PUT) is much rarer than read access (in many cases it is <3% of all traffic). There were rarely cases where I had to optimize the write performance.
      - existence of non-ascii chars: you can use utf-8 as the encoding in XML
      - existence of binary data: would need to use base64 encoding
      - filename data: you can encapsulate this inside a field in XML

      application/json:

      - data-size: more compact than XML, still text, but you can compress
      - non-ascii chars: json is utf-8
      - binary data: base64 (also see json-binary-question)
      - filename data: encapsulate as its own field-section inside json
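      A tiny sketch of the base64-in-JSON approach described above (file name invented):

        require "json"
        require "base64"

        # Embed binary data together with its filename in a JSON payload.
        payload = {
          "filename" => "photo.png",
          "content"  => Base64.strict_encode64(File.binread("photo.png"))
        }.to_json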
    1. The method name is generated by replacing spaces with underscores. The result does not need to be a valid Ruby identifier though — the name may contain punctuation characters, etc. That's because in Ruby technically any string may be a method name. This may require use of define_method and send calls to function properly, but formally there's little restriction on the name.
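      A quick sketch of that in practice (class and method name invented):

        class Spec
          # Not a valid identifier, but a perfectly legal method name:
          define_method("works with spaces & punctuation!") do
            :ok
          end
        end

        Spec.new.send("works with spaces & punctuation!")  # => :ok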
    1. Auto-update aside, you might also have found it hard to find a Chrome binary with a specific version. Google intentionally doesn’t make versioned Chrome downloads available, since users shouldn’t have to care about version numbers—they should always get updated to the latest version as soon as possible. This is great for users, but painful for developers needing to reproduce a bug report in an older Chrome version.
    2. Due to there being no good way to solve these issues, we know that many developers download Chromium (not Chrome) binaries instead, although this approach has some flaws. First, these Chromium binaries are not reliably available across all platforms. Second, they are built and published separately from the Chrome release process, making it impossible to map their versions back to real user-facing Chrome releases. Third, Chromium is different from Chrome.
    3. Auto-update: great for users, painful for developers

      One of Chrome’s most notable features is its ability to auto-update. Users are happy to know they’re running an up-to-date and secure browser version including modern Web Platform features, browser features, and bug fixes at all times.

      However, as a developer running a suite of end-to-end tests you might have an entirely different perspective:

      - You want consistent, reproducible results across repeated test runs—but this may not happen if the browser executable or binary decides to update itself in between two runs.
      - You want to pin a specific browser version and check that version number into your source code repository, so that you can check out old commits and branches and re-run the tests against the browser binary from that point in time.

      None of this is possible with an auto-updating browser binary. As a result, you may not want to use your regular Chrome installation for automated testing. This is the fundamental mismatch between what’s good for regular browser users versus what’s good for developers doing automated testing.
  2. Jul 2023
    1. Unlike the Turkophobic reviews, I know the real history behind WW1: Armenian gangs (most famously Henchak and Dashnak) killed lots of people and robbed their neighbors. My mom's side ancestors came from Van, and I heard stories about Armenian neighbors who came to rob them when their husbands went to war. Turkey suggested to Armenia a joint historians' board to research the so-called Armenian Genocide and open the state archives; Armenia denied the proposition. Let me explain this to you: you don't accept letting historians decide, you praise (millions of) so-called Armenian Genocide supporter films without historical proof, and if you see one film which just pictures the real situation, it is propaganda. You say it is propaganda because it conflicts with your propaganda.
    1. I know it might suck a little, but it sounds like the way to do it. As a thought though, if you're struggling with your tests it is usually a sign you'll struggle with it down the track and you should be redesigning the APIs. I can't see an alternative way of doing it, but you might be able to with the knowledge of the domain.

      ugly/messy/less-than-ideal

    1. Conceptual data model: describes the semantics of a domain, being the scope of the model. For example, it may be a model of the interest area of an organization or industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial 'language' with a scope that is limited by the scope of the model.
    1. Ruby 2.5+ If your substring is at the beginning or the end of a string, then Ruby 2.5 has introduced methods for this:

      - delete_prefix for removing a substring from the beginning of the string
      - delete_suffix for removing a substring from the end of the string
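      For example (string values invented):

        "draft_post.md".delete_prefix("draft_")  # => "post.md"
        "draft_post.md".delete_suffix(".md")     # => "draft_post"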
    1. I understand Duo follows the spec and attempts to make life easier by giving users the full 30 seconds, but unfortunately service providers don’t honor that recommendation, which leads to lockouts and a bunch of calls to our 1st line teams. You can’t tell users to stop using {platform}, but we can tell them to switch TOTP providers.
    1. The threat is that you're posting a secret key to a third party, which violates a dozen security best practices, nullifies the assumption of the key being "secret", and most likely violates your organization's security policy. In authentication, all the remaining information can be guessed or derived from other sources - for example the Referrer header in the case of Google - and this is precisely why secrets should be, well, secret.
    1. One federal judge in the Northern District of Texas issued a standing order in late May after Schwartz’s situation was in headlines that anyone appearing before the court must either attest that “no portion of any filing will be drafted by generative artificial intelligence” or flag any language that was drafted by AI to be checked for accuracy. He wrote that while these “platforms are incredibly powerful and have many uses in the law,” briefings are not one of them as the platforms are “prone to hallucinations and bias” in their current states.

      Seems like this judge has a strong bias against the use of AI. I think this ban is too broad and unfair. Maybe they should ban spell check and every other tool that could make mistakes too? Ultimately, the humans using the tool should be the ones responsible for checking the generated draft for accuracy and the ones held responsible for any mistakes; they shouldn't simply be forbidden from using the tool.

    1. How we arrived in a post-truth era, when “alternative facts” replace actual facts, and feelings have more weight than evidence.

      Are we living in a post-truth world, where “alternative facts” replace actual facts and feelings have more weight than evidence? How did we get here? In this volume in the MIT Press Essential Knowledge series, Lee McIntyre traces the development of the post-truth phenomenon from science denial through the rise of “fake news,” from our psychological blind spots to the public's retreat into “information silos.”

      What, exactly, is post-truth? Is it wishful thinking, political spin, mass delusion, bold-faced lying? McIntyre analyzes recent examples—claims about inauguration crowd size, crime statistics, and the popular vote—and finds that post-truth is an assertion of ideological supremacy by which its practitioners try to compel someone to believe something regardless of the evidence. Yet post-truth didn't begin with the 2016 election; the denial of scientific facts about smoking, evolution, vaccines, and climate change offers a road map for more widespread fact denial. Add to this the wired-in cognitive biases that make us feel that our conclusions are based on good reasoning even when they are not, the decline of traditional media and the rise of social media, and the emergence of fake news as a political tool, and we have the ideal conditions for post-truth. McIntyre also argues provocatively that the right wing borrowed from postmodernism—specifically, the idea that there is no such thing as objective truth—in its attacks on science and facts.

      McIntyre argues that we can fight post-truth, and that the first step in fighting post-truth is to understand it.
    1. A timestamp like 2019-07-18 01:21:29.826484 will be truncated to 2019-07-18. But for your Rails application 2019-07-18 01:21:29.826484 is 2019-07-17 22:21:29.826484 in your time zone (GMT -03:00), so it should be truncated to 2019-07-17 instead. So, you should convert the timestamp to your current Rails time zone before extracting the date.
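      A sketch of the difference, assuming ActiveSupport and a GMT -03:00 zone (the zone name is just an example):

        ts = Time.utc(2019, 7, 18, 1, 21, 29)
        ts.to_date                                    # => 2019-07-18 (UTC date)
        ts.in_time_zone("America/Sao_Paulo").to_date  # => 2019-07-17 (local date)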
    1. Rails' default approach to log everything is great during development, but it's terrible when running in production. It pretty much renders Rails logs useless to me.

      Really? I find it even more annoying in development, where I do most of my log viewing. In production, since I can't as easily just reproduce the request that just happened, I need more detail, not less, so that I have enough clues about how to reproduce and what went wrong.

    1. This lets your test express a concept (similar to a trait), without knowing anything about the implementation:

        FactoryBot.create(:car, make: 'Saturn', accident_count: 3)
        FactoryBot.create(:car, make: 'Toyota', unsold: true)

      IMO, I would stick with traits when they work (e.g. unsold, above). But when you need to pass a non-model value (e.g. accident_count), transient attributes are the way to go.

      traits and transient attributes are very similar

      traits are kind of like boolean transient attribute flags (the object either has one or doesn't)
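      A sketch of how such a factory might be defined (only the usage above is from the quote; the factory body, the buyer attribute, and the accident factory are assumptions):

        FactoryBot.define do
          factory :car do
            make { "Ford" }

            # Transient: not a model attribute, just an input to the factory.
            transient do
              accident_count { 0 }
            end

            # Trait: a boolean-ish switch, as noted above.
            trait :unsold do
              buyer { nil }  # hypothetical association
            end

            # Use the transient value after creation.
            after(:create) do |car, evaluator|
              create_list(:accident, evaluator.accident_count, car: car)
            end
          end
        end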

    1. Shareable Code

      For a second there, I thought this website was promoting open source and was actually sharing the (source) code for their website...

      But alas, it's actually for sharing a different kind (QR) of code instead...

  3. www.kickstarter.com
  4. Jun 2023
    1. I’ve heard it suggested that ActiveSupport, which does a ton of monkey-patching of core classes, would make for potentially nice refinements. I don’t hold this opinion strongly, but I disagree with that idea. A big value proposition of ActiveSupport is that it is “omnipresent” and sets a new baseline for ruby behaviors - as such, being global really makes the most sense. I don’t know that anyone would be pleased to sprinkle using ActiveSupport in all their files that use it - they don’t even want to THINK about the fact that they’re using it.
    1. I think we have a responsibility not only to ourselves, but also to each other, to our community, not to use Ruby only in the ways that are either implicitly or explicitly promoted to us, but to explore the fringes, and wrestle with new and experimental features and techniques, so that as many different perspectives as possible inform on the question of “is this good or not”.
    2. If you’ll forgive the pun, there are no constants in programming – the opinions that Rails enshrines, even for great benefit, will change, and even the principles of O-O design are only principles, not immutable laws that should be blindly followed for the rest of time. There will be other ways of doing things. Change is inevitable.
    1. Using Time.now (which returns the wall-clock time) as a baseline has a couple of issues which can result in unexpected behavior. This is caused by the fact that the wall-clock time is subject to changes like inserted leap seconds or time slewing to adjust the local time to a reference time. If there is e.g. a leap second inserted during measurement, it will be off by a second. Similarly, depending on local system conditions, you might have to deal with daylight-saving time, quicker or slower running clocks, or the clock even jumping back in time, resulting in a negative duration, and many other issues. A solution to this issue is to use a different type of clock: a monotonic clock.
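      A sketch of the monotonic alternative (Process.clock_gettime is standard Ruby; the sleep is a stand-in for the work being measured):

        t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        sleep 0.5  # stand-in for the measured work
        elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
        # elapsed is immune to wall-clock jumps (NTP adjustments, leap seconds, DST)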
    1. Are protected members/fields really that bad? No. They are way, way worse. As soon as a member is more accessible than private, you are making guarantees to other classes about how that member will behave. Since a field is totally uncontrolled, putting it "out in the wild" opens your class and classes that inherit from or interact with your class to higher bug risk. There is no way to know when a field changes, no way to control who or what changes it.

      If now, or at some point in the future, any of your code ever depends on a field having some certain value, you now have to add validity checks and fallback logic in case it's not the expected value - every place you use it. That's a huge amount of wasted effort when you could've just made it a damn property instead ;)

      The best way to share information with deriving classes is the read-only property:

        protected object MyProperty { get; }

      If you absolutely have to make it read/write, don't. If you really, really have to make it read-write, rethink your design. If you still need it to be read-write, apologize to your colleagues and don't do it again :)

      A lot of developers believe - and will tell you - that this is overly strict. And it's true that you can get by just fine without being this strict. But taking this approach will help you go from just getting by to remarkably robust software. You'll spend far less time fixing bugs.

      In other words, make the member variable itself private; it can then be abstracted (and access provided) via public methods/properties
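      A Ruby analogue of the same advice (class name invented): keep the state in an instance variable and expose only a read-only accessor.

        class Account
          def initialize(balance)
            @balance = balance  # internal state, never assigned from outside
          end

          # Read-only access for collaborators and subclasses.
          attr_reader :balance
        end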

    1. Have you ever:

      - Been disappointed, surprised or hurt by a library etc. that had a bug that could have been fixed with inheritance and a few lines of code, but due to private / final methods and classes were forced to wait for an official patch that might never come? I have.
      - Wanted to use a library for a slightly different use case than was imagined by the authors but were unable to do so because of private / final methods and classes? I have.