10,000 Matching Annotations
  1. Aug 2022
    1. # Do this the first time:
       $ git remote add -f -t master --no-tags gitgit https://github.com/git/git.git
       $ git subtree add --squash --prefix=third_party/git gitgit/master

       # In future, you can merge in additional changes as follows:
       $ git subtree pull --squash --prefix=third_party/git gitgit/master

       # And you can push changes back upstream as follows:
       $ git subtree push --prefix=third_party/git gitgit/master

       # Or possibly (not sure what the difference is):
       $ git subtree push --squash --prefix=third_party/git gitgit/master
    1. I intend to keep it around and maybe fix up minor things here and there if needed, but I don't really have any plans for new features at this point. I think it's great to give people the option to choose the Go port if advanced features are what they're after.

    1. Thus my docs recommendation of

       public function beforeFilter(Event $event)
       {
           // do not render out the now inconsistent one for is(json)
           if (!$this->request->is('jsonapi')) {
               throw new NotFoundException('Invalid access, use application/vnd.api+json for Content-Type and Accept.');
           }
       }

       to specifically only whitelist the desired jsonapi for the general use case.
    1. In a clickjacking attack, the attacker creates a malicious website in which it loads the authorization server URL in a transparent iframe above the attacker’s web page. The attacker’s web page is stacked below the iframe, and has some innocuous-looking buttons or links, placed very carefully to be directly under the authorization server’s confirmation button. When the user clicks the misleading visible button, they are actually clicking the invisible button on the authorization page, thereby granting access to the attacker’s application. This allows the attacker to trick the user into granting access without their knowledge.

      Maybe browsers should prevent transparent iframes?! Most people would never suspect this is even possible.
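      Browsers do give sites a defense here, even though they don't block transparent iframes outright: anti-framing response headers. A minimal sketch, assuming a Rails-based authorization server (the quote names no stack):

      ```ruby
      # Refuse to be framed at all, so the consent button can never be
      # stacked underneath an attacker's page.
      Rails.application.config.action_dispatch.default_headers.merge!(
        "X-Frame-Options"         => "DENY",                    # legacy anti-framing header
        "Content-Security-Policy" => "frame-ancestors 'none'"   # modern CSP equivalent
      )
      ```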

    1. it's also one of the smartest games I've ever played and I can't recommend it enough if you enjoy system-driven narrative, which it handles exquisitely.

      .

    2. The Quiet Sleep has 'cult classic' written all over it. It uses strategy, management, and tower defence mechanics to take you inside someone's head in a way that I don't think has ever been done before. It's really a bold experiment, and you'll be glad you played it.

      .

    1. Well, I would like to express my huge concern regarding the withdrawal of support for the SMB 1.0 network protocol in Windows 11 and future versions of the Microsoft OS, as there are many, many users who still need this communication protocol, especially home users: there are hundreds of thousands of products using embedded Linux on devices that still use the SMB 1.0 protocol, and many devices, such as media players and NAS units, have been discontinued and their companies no longer update the firmware.
    1. With Windows 10 version 1511, support for SMBv1 and thus NetBIOS device discovery was disabled by default. Depending on the actual edition, later versions of Windows starting from version 1709 ("Fall Creators Update") do not allow the installation of the SMBv1 client anymore. This causes hosts running Samba not to be listed in the Explorer's "Network (Neighborhood)" views.

      .

    1. to see the changes the commands make. Among the commands, I'd like to use useradd, userdel, usermod, groupadd, groupmod, & groupdel. And, as I'm guessing you are understanding, these are just the ones I've read about today. If I can get away without modifying any files directly, I'd rather be able to do that because it means I'll have a strong grasp of the commands, and I'd be able to learn the editing of smb.conf (& the other files) by seeing how it/they change as I use the commands.

      .
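      A hypothetical session in that spirit: snapshot a file before running a command, then diff to see exactly what it changed (the user name alice is made up):

      ```
      $ sudo cp /etc/passwd /tmp/passwd.before
      $ sudo useradd -M -s /usr/sbin/nologin alice   # create the account, no home dir
      $ diff /tmp/passwd.before /etc/passwd          # shows the exact line useradd added
      $ sudo smbpasswd -a alice                      # add the user to Samba's password database
      $ sudo pdbedit -L                              # list Samba accounts to confirm
      ```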

    2. I'm trying to learn enough about Samba that I'm able to do complete administration from the command line. That's a big task, I know, like learning DOS when all I know is French (I know far more DOS than French, but that's the idea).

      .

    3. I have definitely looked at some of the Samba.org instructions. The problem is mine - I'm either too busy dealing with the kids in the morning, or too tired in the evenings, to be able to - within my realm of patience - find what I need, implement it, test it, and confirm that it works or try something else. Finding it, and recognizing that I've found it, is usually the hard part. That's why a book does me worlds of good - I can read it during the work day when I'm taking a few minutes break, and it's uninterrupted concentration time.

      .

    1. The custom title bar has been a success on Windows, but the customer response on Linux suggests otherwise. Based on feedback, we have decided to make this setting opt-in on Linux and leave the native title bar as the default. The custom title bar provides many benefits including great theming support and better accessibility through keyboard navigation and screen readers. Unfortunately, these benefits do not translate as well to the Linux platform. Linux has a variety of desktop environments and window managers that can make the VS Code theming look foreign to users.
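      For reference, the behavior described hangs off a single user setting; a Linux user who wants the custom title bar back opts in via settings.json:

      ```json
      {
        "window.titleBarStyle": "custom"
      }
      ```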
    1. If you insist on having the user id in the version table, you can do this:

       ActiveRecord::Base.transaction do
         @user.save!
         @user.versions.last.update_attributes!(:whodunnit => @user.id)
       end

      Not ideal... but we can't set it any earlier because we don't know the id until after the save
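      For the more common case (a signed-in user editing other records), PaperTrail's controller callback sets whodunnit before the save, so the after-the-fact update above is only needed when a record creates its own author (self-registration); a sketch:

      ```ruby
      class ApplicationController < ActionController::Base
        # Provided by the paper_trail gem: sets PaperTrail's whodunnit
        # (via user_for_paper_trail, which defaults to current_user) per request.
        before_action :set_paper_trail_whodunnit
      end
      ```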

    1. Wouldn't it be easier to do a squash merge instead? git merge --squash [branch]

       Reply from Brack Carmony: It would, if the assumption holds that every commit in the chain is what you want. Rebasing keeps its power available if you want to cherry-pick commits or use any of the other crazy features it seems to let you use.
    1. Would be more of a neutral rating for me but seeing that I have only two options (or no review at all), I'll go with the upvote for encouragement as they do appear to be putting some effort into the game.

      .

    1. I created a gem called rspec_n that installs an executable that will do this. It will re-run the test suite N times by default. You can make it stop as soon as it hits a failing iteration via the -s cli option. It will display other stats about the iterations as well.
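      A hypothetical invocation, based purely on the description above:

      ```
      $ rspec_n 5 -s   # run the suite 5 times, stopping at the first failing iteration
      ```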
    1. This very much appears to be a bug or design flaw in puma - The fact that a persistent connection ties up a thread on the chance a request might come over that connection seems like not great behavior. This would really only be an issue when puma is run with no workers (which wouldn't be done in production) but it still seems a little nuts.
    1. "It's difficult because we can't tell people exactly what's allowed and not allowed," said Chris Castelli, a manager for the Department of State Lands. "It's even tougher for law enforcement that gets called out to very heated disputes and doesn't have strict laws they can apply." 
    1. The extent of public use varies, with Montana affording the greatest access. Rafters can float and fishermen can wade in rivers that flow through private land so long as they enter from public property. They can even leave the river and walk up to the high-water mark.
    1. I understand that you are bound to the specification, and I also understand that it could take months to decide whether the specification should be changed.

      .

    1. I thought something like git rev-parse --abbrev-ref origin/HEAD would work, but that just seems to show what the default branch was of the repo it was cloned from, at the time of cloning, provided that the remote we cloned from was named origin.

      good enough for my purposes (local git scripts/aliases)!

      ⟫ cat .git/refs/remotes/origin/HEAD
      ref: refs/remotes/origin/main

    2. This is a terrific answer! Without something like locks or transactions, we indeed will only ever be able to get an updated-as-of-when-the-repository-just-told-us point of accuracy, which goes stale if anything changes in the time since then.
    3. It's a great way to test various limits. When you think about this even more, it's a little mind-bending, as we're trying to impose a global clock ("who is the most up to date") on a system that inherently doesn't have a global clock. When we scale time down to nanoseconds, this affects us in the real world of today: a light-nanosecond is not very far.
    4. Which of these to use depends on the result you want. Note that by the time you get the answer, it may be incorrect (out of date). There is no way to fix this locally. Using some ESP,² imagine the remote you're contacting is in orbit around Saturn. It takes light about 8 minutes to travel from the sun to Earth, and about 80 to travel from the sun to Saturn, so depending on where we are orbitally, they're 72 to 88 minutes away. Any answer you get back from them will necessarily be over an hour out of date.
    5. There are many questions we can ask and answer about branch names. Each one is specific to one particular repository because all branch names are local to that particular repository. Any changes anyone makes in that repository affect only that one repository, at least at the time they make them.

      which assumption? well, people make the assumption that our local repo should know some fact about the remote repo, like its default branch, without actually asking the remote about itself

    6. Using git remote set-head has the advantage of updating a cached answer, which you can then use for some set period. A direct query with git ls-remote has the advantage of getting a fresh answer and being fairly robust. The git remote show method seems like a compromise between these two that has most of their disadvantages with few of their advantages, so that's the one I would avoid.
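      The three approaches discussed, as concrete commands (all standard git):

      ```
      # Cached: refresh refs/remotes/origin/HEAD from the remote's current default branch
      $ git remote set-head origin --auto

      # Fresh: ask the remote directly which branch its HEAD points at
      $ git ls-remote --symref origin HEAD

      # Compromise: human-readable report that includes a "HEAD branch:" line
      $ git remote show origin
      ```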
    1. You can use the lsblk command. If the disk is already unlocked, it will display two lines: the device and the mapped device, where the mapped device should be of type crypt.

       # lsblk -l -n /dev/sdaX
       sdaX              253:11 0 2G 0 part
       sdaX_crypt (dm-6) 253:11 0 2G 0 crypt

       If the disk is not yet unlocked, it will only show the device.

       # lsblk -l -n /dev/sdaX
       sdaX              253:11 0 2G 0 part
    1. Bear in mind that lsof doesn't seem to present an easy solution because, once the device is disconnected, the associated names provided by lsof no longer include the name of the disconnected device.
  2. Jul 2022
    1. Patrician IV is an overhauling upgrade to Patrician III; so if you have not played the previous games in the Patrician series, starting with IV is really all you need. Also, the game of Patrician is very straightforward and addicting, so playing previous versions won't offer you anything unseen in Patrician IV.
    1. Process Substitution is something everyone should be using regularly! It is super useful. I do something like vimdiff <(grep WARN log.1 | sort | uniq) <(grep WARN log.2 | sort | uniq) every day.

      underused

    1. Whatever you do, don't use a for loop:

       # Don't do this
       for file in $(find . -name "*.txt")
       do
         …code using "$file"
       done

       Three reasons:

       - For the for loop to even start, the find must run to completion.
       - If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.
       - Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.
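      A commonly recommended alternative, as a sketch: have find emit NUL-terminated names and read them one at a time. This streams (the loop starts before find finishes), survives any whitespace in names, and never builds one giant argument list.

      ```
      find . -name "*.txt" -print0 |
      while IFS= read -r -d '' file
      do
        …code using "$file"
      done
      ```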
    1. While this warning is helpful, it could be more precise, because you won't necessarily get the first element: it is specifically the element at index 0 that is returned, so if the first element has a higher index - which is possible in Bash - you'll get the empty string; try `a[1]='hi'; echo "$a"`.
    1. Pre and post commands with matching names will be run for those as well (e.g. premyscript, myscript, postmyscript)

      Could potentially be confusing behavior if running a script does something extra and you don't know why. They might look at the definition of myscript and not see the additional commands and wonder how/why they are running. The premyscript might be lost in a long, unsorted script list.
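      A minimal package.json sketch (script names taken from the quote): with these entries, `npm run myscript` silently runs all three in order, which is exactly the surprise described.

      ```json
      {
        "scripts": {
          "premyscript": "echo runs-first",
          "myscript": "echo runs-second",
          "postmyscript": "echo runs-third"
        }
      }
      ```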

    2. Since npm@1.1.71, the npm CLI has run the prepublish script for both npm publish and npm install, because it's a convenient way to prepare a package for use (some common use cases are described in the section below). It has also turned out to be, in practice, very confusing. As of npm@4.0.0, a new event has been introduced, prepare, that preserves this existing behavior. A new event, prepublishOnly has been added as a transitional strategy to allow users to avoid the confusing behavior of existing npm versions and only run on npm publish (for instance, running the tests one last time to ensure they're in good shape).
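      A sketch of the npm@4+ split described above: prepare runs for both npm publish and npm install (e.g. installing from a git checkout), while prepublishOnly fires only on publish.

      ```json
      {
        "scripts": {
          "prepare": "npm run build",
          "prepublishOnly": "npm test"
        }
      }
      ```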
    1. Here’s a quick blog post about a specific thing (making FactoryBot.lint more verbose) but actually, secretly, about a more general thing (taking advantage of Ruby’s flexibility to bend the universe to your will). Let’s start with the specific thing and then come back around to the general thing.
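      For context, the standard lint setup the post builds on looks roughly like this (adapted from factory_bot's documented rake task, simplified; the post's subject is making its output more verbose):

      ```ruby
      # lib/tasks/factory_bot.rake
      namespace :factory_bot do
        desc "Verify that all FactoryBot factories are valid"
        task lint: :environment do
          ActiveRecord::Base.transaction do
            FactoryBot.lint                # builds each factory, reporting any that raise
            raise ActiveRecord::Rollback   # leave no records behind
          end
        end
      end
      ```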
    1. The amount of time wasted on this is ridiculous. Thanks. This is about the only thing that worked. Why in the world this wouldn't "just work" by defining the default url options in Rails config/environments/test.rb is beyond me.
    1. The goal of this project is to have a single gem that contains all the helper methods needed to resize and process images. Currently, existing attachment gems (like Paperclip, CarrierWave, Refile, Dragonfly, ActiveStorage, and others) implement their own custom image helper methods. But why? That's not very DRY, is it? Let's be honest. Image processing is a dark, mysterious art. So we want to combine every great idea from all of these separate gems into a single awesome library that is constantly updated with best-practice thinking about how to resize and process images.
    1. It is sublimely annoying to have to configure the exact same parameters in config/environments, spec/spec_helper.rb and again here... all in marginally different ways (with 'http://' or without, with port number or port specified separately). Even Capybara.configure syntax can't seem to stay consistent to itself between versions...
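      The duplication being complained about, in one view (a sketch; exact option names shift between Capybara versions, which is part of the complaint):

      ```ruby
      # config/environments/test.rb
      Rails.application.configure do
        config.action_mailer.default_url_options = { host: "localhost", port: 3001 }
      end

      # spec/rails_helper.rb
      Capybara.configure do |config|
        config.app_host    = "http://localhost:3001"   # with scheme and inline port...
        config.server_port = 3001                      # ...and the port again, separately
      end
      ```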
    1. Thanks for your making your first contribution to Cucumber, and welcome to the Cucumber committers team! You can now push directly to this repo and all other repos under the cucumber organization! In return for this generous offer we hope you will: Continue to use branches and pull requests. When someone on the core team approves a pull request (yours or someone else's), you're welcome to merge it yourself. Commit to setting a good example by following and upholding our code of conduct in your interactions with other collaborators and users. Join the community Slack channel to meet the rest of the team and make yourself at home. Don't feel obliged to help, just do what you can if you have the time and the energy. Ask if you need anything. We're looking for feedback about how to make the project more welcoming, so please tell us!
    1. These directives are inherited from the previous configuration level if and only if there are no proxy_set_header directives defined on the current level.

      This conditional rule for inheritance is different from most other apps/contexts. Usually config is always inherited, and any local config at the current level gets merged with or overrides what is inherited.
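      A sketch of the gotcha (upstream names hypothetical): defining any proxy_set_header inside a location discards everything inherited from the server level, so shared headers must be repeated there.

      ```nginx
      server {
          proxy_set_header Host $host;          # inherited only by locations with no local directives

          location /a/ {
              proxy_pass http://backend_a;      # gets Host from the server level
          }

          location /b/ {
              proxy_set_header X-Debug on;      # this alone cancels ALL inherited headers
              proxy_set_header Host $host;      # so Host must be repeated explicitly
              proxy_pass http://backend_b;
          }
      }
      ```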

    1. # ActiveStorage defaults to security via obscurity approach to serving links
       # If this is acceptable for your use case then this authenticable test can be
       # removed. If not then code should be added to only serve files appropriately.
       # https://edgeguides.rubyonrails.org/active_storage_overview.html#proxy-mode
       def authenticated?
         raise StandardError.new "No authentication is configured for ActiveStorage"
       end
    1. Stop autoclosing of PRs While the idea of cleaning up the the PRs list by nudging reviewers with the stale message and closing PRs that didn't got a review in time cloud work for the maintainers, in practice it discourages contributors to submit contributions. Keeping PRs open and not providing feedback also doesn't help with contributors motivation, so while I'm disabling this feature of the bot we still need to come up with a process that will help us to keep the number of PRs in check, but celebrate the work contributors already did instead of ignoring it, or dismissing in the form of a "stale" alerts, and automatically closing PRs.

      Yes!! Thank you!!

      typo: cloud work -> could work

    1. I don't understand why it should be so hard to keep issues open / reopen them. That's just going to cause people to open a duplicate issue/PR — or (if they notice in time) cause people to add extra "not stale" noise when the bot warns it's about to be closed. Wouldn't it be preferable to keep the discussion together in one place instead of spreading across duplicate issues? (Similarly, moving the meta conversation about an issue out to a completely separate system (Discord) seems like the wrong direction, because it wouldn't be visible to/discoverable by those arriving at the closed issue.) I get how it's useful to have stale issues not cluttering the list. But if interest/activity later picks up again, then "stale" is no longer accurate and its status should be automatically updated to reflect its newfound freshness... like it did back here:
    2. ActiveSupport.on_load :active_storage_blob do
         def accessible_to?(accessor)
           attachments.includes(:record).any? { |attachment| attachment.accessible_to?(accessor) } || attachments.none?
         end
       end

       ActiveSupport.on_load :active_storage_attachment do
         def accessible_to?(accessor)
           record.try(:accessible_to?, accessor)
         end
       end

       ActiveSupport.on_load :action_text_rich_text do
         def accessible_to?(accessor)
           record.try(:accessible_to?, accessor)
         end
       end

       module ActiveStorage::Authorize
         extend ActiveSupport::Concern

         included do
           before_action :require_authorization
         end

         private

         def require_authorization
           head :forbidden unless authorized?
         end

         def authorized?
           @blob.accessible_to?(Current.identity)
         end
       end

       Rails.application.config.to_prepare do
         ActiveStorage::Blobs::RedirectController.include ActiveStorage::Authorize
         ActiveStorage::Blobs::ProxyController.include ActiveStorage::Authorize
         ActiveStorage::Representations::RedirectController.include ActiveStorage::Authorize
         ActiveStorage::Representations::ProxyController.include ActiveStorage::Authorize
       end

      Interesting, rather clean approach, I think

    3. I'm partial to the solution originally proposed. It follows a pattern already established in Rails. For example, using an application-specific ApplicationStorageController which inherits from ActiveStorage::BaseController is very similar to the ApplicationRecord which inherits from ActiveRecord::Base or ApplicationJob which inherits from ActiveJob::Base.
    4. it should be normal for production apps to add authentication and authorization to their ActiveStorage controllers. Unfortunately, there are 2 possible ways to achieve it currently:

       - Not drawing ActiveStorage routes and doing everything by yourself
       - Overriding/monkey patching ActiveStorage controllers

       Neither of them is ideal because in the end you can't benefit from Rails upgrades (bug fixes, etc), so the intention of this PR is to let people define a parent controller (inspired by Devise; maybe @carlosantoniodasilva can tell us his experience with this feature) so that people can add authentication and authorization in a single place and still benefit from the default controllers.
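      A sketch of what the proposal would let an app write (class and concern names hypothetical; it mirrors the ApplicationRecord/ApplicationJob convention from the previous comment):

      ```ruby
      class ApplicationStorageController < ActiveStorage::BaseController
        include Authenticatable        # hypothetical app-wide authentication concern
        before_action :require_login   # hypothetical
      end
      ```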
    1. Create a new controller to override the original: app/controllers/active_storage/blobs_controller.rb

      Original comment:

      I've never seen monkey patching done quite like this.

      Usually you can't just "override" a class. You can only reopen it. You can't change its superclass. (If you needed to, you'd have to remove the old constant first.)

      Rails has already defined ActiveStorage::BlobsController!

      I believe the only reason this works:

      class ActiveStorage::BlobsController < ActiveStorage::BaseController

      is because it's reopening the existing class. We don't even need to specify the < Base class. (We can't change it, in any case.)

      They do the same thing here: - https://github.com/ackama/rails-template/pull/284/files#diff-2688f6f31a499b82cb87617d6643a0a5277dc14f35f15535fd27ef80a68da520

      Correction: I guess this doesn't actually monkey patch it. I guess it really does override the original from activestorage gem and prevent it from getting loaded. How does it do that? I'm guessing it's because activestorage relies on autoloading constants, and when the constant ActiveStorage::BlobsController is first encountered/referenced, autoloading looks in paths in a certain order, and finds the version in the app's app/controllers/active_storage/blobs_controller.rb before it ever gets a chance to look in the gem's paths for that same path/file.

      If instead of using autoloading, it had used require_relative (or even require?? but that might have still found the app-defined version earlier in the load path), then it would have loaded the model from activestorage first, and then (possibly) loaded the model from our app, which (probably) would have reopened it, as I originally commented.

    1. meat: https://github.com/musaffa/file_validators/blob/master/lib/file_validators/validators/file_content_type_validator.rb

      Compared to https://github.com/aki77/activestorage-validator, I slightly prefer this because:

      - it has more users and has been battle tested more
      - it is more flexible: can specify exclude as well as allow
      - it has more expansive Readme documentation
      - it is mentioned by https://github.com/thoughtbot/paperclip/blob/master/MIGRATING.md#migrating-from-paperclip-to-activestorage
      - it mentions security: whether or not it's needed, at least this makes an extra attempt to be secure by using an external tool to check content_type; https://github.com/aki77/activestorage-validator/blob/master/lib/activestorage/validator/blob.rb just uses blob.content_type, which I guess just trusts whatever ActiveStorage gives us (which seems fair too: perhaps this should be kicked up to them to be their concern)

      In fact, it looks like ActiveStorage does do some kind of mime type checking...

      activestorage-6.1.6/app/models/active_storage/blob/identifiable.rb

      ```ruby
      def identify_without_saving
        unless identified?
          self.content_type = identify_content_type
          self.identified = true
        end
      end

      def identify_content_type
        Marcel::MimeType.for download_identifiable_chunk, name: filename.to_s, declared_type: content_type
      end
      ```

    1. It really slows down your test suite accessing the disk. So yes, in principle it slows down your tests.

       There is a "school of testing" where the developer should isolate the layer responsible for retrieving state, just set some state in memory, and test functionality (as if using the Repository pattern). The thing is, Rails is tightly coupled with the implementation logic of state retrieval at the core level and prefers the "school of testing" in which you couple logic with state retrieval to some degree. A good example of this is how models are tested in Rails. You could build an entire test suite calling `FactoryBot.build`, never ever use `FactoryBot.create`, stub methods all around, and your tests would be lightning fast (like 5s to run the entire suite). This is highly unproductive to achieve, and I failed many times trying, because I was spending more time maintaining my tests than writing something productive for the business. Or you can take the more pragmatic route and save database records where it is too difficult to just 'build' the factory (e.g. controller tests, association tests, etc.).

       I would say the same about saving files to disk. Yes, you could just "not save the file to disk" and save a few milliseconds. But you will in future stumble upon scenarios where your tests fail because the file is not there (e.g. file processing validations). Is it really worth it? I never worked on a project where saving files to disk slowed down tests significantly enough to be an issue (and I work for a company whose core business is file uploading). Especially now that we have SSD drives in every laptop/server it's blazing fast, so at best you would save about a second for the entire test suite (given you call FactoryBot traits to set/store the file where it makes sense, not every time you build an object).
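      The trade-off being weighed, in miniature:

      ```ruby
      user = FactoryBot.build(:user)    # in memory only: fast, but no id and no DB-backed behavior
      user = FactoryBot.create(:user)   # persisted: slower, but associations, queries, and attached files behave realistically
      ```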
    1. # Some Rails projects include ActionDispatch::TestProcess globally for the
       # use of `fixture_file_upload` in tests. This is a bad practice because it
       # includes other methods -- such as #session -- which override existing
       # methods on *all objects*.
    1. # This ensures that the pid namespace is shared between the host
       # and the container. It's not necessary to be able to run spring
       # commands, but it is necessary for "spring status" and "spring stop"
       # to work properly.
       pid: host
    1. I'm fully serious: If your accounts and data are important, then just don't make such mistakes. Being careful is completely possible.

      Being careful is completely possible.

    1. I can't reverse it, but maybe somebody who understands how Chrome does the decryption can. The ability is there; it's not that Chrome can't decrypt them, it is that Chrome won't decrypt them due to false "security". And if Chrome actually, genuinely can no longer decrypt passwords after they have been restored from backup, then that is a shockingly bad bug in their password manager.
    2. If your security locks you out of your own home just because you changed your trousers, that would be shockingly bad security. If your security permanently locks you out of your accounts because you restored your Chrome settings from backup, how is that any better?
  3. Jun 2022
    1. Data protection authorities have found that the U.S. legal system does not guarantee the same standards of protection as the EU. The situation stems from a set of U.S. laws that allow government organizations to request access to consumers’ personal data from US-based services, regardless of where the data centers or servers are located. In light of this, NOYB filed 101 complaints with European DPAs to find that transferring European users’ data to the U.S. was unlawful. The decisions, which have noted the illegitimacy of the transfers, focus on the analysis of additional technical, contractual and organizational measures.
    1. A custom component might be interesting for you if your views look something like this:

       <%= simple_form_for @blog do |f| %>
         <div class="row">
           <div class="span1 number">1</div>
           <div class="span8"><%= f.input :title %></div>
         </div>
         <div class="row">
           <div class="span1 number">2</div>
           <div class="span8"><%= f.input :body, as: :text %></div>
         </div>
       <% end %>

       A cleaner method to create your views would be:

       <%= simple_form_for @blog, wrapper: :with_numbers do |f| %>
         <%= f.input :title, number: 1 %>
         <%= f.input :body, as: :text, number: 2 %>
       <% end %>
    1. Valve long ago proved that piracy is a service issue. Make it more convenient to pay for something, and people pay. Just look at what they did to bring AAA games to Linux! Apple, Amazon, and others proved it as well when they removed DRM (or never had it in the first place) on digital music purchases. People still paid for music downloads! They figured out how to keep people paying by making subscriptions to pretty much all music cheap and convenient. The service is more convenient than piracy, and you have a useful option for anything you want more permanent than a subscription.
    2. Linux users flood developers on projects on GitHub, on open-source projects where you can actually somehow talk to the developers as an end user, or maybe on Twitter if a developer of proprietary software is somehow known and you can contact them on social media. But developers don't talk to the first-level customer support of proprietary software like Adobe InDesign or a service like Netflix. Yet that is where these companies get their data, and they base their decisions on this data.

      .

    3. Linux users flood developers with bugs and requests because we actually know how to debug our systems. The creators then tend to get annoyed at the flood, because even if they resolved them all, it would be spending a lot of energy for less than 1% of their userbase.

      .

    4. The main problem of the Linux community is that it is divided. I know this division represents freedom of choice, but when your rivals are successful, you must inspect them carefully. And both rivals here (macOS and Windows) get their power from the "less is more" approach. This division in Linux communities makes people turn to their communities when they have problems and never be heard as a big, unified voice. When something goes wrong with other OSes, people start complaining in many forums and support sites, some of them writing to multiple places and others supporting them by saying "yeah, I have that problem, too". In the Linux world, the answers to such forums come as "don't use that shitty distro" or "use that command and circumvent the problem". Long story short, the average Linux user doesn't know that they are:

       - still customers, with all the rights to make demands of companies
       - able to get together and act up louder.

       Imagine such organizing: most Linux users managing to get together and writing to Netflix. Maybe not all of them use Netflix, but there are more Linux users than Netflix members. What a domination it would be! But instead we turn to our communities and act like a survival tribe that has to solve all its problems itself.
    5. Big software companies like Adobe or Netflix do two things that are relevant for us and currently go wrong:

       They analyse the systems their customers use. They don't see their Linux users because we tend to either not use the product at all under Linux (just boot Windows, just use a Fire TV stick, and so on) or we use emulators or other tools that basically hide that we actually run Linux. --> The result is that they don't know how many we actually are. They think we are irrelevant because that's what the statistics tell them (they are completely driven by numbers).

       They analyse the feature requests and complaints they get from their customers. The problem is: Linux users don't complain that much or request better Linux support. We usually somehow work around the issues. --> The result is that these companies neither get feature requests for better Linux support nor bug reports from Linux users (because it's not expected to work anyway).
    1. Mind you, at the time Valve was trying to get developers to make Linux ports of their games, so targeting Debian made some sense in terms of platform stability; this didn't work out well and developers did no such thing. Valve then moved to making WINE work better by spending dev time adding patches, and later built the Proton layer on top of it. Valve likely moved to an Arch base to get bleeding-edge support for new hardware and for the performance enhancements that come along with it, as they were no longer shackled to getting developers to make native Linux ports.
    2. Compare that to bugfixes coming to an Ubuntu LTS or 6-month release: you might not get the fix before the version is End Of Life, making collaborating difficult & fruitless. Arch is where developers are, so it makes sense given the massive array of software available in the AUR & repos too. It's like a software flea market; occasionally AUR software isn't up to the bar, or theoretically there COULD be a bad actor once every few years, but otherwise it's something truly special.
    3. Bug triage is so much easier & faster on Arch. Everyone is on the same latest version, and engaging developers usually leads to fixes that users can consume right away or within a week.
    1. The Linux-based open-source mobile operating system Android is not only the most popular mobile operating system in the world, it's also on the way to becoming a proprietary operating system. How is that?

    1. Our Camera app provides the system media intents used by other apps to capture images / record videos via the OS provided camera implementation. These intents can only be provided by a system app since Android 11, so the quality of the system camera is quite important.

      .

    1. No, GrapheneOS will remain a non-profit open source project / organization. It will remain an independent organization not strongly associated with any specific company. We partner with a variety of companies and other organizations, and we're interested in more partnerships in the future. Keeping it as a non-profit avoids the conflicts of interest created by a profit-based model. It allows us to focus on improving privacy/security without struggling to build a viable business model that's not in conflict with the success of the open source project.

      .