45 Matching Annotations
  1. Jun 2024
    1. This is a serious problem, because all they need to do is automate AI research and build superintelligence; any lead that the US had would vanish and the power dynamics would shift immediately.

      for - AI - security risk - once automated AI research is known, bad actors can easily build superintelligence

      AI - security risk - once automated AI research is known, bad actors can easily build superintelligence - Any lead that the US had would immediately vanish.

    2. Sam Altman has said that's his entire goal; that's what OpenAI is trying to build. They're not really trying to build superintelligence, but they define AGI as a system that can do automated AI research, and once that does occur

      for - key insight - AGI as automated AI researchers to create superintelligence

      key insight - AGI as automated AI researchers to create superintelligence
      - We will reach a period of explosive, exponential AI research growth once AGI has been produced
      - The key is to deploy AGI as AI researchers that can do AI research 24/7
      - 5,000 such AGI research agents could produce superintelligence in a very short time period (years), because every time any one of them makes a breakthrough, it is immediately shared with the other 4,999 AGI researchers

    3. Having an automated AI research engineer by 2027 to 2028 is not something that is far, far off.

      for - progress trap - AI - milestone - automated AI researcher

      progress trap - AI - milestone - automated AI researcher
      - This is a serious concern that must be debated
      - An AI researcher that does research on itself has no moral compass and can encode undecipherable code into future generations of AI, leaving humans no back door if something goes wrong
      - For instance, if AI reached the conclusion that humans need to be eliminated in order to save the biosphere, it could disseminate its strategies covertly through secret communications with unbreakable code

  2. Nov 2023
    1. The nightmares of AI discrimination and exploitation are the lived reality of those I call the excoded

      Defining 'excoded'

    2. AI raises the stakes because now that data is not only used to make decisions about you, but rather to make deeply powerful inferences about people and communities. That data is training models that can be deployed, mobilized through automated systems that affect our fundamental rights and our access to whether you get a mortgage, a job interview, or even how much you’re paid. Thinking individually is only part of the equation now; you really need to think in terms of collective harm. Do I want to give up this data and have it be used to make decisions about people like me—a woman, a mother, a person with particular political beliefs?

      Adding your data to AI models is a collective decision

  3. Feb 2023
    1. Staff and students are rarely in a position to understand the extent to which data is being used, nor are they able to determine the extent to which automated decision-making is leveraged in the curation or amplification of content.

      Is this a data (or privacy) literacy problem? A lack of regulation by experts in this field?

  4. Jan 2023
    1. View closed captioning or live transcription during a meeting or webinar:
       Sign in to the Zoom desktop client.
       Join a meeting or webinar.
       Click the Show Captions button.
    2. If closed captioning or live transcripts are available during a meeting or webinar, you can view these as a participant
    1. To enable automated captioning for your own use:
       Sign in to the Zoom web portal.
       In the navigation menu, click Settings.
       Click the Meeting tab.
       Under In Meeting (Advanced), click the Automated captions toggle to enable or disable it.
       If a verification dialog displays, click Enable or Disable to verify the change.
       Note: If the option is grayed out, it has been locked at either the group or account level. You need to contact your Zoom admin.
       (Optional) Click the edit option to select which languages you want to be available for captioning.
       Note: Step 7 may not appear for some users until September 2022, as a set of captioning enhancements are rolling out to users over the course of August.
  5. Dec 2022
    1. It’s tempting to believe incredible human-seeming software is in a way superhuman, Bloch-Wehba warned, and incapable of human error. “Something scholars of law and technology talk about a lot is the ‘veneer of objectivity’ — a decision that might be scrutinized sharply if made by a human gains a sense of legitimacy once it is automated,” she said.

      Veneer of Objectivity

      Quote by Hannah Bloch-Wehba, TAMU law professor

  6. May 2022
    1. This model was tasked with predicting whether a future comment on a thread will be abusive. This is a difficult task without any features provided on the target comment. Despite the challenges of this task, the model had a relatively high AUC over 0.83, and was able to achieve double digit precision and recall at certain thresholds.

      Predicting Abusive Conversation Without Target Comment

      This is fascinating. The model is predicting if the next, new comment will be abusive by examining the existing conversation, and doing this without knowing what the next comment will be.
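
      A minimal sketch (not the paper's code) of the kind of evaluation described: given held-out labels and predicted probabilities for whether the next comment will be abusive, compute the AUC and then inspect precision and recall at specific decision thresholds. Python with scikit-learn is assumed, and the arrays below are hypothetical stand-ins for real model output.

          import numpy as np
          from sklearn.metrics import precision_score, recall_score, roc_auc_score

          # Hypothetical held-out data: 1 = next comment was abusive, 0 = it was not.
          y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
          y_prob = np.array([0.10, 0.55, 0.80, 0.20, 0.45, 0.90, 0.40, 0.15, 0.70, 0.05])

          # Ranking quality across all thresholds (the paper reports AUC over 0.83).
          print("AUC:", roc_auc_score(y_true, y_prob))

          # Precision/recall trade-off at particular operating points.
          for threshold in (0.5, 0.7):
              y_pred = (y_prob >= threshold).astype(int)
              print(f"threshold={threshold}",
                    "precision:", precision_score(y_true, y_pred),
                    "recall:", recall_score(y_true, y_pred))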

  7. Apr 2022
    1. And therefore, to accept the dictates of algorithms in deciding what, for example, the next song we should listen to on Spotify is, accepting that it will be an algorithm that dictates this because we no longer recognize our non-algorithmic nature and we take ourselves to be the same sort of beings that don’t make spontaneous irreducible decisions about what song to listen to next, but simply outsource the duty for this sort of thing, once governed by inspiration now to a machine that is not capable of inspiration.

      Outsourcing decisions to algorithms

  8. Mar 2022
    1. The growing prevalence of AI systems, as well as their growing impact on every aspect of our daily life, creates a great need to ensure that AI systems are "responsible" and incorporate important social values such as fairness, accountability and privacy.

      An AI is the sum of its programming along with its training data. Its "perspective" on social values such as fairness, accountability, and privacy is a function of the data used to create it.

  9. Dec 2021
  10. Jul 2021
  11. Jun 2021
  12. Feb 2021
    1. Keeping bootstrap-sass in sync with upstream changes from Bootstrap used to be an error-prone and time-consuming manual process. With Bootstrap 3 we have introduced a converter that automates this.
  13. Nov 2020
  14. Oct 2020
    1. the actual upgrade path should be very simple for most people since the deprecated things are mostly edge cases and any common ones can be codemodded
  15. Sep 2020
  16. Jul 2020
  17. May 2020
    1. I originally did not use this approach because many pages that require translation are behind authentication that cannot/should not be run through these proxies.
    2. It shouldn't be a problem to watch the remote scripts for changes using Travis and repack and submit a new version automatically (depends on licensing). It does not put the script under your control, but at least it's in the package and can be reviewed.
    1. You might try this extension: https://github.com/andreicristianpetcu/google_translate_this It does the same thing in the same way as Page Translator and likely will be blocked by Mozilla, but this is a cat and mouse game worth playing if you rely on full-page in-line language translation.
  18. Mar 2020
    1. For automated testing, include the parameter is_test=1 in your tests. That will tell Akismet not to change its behaviour based on those API calls – they will have no training effect. That means your tests will be somewhat repeatable, in the sense that one test won’t influence subsequent calls.
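
      A minimal sketch of such a test call, assuming Python with the requests library and the classic key-as-subdomain Akismet comment-check endpoint; the API key, site URL, and comment text are placeholders:

          import requests

          API_KEY = "your-akismet-api-key"  # placeholder, not a real key

          # comment-check call that Akismet will not learn from, thanks to is_test=1
          response = requests.post(
              f"https://{API_KEY}.rest.akismet.com/1.1/comment-check",
              data={
                  "blog": "https://example.com",   # placeholder site URL
                  "user_ip": "203.0.113.5",        # documentation-range IP
                  "user_agent": "pytest-suite/1.0",
                  "comment_content": "Sample comment used only in automated tests",
                  "is_test": 1,                    # no training effect; tests stay repeatable
              },
          )
          print(response.text)  # "true" if judged spam, "false" otherwise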
  19. Jan 2020
  20. Dec 2019
  21. Dec 2015
    1. With SmartBooks, students can see the important content highlighted

      Like an algorithmic version of Hypothesis? Is McGraw-Hill part of the Coalition? Looks like it isn’t. Is it a “for us or against us” situation?

  22. Feb 2014
    1. Alternatively, Daphne Koller and Andrew Ng who are the founders of Coursera, a Stanford MOOC startup, have decided to use peer evaluation to assess writing. Koller and Ng (2012) specifically used the term “calibrated peer review” to refer to a method of peer review distinct from an application developed by UCLA with National Science Foundation funding called Calibrated Peer Review™ (CPR). For Koller and Ng, “calibrated peer review” is a specific form of peer review in which students are trained on a particular scoring rubric for an assignment using practice essays before they begin the peer review process.
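
      A minimal sketch of the calibration gate described above, not UCLA's CPR or Coursera's actual code; the essay names, scores, and tolerance are hypothetical:

          # Reviewers score practice essays with known reference scores; only those
          # whose scores track the rubric closely enough proceed to live peer review.
          REFERENCE_SCORES = {"practice_1": 4, "practice_2": 2, "practice_3": 5}
          TOLERANCE = 1.0  # max mean absolute deviation, in rubric points

          def is_calibrated(reviewer_scores: dict) -> bool:
              """True if the reviewer's practice scores stay close to the reference rubric."""
              deviations = [abs(reviewer_scores[essay] - ref)
                            for essay, ref in REFERENCE_SCORES.items()]
              return sum(deviations) / len(deviations) <= TOLERANCE

          print(is_calibrated({"practice_1": 4, "practice_2": 3, "practice_3": 5}))  # True (mean dev ~0.33)
          print(is_calibrated({"practice_1": 1, "practice_2": 5, "practice_3": 2}))  # False (mean dev 3.0)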
  23. Jan 2014
    1. A rigorous understanding of these developmental processes requires automated methods that quantitatively record and analyze complex morphologies and their associated patterns of gene expression at cellular resolution.

      Rigorous understanding requires automated methods using quantitative recording and analysis.