7 Matching Annotations
  1. Nov 2023
    1. The nightmares of AI discrimination and exploitation are the lived reality of those I call the excoded

      Defining 'excoded'

    2. AI raises the stakes because now that data is not only used to make decisions about you, but also to make deeply powerful inferences about people and communities. That data is training models that can be deployed and mobilized through automated systems that affect our fundamental rights and our access to opportunity: whether you get a mortgage, a job interview, or even how much you’re paid. Thinking individually is only part of the equation now; you really need to think in terms of collective harm. Do I want to give up this data and have it be used to make decisions about people like me—a woman, a mother, a person with particular political beliefs?

      Adding your data to AI models is a collective decision

  2. Feb 2023
    1. Staff and students are rarely in a position to understand the extent to which data is being used, nor are they able to determine the extent to which automated decision-making is leveraged in the curation or amplification of content.

      Is this a data (or privacy) literacy problem? Or a lack of regulation by experts in this field?

  3. Dec 2022
    1. It’s tempting to believe incredible human-seeming software is in a way superhuman, Bloch-Wehba warned, and incapable of human error. “Something scholars of law and technology talk about a lot is the ‘veneer of objectivity’ — a decision that might be scrutinized sharply if made by a human gains a sense of legitimacy once it is automated,” she said.

      Veneer of Objectivity

      Quote by Hannah Bloch-Wehba, TAMU law professor

  4. May 2022
    1. This model was tasked with predicting whether a future comment on a thread will be abusive. This is a difficult task without any features provided on the target comment. Despite the challenges of this task, the model had a relatively high AUC of over 0.83, and achieved double-digit precision and recall at certain thresholds.

      Predicting Abusive Conversation Without Target Comment

      This is fascinating. The model predicts whether the next, as-yet-unwritten comment will be abusive by examining the existing conversation alone, without knowing anything about the comment itself. A toy sketch of this setup follows.
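
      A toy sketch of how such a setup might be framed and evaluated, assuming scikit-learn; the features, labels, and thresholds below are invented for illustration and are not from the cited paper:

      ```python
      # Sketch: predict whether the NEXT comment in a thread will be abusive
      # using only features of the conversation so far (no features of the
      # target comment itself). All data here is synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import precision_score, recall_score, roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Stand-ins for per-thread features computed from existing comments,
      # e.g. count of prior toxic comments, reply depth, sentiment drift.
      X = rng.normal(size=(5000, 3))
      # Label: 1 if the next (unseen) comment turned out to be abusive.
      y = (X @ np.array([1.5, -0.8, 0.4]) + rng.normal(size=5000) > 1.2).astype(int)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      model = LogisticRegression().fit(X_train, y_train)
      scores = model.predict_proba(X_test)[:, 1]

      # AUC is threshold-free: it summarizes ranking quality overall.
      print(f"AUC: {roc_auc_score(y_test, scores):.2f}")

      # Precision and recall depend on where the decision threshold is set,
      # which is why the excerpt reports them "at certain thresholds".
      for threshold in (0.3, 0.5, 0.7):
          preds = (scores >= threshold).astype(int)
          p = precision_score(y_test, preds, zero_division=0)
          r = recall_score(y_test, preds, zero_division=0)
          print(f"threshold {threshold}: precision {p:.2f}, recall {r:.2f}")
      ```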

  5. Apr 2022
    1. And therefore, we come to accept the dictates of algorithms in deciding, for example, what the next song we listen to on Spotify should be. We accept that an algorithm dictates this because we no longer recognize our non-algorithmic nature: we take ourselves to be beings that don’t make spontaneous, irreducible decisions about what song to listen to next, but simply outsource that duty, once governed by inspiration, to a machine that is not capable of inspiration.

      Outsourcing decisions to algorithms

  6. Mar 2022
    1. The growing prevalence of AI systems, as well as their growing impact on every aspect of our daily life, creates a great need to ensure that AI systems are "responsible" and incorporate important social values such as fairness, accountability, and privacy.

      An AI is the sum of its programming and its training data. Its "perspective" on social values such as fairness, accountability, and privacy is a function of the data used to create it. One way to make such a value concrete is sketched below.
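
      One way to make a value like "fairness" measurable is a demographic-parity check: compare a model's positive-prediction rate across groups. A minimal, hypothetical sketch with invented group labels and predictions, not drawn from the cited text:

      ```python
      # Demographic parity: does the positive-prediction rate differ by group?
      # Synthetic data; a real audit would use a model's actual outputs.
      import numpy as np

      rng = np.random.default_rng(1)
      group = rng.choice(["A", "B"], size=1000)  # protected attribute
      # Simulate a model that flags group A more often than group B.
      preds = rng.random(1000) < np.where(group == "A", 0.6, 0.4)

      for g in ("A", "B"):
          rate = preds[group == g].mean()
          print(f"group {g}: positive-prediction rate {rate:.2f}")

      # A large gap between the two rates is one signal that the model (or the
      # data it was trained on) encodes a skewed "perspective" on these groups.
      ```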