450 Matching Annotations
  1. Sep 2025
    1. criminal sentencing and policing

      Connects to one of my earlier annotations. I really wonder whether the validity of these systems has actually been called into question.

    2. reporting that darker-skinned females are the most likely to be misclassified, with error rates up to 34.7 percent. The error rate for white men: 0.8 percent

      This makes me think of how facial recognition software is used in police dramas on TV. Do cops/detectives/federal agencies actually use facial recognition to find/identify/convict criminals? If they do, does this concern of misclassification also apply to that field?

    3. with often didn’t pick up on her dark-skinned face.

      Reminds me of the issue that Apple facial recognition had with people of Asian, specifically East Asian, descent. I recall seeing a video where a woman got her friend to unlock her iPhone using the facial recognition tech, even though they looked fairly different.

    4. will wipe out the jobs of some marginalized communities

      The conversation about wiping out jobs is one that I have seen, but I would love to look into the specifics of how it affects marginalized groups.

    5. Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide

      I think I recently saw a trailer for a horror/thriller movie premised on a woman in the US starting a job as a content moderator and, yes, feeling traumatized, but also going out of her way to track down the people hurt in the content and the people posting it. I have been aware of content moderation and of things being reported or tagged, but never of the people who have to do that moderation.

    6. How would that risk have changed if we’d listened to Gebru? What if we had heard the voices of the women like her who’ve been waving the flag about AI and machine learning

      Seems like the thesis (?) of the article.

    7. As AI has exploded into the public consciousness, the men who created them have cried crisis

      The regret is crazy. They spent years working on it, with people warning them, and now they're worried.

    1. that writing well is the hardest subject to learn

      I feel like a lot of my STEM friends and colleagues would disagree with me if I said this to them.

    2. Linguistics attributes this to the concept of “bursts” in writing.

      This is a new concept to me, but I can recognize that I have done it in my own writing. This is interesting.

    1. They’ll never have to write essays in the adult workforce, so why bother putting effort into them

      But they will have to write and speak (I think a lot of writing skills translate over into speaking) for the rest of their lives and careers. A friend of mine who just started teaching recently told me how she had to emphasize to her students that, no matter their field, they will need to write.

    1. Lee hopes people will use Cluely to continue AI’s siege on education.

      Lee seems like a villain, I wonder if that is based on my reactions to/perception of him, bias, or the way he has been portrayed by the author.

    2. it might rely on something that is factually inaccurate or just make something up entirely — with the ruinous effect social media has had on Gen Z’s ability to tell fact from fiction

      Interesting, and something I have recognized, but I don't think it is just Gen Z. I think this is a multi-generational problem, especially when it comes to recognizing how truthful AI content is.

    3. How can we expect them to grasp what education means when we, as educators, haven’t begun to undo the years of cognitive and spiritual damage inflicted by a society that treats schooling as a means to a high-paying job, maybe some social status, but nothing more?”

      This is so interesting!

    4. The students kind of recognize that the system is broken and that there’s not really a point in doing this.

      Mirrors what Lee said at the beginning of the article.

    5. Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT

      Another point of interest for this conversation is the power dynamic between Williams and the professor.

    6. whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.

      I think this is reflective of a larger societal issue with patience, effort, and attention.

    7. studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language

      Is it bias in the AI detector? Or is it just that the way these students write is similar to how AI was trained to respond?

    8. counterpoints tend to be presented just as rigorously as the paper’s central thesis

      I wonder if I can find examples of this online. I have an idea of what the author is discussing but I have a hard time visualizing it in my head.

    9. learning is what “makes us truly human.”

      I was not aware of critical pedagogy before this article, but I do agree that learning is part of our humanity.

    10. But she’d rather get good grades

      I honestly agree. I love to learn, I do, but sometimes my fear of failing gets so overwhelming. I think this highlights a lot of the anxiety students feel about getting good grades and passing.

    11. Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human.

      Sounds like the "flattening your voice" argument

    12. Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI.

      This is so crazy. I agree with his notion that lots of students are using AI for classwork, with and without permission from their teachers. However, neither that nor Columbia's partnership justifies his actions.

    1. best be wielded by people who have a knowledge of that heritage

      people with prior knowledge and understanding of the subject, so that they can verify that the information they're receiving is correct. Edit: While this is still valid, I believe my opinion has changed after further research.

    2. There are glyphs that other AIs cannot see. Still other AIs seem to have invented their own languages by which you can invoke them.

      I looked into the two articles linked here and I found the additional information fascinating.