6 Matching Annotations
  1. Last 7 days
    1. Google AI head Jeff Dean acknowledged that the paper “surveyed valid concerns about LLMs,” but claimed it “ignored too much relevant research.” When asked for comment by Rolling Stone,

      It doesn't seem like he necessarily cared that she was leaving; he just wanted her gone, and found any excuse to get her off the Google team. Like Buolamwini, she talked to important people to try to get things changed and improved, and it seemed like there was no interest in anything she had to say.

    2. “The mask worked,” Buolamwini says, “and I felt like, ‘All right, that kind of sucks.’”

      I think there is definitely a connection between the two, because they are both struggling with the same problems in their jobs, and as soon as they change skin tones, they become more relevant and people care more about their opinions.

  2. Sep 2024
    1. Each layer of an LLM is a transformer,

      This is related to the video because GPT-3.5, the model behind ChatGPT, uses this same method to change wording. The LLM is a very important part of ChatGPT because it is what lets the model transform sentences into more proper, correct ones.

    2. In the attention step, words “look around” for other words that have relevant context and share information with one another. In the feed-forward step, each word “thinks about” information gathered in previous attention steps and tries to predict the next word.

      This is very similar to the ChatGPT video, because of how ChatGPT produces its answers: it draws on other people's answers and searches. Words look to one another to make everything relevant and fit together well, while ChatGPT puts people's searches together to create one answer.
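      The "look around and share information" idea in the quote can be sketched in code. This is a minimal toy illustration (not the article's actual model): each word's vector scores every other word's vector, the scores become weights via softmax, and each word updates itself with a weighted mix of the others.

      ```python
      import math

      def softmax(scores):
          """Turn raw scores into weights that sum to 1."""
          exps = [math.exp(s) for s in scores]
          total = sum(exps)
          return [e / total for e in exps]

      def attend(vectors):
          """Each word 'looks around': it scores every word (including
          itself) by dot product, then blends their vectors by weight."""
          updated = []
          for query in vectors:
              scores = [sum(q * k for q, k in zip(query, key)) for key in vectors]
              weights = softmax(scores)
              mixed = [sum(w * vec[i] for w, vec in zip(weights, vectors))
                       for i in range(len(query))]
              updated.append(mixed)
          return updated

      # Three toy word vectors; after attending, each one carries a bit
      # of information from the words it found relevant.
      word_vectors = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
      new_vectors = attend(word_vectors)
      ```

      Real transformers use learned query/key/value matrices and many attention heads, but the core "weighted mix of relevant words" step is the same shape as this sketch.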

    3. language models end up learning a lot about how human language works simply by figuring out how to best predict the next word.

      I'm using this quote again because it connects to the video, where ChatGPT models learn how to predict words for their responses. That prediction leads to complex possibilities, and Dobrin connects to this: context is the main reason why AI can make these predictions.
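      The idea of learning language "simply by predicting the next word" can be shown with a deliberately tiny sketch (my own example, not from the article): even a bigram counter picks up patterns of the language just by tracking which word follows which.

      ```python
      from collections import Counter, defaultdict

      def train(text):
          """Count, for each word, which words follow it in the text."""
          counts = defaultdict(Counter)
          words = text.split()
          for prev, nxt in zip(words, words[1:]):
              counts[prev][nxt] += 1
          return counts

      def predict_next(counts, word):
          """Return the most frequently seen next word, or None."""
          if word not in counts:
              return None
          return counts[word].most_common(1)[0][0]

      model = train("the cat sat on the mat and the cat slept")
      print(predict_next(model, "the"))  # → cat ("cat" followed "the" twice)
      ```

      An LLM does something far richer, using vectors and many layers of context instead of raw counts, but the training signal is the same: predict the next word, and language structure falls out as a side effect.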

    4. Some people argue that such examples demonstrate that the models are starting to truly understand the meanings of the words in their training set. Others insist that language models are “stochastic parrots”

      This sentence connects to Dobrin's perspective on ChatGPT's limits, arguing over whether ChatGPT actually understands the language or just copies answers that sound right. The video also discusses this: ChatGPT's model creates responses based on context rather than being conscious.