6 Matching Annotations
  1. Last 7 days (drive.google.com)
    1. It does not propose to identify the risk level of AI systems but rather focuses on its outcome to regulate.

      If the UK’s plan doesn’t identify risk levels and only focuses on outcomes, regulators won’t know which systems are dangerous in the first place. It feels like the author is assuming outcome-based regulation will work just as well without proving it. That looks like a hasty generalization, because skipping risk evaluation could mean missing red flags early on. The author should explain why this approach is still reliable or give an example of it actually working.

    2. As of May and June 2020, the Global Privacy Assembly found that 68 per cent of the 38 members surveyed did not have laws or guidelines specific to accountability in the use of AI.

      It was shocking to me that most of the countries surveyed did not have specific laws for AI accountability in 2020. This shows that even though people have been talking about regulating AI for a while, the actual rules haven’t caught up. The paragraph implies that having laws automatically makes AI safer, but it doesn’t explain how those rules would actually be enforced, which feels like false cause: it assumes that simply passing laws will make AI safer. It would also help to point out which countries have created good regulations, or to show cases where the lack of laws caused real problems.

  2. Oct 2025 (drive.google.com)
    1. In March 2023, a bug was discovered in ChatGPT, which caused a leak of information (in particular the titles of conversations with the chatbot) from one user’s chat history to other users

      A ChatGPT bug in March 2023 leaked some users’ chat info to others, which is kind of scary, but it makes sense since software is never perfect. The paragraph makes it seem like the incident is over and done with, but something like this could happen again. It would be better if the author explained how OpenAI actually fixed it or what they are doing to prevent future leaks (a rough sketch of how this kind of leak can happen is below). Either way, it shows that AI isn’t totally risk-free, and people should still be careful with their data.
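
      To make the failure concrete, here is a small hypothetical Python sketch of how a result cache that is shared across users, instead of being keyed per user, can surface one person’s conversation titles to someone else. This only illustrates the general class of bug; it is not OpenAI’s code or the actual cause of the March 2023 incident, and the class, names, and titles are made up.

      ```python
      from dataclasses import dataclass, field
      from typing import Optional

      @dataclass
      class ChatService:
          titles_by_user: dict = field(default_factory=dict)
          _last_result: Optional[list] = None

          def sidebar_titles(self, user_id: str) -> list:
              # BUG: the cached result is shared across all users instead of being
              # keyed by user_id, so a cache hit can return someone else's titles.
              if self._last_result is not None:
                  return self._last_result
              result = self.titles_by_user.get(user_id, [])
              self._last_result = result
              return result

      svc = ChatService(titles_by_user={
          "alice": ["Tax questions", "Health symptoms"],
          "bob": [],
      })
      svc.sidebar_titles("alice")       # warms the shared cache with Alice's titles
      print(svc.sidebar_titles("bob"))  # Bob gets Alice's titles back: the leak
      ```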

    2. If, without the data subject’s consent, the personal data is scraped (ie, extracted using web scraping technology) and used for generating new images of that named staff for an advertisement on tourism, for example, personal data privacy laws might have been breached.

      Using people’s personal data for AI without permission could break privacy laws. That makes sense because privacy laws are supposed to protect people, and using data for something totally different from what it was collected for seems wrong (a sketch of the kind of scraping the passage describes is below). But the paragraph kind of assumes all countries handle this the same way, which isn’t true and makes the point weaker. It would be stronger if it gave examples of real laws or cases where AI misused personal data.
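
      The passage mentions personal data being "extracted using web scraping technology" and then reused for a different purpose. Here is a minimal hypothetical Python sketch of that kind of scraping; the URL, the CSS selectors, and the page structure are all made up for illustration and are not taken from the article.

      ```python
      import requests
      from bs4 import BeautifulSoup

      def scrape_staff_profiles(url: str) -> list:
          """Collect named staff profiles (name + photo URL) from a public page."""
          html = requests.get(url, timeout=10).text
          soup = BeautifulSoup(html, "html.parser")
          profiles = []
          for card in soup.select("div.staff-card"):   # assumed page structure
              name = card.select_one("h3").get_text(strip=True)
              photo_url = card.select_one("img")["src"]
              profiles.append({"name": name, "photo_url": photo_url})
          return profiles

      # Personal data published for one purpose (a staff directory) now sits in a
      # dataset that could be reused, e.g. to generate new images of the named
      # staff for a tourism ad, the repurposing the passage flags as a likely breach.
      profiles = scrape_staff_profiles("https://example.org/staff")
      ```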

    3. However, this verification process may not be straightforward, as the current version of some chatbots does not readily identify where its sources of information are located. Improvements to ensure accountability and transparency are likely to need to be further considered for future advances in technology.

      What the author is saying here is that AI tools are only reliable if their output is verified. Verification sounds like an easy step, but chatbots don’t always show their sources, and we are never told how verification could actually be done, so that point is missing. The author could make the argument stronger by showing examples of AI mistakes and by explaining concrete verification steps, like the sketch below.
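
      Since the annotation asks what verification could even look like, here is a minimal Python sketch of one possible step: take the sources a chatbot claims to rely on and check that each cited page actually exists and mentions the key terms of the claim. This is an assumption about how verification could be done, not a feature of any real chatbot; the function names, URL, and example inputs are hypothetical.

      ```python
      import requests

      def source_supports_claim(url: str, key_terms: list) -> bool:
          """Fetch the cited page and check it mentions every key term of the claim."""
          try:
              page = requests.get(url, timeout=10)
              page.raise_for_status()
          except requests.RequestException:
              return False                      # dead or unreachable citation
          text = page.text.lower()
          return all(term.lower() in text for term in key_terms)

      def check_claim(claim: str, cited_urls: list, key_terms: list) -> dict:
          results = {url: source_supports_claim(url, key_terms) for url in cited_urls}
          return {"claim": claim, "supported_by": results,
                  "verified": any(results.values())}

      # Example usage with made-up inputs:
      report = check_claim(
          claim="The GDPR took effect in May 2018",
          cited_urls=["https://example.org/gdpr-overview"],
          key_terms=["GDPR", "2018"],
      )
      print(report)
      ```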

    4. As the responses from chatbots are generated from information collected by the program rather than the fallible human memory, one may incorrectly assume that no errors exist. However, there is material risk in relying exclusively on AI-generated responses without verification of the generated content.

      The argument the author makes here is reasonable and sound, because errors can still occur when using AI. There is a common belief that AI is highly trustworthy when it comes to its information: since AI is not a person, we assume it only gives correct and accurate answers. Because this is not true, there is still real risk in relying on AI, since its information can be wrong or misleading.