11 Matching Annotations
  1. Last 7 days
    1. If a random person feeds your work into an AI to copy your style and flood the market with similar content, that could kill demand for your work and your blossoming career before you have much chance to build up a body of work and establish yourself.

      Yes, this is already happening in many markets. There’s no way to stop it, so it really pushes creators to develop more unique and distinctive styles to stand out.

    2. Besides, no company is immune from having its data hacked or leaked. Already, some users have accidentally let their private AI queries be posted publicly. It’s also possible that your psychological vulnerabilities and stressors could be deduced from your AI chats, creating a risk of manipulation by bad actors.

      I completely agree. Technology itself isn’t the problem; the real issue is who controls and uses it. As I said before, no one can completely avoid modern technology in today’s world. What we can do is ensure proper regulation, standards, and oversight so that AI is used responsibly and safely.

    3. But if you’re not putting in the work to receive a minimum education, then how can you know when AI is hallucinating?

      I totally agree! As with critical literacy, AI is a powerful tool, but it requires the user to have enough knowledge and discernment to tell when it’s wrong. I can only imagine how challenging it will be for future early childhood and elementary teachers to manage this… and yes, that sounds like a lot of work. Just kidding!

    4. This doesn’t necessarily mean that everyone is pushed to the same ideas, but AI “can funnel users with similar personalities and chat histories toward similar conclusions.”

      I hadn’t realized that before—it’s a really important point. I can see now how AI might guide people with similar backgrounds or habits toward the same kinds of conclusions, even without them noticing.

    5. To appreciate the difference, imagine you didn’t know English and had to rely on a translation app: how different and worse would your daily life be?

      I think people often forget the original purpose of AI tools like translation apps: to make life easier and communication more convenient. Not everyone has the time or money to learn a new language, especially just for short-term travel, so translation software provides access for the general public. It’s only a medium, not a replacement. Some may prefer to stay in their comfort zone, and that’s fine—we don’t need to force everyone to learn deeply. But for those who truly want to grow, it’s important to remember that AI is just a tool to serve us, not a substitute for our own effort.

    6. It seems so widescale that AI has been called a “mass-delusion event.” Several users have been led by AI to commit suicide.

      I think the idea of AI as a “mass-delusion event” sounds exaggerated. When I looked into “ChatGPT psychosis” cases, most involved people who already had mental health challenges or were socially marginalized—these are extreme examples, not the norm. It reminds me of nuclear energy: the real danger is not the technology itself, but how people use and control it. For example, in the Windsor Castle intruder case, the key questions are not simply “AI caused this,” but rather: why did this person only listen to a machine’s encouragement? Who was truly behind that encouragement? Why would someone prefer to confide in a robot rather than a human? And why did the operators of that AI system fail to detect and report it in time? These deeper issues of responsibility and oversight are more important to examine than blaming AI for causing psychosis.

    7. You’re here at a university to develop as a human being—to become a better, more educated person and citizen of the world—and learning how to productively disagree (and to resolve that) is a critical part of education.

      I strongly agree and appreciate this perspective. This is exactly the advantage of a university environment: unlike social media, where expressing an opinion can expose personal information and invite insults or extreme reactions, universities provide a space for free, respectful expression. It’s where diverse ideas can flourish, and people can learn to disagree productively, which is essential for human progress.

    8. With the accumulated knowledge of the world now at their fingertips, students need their teachers more, not less.

      I see your point about students needing teachers more in the AI age, but I think this mainly applies to younger students. For university and graduate students, reliance on teachers is often limited. In my own experience, most of my university courses were largely self-directed; lectures and teaching methods didn’t suit classes of 50+ students. Students sometimes had to focus on navigating relationships with professors to get higher grades rather than on deeply learning the content. Good professors are rare, and even when you find one, individual guidance depends on the match between the teacher’s style and the student’s needs. In extreme cases, as in some Asian graduate programs, students may even feel pressured to help professors with personal tasks to gain favor, which can compromise the integrity of academic learning.

    9. For instance, AI prompt engineering went from the “hottest job in 2023” to “obsolete” in two years.

      This accelerates a “fast-food era” of work, where adaptability is crucial. AI is now indispensable in many workplaces, so the question is not whether we use it, but how we use it. What we need is an AI critical literacy curriculum: learning to master AI tools, apply them effectively, and develop critical thinking skills through this process.