8 Matching Annotations
  1. Last 7 days
    1. But even if you can whip out a phone-calculator from your pocket, that extra step is still a hurdle that disincentivizes actually doing that work.

      If AI removes cognitive effort entirely, students might be less likely to engage deeply with school material. Personally, it reinforces my concern that convenience can sometimes come at the cost of learning, and that thoughtful scaffolding is necessary to make sure students still develop essential skills before relying on technology.

    2. Last but certainly not least here, LLMs require a tremendous amount of energy throughout the entire lifecycle, from building data centers to training AI to processing user queries. Looking at just one company, Google’s carbon emissions have gone up by more than 50% in recent years because of its AI energy needs; by 2026, AI’s energy demand will be roughly that of Japan, the #5 country in terms of annual energy consumption. Given that we’re facing both an energy crisis and a climate crisis, widespread use of AI will make both worse, so much so that lawsuits related to this environmental impact are being filed or contemplated.

      I hadn't previously considered the environmental side of AI, but it’s alarming that the energy demands of LLMs are so massive. Training AI and running data centers at this scale clearly has a real impact on both energy consumption and carbon emissions. Comparing Google’s AI energy use to the consumption of an entire country really puts things into perspective.

    3. Likewise, it is very surprising when ChatGPT can’t even beat a 1977 Atari gaming machine in chess. Sure, ChatGPT also isn’t designed to be a chess engine, but you would expect something that is otherwise so capable—even superhuman for many tasks—to not fail so badly at games that rely on critical thinking and planning.

      This was surprising to read! It shows me how AI can seem quite advanced in some areas but fail badly in others. ChatGPT isn’t actually reasoning; it’s predicting human language, not planning moves like a chess engine. For me, this is a reminder that even though AI presents itself as “intelligent,” it has its limits like everything else.

    4. As with AI art, AI writing can look weirdly the same no matter which AI app created it.

      I've noticed that a lot of AI-generated writing has a polished but formulaic feel, almost like it’s missing the quirk and unique voice that makes human writing engaging, especially with the em dashes it sprinkles mid-sentence. For me, that sameness is one of the reasons I’m cautious about relying on AI for creative work. It can be useful for brainstorming or structuring ideas, but if everything we write starts to sound the same, we risk losing the diversity of voices and styles that makes writing meaningful.

    5. This isn’t just a psychological effect: research is showing that relying on AI can change how your brain works in ways that can resemble brain damage. Some people have already been involuntarily hospitalized for “ChatGPT psychosis”, which even the tech industry acknowledges is a big problem.

      Out of everything I've read thus far in this article, this excerpt made me pause and truly reflect. I find it alarming that relying on AI could have such serious cognitive and psychological consequences. The idea that AI use could actually change the way our brains function, or even trigger severe mental health issues like “ChatGPT psychosis”, makes me question how we integrate these tools into daily life and education. For me, this emphasizes the importance of setting boundaries, using AI deliberately rather than excessively, and ensuring that we maintain our own reasoning and critical thinking skills. It also makes me think that conversations around AI shouldn’t just focus on convenience or productivity, but also on mental health and long-term cognitive well-being.

    6. AI can be more creative, especially if you’re a person who’s not that creative to begin with. Not everyone is, and that’s ok. With AI, you can now do things that you previously couldn’t, such as effortlessly creating art and music, even if you have no skill or training.

      I can definitely see the value in that perspective. For people who don’t feel naturally creative or haven’t had training, AI can open up possibilities that were previously out of reach, like making art, music, or written work with ease. For me, that’s exciting because it lowers barriers and allows more people to experiment, express themselves, and engage with creative processes. At the same time, I think it raises interesting questions about what creativity really means—if AI is generating something for you, is it your creativity, the AI’s, or a mix of both? I feel like the key is how we use AI: it can be a tool to enhance our ideas, explore new possibilities, and build skills we wouldn’t otherwise develop. For someone like me, who might struggle with certain creative tasks, AI could act as both a learning aid and a springboard for personal expression, as long as I remain actively involved in shaping the final product.

      However, as someone who is very creative and values creative expression as something uniquely human, I would be lying if I said that AI's capacity to write books and create art didn't concern me. Part of what makes creative work meaningful for me is the process: the struggle, the experimentation, and the personal choices that shape a piece of art, music, or writing. With AI, there’s a risk that these processes could be shortcut or devalued, producing work that is polished but lacks the nuance and personal perspective that comes from human effort. I worry that if AI becomes the default way to create, it could shift expectations and standards, making it harder for people like me, who take pride in crafting ideas from scratch, to have our work recognized or appreciated. I do see that AI could be a useful tool if used intentionally, but my concern is that it might overshadow the very human creativity that defines our contributions and distinguishes our work from what a machine can generate.

    7. Therefore, we need to strike a balance between the two possibilities if we don’t know how the future will play out. This means an open conversation about AI’s pros and cons, since ultimately it will be your decision to use AI in your coursework or not, even when an instructor prohibits it.

      Finding a balance with using AI in school is so important. It's not just about whether or not we use it, but also about understanding the risks and benefits so that we make informed, smart choices. Even if an instructor discourages or bans AI, like the author says, we still make the decision to use it or not. So, learning to decide responsibly and thoughtfully seems just as important as learning how to use the tool itself. For me, this highlights that part of being prepared for the future is developing judgment, not just technical skills.

    8. If it’s a game-changer that will be regularly used in future jobs, then students will need to know how to use it expertly; thus it may be premature and potentially a disservice to students to ban AI in the classroom. Without learning how to “wrangle” AI, you could be at a competitive disadvantage once you graduate and enter the workforce.

      I read this as a strong case for why banning AI in the classroom could actually harm students. If AI is going to be central to the future workforce, then learning to use it responsibly is a skill students need to develop, much like digital literacy.