12 Matching Annotations
  1. Oct 2025
    1. Imagine an app that provides Students-as-a-Service (SaaS): you can basically rent a human being to pretend they’re you, showing up to your classes and doing all of your work. If this were inexpensive, “everyone is using it”, and hard for instructors to detect, should you use it in school? If not, why not, and is SaaS even a good analogy to LLM use in the classroom?

      I find this analogy really provocative because it forces us to confront the ethical and educational implications of outsourcing our learning. Lin (2025) shows that if we instinctively reject the idea of renting a human to attend class for us, we should question why we view AI-generated work any differently. Both approaches bypass the essential struggle and growth that come from doing the learning ourselves. It also highlights that AI use isn't just about efficiency; it raises deeper questions about authorship, authenticity, and the purpose of education. If the goal is personal development and critical thinking, then neither SaaS nor AI substitution truly fulfills that purpose.

    2. Humans already have a tendency to see what they want to see, finding deep meaning where there is none (i.e., apophenia), whether it’s in mirrors, grilled-cheese sandwiches, horoscopes, fortune cookies, AI chats, or anything else.

      I believe this is a crucial point because it emphasizes how our cognitive biases shape our interpretation of information, and AI can amplify that effect. Lin (2025) points out that people tend to perceive patterns or significance even where none exist, and during interactions with AI, we may attribute intelligence, purpose, or depth to outputs that are merely statistical predictions. This makes critical thinking all the more vital: we must interrogate not only what AI generates but also how our own thinking could be deceived by it.

    3. Your ability to read (and to write and think critically) was painstakingly developed throughout your life. Given the state of the world, now is definitely not the right time to give up on reading and other human abilities. Just because AI can be your go-between and do something for you doesn’t mean it should—there are already too many filters between us and reality. (Here’s why reality is a good thing.)

      I truly connect with this point as it’s a strong reminder that abilities such as reading, writing, and critical thinking are not merely educational tasks but fundamental human skills we've honed over many years. Lin (2025) correctly points out that delegating tasks to AI may further detach us from reality, particularly at a time when misinformation and mediated experiences already distort our perception of the world. It also prompts me to consider how crucial direct involvement is for genuine learning. AI can analyze and condense information, but it cannot substitute for the profound understanding, subtlety, and personal significance gained from engaging with texts independently. Maintaining and exercising those skills seems more crucial now than at any other time.

    4. So, just because we have access to those tools is no guarantee that we will actually use them, even when it’s crucial to our understanding.

      This insight strongly aligns with educational practice. Access by itself does not lead to learning; motivation, discipline, and critical thinking are crucial. It serves as a reminder that despite having tools like AI, students require organized opportunities and direction to utilize them in a meaningful way rather than just superficially.

    5. Being different and authentic would best help you to contribute new ideas to your work, not old ideas that are recycled and repackaged by AI, as well as to demonstrate your uniqueness that will be hard to replace.

      Authenticity extends beyond being an ethical principle; it is a competitive edge in a job market flooded with AI. Lin's point that original thought is harder to automate highlights the importance of developing our own perspective and ideas.

    6. This invisible, menial job is called “ghost work” and powers much of Silicon Valley—any company that relies on refining and annotating data—and is far from a “good job” that might be created by new technologies.

      I had not thought about the unseen work involved in AI systems. Lin's argument reconceptualizes AI as more than a neutral instrument; it is a result of worldwide inequalities and exploitative labor, which complicates the ethical considerations of using it in education.

    7. Google’s carbon emissions have gone up by more than 50% in recent years because of its AI energy needs; by 2026, this will be about the energy demand for Japan,

      This environmental angle is revealing. Reframing AI use from an individual choice to one with global, communal impacts adds an essential ethical dimension frequently absent from discussions of AI in the classroom.

    8. Even if calculators are typically banned in basic math classes, given the usual reason that students are still learning how to perform those calculations in their heads, is that really necessary for everyone to know how to do basic math? If not, then does everyone really need to know how to read, write, and think for themselves?

      This comparison is impactful because it clarifies which kinds of mental effort are too essential to delegate. Calculators assist with calculations, yet they do not substitute for mathematical thought. AI, conversely, can substitute for the thinking process itself, and that changes everything.

    9. Your use of AI to write an essay isn’t going to hurt much by itself, but multiply that by billions of AI queries per day worldwide, that adds up to real trouble.

      It's easy to dismiss individual environmental impacts as insignificant, but Lin's holistic view matters. As teachers and members of society, we ought to model an understanding of these shared impacts, rather than concentrating merely on personal convenience.

    10. Even if AI will be important in the future, for you to avoid becoming the tool for AI, you will also want to be as human as you can.

      Lin's calculator analogy shows why AI's future importance in the workplace does not justify excessive reliance on it today. Students need a strong foundation of knowledge to manage AI competently; without it, they may be unable to identify its mistakes or misuse.

    11. LLMs themselves struggle with explaining their own chain of thought or reasoning process accurately, often making things up.

      This emphasizes a fundamental epistemic issue: AI is not designed to track truth, but to produce plausible language. That limitation fundamentally conflicts with the goals of critical thinking and philosophical reasoning, where understanding matters more than output.

    12. The university is a gym for your mind.

      This comparison reframes the role of higher education. It highlights that the aim is not merely to complete courses or obtain credentials, but to build intellectual strength through diligent practice. Delegating thinking and writing to AI, like sending a machine to the gym in your place, undermines the fundamental goal of education.