18 Matching Annotations
  1. Last 7 days
    1. Efficiency is only one of many goals we can have; it’s not the only goal, even if some people have a fetish or obsession with it. For instance, right away, efficiency is biased toward things that can be easily measured, i.e., activities and outcomes that can be turned into metrics to optimize. But not everything important is easily quantifiable, such as subjective features.

      In education, efficiency often prioritizes outputs—grades, test scores, word counts—while undervaluing subtler aspects of learning like curiosity, creativity, or ethical reflection. Lin is warning that not everything of value, such as the joy of learning, fits into measurable categories.

    2. Using Grammarly in school vs. using LLMs in school. Grammarly used to be for grammar and wordsmithing, but now it uses LLMs and can help students slip past AI detectors and much more. If it was ok to use Grammarly when it just corrected grammar and awkward sentences, which still used some AI but not LLMs, what exactly is the difference between that and ChatGPT?

      This raises an important question about shifting boundaries in technology use: where do we draw the line between permissible assistance and impermissible outsourcing? Grammarly functioned as a tool, polishing grammar, correcting mechanics, and smoothing awkward or lengthy sentences without replacing a student's intellectual labour.

    3. And what would be a competitive advantage in a job market filled with AI wranglers? In a word, that’s authenticity. Having a different, special perspective will separate you from the masses who are using AI to produce more or less the same content with the same, ordinary, and generic voice. Being different and authentic would best help you to contribute new ideas to your work, not old ideas that are recycled and repackaged by AI,

      If most workers rely on AI for content, then originality, voice, and unique perspective become scarce—and therefore valuable. The warning is that AI tends to reproduce “average” responses, so individuals who cultivate their own ideas and styles will stand out in a crowded job market. This places education in a new light: the classroom becomes less about mastering efficiency and more about nurturing individuality, creativity, and critical perspective. The true advantage lies in imaginative contributions.

    4. It's better to know things and have them ready to connect to other things you know, in order to generate new insights. This ability to synthesize information is often held up as one of the most important skills needed for the future.

      Memorization or familiarity with facts is not about stockpiling trivia, but about building a mental library that allows students to make unexpected connections. AI can provide quick answers, but it cannot replace the deeper cognitive skill of synthesis—linking disparate ideas into something original.

    5. In olden times, classrooms banned the use of calculators for math classes, at least at the pre-calculus levels where it was still important for students to learn how to do basic operations for themselves. The usual rationale is laughable now; we were told, “You won’t always have access to calculators!” And that was mostly true…until the rise of mobile phones in the 1990s and then smartphones in the 2000s, when calculators became a standard feature on this one device we always had on us.

      What once seemed like a reasonable restriction now looks outdated, since calculators became universally accessible. The comparison is that while banning calculators seems absurd to our generation (the average university student in 2025), the rationale for banning or limiting AI is not the same. Calculators automate mechanical processes, while AI risks automating the cognitive and creative processes that are central to education. Thus, certain skills must be built before the technology is introduced.

    6. Protecting your privacy and intellectual property (IP) are also ethical and practical concerns. Your AI queries contain valuable clues about you and your dispositions, which could be exploited by marketeers or even weaponized. For instance, say you were writing about some politically charged topic, such as Gaza or abortion or immigration: the fact that you were exploring certain positions, even if you don’t believe them but just wanted to better understand them, could be used against you in any number of possible situations,

      The vulnerability of personal data and intellectual property should be a major consideration in students' use of AI. It may not feel relevant or real, and perhaps can be easily ignored, but every query leaves a digital trace that reveals sensitive information about a person's values, beliefs, and thought processes, which can then be commodified, surveilled, or even weaponized against you. These tools often profit from exploiting user data; it's a matter of digital safety and civic responsibility.

    7. Punting your work to AI, whether in school or at a job, also means depriving yourself of the personal satisfaction that comes from achievement and knowledge, such as actually drawing an artful image instead of typing in words that gets an AI to produce the same thing.

      There is an intrinsic reward in the effort and mastery of a skill, reminding us that learning is not only about external outcomes, such as a higher grade, but also about internal fulfillment. The value is not just in the final product but in the act of honing a skill and experiencing growth. Constantly outsourcing the work turns those achievements into hollow victories.

    8. Worse, LLMs sound confident in their outputs, even when they’re factually wrong, and this makes it even harder to know what claim needs to be double-checked.

      One of the most deceptive qualities of AI is its rhetorical confidence. AI's fluent prose can easily persuade readers even when the content is inaccurate or fabricated. This confident tone can blur the line between trustworthy information and misinformation, making it difficult for students to exercise critical judgement about what to verify.

    9. Research is also showing that AI is homogenizing our thoughts, i.e., our ideas are “regressing to the mean” or collapsing to the most popular or average takes, which means unoriginal ideas. This doesn’t necessarily mean that everyone is pushed to the same ideas, but AI “can funnel users with similar personalities and chat histories toward similar conclusions.”

      There is a subtle but serious intellectual risk: the narrowing of thought. By design, large language models produce outputs that reflect the most statistically probable or average responses, which discourages originality and nuance. This endangers diverse perspectives, flattening them into predictable patterns shaped by prior data and algorithms.

    10. The university is a gym for your mind. For both body and mind, your abilities and skills atrophy or decline when they’re not used, like a dead limb, for efficiency and energy-savings. This deskilling is already happening with doctors and other professionals, not just students.

      This metaphor frames education as a kind of mental training ground, where the workout is, quite simply, doing the reading, writing, and critical thinking, and the result is intellectual strength. Like any muscle in the body, the mind weakens, cognitively speaking, with overreliance on AI. Extending this concern to doctors and other professionals connects it to a larger debate in educational philosophy about resilience, discipline, and the long-term costs of outsourcing effort to machines. I see Lin's point as preserving the mental fitness necessary for intellectual and professional life.

    11. To begin with, why are you at a university in the first place? Maybe you’re here just to get training for a particular job or career path, but this education is generally not free in America, and it likely costs you or your parents a substantial amount of money every year. Even if you have scholarships or a full ride (which someone is paying for), you still incur opportunity costs, i.e., the loss of other things you could have been doing with your time if you weren’t here.

      University is not costless; it involves a substantial financial investment. By raising this point, Lin highlights that education is a scarce and valuable resource, not something to be treated lightly or bypassed with shortcuts like AI. It reframes AI misuse not just as a violation of rules but as a kind of self-sabotage that undermines the very reason for being at university, pushing students to reconsider whether leaning on AI aligns with their educational goals or wastes the financial sacrifice that affords them this opportunity.

    12. There are other possible benefits, of course. AI can help tackle hard problems facing the world, such as climate change, energy sustainability, diseases, and other serious challenges, even aging and death itself. Some predict the end of scarcity (of food, energy, etc.) because of AI and therefore the end of wars once there’s radical abundance.

      This gestures to the almost utopian promises often associated with AI. By listing global crises like climate change, disease, and scarcity, Lin acknowledges the scale of these claims; yet that scale contrasts sharply with the classroom context: while AI may one day help solve humanity's hardest problems, it does not follow that it should solve a student's homework struggles. Framing AI as a saviour of civilization is inappropriate because it obscures AI's limitations, risks, and unintended consequences. What might be lost in human intellectual development?

    13. AI can be smarter or more informed, especially if you’re a person who’s not that educated to begin with. With AI, you can now produce well-written, grammatically correct, thoughtful papers and other content, which might have been a great struggle for you before.

      This captures both the appeal and the danger of AI in education. On the one hand, it democratizes access to polished language and complex information, giving students who struggle with writing mechanics or academic conventions a powerful tool. This acknowledges the real barriers that less experienced or less confident learners face, which may feel liberating for those students. However, the bottom line is that while AI masks those struggles, it does not resolve them. Students skip over the process of learning how to write and think for themselves, leaving them dependent on a system that performs the task for them; this is disempowerment through dependence.

    14. Because students need teachers to get them to do hard things now that are good for them later.

      This statement speaks to the disciplinary role of education. Lin is arguing that part of a teacher’s responsibility is to create conditions where students stretch beyond comfort, engaging in struggles they might otherwise avoid. Discipline here isn’t about punishment but about guidance: helping students practice perseverance, delayed gratification, and critical thinking. In a culture that often prizes convenience and instant results, AI tempts students to bypass the very difficulties that cultivate growth. Teachers step in to enforce boundaries—not to limit freedom, but to protect the long-term benefits of wrestling with hard ideas. Intellectual resilience is worth the challenge and it is not possible without structured discipline.

    15. even if it’s ok to use it in future jobs where producing work is more important than learning.

      The contrast here is between doing and producing; in particular, it emphasizes the difference between active engagement and passive output. Lin highlights that philosophy, and by extension education, is about human activity, not just polished results. He makes sure students don't miss the key concepts. There is a hint of Marxist thought in this passage, even if Lin doesn't frame it in those terms: doing philosophy is authentic, lived, human activity, similar to unalienated labour, while producing philosophy yields a commodified product detached from natural human engagement.

    16. AI can save time, lots of it. What used to take hours or days can be accomplished in minutes. For academic work, AI can help with a huge range of things, such as brainstorming ideas, summarizing, researching, sifting through massive amounts of content or data, and even writing full papers and creating presentations and podcasts. All this means more free time for you, which is a good thing for a student with a busy academic and social life, at least when that extra time is used well.

      By openly acknowledging the usefulness of AI, Lin demonstrates fairness and credibility. He shows that he is not rejecting AI out of fear or ignorance, but out of principle, which makes his case stronger. These benefits (saved time, reduced stress, and help with mechanics) are genuine, and many students find them attractive. However, Lin reframes them as secondary to the real goals of education: deep thinking, meaningful engagement, and intellectual growth. In this way, he distinguishes between surface-level advantages and deeper values, arguing overall that efficiency should not override learning. A very persuasive point.

    17. AI cheating is also disrespectful to your peers who are trying to earn their grades by doing the work needed.

      This point shifts the conversation from individual integrity to collective fairness. When a student uses AI dishonestly, they aren't only misrepresenting their own learning but also undermining the efforts of classmates who worked through the assignment authentically, creating an uneven playing field where some students benefit from shortcuts while others invest time and energy into genuine work.

    18. “AI writing, meanwhile, is a cognitive pyramid scam. It’s a fraud on the reader.”

      This phrase captures Lin's ethical stance. By comparing AI use to a pyramid scheme, he implies that AI's apparent polish hides a hollow foundation: borrowed knowledge without genuine understanding. The idea of "fraud" underscores how misrepresenting AI's work as your own undermines honesty in education. On this view, students are invited to consider not just whether AI can produce work, but whether presenting it as theirs is intellectually or morally defensible. So does convenience justify compromising authorship?