6 Matching Annotations
  1. Oct 2025
    1. So, it’s far from clear that they can improve customer service, profitability, employee morale, productivity, and so on, even if using AI is cheaper than employing humans in the short term. For instance, this 2025 report found that AI users were prone to overestimating the benefits—believing that AI tools would speed up work by 20%, when in reality the tools slowed down their work by about 20% for a range of reasons, including having to fix AI errors. That research is still ongoing, and it’s too early to tell, given that LLMs came onto the scene only a couple of years ago. But the gaffes that companies have made—even Google, a leader in AI—have been embarrassing or, worse, a source of legal liability.

      This indicates that the initial optimism about what AI could be capable of might be causing people to see the technology through rose-coloured glasses. From a teacher's perspective, many students seem to think that AI can solve all of their problems.

    2. The impact on local communities is real. In Santa Clara county, the heart of Silicon Valley, AI data centers are estimated to gobble up 60% of the area’s available energy. At the same time, Silicon Valley and other parts of the state, as well as the country, continue to experience blackouts during times of peak demand for energy, such as on hot days, and our days are getting hotter and hotter. Energy companies are already under intense pressure to fix their infrastructure which has been blamed for devastating wildfires. Early estimates of the 2025 Los Angeles wildfires, possibly caused by sparking power lines, put the damages at over $250 billion. All this points to a looming energy disaster, which feeds the worsening climate disaster.

      This exposes the devastating, interconnected nature of development and environmental disaster. The costs of AI seem high when considering its impact on the communities that host its infrastructure. To what extent will humankind continue to chase knowledge?

    3. Hopefully, you’re still in school because you understand that it’s still important to learn something in this education you are paying for, even if your endgame is to just receive a degree you can flash to employers and get on with your life. Imagine that it’s required to join a gym and complete a bunch of fitness courses in order to land a certain job, such as being a firefighter or some other physically demanding work. Would it be ok to just send in a robot in your place to do all that heavy lifting and other workouts? Even if you could get away with it, why would you want to, especially if you expect to do well in that job?

      While I understand Lin's attempt to reason with the archetype of the lazy postsecondary student, whose mantra is "Cs get degrees," I think that there are a lot of institutional issues that are not being touched on.

      A lot of university students pay everything they have to take courses that are far out of touch with their interests and career path. While Lin stated that university is not vocational school, this section brings it back to the idea of becoming successful in a certain field. I find this confusing and inconsistent.

      I believe that one of the issues pushing students toward AI use is the fact that programs rarely reflect people's goals. They are often a combination of vocational knowledge, niche interests that do not appeal to everyone, and whatever electives fit into one's schedule.

    4. but to give up on reason and whatever other intellectual powers we possess is to give up on being human.

      This is a strong point and a good one. I agree that being able to reason is the superpower of humanity. While students at times rely on generative AI for school assignments, a lot of reasoning still happens outside of the classroom—in relationships, family life, commitments, and jobs. It could be valuable to think of ways to make academic reasoning more "human," so that students are able to treat it like they would any other situation.

    5. but there are many important but invisible tradeoffs in using it, especially in the classroom.

      The mention of "invisible tradeoffs" stands out to me. Oftentimes, students who are trying to get away with using AI when it is prohibited seem either not to care about the "tradeoff" or not to realize that it is happening. By calling it "invisible," the text captures this phenomenon very well. An interesting research angle would be the psychology of plagiarism with AI: what leads a student to engage in this tradeoff, and is it invisible to them?

    6. On the other hand, universities aren’t vocational schools, merely training students for future jobs.

      This point indicates that we need to answer the question, "what are universities for?" to best understand the role of AI in courses. I think that an AI policy at any level of schooling needs to be grounded in what the course is intended for.