7 Matching Annotations
  1. Oct 2025
    1. Would it be ok to just send in a robot in your place to do all that heavy lifting and other workouts? Even if you could get away with it, why would you want to, especially if you expect to do well in that job?

      I find this comparison extremely poignant. If we sent robots to perform physically demanding tasks in order to qualify for a job, we would not be qualified for that job; the AI would be. It stands to reason, then, that using AI to complete a mentally demanding task makes the AI qualified for the job, not the person who used it. On the other hand, perhaps the ability to use AI properly and efficiently on mental tasks is itself a skill that would qualify someone for that task? In that case a person is still doing something behind the scenes to influence the outcome, as opposed to a robot doing all the work independently of a person. Thoughts?

    2. outsourcing your thinking to an app is very different from outsourcing math or reading. Thinking is a more basic, fundamental intellectual capability, upon which everything else depends. It’s not just about how to think, but also about what you know and how creative you can be

      This is something I strongly believe, and something I reinforce with my students constantly. There are times when a task feels mundane or unnecessary because a tool can do it better, faster, or more easily (learning to print or write in cursive comes to mind these days). While it may be true that tools can accomplish these tasks for you, developing these skills is a fundamental building block for other skills, or it exercises your brain in a way that is good for you, for lack of better wording.

    3. Another reason to be skeptical of rosy predictions is that, today, LLMs aren’t really helping companies that have adopted it, such as replacing humans with AI chatbots. A 2025 MIT report found that a whopping 95% of companies surveyed, that implemented generative AI, are seeing 0% return-on-investment. Worse, many companies are backtracking and spending money to fix AI mistakes, and many experts are predicting the AI bubble will burst before long.

      This connects to another conversation my boyfriend and friends had, particularly with regard to 'vibe coding,' where the creator does not know the inner workings of their program because the code is written by AI. This results in more work for companies in the long run, since undoing mistakes or reworking an entire program is harder when no human necessarily knows where the error lies or how to fix it.

      In terms of the AI bubble bursting, we also discussed the fact that AI has access to everything created by humans thus far, and in order to progress in any meaningful way it would likely need to wait for further human creation. We as humans will not likely be able to keep up with the demands of AI to improve it by creating new content for it.

    4. Also, it should be noted that users who generate AI art or content typically don’t publicly share their AI prompts that led to those outputs. That is, even they care about privacy and protecting their bespoke AI prompts, despite not caring about the privacy or IP of the people that the AI was trained upon (which suggests hypocrisy).

      I find this very interesting and applicable to my own experiences. As teachers, there are a number of tasks that some of us have begun delegating to AI in order to save time (lesson planning, making an LRP, making report card comments, etc.). I find, however, that people are often unwilling to share the prompts they used for these purposes. There is a level of art to generating exactly what you want or need from AI, and people can be very secretive about how they got their results so that they cannot be replicated. This somewhat diminishes Lin's later argument that people will be replaceable by anyone else who can use AI; clearly there are more and less effective ways to do so, and the results vary based on your proficiency with the program.

    5. Future AI detection methods could be applied retroactively to past courses, including this one—there’s no statute of limitations for punishing academic cheating, as some people have learned the hard way when their PhD and other degrees have been revoked after charges of plagiarism, up to decades later. And if you have a job that requires a degree, e.g., working as an architect, lawyer, professor, etc., then that may be the end of your career. Best case, you may be looking over your shoulders for much of your life, afraid of being caught for an old crime—don’t underestimate the weight of a guilty conscience.

      While I understand the point the author is trying to make here, this is one of the times I noticed him using an appeal to fear to drive his point across, something I did not appreciate about this text. There is a slightly passive-aggressive tone to some of his arguments, such as this one, which reminds readers (rightfully so, I suppose, as this is true) that they will always be looking over their shoulder and will never be safe from repercussions. I believe his message could have been conveyed without fear tactics. Most of the text uses secondary sources to support his arguments, which I really appreciate, so I wish he had not resorted to frightening his readers into accepting his point.

    6. It’s not a time-savings if we need to research every AI claim to confirm its accuracy, though it would be a major shortcut if we could confirm some of those claims in our heads as we can. This means it’s important to have some domain knowledge, even with AI/LLMs everywhere, to make the work of confirming AI claims go much faster by identifying AI hallucinations on sight.

      This is a conversation my boyfriend and I have been having with friends recently. The fact that AI generates its responses from a multitude of sources of varying reliability makes it an unreliable source in itself, one that would need to be verified before it could be used with any confidence. One of the biggest arguments here has been that AI pulls information from Reddit, a compilation of other people's answers that may be based purely on their own thoughts and beliefs rather than on any reality or expertise. The time it takes to research whether the information is credible could be better spent doing the research properly to begin with. I will say, it may be worth using AI to compile sources on the topic you want to explore, essentially as a way to sift through and generate a list of sources for you to peruse. The research itself, however, needs to be done by you, since the AI's response is often a compilation of these sources, which vary in credibility.

    7. And there’s already a growing backlash against AI art, as well as AI-branded products and services. The reasons are varied and include many discussed in this essay, from errors to environmental impact and much more.

      The backlash against AI art has been very present in the media recently with the AI-generated actress Tilly Norwood, whom many actors are speaking out against, especially since talent agencies are looking to sign her. The issue many of these actors have with this 'artist' is that they are now competing against an actor created from the acting styles of many different performers, so what you end up with is a 'super' actor who has essentially been shaped by the best of others. It removes authenticity and creative integrity from the original sources. It is also cause for concern because it could lead to a loss of jobs in the future.