13 Matching Annotations
    1. But like those other subjects, the point of university GE isn’t so much to serve up things you’re already interested in but to expose you to new things that you might never engage with on your own—things that can be important to know in certain contexts and professions. And you might surprise yourself by acquiring new interests and skills that you didn’t even know you had, wanted, or needed, so much so that you might change majors halfway through college (as I did from pre-med to philosophy).

      I see Lin’s point that GE is supposed to expose us to new subjects we might not choose on our own, and I get how that can sometimes spark new passions. But for many students, especially those paying high tuition or facing family pressure, electives don’t always feel like opportunities; they can feel like financial burdens. Switching majors is a privilege that not everyone can afford, so while GE may lead to discovery for some, for others it just adds cost and stress on top of pursuing their main degree. Maybe these courses could be the place where students learn how to use AI responsibly, without the same career pressure as major-specific classes.

    2. If your future boss asks you for some creative thinking off the top of your head, you’d look incompetent if you had to first ask your AI app—your boss would wonder why they hired you rather than some other random, less expensive, interchangeable person who also can operate an AI app.

      I agree that employers care a lot about creativity and being able to think quickly on your feet, but I also think there’s value in knowing how to use AI as a tool. For example, when I was on a supply call, the teacher showed me a lesson plan she had quickly generated with ChatGPT on her phone. With so many classes to prepare for as an elementary teacher, she clearly saved time that way. But I think the best use of AI is to spark ideas: put your own creativity into the prompt and then tweak the output for your students, rather than just taking the first thing it gives you.

    3. Even if it’s important to learn how to use AI in this brave new world, it’s not clear that the typical university course is the place to do it, much less a philosophy course.

      I actually think university could be the right place to learn how to use AI responsibly, if it’s done the right way. If students are going to use it in their future careers, it makes sense to practice now in a guided environment with feedback. Philosophy could be a good place to start, since it’s about asking hard questions, examining ethics, and learning how to balance new tools with human reasoning.

    4. For instance, it’s hard to imagine anyone would want to hire this new UCLA graduate who bragged at graduation about his ChatGPT use to get through school (if he indeed did that), everything else being equal. You might be able to AI-cheat your way through college, but is that really a skill that employers are looking for?

      I see Lin’s point, but it also seems likely that employers will want graduates who can integrate AI into their work. The real issue may not be whether students use AI, but how they’re taught to balance it with their own thinking and judgment. Just like calculators became part of math education, maybe AI should become part of building new literacies in today’s classrooms.

    5. What is the value of free time—it sounds good, but considering that we waste a lot of our free time already, is doing more of that valuable?

      We all want more free time, and it sounds great in theory. But in today’s culture, so much of it goes into doomscrolling through social media or binge-watching streaming services. Maybe the real issue isn’t how much free time we have but how we use it. Even if AI gave us extra hours of freedom, would most people actually use them to grow and learn, or would it just turn into more wasted time?

    6. Worse, you might instead come away with a technology dependency or addiction, causing you to doubt your own intelligence and abilities. As a result, you may become unable or very anxious to think or write for yourself without the help of AI (and writing is thinking).

      AI should help us gain confidence in navigating digital tools, not lessen it. For example, if a student always relies on ChatGPT to create essays, they might freeze up when asked to write a timed response in class or on an exam. Instead of building their own meaning-making skills, they become anxious without the tool. This also weakens participatory culture: if people are always doubting their own voice, they stop contributing authentically to discussions with a real human perspective.

    7. The risk of AI hallucinations and bad actors who are manipulating AI means that we need to be able to think for ourselves—we must become much better gatekeepers for what beliefs we let in.

      Literacy today isn’t just about reading and writing but about critically evaluating digital information. If AI can make mistakes or be manipulated, then we need those skills even more to question what we see instead of just accepting it.

    8. At the same time, we’re already seeing the loss of good jobs—ones that are interesting and valuable, which we should want to preserve as humans, such as artist jobs—and AI is predicted by industry experts to replace knowledge-based jobs (both entry-level and senior executives) and even most any job before long.

      If AI can replace not just routine work but also creative and knowledge-based jobs, what’s left that makes us valuable as humans? It’s unsettling to think that our ability to reason and make meaning (the one thing that sets us apart) might be the last thing we have to protect. We risk losing the very qualities that give us value in a future where machines handle everything else.

    9. Beyond schoolwork, there are personal impacts from relying on AI. If you wasted your college years and didn’t learn much, then you might not be able to converse intelligently when the occasion requires it, such as at a work meeting, professional networking event, social setting, and so on.

      This reminds me of Knobel & Lankshear’s idea of new literacies as something we practice to communicate and make meaning. If we just rely on AI in school, we’re not actually building those skills, and it will show when we can’t hold a real conversation in life or at work without AI guiding us. We lose confidence and the ability to really participate.

    10. And that is the primary advantage we humans have over all other animals: the capacity to reason and do other intellectual work, such as to be creative and not just act out of instinct or habit. We don’t have the fur, claws, wings, speed, fangs, strength, and other things that enable animals to survive the world.

      If we hand our reasoning over to AI, aren’t we giving up the very thing that makes us unique as a species? Instead of treating AI as a tool, it feels like we’re letting it replace our core advantage. If our creativity is replaced by AI, we lose the authentic human contributions that make participatory culture meaningful.

    11. For instance, imagine that AI use is banned in the classroom (as it is in ours), but the instructor secretly used AI to give feedback on your assignments, even though you are required to put in the work yourself. Would you feel disrespected, if not defrauded, and want your money back?

      New literacies research shows how important feedback, peer mentoring, and revision are in online spaces. Lin’s example of an instructor secretly using AI for feedback shows how short-circuiting this process undermines trust and genuine learning.

    12. AI cheating is also disrespectful to your peers who are trying to earn their grades by doing the work needed. In many courses, grading is curved to ensure there’s a plausible distribution of grades, including a reasonable amount of top grades. Cheating manipulates that grading curve and deprives otherwise-deserving students of higher grades that they would have earned, if you care about other people.

      I never considered how AI cheating can affect your classmates. If grades are curved, using AI unfairly can lower the chances for students who actually did the work (the sketch below makes this concrete). It makes me see academic honesty less as just a rule and more as a responsibility to my peers. This makes me wonder whether professors should adjust grading practices in the AI era (like moving away from curves), or whether it is still up to students to be fair.
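      To convince myself, I worked through a toy example in Python. The scores and the simple “shift the class mean to a target” curve are both made up for illustration, not how any real course grades, but one AI-inflated score visibly shrinks the boost that honest students would have received:

      ```python
      # Toy model: curve grades by shifting the class mean to a target value.
      # The scores and the curving rule are hypothetical, for illustration only.

      def curve_to_mean(raw_scores, target_mean=75.0):
          """Shift every score so the class average lands on target_mean."""
          shift = target_mean - sum(raw_scores) / len(raw_scores)
          return [round(score + shift, 1) for score in raw_scores]

      honest = [62, 68, 70, 74]      # four students who did the work themselves
      with_cheater = honest + [98]   # the same class plus one AI-inflated score

      print(curve_to_mean(honest))        # [68.5, 74.5, 76.5, 80.5] -> each honest student gains 6.5
      print(curve_to_mean(with_cheater))  # [62.6, 68.6, 70.6, 74.6, 98.6] -> the gain shrinks to 0.6
      ```

      Even in this tiny example, one inflated score costs every honest student almost six points of curve.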

    13. LLMs can’t even get basic math right or accurately count. The usual response is that they’re not designed to be calculators, which is true. But since mathematical reasoning appears closely related to critical thinking or logical reasoning, that’s a big reason to be concerned about how well AI can actually “reason” and therefore throws every claim an AI makes into question.

      This shows why new literacies are about checking and questioning digital tools. As a supply teacher, I once used AI to find the answers to an algebra worksheet because I did not have time to solve every equation fully before class started, and I ended up finding a miscalculation. If I hadn’t checked, I would have misled my students. It proves Lin’s point that we can’t just trust AI’s reasoning without critical evaluation; a small script (sketched below) could even make that kind of check systematic.
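      Here is a minimal sketch in Python of how that kind of check could be scripted, using the sympy algebra library; the equation and the wrong “AI answer” are both made up for illustration, not taken from the actual worksheet:

      ```python
      # Minimal sketch: verify an AI-suggested answer by solving the equation
      # symbolically. The equation and the "AI answer" below are hypothetical.
      from sympy import Eq, solve, symbols

      x = symbols("x")
      equation = Eq(3 * x + 7, 22)   # pretend worksheet item: 3x + 7 = 22
      ai_answer = 6                  # suppose the AI claimed x = 6

      solutions = solve(equation, x)
      print(solutions)               # [5] -> the actual solution
      print(ai_answer in solutions)  # False -> the AI's answer fails the check
      ```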