You might not care for, say, history, but understanding geopolitics and the history of a particular region may help you develop business strategies for breaking into new markets and accounting for local preferences. You might not care for, say, biology, but understanding the environmental impact of your products can help not only your business but also the world you and your future children will want to live in. And you might not care for, say, philosophy, but identifying possible ethical concerns about your products or services in advance can help you avoid stepping on a legal or public-relations landmine, even if you don’t care about doing what’s right.
I understand that this passage is part of a broader conversation about the ethical and personal risks of AI, but I believe it connects more closely to the topic of participatory culture.
When we expect AI to explain and solve all our problems for us, or to educate us on important global topics, we may unknowingly avoid the conversations and other forms of communication that would more naturally help us understand what we are so eager to learn. Participating in important political conversations with people in our community, for example, and drawing on books, websites, and other sources, lets us engage in a process rather than expecting all the answers at our fingertips. I feel that isolating ourselves behind a computer screen, talking to ChatGPT, removes the human element of participatory culture. In the same sense, though, could everyone engaging with AI mark the beginning of a new participatory culture of AI use and idea generation?