While both ChatGPT and Gemini produced coherent summaries of Tacitus’ digital history, they also generated “phantom sources,” precisely the risk that Salvaggio (2025) and Tenzer et al. (2024) warn about. This forced me to confront my own instinct to trust well-written text: AI’s polished language can disguise a lack of historical grounding, a problem Graham (2020) describes as “phantom authority.” Recognizing this made me more cautious about accepting digital outputs at face value.
This is a really thoughtful point, and I appreciate that you didn’t just say that AI can help but also explained where it falls short. You describe the difference between “pattern” and “understanding” in a way that feels honest and grounded in what we’ve been learning. I wonder whether there was a specific moment in the AI interrogation where a generated answer sounded convincing but, on closer inspection, turned out to have no real historical basis. A short example like that would make your argument even more meaningful and personal. Overall, though, you’ve captured the ethical tension of using AI in history really well.