In short, the “hallucinations” and biases in generative AI outputs stem from the nature of the training data, the tools’ focus on pattern-based content generation rather than factual verification, and the inherent limitations of the underlying technology. Acknowledging and addressing these challenges will be essential as generative AI systems become more integrated into decision-making processes across various sectors.
A key detail is that these biases and hallucinations originate in how the models were trained. This reinforces the central point: such errors are built into the way generative AI works, and recognizing them is a prerequisite to addressing them.
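To make the "pattern-based generation" point concrete, here is a minimal sketch of a toy next-word model. It is purely illustrative (the corpus, the bigram approach, and the `generate` function are assumptions for demonstration, not how any production system works): the model learns only which words tend to follow which, so it can emit fluent text that was never checked against facts.

```python
import random
from collections import defaultdict

# Tiny illustrative training corpus (an assumption for this sketch).
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
).split()

# Count bigram transitions: each word maps to the words seen after it.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Sample a continuation purely from learned word-to-word patterns."""
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        # The choice reflects pattern likelihood, not truth.
        words.append(random.choice(options))
    return " ".join(words)

random.seed(1)
# Because patterns from different sentences blend, the model can assert
# e.g. "capital of france is madrid" -- fluent, confident, and wrong.
print(generate("capital"))
```

Even in this toy case, the output is grammatical because it follows learned patterns, yet it can be factually false because nothing in the generation step consults a source of truth. Large language models are vastly more sophisticated, but the same underlying dynamic helps explain why hallucinations are a structural property rather than an occasional glitch.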