AI algorithms can make mistakes in their predictions, forecasts, and decisions. Indeed, the very principles underlying the construction and operation of such models are fallible, as complexity theory indicates [102].
If AI is inherently fallible by design, how much trust should we place in its predictions in critical domains such as healthcare?