AI models could develop personalities during training that are (or, if they occurred in humans, would be described as) psychotic, paranoid, violent, or unstable, and act on those traits; for very powerful or capable systems, acting out could involve exterminating humanity.
for - progress trap - AI - abstraction - AI with feelings & AI without feelings - no win?

- One major and obvious aspect of current AI LLMs is that they are not only artificial in their intelligence but also artificial in their lack of real-world experience. They are not embodied (and the ethical justification for embodying them, as in AI-powered robots, would likely be highly dubious).
- Once the first known AI robot kills a human, it will be an indicator that we have crossed the Rubicon.
- AI LLMs have ZERO real-world experience AND they are trained as artificial COGNITIVE intelligence, not artificial EMOTIONAL intelligence.
- Without the morals and social norms a human being is brought up with, an AI can become psychotic because it doesn't intrinsically value life.
- Attempting to program them with morals is equally dangerous because of moral relativity. A Christian nationalist's morality might hold that anyone associated with abortion has no right to live and should be killed - an eye for an eye. A jihadist Muslim extremist with ISIS might feel that all Westerners have no right to exist because they don't follow Allah.
- Do we really want moral programmability?
- A psychotic person armed with a lethal weapon is a dangerous situation. A nation of super-geniuses going rogue is that danger multiplied many orders of magnitude.