- quote - AI superintelligence is too dangerous, narrow AI can give us most of what we need - Roman Yampolskiy - (see below)
	- I don’t think it’s possible to indefinitely control superintelligence.
	- By definition, it’s smarter than you:
		- It learns faster,
		- it acts faster,
		- it will change faster.
	- You will have malevolent actors modifying it.
	- We have no precedent of lower capability agents indefinitely staying in charge of more capable agents.
	- Until some company or scientist says ‘Here’s the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,’ I don’t think we should be developing those general superintelligences.
	- We can get most of the benefits we want from narrow AI, systems designed for specific tasks:
		- develop a drug,
		- drive a car.
	- They don’t have to be smarter than the smartest of us combined.
// - Comment - Roman Yampolskiy is right.
	- The fact that the industry is pushing ahead at full speed with developing AGI, effectively the same as the AI superintelligence Yampolskiy is referring to, shows the most dangerous pathology of neo-capitalism and Technofeudalism: profit over everything else.
	- This profit-over-everything dynamic is a major driver of progress traps.
//