- Jun 2024
-
-
a dictator who wields the power of superintelligence would command concentrated power unlike 00:50:45 anything we've ever seen
for - key insight - AI - progress trap - nightmare scenario - dictator controlling superintelligence
key insight - AI - progress trap - nightmare scenario - locked-in dictatorship controlling superintelligence
- Millions of AI-controlled robotic law enforcement agents could police the populace
- Mass surveillance would be hypercharged
- Dictator-loyal AI agents could individually assess every single citizen for dissent, with near-perfect lie detection rooting out any disloyalty
- Essentially, the robotic military and police force could be wholly controlled by a single political leader and programmed to be perfectly obedient, so there would be no risk of coups or rebellions
- The dictator's strategy would be near flawless with superintelligence behind it
- What does it look like when superintelligence is controlled by a dictator? There is simply no version of that where you escape
- Past dictatorships were not permanent, but superintelligence could eliminate any historical threat to a dictator's rule and lock in their power
- If you believe in freedom and democracy this is an issue, because someone in power, even if they are good, could stay in power; you still need freedom and democracy to be able to choose
- This is why the free world must prevail; there is so much at stake here, yet not everyone is taking this into account
-
this is a serious problem because all they need to do is automate AI research 00:41:53 build superintelligence, and any lead that the US had would vanish; the power dynamics would shift immediately
for - AI - security risk - once automated AI research is known, bad actors can easily build superintelligence
AI - security risk - once automated AI research is known, bad actors can easily build superintelligence - Any lead that the US had would immediately vanish.
-
if you have the cognitive abilities of something that is, you know, 10 to 100 times smarter than you, trying to outsmart it is just not going to happen whatsoever, so you've effectively lost at that point, which means that 00:36:03 you're going to be able to overthrow the US government
for - AI evolution - nightmare scenario - US govt may seize Open AI assets if it arrives at superintelligence
AI evolution - projection - US govt may seize OpenAI assets if it arrives at superintelligence
- He makes a good point here
- If OpenAI or Google achieve superintelligence that is many times more intelligent than any human, the US government would fear that it could be overthrown or that the technology could be stolen and fall into the wrong hands
-
superintelligence is going to be like this across many domains: it's going to be 00:31:42 able to find exploits in human code too subtle for humans to notice, and it's going to be able to generate code too complicated for any human to understand, even if the model spent decades trying to explain it
for - progress trap - superintelligence threat
progress trap - superintelligence threat - superintelligence will be far beyond our cognitive capabilities across many domains. For example:
- It will be able to find exploits in human code too subtle for humans to notice
- It will be able to generate code too complicated for any human to understand, even if the model spent decades trying to explain it
- How do we entrust ourselves to a superintelligence that is so far beyond us? If it thinks we are expendable, it could easily find our weaknesses and bring about extinction
-
Sam Altman has said that's his entire goal; that's what OpenAI are trying to build. They're not really trying to build superintelligence, but they define AGI as a 00:24:03 system that can do automated AI research, and once that does occur
for - key insight - AGI as automated AI researchers to create superintelligence
key insight - AGI as automated AI researchers to create superintelligence
- We will reach a period of explosive, exponential AI research growth once AGI has been produced
- The key is to deploy AGI as AI researchers that can do AI research 24/7
- 5,000 such AGI research agents could produce superintelligence in a very short time period (years), because every time any one of them makes a breakthrough, it is immediately shared with the other 4,999 AGI researchers
-
the talk of the town has shifted from $10 billion compute clusters 00:01:16 to $100 billion compute clusters to even trillion-dollar clusters, and every 6 months another zero is added to the boardroom plans
for - AI - future spending - trillion dollars - superintelligence by 2030
Tags
- progress trap - superintelligence threat
- AI - progress trap - nightmare scenario - dictator controlling superintelligence
- AI - future spending - trillion dollars - superintelligence by 2030
- key insight - AGI as automated AI researchers to create superintelligence
- AI evolution - nightmare scenario - US govt may seize Open AI assets if it arrives at superintelligence
- AI - security risk - once automated AI research is known, bad actors can easily build superintelligence
Annotators
URL
-
- Oct 2015
-
howwegettonext.com
-
artificial intelligence
Superintelligence
-