- Jun 2024
-
-
(00:41:53) This is a serious problem, because all they need to do is automate AI research and build superintelligence. Any lead that the US had would vanish; the power dynamics would shift immediately.
for - AI - security risk - once automated AI research is possible, bad actors can easily build superintelligence
AI - security risk - once automated AI research is possible, bad actors can easily build superintelligence - Any lead that the US had would immediately vanish.
-
(00:41:14) The model weights are just large files of numbers on a server, and these can be easily stolen. All it takes for an adversary to match your trillions of dollars, your smartest minds, and decades of work is to steal this file.
for - AI - security risk - model weight files - are a key leverage point
AI - security risk - model weight files - are a key leverage point for bad actors - These files are critical national security data, representing enormous investments of time and research, yet they are just files and so can be easily stolen.
-
- Nov 2023
-
-
https://web.archive.org/web/20231108195303/https://axbom.com/aipower/
Per Axbom gives a nice overview of actors and stakeholders to take into account when thinking about AI's impact and ethics. Some of these are mentioned in the [[EU AI Regulation]], but not all actors mentioned there appear here, I think: the EU act not only defines users (of the application) but also, separately, users of the output of an application. This is to ensure that outputs from unchecked or illegal applications outside the EU market don't enter the EU market unregulated.
-