- Apr 2023
-
www.nytimes.com
-
If you told me you were building a next generation nuclear power plant, but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is A.I. like that power plant? I’m not sure.
This is the weird part of these articles … he has just made a cast-iron argument for regulation and then says "I'm not sure"!
That first sentence alone makes the case. Why? Because he doesn't need to be sure that AI is like that power plant; he only needs to think there is some (even small) probability that it is. If he thinks AI could be even a bit like that power plant, then we shouldn't build it. And in saying "I'm not sure" he has already conceded that there is some probability that AI is like the power plant (otherwise he would say: AI is definitely safe).
Strictly, this combines the existence of the risk with the "ruin" aspect of that risk: one nuclear power plant blowing up would be terrible but would not wipe out the whole human race (and all other species). A "bad" AI quite easily could (whether malevolent by our standards or simply misdirected).
All you need in these arguments is a simple admission of some probability of ruin. And almost everyone seems to agree on that.
Then it is a slam dunk to regulate strongly and immediately.
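The asymmetry in this argument can be made concrete with a toy expected-loss calculation. All numbers below are illustrative assumptions, not estimates from any source: the point is only that a "ruin" outcome, whose cost is unbounded relative to ordinary disasters, dominates the expected loss even at small probabilities.

```python
# Toy expected-loss comparison (all numbers are illustrative assumptions).
# A "normal" catastrophe has a bounded cost; "ruin" forfeits all future value.

def expected_loss(p_event: float, cost: float) -> float:
    """Expected loss = probability of the event times its cost."""
    return p_event * cost

# Bounded disaster: e.g. one reactor failure -- large but finite cost.
reactor_loss = expected_loss(p_event=0.001, cost=1e12)

# Ruin: the same small probability, multiplied by an astronomically larger
# stake (everything), swamps the bounded case by many orders of magnitude.
ruin_loss = expected_loss(p_event=0.001, cost=1e30)

print(ruin_loss > reactor_loss)  # the ruin term dominates
```

This is why the commentary above says a mere admission of *some* probability of ruin is enough: under any expected-loss framing, no plausible upside on the bounded side outweighs the ruin term.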
-
-
www.lesswrong.com
-
A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.
👍
-
-
beiner.substack.com
-
So what does a conscious universe have to do with AI and existential risk? It all comes back to whether our primary orientation is around quantity, or around quality. An understanding of reality that recognises consciousness as fundamental views the quality of your experience as equal to, or greater than, what can be quantified. Orienting toward quality, toward the experience of being alive, can radically change how we build technology, how we approach complex problems, and how we treat one another.
Key finding Paraphrase
- So what does a conscious universe have to do with AI and existential risk?
- It all comes back to whether our primary orientation is around
- quantity, or around
- quality.
- An understanding of reality
- that recognises consciousness as fundamental
- views the quality of your experience as
- equal to,
- or greater than,
- what can be quantified.
- Orienting toward quality,
- toward the experience of being alive,
- can radically change
- how we build technology,
- how we approach complex problems,
- and how we treat one another.
Quote - metaphysics of quality - would open the door for ways of knowing made secondary by physicalism
Author - Robert Pirsig - Zen and the Art of Motorcycle Maintenance // - When we elevate the quality of each of our experiences - we elevate the life of each individual - and recognize each individual life as sacred - we each matter - The measurable is also the limited - whilst the immeasurable and directly felt is the infinite - Our finite world that all technology is built upon - is itself built on the raw material of the infinite
//
- Orienting toward quality,
-
If the metaphysical foundations of our society tell us we have no soul, how on earth are we going to imbue soul into AI? Four hundred years after Descartes and Hobbes, our scientific methods and cultural stories are still heavily influenced by their ideas.
Key observation - If the metaphysical foundations of our society tell us we have no soul, - how are we going to imbue soul into AI? - Four hundred years after Descartes and Hobbes, - our scientific methods and cultural stories are still heavily influenced by their ideas.
-
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Quote - AI Gedanken - AI risk - The Paperclip Maximizer
-
We might call for a halt to research, or ask for coordination around ethics, but it’s a tall order. It just takes one actor not to play (to not turn off their metaphorical fish filter), and everyone else is forced into the multi-polar trap.
AI is a multi-polar trap
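The trap Beiner describes has the structure of a social dilemma: whatever the other actors do, racing ahead is the individually rational move. A minimal sketch, with payoff numbers that are my own illustrative assumptions (not from the article):

```python
# Minimal sketch of a multi-polar trap (illustrative payoffs, not from the article).
# Each actor chooses "restrain" (e.g. pause AI research) or "defect" (race ahead).

def best_response(others_defect: bool) -> str:
    # Assumed payoffs to one actor, given what the others do. Defecting yields
    # more relative advantage in every case -- the hallmark of a social dilemma.
    payoff = {
        ("restrain", False): 3,  # everyone restrains: best collective outcome
        ("restrain", True): 0,   # you restrain while others race: worst for you
        ("defect", False): 4,    # you race while others restrain: best for you
        ("defect", True): 1,     # everyone races: bad for all, but not worst for you
    }
    return max(("restrain", "defect"), key=lambda a: payoff[(a, others_defect)])

# Whatever the others do, defection is the individually rational choice:
print(best_response(others_defect=False))  # prints: defect
print(best_response(others_defect=True))   # prints: defect
```

Since "defect" dominates regardless of the others' choice, a single non-cooperating actor is enough to unravel any voluntary moratorium, which is why the passage above calls coordination "a tall order".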
-
Title - Reality Eats Culture For Breakfast: AI, Existential Risk and Ethical Tech - Why calls for ethical technology are missing something crucial
Author - Alexander Beiner
Summary - Beiner unpacks the existential risk posed by AI - reflecting on recent calls by tech and AI thought leaders - to stop AI research and hold a moratorium.
-
Beiner unpacks the risk from a philosophical perspective
- one that gets right to the deepest cultural assumptions underpinning modernity,
- ideas that are deeply acculturated into the citizens of modernity.
-
He argues convincingly that
- the quandary we are in requires this level of re-assessment
- of what it means to be human,
- and that a change in our fundamental cultural story is needed to de-risk AI.
-
Tags
- AI risk
- Alexander Beiner
- Zen and the Art of Motorcycle Maintenance
- Robert Pirsig
- physicalism
- progress trap
- no soul
- gedanken - paperclip
- Descartes
- quote - paperclip maximizer
- quote
- multi-polar trap
- quote - Nick Bostrom
- Paperclip Maximizer
- gedanken
- quality vs quantity
- Thomas Hobbes
- gedanken - Nick Bostrom
- Cartesian dualism
-
- Mar 2023
-
garymarcus.substack.com
-
on both short term and long term risks in AI
-
- Mar 2022
-
twitter.com
-
Eric Topol. (2022, February 28). A multimodal #AI study of ~54 million blood cells from Covid patients @YaleMedicine for predicting mortality risk highlights protective T cell role (not TH17), poor outcomes of granulocytes, monocytes, and has 83% accuracy https://nature.com/articles/s41587-021-01186-x @NatureBiotech @KrishnaswamyLab https://t.co/V32Kq0Q5ez [Tweet]. @EricTopol. https://twitter.com/EricTopol/status/1498373229097799680
-
- Oct 2020
-
www.coe.int
-
AI and control of Covid-19 coronavirus. (n.d.). Artificial Intelligence. Retrieved October 15, 2020, from https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus
-
- Sep 2020
-
wip.mitpress.mit.edu
-
Building the New Economy · Works in Progress. (n.d.). Works in Progress. Retrieved June 16, 2020, from https://wip.mitpress.mit.edu/new-economy
-
- Jun 2020
-
www.weforum.org
-
How COVID-19 revealed 3 critical AI procurement blindspots. (n.d.). World Economic Forum. Retrieved June 22, 2020, from https://www.weforum.org/agenda/2020/06/how-covid-19-revealed-3-critical-blindspots-ai-governance-procurement/
Tags
- diligence
- citation
- prediction
- is:blog
- diagnostics
- transparency
- chatbots
- app
- procurement
- blindspot
- COVID-19
- fairness
- lang:en
- contact tracing
- risk
- AI
-