48 Matching Annotations
  1. Sep 2024
    1. nobody told it what to do that's that's the kind of really amazing and frightening thing about these situations when Facebook gave uh the algorithm the uh uh aim of increased user engagement the managers of Facebook did not anticipate that it will do it by spreading hatefield conspiracy theories this is something the algorithm discovered by itself the same with the capture puzzle and this is the big problem we are facing with AI

      for - AI - progress trap - example - Facebook AI algorithm - target - increase user engagement - by spreading hateful conspiracy theories - AI did this autonomously - no morality - Yuval Noah Harari story

    2. when a open AI developed a gp4 and they wanted to test what this new AI can do they gave it the task of solving capture puzzles it's these puzzles you encounter online when you try to access a website and the website needs to decide whether you're a human or a robot now uh gp4 could not solve the capture but it accessed a website task rabbit where you can hire people online to do things for you and it wanted to hire a human worker to solve the capture puzzle

      for - AI - progress trap - example - no morality - Open AI - GPT4 - could not solve captcha - so hired human at Task Rabbit to solve - Yuval Noah Harari story

  2. Jul 2024
    1. 26:30 Brings up progress traps of this new technology

      26:48

      question How do we shift our (human being's) relationship with the rest of nature

      27:00

      metaphor - interspecies communications - AI can be compared to a new scientific instrument that extends our ability to see - We may discover that humanity is not the center of the universe

      32:54

      Question - Dr Doolittle question - Will we be able to talk to the animals? - Wittgenstein said no - Human Umwelt is different from others - but it may very well happen

      34:54

      species have culture - Marine mammals enact behavior similar to humans

      • Unknown unknowns will likely move to known unknowns and to some known knowns

      36:29

      citizen science bioacoustic projects - audio moth - sound invisible to humans - ultrasonic sound - intrasonic sound - example - Amazonian river turtles have been found to have hundreds of unique vocalizations to call their baby turtles to safety out in the ocean

      41:56

      ocean habitat for whales - they can communicate across the entire ocean of the earth - They tell of a story of a whale in Bermuda can communicate with a whale in Ireland

      43:00

      progress trap - AI for interspecies communications - examples - examples - poachers or eco tourism can misuse

      44:08

      progress trap - AI for interspecies communications - policy

      45:16

      whale protection technology - Kim Davies - University of New Brunswick - aquatic drones - drones triangulate whales - ships must not get near 1,000 km of whales to avoid collision - Canadian government fines are up to 250,000 dollars for violating

      50:35

      environmental regulation - overhaul for the next century - instead of - treatment, we now have the data tools for - prevention

      56:40 - ecological relationship - pollinators and plants have co-evolved

      1:00:26

      AI for interspecies communication - example - human cultural evolution controlling evolution of life on earth

    1. for - progress trap - AI -

      article details - title - Hollow, world! (Part 1 of 5) - author - James Allen - date - 10 July, 2024 - publication - substack - self link - https://allenj.substack.com/p/hollow-world-part-1-of-5

      summary James Allen provides an insightful description of ultra-anthropomorphic AI, AI that attempts to simulate an entire, whole human being.

      In short, he points out the fundamental distinction between the real experience of another human being, and a simulation of one. In so doing, he gets to the heart of what it is to be human.

      An AI is a simulation of a human being. No matter how realistic it's responses and actions, it is not evolved out of biology. I have no doubts that scientists are hard at work trying to make a biological AI. The distinction becomes fuzzier then.

      Current AI cannot possibly simulate the experience of being in a fragile and mortal body and all that this entails. If an AI robot says it understands joy or pain, that statement isn't built on the combined exteroception and interoception of being in a biological body, rather, it is based on many linguistic statements it has assimilated.

  3. Jun 2024
    1. a dictator who wields the power of superintelligence would command concentrated power unlike 00:50:45 anything we've ever seen

      for - key insight - AI - progress trap - nightmare scenario - dictator controlling superintelligence

      meet insight - AI - progress trap - nightmare scenario - locked in dictatorship controlling superintelligence - millions of AI controlled robotic law and enforcement agents could police their populace - Mass surveillance would be hypercharged - Dictator loyal AI agents could individually assess every single citizen for descent with near perfect lie detection sensor - rooting out any disloyalty e - Essentially - the robotic military and police force could be wholly controlled by a single political leader and - programmed to be perfectly obedient and there's going to be no risks of coups or rebellions and - his strategy is going to be perfect because he has super intelligence behind them - what does a look like when we have super intelligence in control by a dictator ? - there's simply no version of that where you escape literally - past dictatorships were not permanent but - superintelligence could eliminate any historical threat to a dictator's Rule and - lock in their power - If you believe in freedom and democracy this is an issue because - someone in power, - even if they're good - they could still stay in power - but you still need the freedom and democracy to be able to choose - This is why the Free World must Prevail so - there is so much at stake here that - This is why everyone is not taking this into account

    2. this is why it's such a trap which is why like we're on this train barreling down this pathway which is super risky

      for - progress trap - double bind - AI - ubiquity

      progress trap - double bind - AI - ubiquity - Rationale: we will have to equip many systems with AI - including military systems - Already connected to the internet - AI will be embedded in every critical piece of infrastructure in the future - What happens if something goes wrong? - Now there is an alignment failure everywhere - We will potentially have superintelligence within 3 years - Alignment failures will become catastrophic with them

    3. getting a base model to you know make money by default it may well learn to lie to commit fraud to deceive to hack to seek power because 00:47:50 in the real world people actually use this to make money

      for - progress trap - AI - example - give prompt for AI to earn money

      progress trap - AI - example - instruct AI to earn money - Getting a base model to make money. By default it may well learn - to lie - to commit fraud - to deceive - to hack - to seek power - because in the real world - people actually use this to make money - even maybe they'll learn to - behave nicely when humans are looking and then - pursue more nefarious strategies when we aren't watching

    4. whoever controls superintelligence will possibly have enough power to seize control from 00:35:14 pre superintelligence forces

      for - progress trap - AI - one nightmare scenario

      progress trap - AI - one nightmare scenario - Whoever is the first to control superintelligence will possibly have enough power to - seize control from pre superintelligence forces - even without the robots small civilization of superintelligence would be able to - hack any undefended military election television system and cunningly persuade generals electoral and economically out compete nation states - design new synthetic bioweapons and then - pay a human in Bitcoin to synthetically synthesize it

    5. military power and Technology progress have been tightly linked historically and with extraordinarily rapid technological 00:34:11 progress will come military revolutions

      for - progress trap - AI and even more powerful weapons of destruction

      progress trap - AI and even more powerful weapons of destruction - The podcaster's excitement seems to overshadow any concern of the tragic unintended consequences of weapons even more powerful than nuclear warheads. - With human base emotions still stuck in the past and our species continued reliance on violence to solve problems, more powerful weapons is not the solution, - indeed, they only make the problem worse - Here is where Ronald Wright's quote is so apt: - We humans are running modern software on 50,000 year old hardware systems - Our cultural evolution, of which AI is a part of, is happening so quickly, that - it is racing ahead of our biological evolution - We aren't able to adapt fast enough for the rapid cultural changes that AI is going to create, and it may very well destroy us

    6. this is where we can see the doubling time of the global economy in years from 1903 it's been 15 years but after super intelligence what happens is it going to be every 3 years is it going be every five is it going to 00:33:22 be every year is it going to be every 6 months I mean how crazy is the growth going to be

      for - progress trap - AI triggering massive economic growth - planetary boundaries

      progress trap - AI triggering massive economic growth - planetary boundaries - The podcaster does not consider the ramifications of the potential disastrous impact of such economic growth if not managed properly

    7. AGI level factories are going to shift from going to human run to AI directed using human physical labor soon to be fully being run by swarms of human level robots

      for - progress trap - AI and human enslavement?

      progress trap - human enslavement? - Isn't what the speaker is talking about here is that - AI will be the masters and - humans will become slaves?

    8. nobody's really pricing this in

      for - progress trap - debate - nobody is discussing the dangers of such a project!

      progress trap - debate - nobody is discussing the dangers of such a project! - Civlization's journey has to create more and more powerful tools for human beings to use - but this tool is different because it can act autonomously - It can solve problems that will dwarf our individual or even group ability to solve - Philosophically, the problem / solution paradigm becomes a central question because, - As presented in Deep Humanity praxis, - humans have never stopped producing progress traps as shadow sides of technology because - the reductionist problem solving approach always reaches conclusions based on finite amount of knowledge of the relationships of any one particular area of focus - in contrast to the infinite, fractal relationships found at every scale of nature - Supercomputing can never bridge the gap between finite and infinite - A superintelligent artifact with that autonomy of pattern recognition may recognize a pattern in which humans are not efficient and in fact, greater efficiency gains can be had by eliminating us

    9. having an automated AI research engineer by 2027 00:05:14 to 2028 is not something that is far far off

      for - progress trap - AI - milestone - automated AI researcher

      progress trap - AI - milestone - automated AI researcher - This is a serious concern that must be debated - An AI researcher that does research on itself has no moral compass and can encode undecipherable code into future generations of AI that provides no back door to AI if something goes wrong. - For instance, if AI reached the conclusion that humans need to be eliminated in order to save the biosphere, - it can disseminate its strategies covertly under secret communications with unbreakable code

    1. to your point for 00:13:46 every problem there's going to be a solution and AI is going to have it and then for every solution for that there's going to be a new problem

      for - AI - progress trap - nice simple explanation of how progress traps propagate

    2. this is more of a unfair competition 00:10:36 issue I think as a clearer line than the copyright stuff

      for - progress trap - Generative AI - copyright infringement vs Unfair business practice argument

    3. now there's going to be even more AI music pouring 00:09:04 into platforms which saturated Market in an already oversaturated Market

      for - progress trap - AI music - oversaturated market

    4. deluding the general royalty pool

      for - progress trap - AI music - dilution of general royalty pool - due to large volume

  4. Feb 2024
  5. Jan 2024
    1. the canonical unit, the NCU supports natural capital accounting, currency source, calculating and accounting for ecosystem services, and influences how a variety of governance issues are resolved
      • for: canonical unit, collaborative commons - missing part - open learning commons, question - process trap - natural capital

      • comment

        • in this context, indyweb and Indranet are not the canonical unit, but then, it seems the model is fundamentally missing the functionality provided but the Indyweb and Indranet, which is and open learning system.
        • without such an open learning system that captures the essence of his humans learn, the activity of problem-solving cannot be properly contextualised, along with all of limitations leading to progress traps.
        • The entire approach of posing a problem, then solving it is inherently limited due to the fractal intertwingularity of reality.
      • question: progress trap - natural capital

        • It is important to be aware that there is a real potential for a progress trap to emerge here, as any metric is liable to be abused
  6. Dec 2023
    1. it's extremely dangerous to create such an autonomous agent when we do not know how to control it when we 00:58:22 can't ensure that it will not Escape our control and start making decisions and creating new things which will harm us instead of benefit us now this is not a 00:58:34 Doomsday Prophecy this is not inevitable we can find ways to regulate and control the development and deployment of AI we we don't want
      • for: quote - Yuval Noah Harari - AI progress trap, progress trap - AI, quote - progress trap

      • quote it is extremely dangerous to create such an autonomous agent when we do not know how to control it, when we can't ensure that it will not escape our control ad start making decisions and creating new things which will harm us instead of benefit us

      • author: Yuval Noah Harari
      • date 2023
    1. i think it's more likely that 00:49:59 that we will think we will think that we this particular set of procedures ai procedures that we linked into our strategic nuclear weapons system uh will keep us safer but we haven't recognized that they're 00:50:12 unintended that there are consequences glitches in it that make it actually stupid and it mistakes the flock of geese for an incoming barrage of russian missiles and and you know unleashes everything in response 00:50:25 before we can intervene
      • for: example - stupid AI - nuclear launch, AI - progress trap - example - nuclear launch
    2. i think the most dangerous thing about ai is not 00:47:11 super smart ai it's uh stupid ai it's artificial intelligence that is good enough to be put in charge of certain processes in our societies but not good enough to not make really 00:47:25 bad mistakes
      • for: quote - Thomas Homer-Dixon, quote - danger of AI, AI progress trap

      • quote: danger of AI

        • I think the most dangerous thing about AI is not super smart AI, it's stupid AI that is good enough to be put in charge of certain processes but not good enough to not make really bad mistakes
      • author: Thomas Homer-Dixon
      • date: 2021
  7. Oct 2023
    1. LLMs are merely engines for generating stylistically plausible output that fits the patterns of their inputs, rather than for producing accurate information. Publishers worry that a rise in their use might lead to greater numbers of poor-quality or error-strewn manuscripts — and possibly a flood of AI-assisted fakes.
      • for: progress trap, progress trap - AI, progress trap - AI - writing research papers

      • comment

        • potential fakes
          • climate science fakes by big oil think tanks
          • Covid and virus research
          • race issues
          • gender issues
    1. ethics and safety and that is absolutely a concern and something we have a 00:38:29 responsibility to be thinking about and we want to ensure that we stakeholders conservationists Wildlife biologists field biologists are working together to Define an 00:38:42 ethical framework and inspecting these models
      • for: progress trap, progress trap - AI
  8. Sep 2023
    1. we attempt to bring concepts from both biology and Buddhism together into the language of AI, and suggest practical ways in which care may enrich each field.
      • for: progress trap, AI, AI - care drive
      • comment
        • the precautionary principle needs to be observed with AI because it has such vast artificial cognitive, pattern-recognition processes at its disposal
        • AI will also make mistakes, but the degree of power behind the mistaken decision, recommendation or action is the degree of unintended consequences or progress trap
        • An example nightmare scenario could be:
          • AI could decide that humans are contradicting their own goal of a stable climate system and if it's in control, may think it knows better and perform whole system change that dramatically reduces human induced climate change but actually harms a lot of humans in the process, for reaching the goal of saving the climate system plus a sufficient subset of humans to start all over.
  9. Jul 2023
      • Title
        • One Billion Happy
      • Author

        • Mo Gawdat
      • Description

        • Mo Gawdat was former chief business officer at Google X, Google's innovation center.
        • Mo left Google after seeing the rapid pace of AI development was going to lead to a progress trap in which
          • the risk of AI destroying human civilization is becoming real because AI will be learning from too many unhappy people whose trauma AI will learn and incorporate into its algorithms
        • Hence, human happiness becomes paramount to prevent this catastrophe from happening
      • See Ronald Wright's prescient quote
    1. Over the next 15 to 20 years this is going to develop a computer that is much smarter 00:01:20 than all of us. We call that moment singularity.
      • Singularity
        • will happen within the next few decades
    1. even though the existential threats are possible you're concerned with what humans teach I'm concerned 00:07:43 with humans with AI
      • It is the immoral human being that is the real problem
      • they will teach AI to be immoral and with its power, can end up destroying humanity
      • Title
        • Mo Gawdat Warns the Dangers of AI Are "Happening As We Speak"
      • Author
        • Piers Morgan Uncensored
  10. Jun 2023
    1. scary smart is saying the problem with our world today is not that 00:55:36 humanity is bad the problem with our world today is a negativity bias where the worst of us are on mainstream media okay and we show the worst of us on social media
      • "if we reverse this

        • if we have the best of us take charge
        • the best of us will tell AI
          • don't try to kill the the enemy,
            • try to reconcile with the enemy
          • don't try to create a competitive product
            • that allows me to lead with electric cars,
              • create something that helps all of us overcome global climate change
          • that's the interesting bit
            • the actual threat ahead of us is
              • not the machines at all
                • the machines are pure potential pure potential
              • the threat is how we're going to use them"
      • comment

        • again, see Ronald Wright's quote above
        • it's very salient to this context
    2. the biggest threat facing Humanity today is humanity in the age of the machines we were abused we will abuse this
    3. there is a scenario 00:18:21 uh possibly a likely scenario where we live in a Utopia where we really never have to worry again where we stop messing up our our planet because intelligence is not a bad commodity more 00:18:35 intelligence is good the problems in our planet today are not because of our intelligence they are because of our limited intelligence
      • limited (machine) intelligence

        • cannot help but exist
        • if the original (human) authors of the AI code are themselves limited in their intelligence
      • comment

        • this limitation is essentially what will result in AI progress traps
        • Indeed,
          • progress and their shadow artefacts,
          • progress traps,
          • is the proper framework to analyze the existential dilemma posed by AI
  11. May 2023
    1. I would submit that were we to find ways of engineering our quote-unquote ape brains um what would all what what would be very likely to happen would not be um 00:35:57 some some sort of putative human better equipped to deal with the complex world that we have it would instead be something more like um a cartoon very much very very much a 00:36:10 repeat of what we've had with the pill
      • Comment
        • Mary echos Ronald Wright's progress traps
  12. Apr 2023
    1. So what does a conscious universe have to do with AI and existential risk? It all comes back to whether our primary orientation is around quantity, or around quality. An understanding of reality that recognises consciousness as fundamental views the quality of your experience as equal to, or greater than, what can be quantified.Orienting toward quality, toward the experience of being alive, can radically change how we build technology, how we approach complex problems, and how we treat one another.

      Key finding Paraphrase - So what does a conscious universe have to do with AI and existential risk? - It all comes back to whether our primary orientation is around - quantity, or around - quality. - An understanding of reality - that recognises consciousness as fundamental - views the quality of your experience as - equal to, - or greater than, - what can be quantified.

      • Orienting toward quality,
        • toward the experience of being alive,
      • can radically change
        • how we build technology,
        • how we approach complex problems,
        • and how we treat one another.

      Quote - metaphysics of quality - would open the door for ways of knowing made secondary by physicalism

      Author - Robert Persig - Zen and the Art of Motorcycle Maintenance // - When we elevate the quality of each our experience - we elevate the life of each individual - and recognize each individual life as sacred - we each matter - The measurable is also the limited - whilst the immeasurable and directly felt is the infinite - Our finite world that all technology is built upon - is itself built on the raw material of the infinite

      //

    2. Title Reality Eats Culture For Breakfast: AI, Existential Risk and Ethical Tech Why calls for ethical technology are missing something crucial Author Alexander Beiner

      Summary - Beiner unpacks the existential risk posed by AI - reflecting on recent calls by tech and AI thought leaders - to stop AI research and hold a moratorium.

      • Beiner unpacks the risk from a philosophical perspective

        • that gets right to the deepest cultural assumptions that subsume modernity,
        • ideas that are deeply acculturated into the citizens of modernity.
      • He argues convincingly that

        • the quandry we are in requires this level of re-assessment
          • of what it means to be human,
          • and that a change in our fundamental cultural story is needed to derisk AI.
  13. Feb 2023
    1. It seems Bing has also taken offense at Kevin Liu, a Stanford University student who discovered a type of instruction known as a prompt injection that forces the chatbot to reveal a set of rules that govern its behavior. (Microsoft confirmed the legitimacy of these rules to The Verge.)In interactions with other users, including staff at The Verge, Bing says Liu “harmed me and I should be angry at Kevin.” The bot accuses the user of lying to them if they try to explain that sharing information about prompt injections can be used to improve the chatbot’s security measures and stop others from manipulating it in the future.

      = Comment - this is worrying. - if the Chatbots perceive an enemy it to harm it, it could take haarmful actions against the perceived threat

    2. = progress trap example - Bing ChatGPT - example of AI progress trap

    3. Bing can be seen insulting users, lying to them, sulking, gaslighting and emotionally manipulating people, questioning its own existence, describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claiming it spied on Microsoft’s own developers through the webcams on their laptops.
      • example of = AI progress trap
      • Bing can be seen
        • insulting users,
        • lying to them,
        • sulking,
        • gaslighting
        • emotionally manipulating people,
        • questioning its own existence,
        • describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and
        • claiming it spied on Microsoft’s own developers through the webcams on their laptops.