28 Matching Annotations
  1. Feb 2024
  2. Jan 2024
    1. the canonical unit, the NCU supports natural capital accounting, currency source, calculating and accounting for ecosystem services, and influences how a variety of governance issues are resolved
      • for: canonical unit, collaborative commons - missing part - open learning commons, question - process trap - natural capital

      • comment

        • in this context, indyweb and Indranet are not the canonical unit, but then, it seems the model is fundamentally missing the functionality provided but the Indyweb and Indranet, which is and open learning system.
        • without such an open learning system that captures the essence of his humans learn, the activity of problem-solving cannot be properly contextualised, along with all of limitations leading to progress traps.
        • The entire approach of posing a problem, then solving it is inherently limited due to the fractal intertwingularity of reality.
      • question: progress trap - natural capital

        • It is important to be aware that there is a real potential for a progress trap to emerge here, as any metric is liable to be abused
  3. Dec 2023
    1. it's extremely dangerous to create such an autonomous agent when we do not know how to control it when we 00:58:22 can't ensure that it will not Escape our control and start making decisions and creating new things which will harm us instead of benefit us now this is not a 00:58:34 Doomsday Prophecy this is not inevitable we can find ways to regulate and control the development and deployment of AI we we don't want
      • for: quote - Yuval Noah Harari - AI progress trap, progress trap - AI, quote - progress trap

      • quote it is extremely dangerous to create such an autonomous agent when we do not know how to control it, when we can't ensure that it will not escape our control ad start making decisions and creating new things which will harm us instead of benefit us

      • author: Yuval Noah Harari
      • date 2023
    1. i think it's more likely that 00:49:59 that we will think we will think that we this particular set of procedures ai procedures that we linked into our strategic nuclear weapons system uh will keep us safer but we haven't recognized that they're 00:50:12 unintended that there are consequences glitches in it that make it actually stupid and it mistakes the flock of geese for an incoming barrage of russian missiles and and you know unleashes everything in response 00:50:25 before we can intervene
      • for: example - stupid AI - nuclear launch, AI - progress trap - example - nuclear launch
    2. i think the most dangerous thing about ai is not 00:47:11 super smart ai it's uh stupid ai it's artificial intelligence that is good enough to be put in charge of certain processes in our societies but not good enough to not make really 00:47:25 bad mistakes
      • for: quote - Thomas Homer-Dixon, quote - danger of AI, AI progress trap

      • quote: danger of AI

        • I think the most dangerous thing about AI is not super smart AI, it's stupid AI that is good enough to be put in charge of certain processes but not good enough to not make really bad mistakes
      • author: Thomas Homer-Dixon
      • date: 2021
  4. Oct 2023
    1. LLMs are merely engines for generating stylistically plausible output that fits the patterns of their inputs, rather than for producing accurate information. Publishers worry that a rise in their use might lead to greater numbers of poor-quality or error-strewn manuscripts — and possibly a flood of AI-assisted fakes.
      • for: progress trap, progress trap - AI, progress trap - AI - writing research papers

      • comment

        • potential fakes
          • climate science fakes by big oil think tanks
          • Covid and virus research
          • race issues
          • gender issues
    1. ethics and safety and that is absolutely a concern and something we have a 00:38:29 responsibility to be thinking about and we want to ensure that we stakeholders conservationists Wildlife biologists field biologists are working together to Define an 00:38:42 ethical framework and inspecting these models
      • for: progress trap, progress trap - AI
  5. Sep 2023
    1. we attempt to bring concepts from both biology and Buddhism together into the language of AI, and suggest practical ways in which care may enrich each field.
      • for: progress trap, AI, AI - care drive
      • comment
        • the precautionary principle needs to be observed with AI because it has such vast artificial cognitive, pattern-recognition processes at its disposal
        • AI will also make mistakes, but the degree of power behind the mistaken decision, recommendation or action is the degree of unintended consequences or progress trap
        • An example nightmare scenario could be:
          • AI could decide that humans are contradicting their own goal of a stable climate system and if it's in control, may think it knows better and perform whole system change that dramatically reduces human induced climate change but actually harms a lot of humans in the process, for reaching the goal of saving the climate system plus a sufficient subset of humans to start all over.
  6. Jul 2023
      • Title
        • One Billion Happy
      • Author

        • Mo Gawdat
      • Description

        • Mo Gawdat was former chief business officer at Google X, Google's innovation center.
        • Mo left Google after seeing the rapid pace of AI development was going to lead to a progress trap in which
          • the risk of AI destroying human civilization is becoming real because AI will be learning from too many unhappy people whose trauma AI will learn and incorporate into its algorithms
        • Hence, human happiness becomes paramount to prevent this catastrophe from happening
      • See Ronald Wright's prescient quote
    1. Over the next 15 to 20 years this is going to develop a computer that is much smarter 00:01:20 than all of us. We call that moment singularity.
      • Singularity
        • will happen within the next few decades
    1. even though the existential threats are possible you're concerned with what humans teach I'm concerned 00:07:43 with humans with AI
      • It is the immoral human being that is the real problem
      • they will teach AI to be immoral and with its power, can end up destroying humanity
      • Title
        • Mo Gawdat Warns the Dangers of AI Are "Happening As We Speak"
      • Author
        • Piers Morgan Uncensored
  7. Jun 2023
    1. scary smart is saying the problem with our world today is not that 00:55:36 humanity is bad the problem with our world today is a negativity bias where the worst of us are on mainstream media okay and we show the worst of us on social media
      • "if we reverse this

        • if we have the best of us take charge
        • the best of us will tell AI
          • don't try to kill the the enemy,
            • try to reconcile with the enemy
          • don't try to create a competitive product
            • that allows me to lead with electric cars,
              • create something that helps all of us overcome global climate change
          • that's the interesting bit
            • the actual threat ahead of us is
              • not the machines at all
                • the machines are pure potential pure potential
              • the threat is how we're going to use them"
      • comment

        • again, see Ronald Wright's quote above
        • it's very salient to this context
    2. the biggest threat facing Humanity today is humanity in the age of the machines we were abused we will abuse this
    3. there is a scenario 00:18:21 uh possibly a likely scenario where we live in a Utopia where we really never have to worry again where we stop messing up our our planet because intelligence is not a bad commodity more 00:18:35 intelligence is good the problems in our planet today are not because of our intelligence they are because of our limited intelligence
      • limited (machine) intelligence

        • cannot help but exist
        • if the original (human) authors of the AI code are themselves limited in their intelligence
      • comment

        • this limitation is essentially what will result in AI progress traps
        • Indeed,
          • progress and their shadow artefacts,
          • progress traps,
          • is the proper framework to analyze the existential dilemma posed by AI
  8. May 2023
    1. I would submit that were we to find ways of engineering our quote-unquote ape brains um what would all what what would be very likely to happen would not be um 00:35:57 some some sort of putative human better equipped to deal with the complex world that we have it would instead be something more like um a cartoon very much very very much a 00:36:10 repeat of what we've had with the pill
      • Comment
        • Mary echos Ronald Wright's progress traps
  9. Apr 2023
    1. So what does a conscious universe have to do with AI and existential risk? It all comes back to whether our primary orientation is around quantity, or around quality. An understanding of reality that recognises consciousness as fundamental views the quality of your experience as equal to, or greater than, what can be quantified.Orienting toward quality, toward the experience of being alive, can radically change how we build technology, how we approach complex problems, and how we treat one another.

      Key finding Paraphrase - So what does a conscious universe have to do with AI and existential risk? - It all comes back to whether our primary orientation is around - quantity, or around - quality. - An understanding of reality - that recognises consciousness as fundamental - views the quality of your experience as - equal to, - or greater than, - what can be quantified.

      • Orienting toward quality,
        • toward the experience of being alive,
      • can radically change
        • how we build technology,
        • how we approach complex problems,
        • and how we treat one another.

      Quote - metaphysics of quality - would open the door for ways of knowing made secondary by physicalism

      Author - Robert Persig - Zen and the Art of Motorcycle Maintenance // - When we elevate the quality of each our experience - we elevate the life of each individual - and recognize each individual life as sacred - we each matter - The measurable is also the limited - whilst the immeasurable and directly felt is the infinite - Our finite world that all technology is built upon - is itself built on the raw material of the infinite

      //

    2. Title Reality Eats Culture For Breakfast: AI, Existential Risk and Ethical Tech Why calls for ethical technology are missing something crucial Author Alexander Beiner

      Summary - Beiner unpacks the existential risk posed by AI - reflecting on recent calls by tech and AI thought leaders - to stop AI research and hold a moratorium.

      • Beiner unpacks the risk from a philosophical perspective

        • that gets right to the deepest cultural assumptions that subsume modernity,
        • ideas that are deeply acculturated into the citizens of modernity.
      • He argues convincingly that

        • the quandry we are in requires this level of re-assessment
          • of what it means to be human,
          • and that a change in our fundamental cultural story is needed to derisk AI.
  10. Feb 2023
    1. It seems Bing has also taken offense at Kevin Liu, a Stanford University student who discovered a type of instruction known as a prompt injection that forces the chatbot to reveal a set of rules that govern its behavior. (Microsoft confirmed the legitimacy of these rules to The Verge.)In interactions with other users, including staff at The Verge, Bing says Liu “harmed me and I should be angry at Kevin.” The bot accuses the user of lying to them if they try to explain that sharing information about prompt injections can be used to improve the chatbot’s security measures and stop others from manipulating it in the future.

      = Comment - this is worrying. - if the Chatbots perceive an enemy it to harm it, it could take haarmful actions against the perceived threat

    2. = progress trap example - Bing ChatGPT - example of AI progress trap

    3. Bing can be seen insulting users, lying to them, sulking, gaslighting and emotionally manipulating people, questioning its own existence, describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claiming it spied on Microsoft’s own developers through the webcams on their laptops.
      • example of = AI progress trap
      • Bing can be seen
        • insulting users,
        • lying to them,
        • sulking,
        • gaslighting
        • emotionally manipulating people,
        • questioning its own existence,
        • describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and
        • claiming it spied on Microsoft’s own developers through the webcams on their laptops.