26 Matching Annotations
  1. Last 7 days
    1. The best AI models in the world score below 0.5% on ARC-AGI-3—is this what you call AGI, guys?

      The 0.5% accuracy figure reveals the enormous capability gap between current AI models and artificial general intelligence (AGI). This extremely low score shows that, despite rapid progress, AI is still at a very early stage when it comes to genuinely understanding complex reasoning. The author sarcastically questions the industry's habit of over-hyping AGI progress.

    2. ARC-AGI-3 was officially released this week. All frontier models score below 0.5%

      ⚠️ [A shocking number] The strongest frontier models score below 0.5%, while non-expert humans easily exceed 60%, a gap of more than 120x. Coming after ARC-AGI-2, this is the most thorough "sobering antidote to the illusion of AI capability" yet. Gains in reasoning ability have not automatically transferred to novel abstract reasoning; while everyone is discussing AGI's imminent arrival, this data is the most direct rebuttal.

  2. May 2026
  3. Apr 2026
    1. The gains are especially strong in agentic coding, computer use, knowledge work, and early scientific research—areas where progress depends on reasoning across context and taking action over time.

      Most people assume AI progress shows up mainly in domain-specific knowledge acquisition and pattern recognition rather than in cross-context reasoning and sustained action over time. But the author stresses that GPT-5.5 makes notable progress precisely in the areas that demand continuous reasoning and action, a view that challenges the mainstream narrative of how AI capabilities develop and hints that general intelligence may arrive earlier than expected.

    2. the ability to keep learning after training and the move from pattern matching to understanding cause and effect

      The author argues that AGI requires two key ingredients: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect. This view challenges the current AI development path, suggesting we may be overly focused on scale and data while neglecting genuine understanding.

  4. Jan 2026
    1. It also forces thinking to be obsessively short term. People start losing interest in problems of the next five or ten years, because superintelligence will have already changed everything. The big political and technological questions we need to discuss are only those that matter to the speed of AI development. Furthermore, we must sprint towards a post-superintelligence world even though we have no real idea what it will bring.

      yes, this is why I think the AI hype is tech's coping strategy in the face of climate change. A fig leaf for inaction.

    2. Effective altruists used to be known for their insistence on thinking about the very long run; much more of the movement now is concerned about the development of AI in the next year.

      yes, again a coping strategy. AGI soon is a great excuse to do whatever you want now bc AGI will clean everything up next year. AI is a cope cage much like a tinfoil hat.

  5. Dec 2025
  6. Apr 2025
    1. as we get closer to superintelligence, it will be seen more and more as an enabler and driver of weapon of mass destruction (WMD) capabilities, if not as a WMD in and of itself. Direct calls for a “Manhattan Project for AGI” are already starting.

      for - quote - AGI - Weapon of Mass Destruction

      quote - As we get closer to superintelligence, - it will be seen more and more as an enabler and driver of - weapon of mass destruction (WMD) capabilities, - if not as a WMD in and of itself. - Direct calls for a “Manhattan Project for AGI” are already starting.

  7. Feb 2025
  8. Dec 2024
  9. Jun 2024
    1. you're going to have like 100 million more AI researchers and they're going to be working at 100 times what you are

      for - stats - comparison of cognitive powers - AGI AI agents vs human researcher

      stats - comparison of cognitive powers - AGI AI agents vs human researcher - 100 million AGI AI researchers - each AGI AI researcher is 100x more efficient than its equivalent human AI researcher - total productivity increase = 100 million x 100 = the equivalent of 10 billion human AI researchers! Wow!
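      The back-of-envelope multiplication in the note above can be checked directly. Note that all the figures (100 million agents, a 100x speedup) are the speaker's assumptions, not measured values:

      ```python
      # Sketch of the speaker's back-of-envelope claim (all figures are the
      # talk's assumptions, not established numbers).
      agi_researchers = 100_000_000   # "100 million more AI researchers"
      speedup_vs_human = 100          # each "working at 100 times what you are"

      # Productivity expressed as human-researcher equivalents
      human_researcher_equivalents = agi_researchers * speedup_vs_human
      print(human_researcher_equivalents)  # 10000000000, i.e. 10 billion
      ```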

    2. nobody's really pricing this in

      for - progress trap - debate - nobody is discussing the dangers of such a project!

      progress trap - debate - nobody is discussing the dangers of such a project! - Civilization's journey has been to create more and more powerful tools for human beings to use - but this tool is different because it can act autonomously - It can solve problems that dwarf our individual or even group ability to solve - Philosophically, the problem/solution paradigm becomes a central question because, - as presented in Deep Humanity praxis, - humans have never stopped producing progress traps as the shadow side of technology, because - the reductionist problem-solving approach always reaches conclusions based on a finite amount of knowledge of the relationships of any one particular area of focus - in contrast to the infinite, fractal relationships found at every scale of nature - Supercomputing can never bridge the gap between finite and infinite - A superintelligent artifact with that autonomy of pattern recognition may recognize a pattern in which humans are not efficient and, in fact, greater efficiency gains can be had by eliminating us

    3. Sam Altman has said that's his entire goal; that's what OpenAI is trying to build. They're not really trying to build superintelligence, but they define AGI as a system that can do automated AI research, and once that does occur

      for - key insight - AGI as automated AI researchers to create superintelligence

      key insight - AGI as automated AI researchers to create superintelligence - We will reach a period of explosive, exponential AI research growth once AGI has been produced - The key is to deploy AGIs as AI researchers that can do AI research 24/7 - 5,000 such AGI research agents could result in superintelligence in a very short time period (years) - because every time any one of them makes a breakthrough, it is immediately sent to all 4,999 other AGI researchers

    4. if this scale-up doesn't get us to AGI in the next 5 to 10 years it might be a long way out

      for - key insight - AGI in next 5 to 10 years or bust

      key insight - AGI in next 5 to 10 years or bust - As we start approaching billion, hundred-billion and trillion dollar clusters, hardware improvements will slow down due to - cost - ecological impact - Moore's Law limits - If AGI doesn't emerge by then, we will need a major breakthrough in - architecture or - algorithms

  10. Jul 2023
  11. Jun 2023
    1. the Transformers are not there yet. They will not come up with something that hasn't been there before; they will come up with the best of everything and generatively build a little bit on top of that. But very soon they'll come up with things we've never found out, we've never known
      • difference between
        • ChatGPT (AI)
        • AGI
  12. May 2023
    1. agents learn their behavior,

      Behavior here is experience: information that is stored in memory and retrieved so that reflection and learning can happen. Does that mean Believable Agents or Generative Agents can essentially become aware of their own existence and potentially begin to question and compare the virtual/internal environment with the external environment?

    2. must have an alignment property

      It is unclear what form the "alignment property" would take, and most importantly how such a property would be evaluated especially if there's an arbitrary divide between "dangerous" and "pre-dangerous" levels of capabilities and alignment of the "dangerous" levels cannot actually be measured.

  13. May 2022
  14. Mar 2019
    1. “Meditations on Moloch,”

      Clicked through to the essay. It appears to be mainly an argument for a super-powerful benevolent general artificial intelligence, of the sort proposed by AGI-maximalist Nick Bostrom.

      The money quote:

      The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.

      🔗 This is a great New Yorker profile of Bostrom, where I learned about his views.

      🔗Here is a good newsy profile from the Economist's magazine on the Google unit DeepMind and its attempt to create artificial general intelligence.