7 Matching Annotations
  1. Last 7 days
    1. Wilson Lin at Cursor coordinated hundreds of GPT-5.2 agents to build a web browser from scratch, running uninterrupted for one week. Over a million lines of Rust.

      This case demonstrates the astonishing scale and output capacity of AI systems: hundreds of AI agents coordinated to generate over a million lines of code in a week. Yet the assessment that the result was "far from production quality" also reveals the limits of current AI systems on complex projects, particularly in code quality and system architecture.

  2. Apr 2026
    1. Amazon is investing $5 billion in Anthropic today, with up to an additional $20 billion in the future. This builds on the $8 billion Amazon has previously invested.

      Most people assume that big-tech investments in AI companies run in the hundreds of millions, but Amazon's total investment in Anthropic could reach $33 billion, far beyond the industry consensus. Investment at this scale shows that hyperscalers' commitment to AI infrastructure is growing in unprecedented ways, and it may reshape the capital structure and competitive dynamics of the AI industry.

    2. up to 5 gigawatts (GW) of capacity for training and deploying Claude

      5 GW of compute capacity is enormous, comparable to the electricity consumption of a small country. The figure indicates that Anthropic is building unprecedented infrastructure for training and deploying AI models, reflecting the immense compute demands of large language models. Compared with the compute footprint of other AI companies, this is a very aggressive expansion plan.

    3. American hyperscalers are driving a data center buildout that's larger than the Manhattan Project and Apollo Program at their peaks.

      Comparing the scale of the American AI data center buildout with the Manhattan Project and the Apollo Program at their peaks is both startling and revealing: the nature of the competition has escalated from a technology race to industrial mobilization. The Manhattan Project was a total mobilization of wartime national will; the Apollo Program was a symbolic investment in Cold War prestige. Today's AI compute race already exceeds, in absolute scale, the two largest engineering efforts in history, and it is still far from its ceiling.

  3. Jun 2024
    1. I think that Noam Chomsky said, around a year ago in the New York Times, that generative AI is not any intelligence; it's just plagiarism software that learned by stealing human work, transforming it, and selling it as much as possible, as cheap as possible.

      for - AI music theft - citation - Noam Chomsky - quote - Noam Chomsky - AI as plagiarism on a grand scale

      to - P2P Foundation - commons transition plan - Michel Bauwens - netarchical capitalism - predatory capitalism - https://wiki.p2pfoundation.net/Commons_Transition_Plan#Solving_the_value_crisis_through_a_social_knowledge_economy

  4. Jun 2021
    1. One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning

      This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes. To see this, and to effectively resist it, we have to understand the appeal of these mistakes. We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.