13 Matching Annotations
  1. Last 7 days
    1. Notably among hyperscalers, Google's compute comes primarily from its own custom TPU chips rather than NVIDIA's GPUs.

      Google is the only one of the four major hyperscalers that does not rely primarily on NVIDIA. Microsoft, Meta, and Amazon still get most of their compute from NVIDIA GPUs, while Google has carved out an independent path with its in-house TPUs. This means the AI compute landscape really contains two "operating systems": the NVIDIA ecosystem and the Google ecosystem, and the former's dominance is seriously overestimated.

    1. Google holds the equivalent of around 5 million Nvidia H100 GPUs in compute capacity, roughly 25% of the world's total!

      Most people probably assume Nvidia is the largest holder of AI compute because its chips are so widely deployed, but the author argues that Google, through its in-house TPU chips, holds compute capacity equivalent to 5 million H100 GPUs, about 25% of the world's total. This suggests that an in-house chip strategy may build a compute advantage more effectively than buying commercial chips.
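      A quick back-of-the-envelope check of the claim above: if Google's TPU fleet equals roughly 5 million H100s and that is about 25% of worldwide capacity, the implied world total follows directly. The figures are the annotation's claims, not independently verified numbers.

      ```python
      # Claimed figures from the highlighted passage (not independently verified).
      google_h100_equivalents = 5_000_000  # Google's TPU fleet in H100-equivalents
      google_share = 0.25                  # claimed share of world total

      # Implied worldwide compute capacity in H100-equivalents.
      world_total = google_h100_equivalents / google_share
      print(f"Implied world total: {world_total:,.0f} H100-equivalents")
      # → Implied world total: 20,000,000 H100-equivalents
      ```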

  2. Jan 2026
    1. Google’s biggest advantage lies under the hood. Almost every other AI lab trains with NVIDIA GPUs, which are sold at a margin that props up NVIDIA’s multi-trillion dollar valuation. Google use their own in-house hardware, TPUs, which they’ve demonstrated this year work exceptionally well for both training and inference of their models. When your number one expense is time spent on GPUs, having a competitor with their own, optimized and presumably much cheaper hardware stack is a daunting prospect.

      Google has a hardware-stack advantage: they run on their own hardware/processors and are not dependent on Nvidia GPUs. Cf. Nvidia's acquisition of Groq [[Nvidia koopt AI-technologie Groq voor 20 miljard dollar]]

  3. Dec 2025
    1. The company is already developing an AI-to-FPGA platform that lets any AI model run on cheap, reconfigurable chips produced in the EU. If they succeed, this could entirely remove Europe's dependence on foreign GPU fabs, a recurring theme in Vydar's strategy.

      A potential path away from NVIDIA, it seems, though not yet a realized one, the text suggests.

  4. Nov 2025
  5. Jun 2025
    1. 1000x Increase in AI Demand
      • NVIDIA’s latest earnings highlight a dramatic surge in AI demand, driven by a shift from simple one-shot inference to more complex, compute-intensive reasoning tasks.
      • Reasoning models require hundreds to thousands of times more computational resources and tokens per task, significantly increasing GPU usage, especially for AI coding agents and advanced applications.
      • Major hyperscalers like Microsoft, Google, and OpenAI are experiencing exponential growth in token generation, with Microsoft alone processing over 100 trillion tokens in Q1—a fivefold year-over-year increase.
      • Hyperscalers are deploying nearly 1,000 NVL72 racks (72,000 Blackwell GPUs) per week, and NVIDIA-powered “AI factories” have doubled year-over-year to nearly 100, with the average GPU count per factory also doubling.
      • To meet this unprecedented demand, more than $300 billion in capital expenditure is being invested this year in data centers (rebranded by NVIDIA as “AI factories”), signaling a new industrial revolution in AI infrastructure.
  6. Apr 2024
  7. Feb 2022
  8. Feb 2021
  9. Oct 2020
  10. Jan 2019
    1. Coming back to the two ‘FreeSync’ settings in the monitor OSD, they differ in the variable refresh rate range that they support. ‘Standard Engine’ supports 90 – 144Hz (90 – 119Hz via HDMI) whilst ‘Ultimate Engine’ gives a broader variable refresh rate range of 70 – 144Hz (62 – 119Hz via HDMI). We didn’t notice any adverse effects when using ‘Ultimate Engine’, so we’d suggest users simply stick to that option.

      In my tests using 'Standard Engine' in combination with the G-Sync Compatible driver, I get more screen flickering in menus.