> the Language Processing Unit (LPU), a new category of processor. Groq created and built the LPU from the ground up to meet the unique needs of AI. LPUs run Large Language Models (LLMs) and other leading models at substantially faster speeds and, on an architectural level, up to 10x more efficiently from an energy perspective compared to GPUs.
Groq self-describes its core tech as a new category of processor that runs LLMs. Faster speeds, and an order of magnitude in energy savings "on an architectural level" compared to GPUs. That deliberate phrasing suggests there's a trade-off somewhere else.
Also contrast with FPGAs and their use for on-device AI applications, as in [[Vydar wil het Europese bolwerk worden voor navigatie zonder GPS]]