The book’s central argument was not about timelines or machines outperforming humans at specific tasks. It was about scale. Artificial intelligence, I argued, should not be understood at the level of an individual mind, but at the level of civilization, because technology and society shape each other: technology does not merely support humanity, it shapes what humanity is. If AI crossed certain thresholds, it would not just automate tasks; it would reconfigure social coordination, knowledge production, and agency itself. That framing has aged better than I expected, not because any particular prediction came true, but because the underlying question turned out to be the right one.