3 Matching Annotations
  1. Dec 2025
    1. The book’s central argument was not about timelines or machines outperforming humans at specific tasks. It was about scale. Artificial intelligence, I argued, should not be understood at the level of an individual mind, but at the level of civilization. Technology does not merely support humanity. It shapes what humanity is. If AI crossed certain thresholds, it would not just automate tasks but reconfigure social coordination, knowledge production, and agency itself. That framing has aged better than I expected, not because any particular prediction came true, but because the underlying question turned out to be the right one.

      The book’s premise was that scale mattered with respect to AI (SU vibes). AI is to be understood at the societal level, not from an individual perspective, since technology and society mutually shape each other (the basic WWTS premise). Past certain thresholds, it would impact coordination, knowledge, and agency.

  2. Nov 2024
    1. I’ve come to feel like human-centered design (HCD) and the overarching project of HCI has reached a state of abject failure. Maybe it’s been there for a while, but I think the field’s inability to rise forcefully to the ascent of large language models and the pervasive use of chatbots as panaceas to every conceivable problem is uncharitably illustrative of its current state.

      HCI and HCD as fields have failed to respond forcefully to LLM tools and to the use of chatbot interfaces as a generic solution to everything.