4 Matching Annotations
  1. Dec 2023
  2. May 2023
    1. I tried to come up with three snappy principles for building products with language models. I expect these to evolve over time, but this is my first pass. First, protect human agency. Second, treat models as reasoning engines, not sources of truth. And third, augment cognitive abilities rather than replace them.

      Use LLMs in tools that 1. protect human agency, 2. treat models as reasoning engines, not sources of truth / oracles, 3. augment cognitive abilities, with no greedy reductionism to replace them

      I would not just protect human agency, which turns our human efforts into a preserve; LLM tools need to increase human agency (individually and societally). On 3: yes, we must keep Engelbarting! The lack of 2 is the source of the hype balloon we need to pop. It starts with avoiding anthropomorphizing through our idiom around these tools. It will be hard. People want their magic wand, not the colder realism of 2 (you need to keep sorting out your own messes, but with a better shovel).

  3. Feb 2018
    1. the proliferation of breakthrough innovations that came pouring out of Doug's lab in the 1960s and '70s, probably more breakthroughs than any other lab in the history of computing before or since

      Well ain't that the truth!

  4. Jan 2018