2 Matching Annotations
  1. Aug 2022
    1. GPT-3 is by no means a reliable source of knowledge. What it says is nonsense more often than not! Like the demon in The Exorcist, language models only add enough truth to twist our minds and make us do stupid things

      One needs to be aware that GPT-3 is a text generation tool, not an accurate search engine. However, being factually correct is not a prerequisite for experiencing surprisal. The author uses the tool to open up new lines of thought, so his prompt engineering is in a way aimed at being prompted himself. This is reminiscent of how Luhmann talks about communicating with his index cards: the need for factuality does not reside with the card; meaning is (re)constructed in the act of communication. The locus of meaning is the conversation and the impact it has on oneself, less so the content, it seems.

    2. https://web.archive.org/web/20220810205211/https://escapingflatland.substack.com/p/gpt-3

      Blogged a few first associations at https://www.zylstra.org/blog/2022/08/communicating-with-gpt-3/ . Prompt design for narrative research may be a useful experience here. Is 'interviewing' GPT-3 a Luhmann-style conversation with a system? Can we ditch our notes for GPT-3? GPT-3 as interface to the internet. Fascinating essay, need to explore.