10 Matching Annotations
  1. Aug 2023
    1. Sortes Vergilianae: taking random quotes from Vergilius and interpreting their meaning either as prediction or as advice. The latter, as a trigger for self-reflection, makes it a #leeswijze #reading manner that is non-linear.

      Cf. [[Skillful reading is generally non-linear 20210303154148]]

      St. Antonius (of Egypt, 3rd century) is said to have read the Bible this way (it's called sortes sanctorum when used for divination), and Augustinus followed suit in the 4th century, picking up Paul's letter to the Romans and getting converted.

      Is this ripping up of a text into isolated paragraphs, as a way to access and read it, an early input into commonplace books and florilegia, as gatherings of such fragments?

      Mentioned in [[Information edited by Ann Blair]] in the lemma 'Readers', p. 730.

  2. May 2023
    1. I have decided that the most efficient way to develop a note taking system isn’t to start at the beginning, but to start at the end. What this means, is simply to think about what the notes are going to be used for

      Yes. For me: re-usable insights from project work; exploring defined fields of interest to spot adjacent topics I may move into, or parts to focus on now; blogposts on the same; and seeing evolutionary patterns in my own material.

      Btw, I need to find a different term than 'output'; it carries too many productivity overtones. Life isn't 'output', it's lived.

    1. They're just interim artefacts in our thinking and research process.

      Weave models into your processes; don't shove them between yourself and the world by having them create the output. Doing that diminishes yourself and your own agency. Cf. [[Everymans Allemans AI 20190807141523]]

    2. One alternate approach is to start with our own curated datasets we trust. These could be repositories of published scientific papers, our own personal notes, or public databases like Wikipedia. We can then run many small specialised model tasks over them.

      Yes, if I could run an LLM locally over my own notes of three decades or so (where they don't feed the general model), I would do that instantly.
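Running a model locally over your own notes in practice means retrieval first: find the few relevant notes, then hand only those to the local model's context. A minimal sketch of that retrieval step, using plain TF-IDF-style scoring over an in-memory collection of notes; all names here are hypothetical, and a real setup would read the notes from disk and add embeddings.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase word tokens; deliberately crude for the sketch."""
    return re.findall(r"[a-z]+", text.lower())

def search_notes(notes: dict[str, str], query: str, top_n: int = 3) -> list[str]:
    """Rank note titles by TF-IDF-weighted overlap with the query."""
    docs = {title: Counter(tokenize(body)) for title, body in notes.items()}
    n = len(docs)
    # document frequency per term, for the IDF weight
    df = Counter()
    for counts in docs.values():
        df.update(counts.keys())

    def score(counts: Counter) -> float:
        total = sum(counts.values()) or 1
        return sum(
            (counts[term] / total) * math.log((1 + n) / (1 + df[term]))
            for term in tokenize(query)
        )

    ranked = sorted(docs, key=lambda title: score(docs[title]), reverse=True)
    return ranked[:top_n]
```

The titles returned would then be the notes pasted into the local model's prompt (retrieval-augmented generation), so nothing ever leaves the machine.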

    3. We will eventually find it absurd that anyone would browse the “raw web” without their personal model filtering it.

      Yes, in effect it already is that way.

    4. We will have to design this very carefully, or it'll give a whole new meaning to filter bubbles.

      Not just a bubble, it will be the FB timeline. Key here is agency, and designing for human biases. A model is likely much better than I am at managing the diversity of sources for me, if I give it a starting point myself, or at seeing which outliers to include, etc. Again, I think it also means moving away from single artefacts. Often I'm not interested in what everyone is saying about X, but in who is talking about X. Patterns, not singular artefacts. See [[Mijn ideale feedreader 20180703063626]]

    5. I expect these to be baked into browsers or at the OS level. These specialised models will help us identify generated content (if possible), debunk claims, flag misinformation, hunt down sources for us, curate and suggest content, and ideally solve our discovery and search problems.

      Appleton suggests that agents to fact-check / filter / summarise / curate and suggest (those last two are more personal than the others, which are the grunt work of infostrats) would become part of your browser. Only if I can strongly influence what they do myself (otherwise it's the FB timeline all over again!).

      If these models become part of the browser, do we still need the browser as a metaphor for a window on the web, or for surfing the net? Why wouldn't those models deliver whatever they grabbed from the web/net/darkweb in the right spot in my own infostrats? The browser itself is not part of my infostrats; it's the starting point, the viewer on the raw material. PKM starts with whatever I keep from browsing. When the model filters and curates, why not have it put things in the right spots for me to start working with, on, or processing them? The model not as part of the browser, but doing the actual browsing: an active agent going out there to flag patterns of interest (based on my preferences, current issues, etc.) and organising them for me for my next steps. [[Individuele software agents 20200402151419]]
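The routing step such an agent would do, putting fetched items "in the right spots" rather than in a browser tab, can be sketched very simply: match each incoming item against personal interest profiles and return the folders or note categories it belongs in. Everything here (topic names, keyword sets, the `min_hits` threshold) is a hypothetical placeholder for the user-controlled preferences the note argues for.

```python
from dataclasses import dataclass

@dataclass
class Interests:
    # topic name -> keywords that signal it; these are the user's own settings,
    # which is where the agency the note insists on lives
    topics: dict[str, set[str]]
    min_hits: int = 1  # how many keyword matches before an item qualifies

def route_item(title: str, text: str, interests: Interests) -> list[str]:
    """Return the topic folders a fetched item should be filed under."""
    words = set((title + " " + text).lower().split())
    return [
        topic
        for topic, keywords in interests.topics.items()
        if len(words & keywords) >= interests.min_hits
    ]
```

The point of the sketch is the direction of control: the matching rules belong to the user, not to a platform's engagement metric.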

    6. if content generated from models becomes our source of truth, the way we know things is simply that a language model once said them. Then they're forever captured in the circular flow of generated information

      There is definitely a feedback loop in play here, as LLMs already emulate bland SEO-optimised text very well, because most of the internet is already full of that crap. But it's just a bunch of sites, and mostly other sources, that serve as a source of knowledge, is it not? So the feedback loop exposes to more people that they shouldn't see 'the internet' as the source of all truth? And does this feedback loop not point to people simply stopping taking this stuff in (the writing doesn't matter when there's no reader for it)? Unless curated, filtered, etc. by verifiable human actors? Are we about to see personal generative agents that can do lots of pattern hunting for me, based on my [[Social Distance als ordeningsprincipe 20190612143232]] and [[Social netwerk als filter 20060930194648]]?

  3. Feb 2023
    1. come to the conclusion that most of us can no longer follow the stream and make sense of what’s flowing through, or even catch what’s important

      I've always assumed the point of the stream is that you can't drink it all. My [[Infostrat Filtering 20050928171301]] is based on the stream being overwhelming. Never step twice into the same river, etc. You don't make sense of the stream or catch what's important. Social filtering determines the bit you 'drink' from the stream, and what you reshare is feedback into it. Given enough feedback, what is important will always resurface.
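The "given enough feedback, what is important will always resurface" mechanism can be sketched in a few lines: treat the stream as reshare events from your network, and surface only items enough distinct contacts have resurfaced. This is an illustrative toy, not any real feed algorithm; the threshold is an arbitrary assumed parameter.

```python
from collections import Counter

def socially_filtered(stream: list[tuple[str, str]], min_resharers: int = 2) -> list[str]:
    """From (person, item) reshare events, keep items enough distinct contacts resurfaced.

    You never drink the whole stream: importance is inferred from repetition
    in your network, not from reading everything yourself.
    """
    resharers_per_item: dict[str, set[str]] = {}
    for person, item in stream:
        resharers_per_item.setdefault(item, set()).add(person)
    return [
        item
        for item, people in resharers_per_item.items()
        if len(people) >= min_resharers
    ]
```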

  4. Aug 2022
    1. The technology is guilty of amplifying. And after all, that’s what we’re talking about is amplifying human capabilities. Well, it turns out that there are human capabilities and human motivations that are evil or misguided. And those are amplified way beyond what they were before.

      What can one do, complexity-style: stimulating desired capabilities, attenuating undesirable ones? 'More like this, less like that' stuff. At a personal level that may be clear (if one pays attention to it personally, see above), but at group or society level? Btw, adtech platforms especially are not symmetrical in their amplification: they lift the mentioned pirate boats, but not the hospital boats. By design. Control over the parameters for amplification in one's own information flows may be one approach.