18 Matching Annotations
  1. Dec 2023
    1. there's this idea in complexity science called the adjacent possible; it's just what's beyond the boundary of the real and the visible [00:47:26]
      • for: definition - the adjacent possible

      • definition: the adjacent possible

        • In complexity science, the boundary between the real and the possible
    1. Instead, he lauds the figure of the market as a knowing entity, envisioning it as a kind of processor of social information that, through the mechanism of price, continuously calculates and communicates current economic conditions to individuals in the market.

      Is it possible that in this paper we'll see the beginning of a shift from Adam Smith's "invisible hand" (of Divine Providence, or God) to a more scientifically grounded mechanism based on information theory?

      Could the communication described here be similar to that of a fungal colony seeking out food across gradients? It's based on the statistical mechanics of exploring a space, but it looks like divine providence or even magic to those lacking the mechanism.

  2. Aug 2023
    1. Some may not realize it yet, but the shift in technology represented by ChatGPT is just another small evolution in the chain of predictive text within the realms of information theory and corpus linguistics.

      Claude Shannon's 1948 paper, along with Warren Weaver's introduction to The Mathematical Theory of Communication (1949), shows some of the predictive structure of written communication. This is perhaps better laid out for the non-mathematician in John R. Pierce's book An Introduction to Information Theory: Symbols, Signals and Noise (1961), which discusses how one can do a basic analysis of written English to discover that "e" is the most prolific letter or to predict which letters are more likely to come after other letters. These mathematical structures have interesting consequences: crossword puzzles are only possible because of the repetitive nature of the English language, and a writer can use the editor's notation "TK" (usually meaning facts or data "to come") in a draft to make missing information easy to find before publication, because the letter combination T followed by K is statistically so rare that its few appearances in a long document are almost assuredly spots that need to be double-checked for data or accuracy.
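
      As a quick sketch of the sort of basic analysis Pierce describes (assuming some long English text saved as a placeholder file sample.txt), counting letters and adjacent-letter pairs shows "e" at the top of the frequency list and "tk" nearly absent:

      ```python
      from collections import Counter
      import re

      # Load any long English text; "sample.txt" is a placeholder filename.
      with open("sample.txt", encoding="utf-8") as f:
          text = re.sub(r"[^a-z]", "", f.read().lower())  # keep letters only

      letter_counts = Counter(text)                                   # single-letter frequencies
      bigram_counts = Counter(a + b for a, b in zip(text, text[1:]))  # adjacent letter pairs

      print("Most common letters:", letter_counts.most_common(5))
      print("Occurrences of 'tk':", bigram_counts["tk"])

      # For conditional prediction: which letters most often follow "t"?
      after_t = Counter(b for a, b in zip(text, text[1:]) if a == "t")
      print("Letters most likely to follow 't':", after_t.most_common(5))
      ```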

      Cell phone manufacturers took advantage of the lower levels of this mathematical predictability to create T9 predictive text in early mobile phone technology. This functionality is still used in current cell phones to help speed up our texting abilities. The difference between then and now is that almost everyone takes the predictive magic for granted.
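
      A toy sketch of the T9 idea, with a tiny made-up dictionary and invented usage counts standing in for a real handset's word list: the digit sequence narrows the field of candidate words, and the most frequent candidate is offered first.

      ```python
      from collections import defaultdict

      # Standard phone keypad letter groupings.
      KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
                "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
      LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

      # Tiny made-up dictionary of words with rough usage counts.
      DICTIONARY = {"home": 500, "good": 450, "gone": 300, "hood": 40, "hoof": 5}

      # Index each word by the digit sequence that would type it.
      by_digits = defaultdict(list)
      for word, count in DICTIONARY.items():
          digits = "".join(LETTER_TO_DIGIT[ch] for ch in word)
          by_digits[digits].append((count, word))

      def t9(digits: str) -> list[str]:
          """Return candidate words for a digit sequence, most frequent first."""
          return [w for _, w in sorted(by_digits[digits], reverse=True)]

      print(t9("4663"))  # ['home', 'good', 'gone', 'hood', 'hoof']
      ```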

      As anyone with "fat fingers" can attest, your phone doesn't always type out exactly what you mean, which can result in autocorrect mistakes (see: DYAC (Damn You AutoCorrect)) of varying levels of frustration or hilarity. This means that when texting, one needs to carefully double-check their work before sending a text or social media post, or risk sending a message to Grandmaster Flash instead of Grandma.

      The evolution in technology brought about by larger amounts of storage, faster processing speeds, and more text to study means that we've gone beyond predicting just a word or two ahead of what you intend to text; now we're predicting whole sentences and even paragraphs that make sense within a context. ChatGPT means that one can generate whole sections of text which will likely make some sense.

      Sadly, as we know from our T9 experience, this massive jump in predictability doesn't mean that ChatGPT or other predictive artificial intelligence tools are "magically" correct! In fact, quite often they're wrong or will predict nonsense, a phenomenon known as AI hallucination. Just as with T9, we need to take even more time and effort not only to spell-check the machine's outputs, but also to check them for appropriateness of style as well as factual substance!

      The bigger near-term problem is one of human understanding and human communication. While the machine may appear to magically communicate (often on our behalf if we're publishing its words under our names), is it relaying actual meaning? Is the other person reading these words understanding what was meant to be communicated? Do the words create knowledge? Insight?

      We need to recall that Claude Shannon specifically carved semantics and meaning out of the picture in the second paragraph of his seminal paper:

      Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.

      So far ChatGPT seems to be accomplishing magic by solving a small part of an engineering problem: it is able to explore the adjacent possible. It is far from solving the human semantic problem, much less the un-adjacent possibilities (potentially representing wisdom or insight), and we need to take care to be aware of that portion of the unsolved problem. Generative AIs are also just choosing weighted probabilities and spitting out something which is apt to seem plausible, but they're not optimizing for which of many potential outputs is the "best" or the "correct" one. For that, we still need our humanity and our faculties for decision making.
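
      A minimal sketch of what "choosing weighted probabilities" means here, with made-up next-word weights rather than a real model's output: the sampler picks a continuation in proportion to its weight, and nothing in the procedure checks whether the sampled choice is true or best.

      ```python
      import random

      # Made-up next-word distribution for the prompt "The capital of Australia is".
      # The weights only say what is *likely* in the training text, not what is *true*.
      next_word_probs = {
          "Canberra":  0.55,  # correct
          "Sydney":    0.35,  # plausible-sounding but wrong
          "Melbourne": 0.08,  # also wrong
          "Vienna":    0.02,  # nonsense in context
      }

      def sample_next_word(probs: dict[str, float]) -> str:
          """Sample one continuation in proportion to its weight."""
          words = list(probs)
          weights = [probs[w] for w in words]
          return random.choices(words, weights=weights, k=1)[0]

      # Roughly one run in three will confidently assert a wrong answer.
      print(sample_next_word(next_word_probs))
      ```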


      Shannon, Claude E. "A Mathematical Theory of Communication." Bell System Technical Journal, 1948.

      Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.

      Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. 2nd rev. ed. Dover Books on Mathematics. 1961. Reprint, Mineola, NY: Dover Publications, Inc., 1980. https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614.

      Shannon, Claude Elwood. “The Bandwagon.” IEEE Transactions on Information Theory 2, no. 1 (March 1956): 3. https://doi.org/10.1109/TIT.1956.1056774.


      We may also need to explore The Bandwagon, an early effect which Shannon noticed and commented upon. Everyone seems to be piling on the AI bandwagon right now...

  3. Jun 2023
    1. I just can't get into these sorts of high-ritual triage approaches to note-taking. I can admire them from afar, which I do, but I find this sort of "consider this ahead of time before you make a move" approach really drags down my process. But I do appreciate them from a sort of "aesthetics of academia" perspective.

      Reply to Bob Doto at https://www.reddit.com/r/Zettelkasten/comments/14ikfsy/comment/jplo3j2/?utm_source=reddit&utm_medium=web2x&context=3 with respect to PZ Compass Points.

      I'll agree wholeheartedly that applying methods like this to each note one takes is a "make work" exercise. It's apt to encourage people into the completist trap of turning every note they take into some sort of pristine so-called permanent or evergreen note, and there are already too many of those practitioners, who often give up in a few weeks wondering "where did I go wrong?".

      It's useful to know that these methods and tools exist, particularly for younger students, but I would never recommend that one apply them on a daily or even weekly basis. Maybe if one was having trouble with a particular idea or thought and wanted to more exhaustively explore the adjacent space around it, but even here going out for a walk in nature and allowing diffuse thinking to do some of the work is likely to be just as (maybe more?) productive.

      It could be the sort of thing to write down in your collection of Oblique Strategies to pull out when you're hitting a wall?

  4. May 2023
    1. I like to imagine that Bob Ross lends his voice to point to the “happy accidents” that happen while working with Zettelkastens.

      Bob Ross' "happy accidents" tied to the idea of serendipity or the outcome of combinatorial creativity within a zettelkasten.

      Ross's version is related to experimentation and the idea of the adjacent possible: taking a current known and extending it to see what will happen, then accepting the general outcome. This was one of the roots of his creative process.

  5. Oct 2022
    1. I would put creativity into three buckets. If we define creativity as coming up with something novel or new for a purpose, then I think what AI systems are quite good at the moment is interpolation and extrapolation.

      Demis Hassabis, the founder of DeepMind, classifies creativity in three ways: interpolation, extrapolation, and "true invention". He defines the first two traditionally, but gives a more vague description of the third. What exactly is "true invention"?

      How can one invent without any catalyst at all? How can one invent outside of a problem's solution space, outside of the adjacent possible? Does this truly exist? Or is it ruled out by definition?

  6. Jun 2022
    1. This standardized routine is known as the creative process, and itoperates according to timeless principles that can be foundthroughout history.

      If the creative process has timeless principles found throughout history, why aren't they written down and practiced religiously within our culture, which is so enamored of creativity and innovation?

      As an example of how this isn't true, we've managed to lose our commonplace tradition and haven't really replaced it with anything useful. Even the evolved practice of the zettelkasten has been created and generally discarded (pun intended) without replacement.

      How much of our creative process relies on simple imitation, which is a basic human trait? More often it is imitation juxtaposed with other experiences that becomes the crucible of innovation. How often, if ever, is true innovation created in an entirely different domain? By this I mean innovation outside of the adjacent possible domains from which it stems. Are there any examples of this?

      Even my own note taking practice is a mélange of broad imitation of what I read combined with the combinatorial juxtaposition of other ideas in an attempt to create new ideas.

  7. Apr 2022
    1. Amie Fairs, who studies language at Aix-Marseille University in France, is a self-proclaimed Open Knowledge Maps enthusiast. “One particularly nice thing about Open Knowledge Maps is that you can search very broad topics, like ‘language production’, and it can group papers into themes you may not have considered,” Fairs says. For example, when she searched for ‘phonological brain regions’ — the areas of the brain that process sound and meaning — Open Knowledge Maps suggested a subfield of research about age-related differences in processing. “I hadn’t considered looking in the ageing literature for information about this before, but now I will,” she says.
  8. Feb 2022
    1. Together: responsive, inline “autocomplete” powered by an RNN trained on a corpus of old sci-fi stories.

      I can't help but think: what if one used their own collected corpus of ideas, based on their ever-growing commonplace book, to create a text generator? Then by taking notes, highlighting other work, and doing your own work, you're creating a corpus of material that's eminently interesting to you. This also means that by incorporating text into your own notes over time, the artificial intelligence will more likely be using your own prior thought patterns to make something that, from an information-theoretic standpoint, looks and sounds more like you. It would have your "hand," so to speak.
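
      A minimal sketch of the idea (a simple word-level Markov chain rather than Sloan's RNN approach), assuming a placeholder folder my_notes full of one's own plain-text notes:

      ```python
      import random
      from collections import defaultdict
      from pathlib import Path

      notes_dir = Path("my_notes")  # placeholder: a folder of your own plain-text notes

      # Build a word-level bigram model from your own writing.
      model = defaultdict(list)
      for note in notes_dir.glob("*.txt"):
          words = note.read_text(encoding="utf-8").split()
          for current, following in zip(words, words[1:]):
              model[current].append(following)

      def generate(seed: str, length: int = 50) -> str:
          """Walk the chain from a seed word, echoing your own phrasing back at you."""
          word, output = seed, [seed]
          for _ in range(length):
              if word not in model:
                  break
              word = random.choice(model[word])
              output.append(word)
          return " ".join(output)

      print(generate("zettelkasten"))
      ```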

  9. Jan 2022
    1. https://vimeo.com/232545219

      from: Eyeo Conference 2017

      Description

      Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future, and along the way writes at least one short story together with the audience: all of us, and the machine.

      Notes

      Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.

      Some of the idea here is reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in Draft #4 in The New Yorker (2013-04-22)

      Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary


      Croatian a cappella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4


      Writing using the adjacent possible.


      Corpus building as an art [~37:00]

      Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.

      Open questions

      How might we use information theory to do this more easily?

      What does a person or machine's "hand" look like in the long term with these tools?

      Can we use corpus linguistics in reverse for this?

      What sources would you use to train your model?

      References:

      • Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks"
      • Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. "Generating sentences from a continuous space." 2015. arXiv: 1511.06349
      • Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text generation." arXiv:1702.02390
      • Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 applies neural networks to sound and sound production
    1. Jean Paul invented a similar system and called it Witz. Like Tesauro, Jean Paul considered that the matter was to cede a prearranged geography of places where everything had its own seat but was also compelled to remain in its own seat without possible deviation. The dismantlement of this architecture was required to change the rhetorical invention--that is, the retrieval of what is already known but has been forgotten--into an invention in the modern, scientific sense of the term.73 Also similar to Tesauro, according to Jean Paul, such an invention or discovery could occur only through the jumbled recording of notes taken from readings (or, from personal reflections) and retrievable by means of a subject index. By searching and recombining, the compiler would have put into practice the chance principle on which the whole knowledge storage mechanism was based; he would have likely discovered similarities and connections between remote items that he would have otherwise overlooked.

      73 Cf. Götz Müller, Jean Pauls Exzerpte (Würzburg, 1988), 321–22

      I'm not quite sure I understand what the specific mechanism of this is. Revisit it later. It sounds like the setup of the system is used not only to discover the adjacent possible but also the remote improbable.

  10. Dec 2021
    1. I pulled out my keyboard

      Really appreciate how you get the idea that rewilding is often about creating some new niche, a new ecology for an existing idea to live in, combined with the willing suspension of disbelief that what you are doing is even adjacently possible.

    1. Hobbes and Rousseau told their contemporaries things that were startling, profound and opened new doors of the imagination. Now their ideas are just tired common sense. There’s nothing in them that justifies the continued simplification of human affairs. If social scientists today continue to reduce past generations to simplistic, two-dimensional caricatures, it is not so much to show us anything original, but just because they feel that’s what social scientists are expected to do so as to appear ‘scientific’. The actual result is to impoverish history – and as a consequence, to impoverish our sense of possibility.

      The simplification required to make models and study systems can be a useful tool, but one constantly needs to go back to the actual system to make sure that future predictions and work actually fit the real world system.

      Too often social theorists make assumptions which aren't supported in real life and this can be a painfully dangerous practice, especially when those assumptions are built upon in ways that put those theories out on a proverbial creaking limb.


      This idea is related to the bias that Charles Mathewes points out about how we treat writers as still living or as if they never lived. see: https://hypothes.is/a/VTU2lFvZEeyiJ2tN76i4sA

    1. Every serious (academic) historical work includes a conversation with other scholarship, and this has largely carried over into popular historical writing.

      Any serious historical or other academic work should include a conversation with the body of other scholarship that it argues for or against.

      Comparing and contrasting one idea with another is crucial for any sort of advancement.

  11. Oct 2021
  12. Jun 2020
  13. Aug 2018
    1. Are there, in other words, any fundamental "contradictions" in human life that cannot be resolved in the context of modern liberalism, that would be resolvable by an alternative political-economic structure?

      Churchill famously said "...democracy is the worst form of Government except for all those other forms that have been tried from time to time..."

      Even within this quote it is implicit that there are many other forms. In some sense he's admitting that we might possibly be at a local maximum, but we've just not explored the spaces beyond the adjacent possible.

  14. May 2018
    1. The man who knows that nothing in demand is out of production soon expects that nothing produced can be out of demand.

      This keeps rolling around in my head, one marble in a Chinese Checkers tin. And I am not asking, "What does this mean?" but rather, "What could this mean?"