102 Matching Annotations
  1. Sep 2024
  2. Aug 2024
    1. for - Federico Faggin - quantum physics - consciousness

      summary - Federico Faggin is a physicist and microelectronics engineer who developed the world's first microprocessor at Intel, the Intel 4004 CPU. - He now focuses his attention on developing a robust and testable theory of consciousness based on quantum information theory. - What sets Faggin apart from other scientists studying consciousness is a series of profound personal 'awakening'-type experiences, which led to a psychological dissolution of the sense of a self bounded by his physical body. - This profound experience led him to claim with unshakable certainty that our individual consciousness is far greater than our normal mundane experience of it. - With his science and engineering background, Faggin has set out to validate his experiences with a new scientific framework combining Consciousness, Information and Physicality (CIP) with Operational Probabilistic Theory (OPT)

      to - Federico Faggin's website - https://hyp.is/JTGs6lr9Ee-K8-uSXD3tsg/www.fagginfoundation.org/what-we-do/j - Federico Faggin and paper: - Hard Problem and Free Will: - an information-theoretical approach - https://hyp.is/styU2lofEe-11hO02KJC8w/link.springer.com/chapter/10.1007/978-3-030-85480-5_5

    2. what you call CIP, which is Consciousness, Information and Physicality, and how it links to OPT, which is Operational Probabilistic Theory

      for - definition - Consciousness Information and Physicality (CIP) - definition - Operational Probabilistic Theory (OPT)

  3. Jul 2024
    1. Whenever a teacher orally explains something to a class or a pupil, whenever pupils talk to each other or hear speech, the information presented is transient. By its very nature, all speech is transient. Unless it is recorded, any spoken information disappears. If it is important information for the learner, then the learner must try to remember it. Remembering verbal information often can be more easily achieved if it is written down. Writing was invented primarily to turn transient oral information into a permanent form. In the absence of a permanent written record, the learner may need to use a mental rehearsal strategy to keep information alive in working memory before it dissipates. The more information there is to learn, the more difficult it becomes to remember, unless it is written down, or students have additional access to a permanent record. Furthermore, if spoken information requires complex processing, then the demands made on working memory become even more intrusive. For example, if a teacher explains a point using several spoken sentences, each containing information that must be integrated in order to understand the general gist, the demands made on working memory may be excessive. Information from one sentence may need to be held in working memory while information from another sentence is integrated with it. From this perspective, such information will create a heavy cognitive load. Accordingly, all spoken information has the potential to interfere with learning unless it is broken down into manageable proportions or supported by external offloads such as written notes.

      Note to self: - Transient = Fading - Non-Transient = Permanent

  4. Jun 2024
  5. Dec 2023
    1. Shannon, Claude E., Norbert Wiener, Frances A. Yates, Gregory Bateson, Michel Foucault, Friedrich. A. Hayek, Walter Benjamin, et al. Information: A Reader. Edited by Eric Hayot, Lea Pao, and Anatoly Detwyler. New York: Columbia University Press, 2021. https://doi.org/10.7312/hayo18620.

      Annotation URL: urn:x-pdf:d987e346ec524f00d3c201c5055bf12e

      https://jonudell.info/h/facet/?user=chrisaldrich&max=100&exactTagSearch=true&expanded=true&url=urn%3Ax-pdf%3Ad987e346ec524f00d3c201c5055bf12e

      Noticing after starting to read that this chapter is an abridged excerpt of the original, so I'm switching to the original 1945 version.

      See: https://hypothes.is/a/dZRmapquEe66Ehf7Emie3Q

    2. Instead, he lauds the figure of the market as a knowing entity, envisioning it as a kind of processor of social information that, through the mechanism of price, continuously calculates and communicates current economic conditions to individuals in the market.

      Is it possible that in this paper we'll see the beginning of a shift from Adam Smith's "invisible hand" (of Divine Providence, or God) to a somewhat more scientifically based mechanism based on information theory?

      Could communication described here be similar to that of a fungal colony seeking out food across gradients? It's based in statistical mechanics of exploring a space, but looks like divine providence or even magic to those lacking the mechanism?

    1. Hayek, Friedrich A. “The Use of Knowledge in Society.” The American Economic Review 35, no. 4 (1945): 519–30.

      See also, notes at abbreviated version in Information: A Reader (2021). (@Shannon2021)

      https://jonudell.info/h/facet/?user=chrisaldrich&max=100&exactTagSearch=true&expanded=true&url=urn%3Ax-pdf%3Ad987e346ec524f00d3c201c5055bf12e

  6. Sep 2023
    1. Recent work has revealed several new and significant aspects of the dynamics of theory change. First, statistical information, information about the probabilistic contingencies between events, plays a particularly important role in theory-formation both in science and in childhood. In the last fifteen years we’ve discovered the power of early statistical learning.

      The data of the past is congruent with the current psychological trends that face the education system of today. Developmentalists have charted how children construct and revise intuitive theories. In turn, a variety of theories have developed because of the greater use of statistical information that supports probabilistic contingencies that help to better inform us of causal models and their distinctive cognitive functions. These studies investigate the physical, psychological, and social domains. In the case of intuitive psychology, or "theory of mind," developmentalism has traced a progression from an early understanding of emotion and action to an understanding of intentions and simple aspects of perception, to an understanding of knowledge vs. ignorance, and finally to a representational and then an interpretive theory of mind.

      The mechanisms by which life evolved—from chemical beginnings to cognizing human beings—are central to understanding the psychological basis of learning. We are the product of an evolutionary process and it is the mechanisms inherent in this process that offer the most probable explanations to how we think and learn.

      Bada, S. O. (2015). Constructivism Learning Theory: A Paradigm for Teaching and Learning.

  7. Aug 2023
    1. Some may not realize it yet, but the shift in technology represented by ChatGPT is just another small evolution in the chain of predictive text within the realms of information theory and corpus linguistics.

      Claude Shannon's work, along with Warren Weaver's introduction in The Mathematical Theory of Communication (1949), shows some of the predictive structure of written communication. This is perhaps better underlined for the non-mathematician in John R. Pierce's book An Introduction to Information Theory: Symbols, Signals and Noise (1961), which discusses how one can do a basic analysis of written English to discover that "e" is the most prolific letter, or to predict which letters are more likely to come after other letters. The mathematical structures have interesting consequences, like the fact that crossword puzzles are only possible because of the repetitive nature of the English language, or that one can use the editor's notation "TK" (usually meaning facts or data "to come") when writing papers to make it easy to find missing information prior to publication: the letter combination T followed by K is statistically so rare that its only appearances in long documents are almost assuredly areas which need to be double-checked for data or accuracy.
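      The letter-frequency claims above are easy to check empirically. A minimal sketch in Python (the sample text is my own, purely illustrative):

```python
from collections import Counter

def letter_stats(text):
    """Count single-letter and adjacent-pair (bigram) frequencies
    in a text, ignoring non-alphabetic characters."""
    letters = [c for c in text.lower() if c.isalpha()]
    singles = Counter(letters)
    bigrams = Counter(zip(letters, letters[1:]))
    return singles, bigrams

sample = ("the quick brown fox jumps over the lazy dog "
          "this is a tiny sample of ordinary english text")
singles, bigrams = letter_stats(sample)

# In ordinary English "e" and "t" dominate, while the pair ("t", "k")
# is vanishingly rare -- which is why the editor's "TK" is so easy to
# search for before publication.
print(singles.most_common(3))
print(bigrams[("t", "k")])
```

      On a real corpus the same few lines reproduce the familiar frequency tables Pierce walks through.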

      Cell phone manufacturers took advantage of the lower levels of this mathematical predictability to create T9 predictive text in early mobile phone technology. This functionality is still used in current cell phones to help speed up our texting abilities. The difference between then and now is that almost everyone takes the predictive magic for granted.
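      The T9 idea above can be sketched in a few lines: every letter maps to a keypad digit, so different words collapse onto the same digit sequence and the phone must guess among them (the word list here is a hypothetical stand-in for a real dictionary):

```python
# Map each letter to its phone keypad digit, then group dictionary
# words by their digit sequence -- the heart of T9 disambiguation.
KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_DIGIT = {ch: d for d, chars in KEYS.items() for ch in chars}

def to_digits(word):
    return "".join(LETTER_TO_DIGIT[c] for c in word.lower())

def t9_candidates(digits, dictionary):
    return [w for w in dictionary if to_digits(w) == digits]

words = ["home", "good", "gone", "hood", "hoof"]
# "4663" is the classic ambiguous sequence: all five words share it.
print(t9_candidates("4663", words))
```

      A real T9 implementation ranks the candidates by usage frequency, which is exactly the lower-level statistical predictability described above.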

      As anyone with "fat fingers" can attest, your phone doesn't always type out exactly what you mean which can result in autocorrect mistakes (see: DYAC (Damn You AutoCorrect)) of varying levels of frustration or hilarity. This means that when texting, one needs to carefully double check their work before sending their text or social media posts or risk sending their messages to Grand Master Flash instead of Grandma.

      The evolution in technology effected by larger amounts of storage, faster processing speeds, and more text to study means that we've gone beyond predicting just a word or two ahead of what you intend to text; now we're predicting whole sentences and even paragraphs that make sense within a context. ChatGPT means that one can generate whole sections of text which will likely make some sense.

      Sadly, as we know from our T9 experience, this massive jump in predictability doesn't mean that ChatGPT or other predictive artificial intelligence tools are "magically" correct! In fact, quite often they're wrong or will predict nonsense, a phenomenon known as AI hallucination. Just as with T9, we need to take even more time and effort to not only spell check the outputs from the machine, but now we may need to check for the appropriateness of style as well as factual substance!

      The bigger near-term problem is one of human understanding and human communication. While the machine may appear to magically communicate (often on our behalf if we're publishing its words under our names), is it relaying actual meaning? Is the other person reading these words understanding what was meant to have been communicated? Do the words create knowledge? Insight?

      We need to recall that Claude Shannon specifically carved semantics and meaning out of the picture in the second paragraph of his seminal paper:

      Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.

      So far ChatGPT seems to be accomplishing magic by solving a small part of an engineering problem by being able to explore the adjacent possible. It is far from solving the human semantic problem much less the un-adjacent possibilities (potentially representing wisdom or insight), and we need to take care to be aware of that portion of the unsolved problem. Generative AIs are also just choosing weighted probabilities and spitting out something which is prone to seem possible, but they're not optimizing for which of many potential probabilities is the "best" or the "correct" one. For that, we still need our humanity and faculties for decision making.


      Shannon, Claude E. “A Mathematical Theory of Communication.” Bell System Technical Journal 27 (1948): 379–423, 623–656.

      Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.

      Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. 2nd rev. ed. Dover Books on Mathematics. 1961. Reprint, Mineola, N.Y.: Dover Publications, Inc., 1980. https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614.

      Shannon, Claude Elwood. “The Bandwagon.” IEEE Transactions on Information Theory 2, no. 1 (March 1956): 3. https://doi.org/10.1109/TIT.1956.1056774.


      We may also need to explore The Bandwagon, an early effect which Shannon noticed and commented upon. Everyone seems to be piling on the AI bandwagon right now...

    1. "in his youth he was full of vim and vigor"

      Do calcified words eventually cease to have any definition over time? That is, they first have a stand-alone definition, then a definition within their calcified phrase, and finally they cease to have any stand-alone definition at all, though they continue to exist only in those calcified phrases.

  8. May 2023
    1. Entropy is not a property of the string you got, but of the strings you could have obtained instead. In other words, it qualifies the process by which the string was generated.
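      A small sketch of this point: the same output string can come from processes of very different entropy, because entropy is a property of the distribution over possible strings, not of any single outcome.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The string "0000" could have been produced by either process below,
# but the processes differ sharply in entropy: entropy qualifies the
# ensemble of strings you *could* have obtained, not the one you got.
fair_coin = [0.5, 0.5]       # each flip carries exactly 1 bit
biased_coin = [0.99, 0.01]   # each flip carries only ~0.08 bits
print(entropy(fair_coin))    # 1.0
print(entropy(biased_coin))
```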
  9. Apr 2023
  10. Mar 2023
    1. Abstract

      // abstract - summary - Rationalist approaches to environmental problems such as climate change - apply an information deficit model, - assuming that if people understand what needs to be done they will act rationally. - However, applying a knowledge deficit hypothesis often fails to recognize unconscious motivations revealed by: - social psychology, - cognitive science, - behavioral economics.

      • Applying ecosystems science, data collection, economic incentives, and public education are necessary for solving problems such as climate change, but they are not sufficient.
      • Climate change discourse makes us aware of our mortality
      • This prompts consumerism as a social psychological defensive strategy,
      • which is counterproductive to pro-environmental behavior.
      • Studies in terror management theory, applied to the study of ritual and ecological conscience formation,
      • suggest that ritual expressions of giving thanks can have significant social psychological effects in relation to overconsumption driving climate change.
      • Primary data gathering informing this work included participant observation and interviews with contemporary Heathens in Canada from 2018–2019.
  11. Feb 2023
    1. The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.

      Is LaMDA really able to "learn higher-level concepts" or is it just a large, straightforward information-theoretic prediction engine?
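      Whatever one concludes about "higher-level concepts," the core mechanism described above, predicting the most likely next word, can be illustrated with a toy bigram model (a drastic simplification of LaMDA, for intuition only):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """A toy next-word predictor: count which word follows which."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

      Large language models replace these raw counts with learned probabilities over vast contexts, but the objective, guess the likely continuation, is the same.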

    1. Rhetoric of encomium

      How do institutions form around notions of merit?

      Me: what about blurbs as evidence of implied social networks? Who blurbs whom? How are these invitations sent/received and by whom?

      diachronic: how blurbs evolve over time

      Signals, can blurbs predict: - the field of the work - gender - other

      Emergence or decrease of signals with respect to time

      Imitation of styles and choices. - how does this happen? contagion - I'm reminded of George Matthew Dutcher's admonition:

      Imitation to be avoided. Avoid the mannerisms and personal peculiarities of method or style of well-known writers, such as Carlyle or Macaulay. (see: https://hypothes.is/a/ROR3VCDEEe2sZNOy4rwRgQ )

      Systematic studies of related words within corpora. (this idea should have a clever name) word2vec, word correlations, information theory

      How does praise work?

      metaphors within blurbs (eg: light, scintillating, brilliant, new lens, etc.)

  12. Jan 2023
    1. https://www.complexityexplorer.org/courses/162-foundations-applications-of-humanities-analytics/segments/15625

      https://www.youtube.com/watch?v=vZklLt80wqg

      Looking at three broad ideas with examples of each to follow: - signals - patterns - pattern making, pattern breaking

      Proceedings of the Old Bailey, 1674-1913

      Jane Kent for witchcraft

      250 years with ~200,000 trial transcripts

      Can be viewed as: - storytelling, - history - information process of signals

      All the best trials include the words "Covent Garden".

      Example: 1163. Emma Smith and Corfe indictment for stealing.

      19:45 Norbert Elias. The Civilizing Process. (book)

      Prozhito: large-scale archive of Russian (and Soviet) diaries; 1900s - 2000s

      How do people understand the act of diary-writing?

      Diaries are:

      Leo Tolstoy

      a convenient way to evaluate the self

      Franz Kafka

      a means to see, with reassuring clarity [...] the changes which you constantly suffer.

      Virginia Woolf

      a kindly blankfaced old confidante

      Diary entries in five categories - spirit - routine - literary - material form (talking about the diary itself) - interpersonal (people sharing diaries)

      Are there specific periods in which these emerge or how do they fluctuate? How would these change between and over cultures?

      The patterns of talking about diaries in this study are relatively stable over the century.

      pre-print available of DeDeo's work here

      Pattern making, pattern breaking

      Individuals, institutions, and innovation in the debates of the French Revolution

      • transcripts of debates in the constituent assembly

      the idea of revolution through tedium and boredom is fascinating.

      speeches broken into combinations of patterns using topic modeling

      (what would this look like on commonplace book and zettelkasten corpora?)

      emergent patterns from one speech to the next (information theory); the question of novelty - high novelty versus low novelty as predictors of leaders and followers

      Robespierre bringing in novel ideas

      How do you differentiate Robespierre versus a Muppet (like Animal)? What is the level of following after novelty?

      Four parts (2x2 grid) - high novelty, high imitation (novelty with ideas that stick) - high novelty, low imitation (new ideas ignored) - low novelty, high imitation - low novelty, low imitation (discussion killers)
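      One hedged sketch of how "novelty" might be operationalized in this kind of study: compare a speech's topic mixture against what preceded it with KL divergence. The mixtures below are invented for illustration, and the actual method in the published work may differ:

```python
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits between two topic mixtures."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical topic mixtures over three topics for consecutive speeches.
previous_speech = [0.7, 0.2, 0.1]
conventional    = [0.65, 0.25, 0.10]   # low novelty: echoes what came before
radical         = [0.05, 0.15, 0.80]   # high novelty: shifts the topics

print(kl_divergence(conventional, previous_speech))
print(kl_divergence(radical, previous_speech))  # much larger
```

      Imitation could then be measured symmetrically: how far *subsequent* speeches diverge from this one.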

      Could one analyze television scripts over time to determine the good/bad, when they'll "jump the shark"?

    1. a common technique in natural language processing is to operationalize certain semantic concepts (e.g., "synonym") in terms of syntactic structure (two words that tend to occur nearby in a sentence are more likely to be synonyms, etc). This is what word2vec does.

      Can I use some of these sorts of methods with respect to corpus linguistics over time to better identify calcified words or archaic phrases that stick with the language, but are heavily limited to narrower(ing) contexts?
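      The operationalization described above can be approximated crudely with raw co-occurrence counts and cosine similarity, the precursor of what word2vec does with learned vectors. A toy sketch (the sentences are invented):

```python
from collections import Counter
import math

def cooccurrence_vectors(sentences, window=2):
    """Build a co-occurrence vector per word: how often every other
    word appears within `window` positions of it."""
    vectors = {}
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sentences = ["the ship sailed across the sea",
             "the boat sailed across the lake"]
v = cooccurrence_vectors(sentences)
# "ship" and "boat" share contexts, so their vectors are similar.
print(cosine(v["ship"], v["boat"]))
```

      Run over time-sliced corpora, a word whose neighbors keep shrinking toward one fixed phrase would be a candidate "calcified" word.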

    1. Friedberg Judeo-Arabic Project, accessible at http://fjms.genizah.org. This project maintains a digital corpus of Judeo-Arabic texts that can be searched and analyzed.

      The Friedberg Judeo-Arabic Project contains a large corpus of Judeo-Arabic text which can be manually searched to help improve translations of texts, but it might also be profitably mined using information theoretic and corpus linguistic methods to provide larger group textual translations and suggestions at a grander scale.

  13. Oct 2022
    1. Intellectual readiness involves a minimum level of visual perception such that the child can take in and remember an entire word and the letters that combine to form it. Language readiness involves the ability to speak clearly and to use several sentences in correct order.

      Just as predictive means may be used on the level of letters, words, and even whole sentences within information theory at the level of specific languages, does early orality sophistication in children help them to become predictive readers at earlier ages?

      How could one go about testing this, particularly in a broad, neurodiverse group?

    1. Underlining Keyterms and Index Bloat

      Hello u/sscheper,

      Let me start by thanking you for introducing me to Zettelkasten. I have been writing notes for a week now and it's great that I'm able to retain more info and relate pieces of knowledge better through this method.

      I recently came to notice that there is redundancy in my index entries.

      I have two entries for Number Line. I have two branches in my Math category that deals with arithmetic, and so far I have "Addition" and "Subtraction". In those two branches I talk about visualizing ways of doing that, and both of those make use of and underline the term Number Line. So now the two entries in my index are "Number Line (Under Addition)" and "Number Line (Under Subtraction)". In those notes I elaborate how exactly each operation is done on a number line and the insights that can be derived from it. If this continues, I will have Number Line entries for "Multiplication" and "Division". I will also have to point to these entries if I want to link a main note for "Number Line".

      Is this alright? Am I underlining appropriately? When do I not underline keyterms? I know that I do these to increase my chances of relating to those notes when I get to reach the concept of Number Lines as I go through the index but I feel like I'm overdoing it, and it's probably bloating it.

      I get "Communication (under Info. Theory): '4212/1'" in the beginning because that is one aspect of Communication itself. But for something like the number line, it's very closely associated with arithmetic operations, and maybe I need to rethink how I populate my index.

      Presuming, since you're here, that you're creating a more Luhmann-esque inspired zettelkasten as opposed to the commonplace book (and usually more heavily indexed) inspired version, here are some things to think about:
      - Aren't your various versions of number line cards behind each other or at least very near each other within your system to begin with? (And if not, why not?) If they are, then you can get away with indexing only one and know that the others will automatically be nearby in the tree.
      - Rather than indexing each, why not cross-index the cards themselves (if they happen to be far away from each other) so that the link to Number Line (Subtraction) appears on Number Line (Addition) and vice versa? As long as you can find one, you'll be able to find them all, if necessary.

      If you look at Luhmann's online example index, you'll see that each index term only has one or two cross references, in part because future/new ideas close to the first one will naturally be installed close to the first instance. You won't find thousands of index entries in his system for things like "sociology" or "systems theory" because there would be so many that the index term would be useless. Instead, over time, he built huge blocks of cards on these topics and was thus able to focus more on the narrow/niche topics, which is usually where you're going to be doing most of your direct (and interesting) work.

      Your case sounds like many I see: your thinking process is going from the bottom up, but you're attempting to wedge it into a top-down process and create an artificial hierarchy based on it. Resist this urge. Approaching things after-the-fact, we might place information theory as a sub-category of mathematics with overlaps in physics, engineering, computer science, and even the humanities in areas like sociology, psychology, and anthropology, but where you put your work on it may depend on your approach. If you're a physicist, you'll center it within your physics work and then branch out from there. You'd then have some of the psychology related parts of information theory and communications branching off of your physics work, but who cares if it's there and not in a dramatically separate section with the top level labeled humanities? It's all interdisciplinary anyway, so don't worry and place things closest in your system to where you think they fit for you and your work. If you had five different people studying information theory who were respectively a physicist, a mathematician, a computer scientist, an engineer, and an anthropologist, they could ostensibly have all the same material on their cards, but the branching structures and locations of them all would be dramatically different and unique, if nothing else based on the time ordered way in which they came across all the distinct pieces. This is fine. You're building this for yourself, not for a mass public that will be using the Dewey Decimal System to track it all down—researchers and librarians can do that on behalf of your estate. (Of course, if you're a musician, it bears noting that you'd be totally fine building your information theory section within the area of "bands" as a subsection on "The Bandwagon". 😁)

      If you overthink things and attempt to keep them too separate in their own prefigured categorical bins, you might, for example, have "chocolate" filed historically under the Olmec and might have "peanut butter" filed with Marcellus Gilmore Edson under chemistry or pharmacy. If you're a professional pastry chef this could be devastating as it will be much harder for the true "foodie" in your zettelkasten to creatively and more serendipitously link the two together to make peanut butter cups, something which may have otherwise fallen out much more quickly and easily if you'd taken a multi-disciplinary (bottom up) and certainly more natural approach to begin with. (Apologies for the length and potential overreach on your context here, but my two line response expanded because of other lines of thought I've been working on, and it was just easier for me to continue on writing while I had the "muse". Rather than edit it back down, I'll leave it as it may be of potential use to others coming with no context at all. In other words, consider most of this response a selfish one for me and my own slip box than as responsive to the OP.)

    1. He argued that God gazes over history in its totality and finds all periods equal.

      Leopold von Ranke's argument that God gazes over history and finds all periods equal is very similar to a framing of history from the viewpoint of statistical thermodynamics: it's all the same material floating around, it just takes different states at different times.

      link to: https://hyp.is/jqug2tNlEeyg2JfEczmepw/3stages.org/c/gq_title.cgi?list=1045&ti=Foucault%27s%20Pendulum%20(Eco)

  14. Aug 2022
  15. Jul 2022
  16. bafybeicho2xrqouoq4cvqev3l2p44rapi6vtmngfdt42emek5lyygbp3sy.ipfs.dweb.link
    1. The aim of the present paper is to propose a radical resolution to this controversy: we assume that mind is a ubiquitous property of all minimally active matter (Heylighen, 2011). It is in no way restricted to the human brain—although that is the place where we know it in its most concentrated form. Therefore, the extended mind hypothesis is in fact misguided, because it assumes that the mind originates in the brain, and merely “extends” itself a little bit outside in order to increase its reach, the way one’s arm extends itself by grasping a stick. While ancient mystical traditions and idealist philosophies have formulated similar panpsychist ideas (Seager, 2006), the approach we propose is rooted in contemporary science—in particular cybernetics, cognitive science, and complex systems theory. As such, it strives to formulate its assumptions as precisely and concretely as possible, if possible in a mathematical or computational form (Heylighen, Busseniers, Veitas, Vidal, & Weinbaum, 2012), so that they can be tested and applied in real-world situations—and not just in the thought experiments beloved by philosophers

      The proposal is for a more general definition of the word mind, which includes the traditional usage when applied to the human mind, but extends far beyond that into a general property of nature herself.

      So in Heylighen's definition, mind is a property of matter, but of all MINIMALLY ACTIVE matter, not just brains. In this respect, Heylighen's approach has early elements of the Integrated Information Theory (IIT) of Koch and Tononi.

    1. there was an interesting paper that came out, which I cited in my paper number one, that was looking at this question of what is an individual, and they were looking at it from an information theory standpoint. They came up with a theory, the information theory of individuality, and they say basically that an individual is a process, just what we've been talking about before, that propagates information from the past into the future. That implies information flow, it implies a cognitive process, it implies anticipation of the future, and it probably implies action. This thing that is an individual is a layered, hierarchical individual: you can draw a circle around anything, in a certain sense, and call it an individual under certain definitions, if you want to define what its Markov blanket is. Our cells are individuals, our tissues (the liver, say) are individuals, a human is an individual, a family is an individual, and it just keeps expanding outward from there; the society is an individual. None of those levels has any kind of inherent preference; there's no preference to any of those levels. Everything's an individual: layered, interacting, overlapping individuals. The idea of an individual is really just where you want to draw your circle, and then you can talk about an individual at whatever level you want. So it's all about information, it's all about processing information.

      The "individual" is therefore scale and dimension dependent. There are so many ways to define an individual depending on the scale you are looking at and your perspective.

      Information theory of individuality addresses this aspect.

    1. Take extreme care how you may conflate and differentiate (or don't) the ideas of "information" and "knowledge". Also keep in mind that the mathematical/physics definition of information is wholly divorced from any semantic meanings it may have for a variety of receivers which can have dramatically different contexts which compound things. I suspect that your meaning is an


      It's very possible that the meaning you draw from it is an eisegetical one to the meaning which Eco assigns it.

  17. Jun 2022
    1. William James’s self-assessment: “I am no lover of disorder, but fear to lose truth by the pretension to possess it entirely.”
  18. May 2022
    1. Brine, Kevin R., Ellen Gruber Garvey, Lisa M. Gitelman, Steven J. Jackson, Virginia Jackson, Markus Krajewski, Mary Poovey, et al. “Raw Data” Is an Oxymoron. Edited by Lisa M. Gitelman. Infrastructures. MIT Press, 2013. https://mitpress.mit.edu/books/raw-data-oxymoron.

    1. Scott, I'll spend some more in-depth time with it shortly, but in a quick skim of topics I pleasantly notice a few citations of my own work. Perhaps I've done a poor job communicating about wikis, but from what I've seen of your work thus far, I take much the same view of zettelkasten as you do. Somehow, though, I find that you're quoting me in opposition to your views?

      While you're broadly distinguishing against the well-known Wikipedia, and rightly so, I also broadly consider (unpublished) and include examples of small personal wikis and those within Ward Cunningham's FedWiki space, though I don't focus on them in that particular piece. In broad generalities, most of these smaller wikis are closer to the commonplace and zettelkasten traditions, though as you point out they have some structural and functional differences.

      You also quote me as someone in "information theory" in a way that indicates context collapse. Note that my distinctions and work in information theory relate primarily to theoretical areas in electrical engineering, physics, complexity theory, and mathematics as they relate to Claude Shannon's work. They very specifically do not relate to my more humanities-focused work within intellectual history, note taking, commonplaces, rhetoric, orality, or memory. In these areas, I'm better read than most, but have no professional title(s).

      Can't wait to read the entire piece more thoroughly...

  19. Apr 2022
    1. The book was reviewed in all major magazines and newspapers, sparking what historian Ronald Kline has termed a “cybernetics craze,” becoming “a staple of science fiction and a fad among artists, musicians, and intellectuals in the 1950s and 1960s.”

      This same sort of craze also happened with Claude Shannon's A Mathematical Theory of Communication, which helped to bolster Wiener's take.

  20. Mar 2022
    1. Melvin Vopson has proposed an experiment involving particle annihilation that could prove that information has mass, and by Einstein's mass-energy equivalence, information is also energy. If true, the experiment would also show that information is one of the states of matter.

      The experiment doesn't need a particle accelerator, but instead uses slow positrons at thermal velocities.

      Melvin Vopson is an information theory researcher at the University of Portsmouth in the United Kingdom.

      A proof that information has mass (or is energy) may explain the idea of dark matter. Vopson's rough calculations indicate that 10^93 bits of information would explain all of the “missing” dark matter.

      Vopson's 2022 AIP Advances paper would indicate that the smallest theoretical digital bits, presuming they are stable and exist on their own, would become the smallest known building blocks of matter.

      The width of digital bits today is between ten and 30 nanometers. Smaller physical bits could mean more densely packed storage devices.


      Vopson proposes that a positron-electron annihilation should produce energy equivalent to the masses of the two particles. It should also produce an extra dash of energy: two infrared, low-energy photons of a specific wavelength (predicted to be about 50 microns), as a direct result of erasing the information content of the particles.

      The mass-energy-information equivalence principle Vopson proposed in his 2019 AIP Advances paper assumes that a digital information bit is not just physical, but has a “finite and quantifiable mass while it stores information.” This very small mass is 3.19 × 10^-38 kilograms at room temperature.

      For example, if you erase one terabyte of data from a storage device, it would decrease in mass by 2.5 × 10^-25 kilograms, a mass so small that it can only be compared to the mass of a proton, which is about 1.67 × 10^-27 kilograms.

      In 1961, Rolf Landauer first proposed the idea that a bit is physical and has a well-defined energy. When one bit of information is erased, the bit dissipates a measurable amount of energy.
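      The figures above follow directly from Landauer's bound combined with mass–energy equivalence: the mass equivalent of one bit is m = k_B T ln 2 / c^2. A quick numeric check of the quoted numbers (assuming T = 300 K and a 10^12-byte terabyte):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
C = 2.998e8         # speed of light, m/s
T = 300.0           # room temperature, K

# Mass equivalent of one bit via Landauer's bound (E = k_B T ln 2) and E = m c^2
mass_per_bit = K_B * T * math.log(2) / C**2   # ~3.19e-38 kg, as quoted

# Mass change from erasing one terabyte (10^12 bytes = 8e12 bits)
mass_per_tb = mass_per_bit * 8e12             # ~2.5e-25 kg, as quoted
```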

    1. This hierarchical system ensures accuracy, rigour and competency of information.

      Hierarchical systems of knowledge in Indigenous cultures help to ensure rigor, competency, and most importantly accuracy of knowledge passed down from generation to generation.

    1. https://www.linkedin.com/pulse/incorrect-use-information-theory-rafael-garc%C3%ADa/

      A fascinating little problem. The bigger question is how can one abstract this problem into a more general theory?

      How many questions can one ask? How many groups could things be broken up into? What is the effect on the number of objects?

  21. Feb 2022
    1. Together: responsive, inline “autocomplete” pow­ered by an RNN trained on a cor­pus of old sci-fi stories.

      I can't help but think: what if one used their own collected corpus of ideas, based on their ever-growing commonplace book, to create a text generator? By taking notes, highlighting others' work, and doing your own work, you're creating a corpus of material that's eminently interesting to you. It also means that, by subsuming text over time into your own notes, the artificial intelligence will more likely be using your own prior thought patterns to make something that, from an information-theoretic standpoint, looks and sounds more like you. It would have your "hand", so to speak.

    1. And the best ideas are usually the ones we haven’t anticipatedanyway.

      If the best ideas are the ones we haven't anticipated, how are we defining "best"? Most surprising from an information theoretic perspective? One which creates new frontiers of change? One which subsumes or abstracts prior ideas within it? Others?

  22. Jan 2022
    1. https://english.elpais.com/science-tech/2022-01-14/a-spanish-data-scientists-strategy-to-win-99-of-the-time-at-wordle.html

      Story of a scientist trying to optimize for solutions of Wordle.

      Nothing brilliant here. Depressing that the story creates a mythology around algorithms as the solution rather than delving a bit into the math and science of information theory to explain why this solution is the correct one.

      Desperately missing from the discussion are second- and third-order words that would make useful guesses to further reduce the solution space for actual readers.

    2. The letters of “aeros” include the five most frequent letters used in English (as Edgar Allan Poe pointed out in the cryptographic challenge included in his famous short story The Golden Beetle)

      "Orate" and "aeros" are respectively the best words to start with when playing Wordle.

    3. “It makes perfect sense,” says Moro from his home in Boston. “For the game to be a success, it needs to be simple and playable, and picking the most common terms means that in the end, we all get it right in just a few tries.”

      Esteban Moro

      For games to be a success, they need to meet a set of Goldilocks conditions: they should be simple enough to learn to play and win, but complex enough to still be challenging.

      How many other things in life need this sort of balance between simplicity and complexity to be successful?

      Is there an information theoretic statement that bounds this mathematically? What would it look like for various games?

    4. Cross-referencing the correct answers from previous Wordles with a body of the most commonly used English terms, Moro confirmed that Wardle chooses frequently used words in English, something the game’s inventor also pointed out in his interview with The New York Times, which mentioned that he avoided rare words.

      Wordle specifically chooses more common words which cuts back drastically on the complexity of the game.
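      The reason a word like "aeros" or "orate" works well can be phrased information-theoretically: a good first guess partitions the candidate list into many evenly sized feedback groups, so the entropy of the feedback distribution is high and the expected pruning is large. A toy sketch of that scoring (the five-word candidate pool here is hypothetical, not Wordle's real dictionary):

```python
import math
from collections import Counter

def feedback(guess, answer):
    """Wordle-style feedback per letter: 2 = green, 1 = yellow, 0 = gray."""
    result = [0] * len(guess)
    remaining = list(answer)
    for i, g in enumerate(guess):              # pass 1: exact matches
        if answer[i] == g:
            result[i] = 2
            remaining.remove(g)
    for i, g in enumerate(guess):              # pass 2: right letter, wrong spot
        if result[i] == 0 and g in remaining:
            result[i] = 1
            remaining.remove(g)
    return tuple(result)

def expected_bits(guess, candidates):
    """Entropy of the feedback distribution: expected information from a guess."""
    counts = Counter(feedback(guess, answer) for answer in candidates)
    n = len(candidates)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical candidate pool; a real solver would score against the full list.
words = ["orate", "aeros", "jazzy", "crane", "slate"]
best = max(words, key=lambda w: expected_bits(w, words))
```

      On this tiny pool, vowel-and-common-consonant words like "orate" and "aeros" split the candidates into all-distinct feedback groups, while a word like "jazzy" barely distinguishes them.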

    1. https://www.youtube.com/watch?v=z3Tvjf0buc8

      graph thinking

      • intuitive
      • speed, agility
      • adaptability

      ; graph thinking : focuses on relationships to turn data into information and uses patterns to find meaning

      property graph data model

      • relationships (connectors with verbs which can have properties)
      • nodes (have names and can have properties)

      Examples:

      • Purchase recommendations for products in real time
      • Fraud detection

      Use for dependency analysis
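      The property graph data model in the notes above (named nodes, verb-labeled relationships, properties on both) can be sketched in a few lines. This is a minimal in-memory illustration; the node and relationship names are made up, not from any particular graph database:

```python
# Minimal property graph: nodes and verb-labeled relationships,
# both carrying arbitrary key/value properties.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}    # name -> properties
        self.edges = []    # (source, verb, target, properties)

    def add_node(self, name, **props):
        self.nodes[name] = props

    def relate(self, source, verb, target, **props):
        self.edges.append((source, verb, target, props))

    def neighbors(self, name, verb=None):
        """Targets reachable from a node, optionally filtered by verb."""
        return [t for s, v, t, _ in self.edges
                if s == name and (verb is None or v == verb)]

# Illustrative use: a tiny purchase-recommendation graph.
g = PropertyGraph()
g.add_node("alice", kind="customer")
g.add_node("bob", kind="customer")
g.add_node("camera", kind="product", price=300)
g.relate("alice", "PURCHASED", "camera", when="2022-01-10")
g.relate("bob", "PURCHASED", "camera", when="2022-01-12")
```

      Recommendation and fraud-detection queries then become traversals over these verb-labeled relationships rather than joins over tables.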

  23. Dec 2021
    1. One of the most basic presuppositions of communication is that the partners can mutually surprise each other.

      A reasonably succinct summary of Claude Shannon's 1948 paper A Mathematical Theory of Communication. By 1981 it had firmly ensconced itself in the vernacular, and would have been familiar to Luhmann, as much of systems theory grew out of the prior generation's communication theory.

  24. Oct 2021
  25. Sep 2021
    1. “With whistling, it was more like, let’s see what people did naturally to simplify the signal. What did they keep?” she says.
    2. In practice, almost every whistled tonal language chooses to use pitch to encode the tones.

      Why is pitch encoding of tones more prevalent in tonal languages? What is the efficiency and outcome of the speech and the information that can be encoded?

    3. Whistlers of tonal languages thus face a dilemma: Should they whistle the tones, or the vowels and consonants? “In whistling, you can produce only one of the two. They have to choose,” says Meyer.

      Non-tonal speech is easy to transfer into whistling language, but tonal languages have to choose between whistling the tones or the vowels and consonants as one can only produce one of the two with whistling.

      What effect does this tell us about the information content and density of languages, particularly tonal languages and whistling?

  26. Aug 2021
    1. Normally, thousands of rabbits and guinea pigs are used and killed, in scientific laboratories, for experiments which yield great and tangible benefits to humanity. This war butchered millions of people and ruined the health and lives of tens of millions. Is this climax of the pre-war civilization to be passed unnoticed, except for the poetry and the manuring of the battle fields, that the “poppies blow” stronger and better fed? Or is the death of ten men on the battle field to be of as much worth in knowledge gained as is the life of one rabbit killed for experiment? Is the great sacrifice worth analysing? There can be only one answer—yes. But, if truth be desired, the analysis must be scientific.

      Idea: Neural net parameter analysis but with society as the 'neural net' and the 'training examples' things like industrial accidents, etc. How many 'training examples' does it take to 'learn' a lesson, and what can we infer about the rate of learning from these statistics?

  27. Jul 2021
    1. Here by "learning" is meant understanding more, not remembering more information that has the same degree of intelligibility as other information you already possess.

      A definition of learning here. Is this the thing that's missing from my note above?

    2. The first sense is the one in which we speak of ourselves as reading newspapers, magazines, or anything else that, according to our skill and talents, is at once thoroughly intelligible to us. Such things may increase our store of information, but they cannot improve our understanding, for our understanding was equal to them before we started. Otherwise, we would have felt the shock of puzzlement and perplexity that comes from getting in over our depth—that is, if we were both alert and honest.

      Here they're comparing reading for information and reading for understanding.

      How do these two modes relate to Claude Shannon's versions of information (surprise) and semantics (the communication) itself. Are there other pieces which exist which we're not tacitly including here? It feels like there's another piece we're overlooking.

    1. These criteria – surprise serendipity, information and inner complexity

      These criteria – surprise, serendipity, information and inner complexity – are the criteria any communication has to meet.

      An interesting thesis about communication. Note that Luhmann worked in general systems theory. I'm curious if he was working in cybernetics as well?

    2. Irritation: basically, without surprise or disappointment there’s no information. Both partners have to be surprised in some way to say communication takes place.

      This is a basic tenet of information theory. Interesting to see it appear in a work on writing.
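      In Shannon's terms, the "surprise" here is just the surprisal of an event, -log2(p): a message the receiver already expected with certainty carries zero information, while an unlikely one carries many bits. A minimal illustration:

```python
import math

def surprisal(p):
    """Information content in bits of an event with probability p."""
    return -math.log2(p)

# A certain message surprises no one and carries no information...
no_news = surprisal(1.0)        # 0.0 bits
# ...while an unlikely message carries many bits.
big_news = surprisal(1 / 1024)  # 10.0 bits
```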

  28. May 2021
    1. These “Songline” stories are ancient, exhibit little variation over long periods of time, and are carefully learned and guarded by the Elders who are its custodians [7].

      What is the best way we could test and explore error correction and overwriting in such a system from an information theoretic standpoint?
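      One simple information-theoretic model of such guarded transmission is a repetition code with majority vote: several custodians each carry a copy, and cross-checking between them corrects isolated errors. A toy simulation of that idea (the bit-string "story", error rate, and number of custodians are all illustrative assumptions):

```python
import random
from collections import Counter

def transmit(story, error_rate, rng):
    """One custodian's noisy copy: each symbol flips with some probability."""
    return [1 - s if rng.random() < error_rate else s for s in story]

def majority_vote(copies):
    """Error correction: keep the most common symbol at each position."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*copies)]

rng = random.Random(42)
story = [rng.randint(0, 1) for _ in range(200)]          # the story as bits
copies = [transmit(story, 0.05, rng) for _ in range(9)]  # nine custodians
recovered = majority_vote(copies)
```

      With nine copies and a 5% per-symbol error rate, a position is lost only if five or more custodians err on it at once, so the recovered story is almost always exact even though every individual copy is corrupted.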

  29. Apr 2021
    1. A reproduction of Carroll’s notes on his number alphabet will be found in Warren Weaver’s article “Lewis Carroll: Mathematician,” in Scientific American for April 1956.)

      I need to track down this reference and would love to see what Weaver has to say about the matter.

      Certainly Weaver would have spoken of this with Claude Shannon (or he'd have read it).

  30. Mar 2021
    1. He introduces the idea of the apophatic: what we can't put into words, but is important and vaguely understood. This term comes from Orthodox theology, where people defined god by saying what it was not.

      Too often as humans we're focused on what is immediately in front of us and not what is missing.

      This same thing plagues our science in that we're only publishing positive results and not negative results.

      From an information theoretic perspective, we're throwing away half (or more?) of the information we're generating. We might be able to go much farther much faster if we were keeping and publishing all of our results in better fashion.

      Is there a better word for this negative information? #openquestions

  31. Feb 2021
    1. The main purpose of this book is to go one step forward, not only to use the principle of maximum entropy in predicting probability distributions, but to replace altogether the concept of entropy with the more suitable concept of information, or better yet, the missing information (MI).

      The purpose of this textbook

    2. There are also a few books on statistical thermodynamics that use information theory, such as those by Jaynes, Katz, and Tribus.

      Books on statistical thermodynamics that use information theory.

      Which textbook of Jaynes is he referring to?

    3. Levine, R. D. and Tribus, M. (eds) (1979), The Maximum Entropy Principle, MIT Press, Cambridge, MA.

      A book on statistical thermodynamics that uses information theory, mentioned in Chapter 1.

    4. Katz, A. (1967), Principles of Statistical Mechanics: The Information Theory Approach, W. H. Freeman, London.

      A book on statistical thermodynamics that uses information theory.
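      Ben-Naim's "missing information" is the Shannon entropy H = -Σ p log2 p, and the maximum-entropy principle says that with no constraints beyond normalization, the uniform distribution maximizes it. A quick numeric check of that claim:

```python
import math

def missing_information(dist):
    """Shannon entropy in bits: the information needed to resolve the state."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

uniform = [0.25] * 4            # no knowledge at all: maximal MI (2 bits)
skewed = [0.7, 0.1, 0.1, 0.1]   # partial knowledge: less information missing
```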

  32. Jan 2021
  33. Nov 2020
  34. Oct 2020
    1. The notion that counting more shapes in the sky will reveal more details of the Big Bang is implied in a central principle of quantum physics known as “unitarity.” Unitarity dictates that the probabilities of all possible quantum states of the universe must add up to one, now and forever; thus, information, which is stored in quantum states, can never be lost — only scrambled. This means that all information about the birth of the cosmos remains encoded in its present state, and the more precisely cosmologists know the latter, the more they can learn about the former.
    1. Social scientists, on the other hand, have focused on which ties are more likely to bring in new information, which are primarily weak ties (Granovetter 1973), and on why weak ties bring new information (because they bridge structural holes (Burt 2001; Burt 2005)).
    1. Found reference to this in a review of Henry Quastler's book Information Theory in Biology.

      A more serious thing, in the reviewer's opinion, is the complete absence of contributions dealing with information theory and the central nervous system, which may be the field par excellence for the use of such a theory. Although no explicit reference to information theory is made in the well-known paper of W. McCulloch and W. Pitts (1943), the connection is quite obvious. This is made explicit in the systematic elaboration of the McCulloch-Pitts' approach by J. von Neumann (1952). In his interesting book J. T. Culbertson (1950) discussed possible neural mechanisms for recognition of visual patterns, and particularly investigated the problems of how greatly a pattern may be deformed without ceasing to be recognizable. The connection between this problem and the problem of distortion in the theory of information is obvious. The work of Anatol Rapoport and his associates on random nets, and especially on their applications to rumor spread (see the series of papers which appeared in this Journal during the past four years), is also closely connected with problems of information theory.

      Electronic copy available at: http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf

    1. Similar to my recent musings (link coming soon) on the dualism of matter vs information, I find that the real beauty may lie precisely in the complexity of their combination.

      There's a kernel of an idea hiding in here that I want to come back and revisit at a future date.

  35. Aug 2020
  36. Jul 2020
  37. Jun 2020
  38. May 2020
  39. Apr 2020
  40. Nov 2019
    1. The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.

      Circle back around and read this when it comes out.

      Similarly, these other references should be an interesting read as well.

  41. Sep 2019
    1. He is now intending to collaborate with Bourne on a series of articles about the find. “Having these annotations might allow us to identify further books that have been annotated by Milton,” he said. “This is evidence of how digital technology and the opening up of libraries [could] transform our knowledge of this period.”
    2. “Not only does this hand look like Milton’s, but it behaves like Milton’s writing elsewhere does, doing exactly the things Milton does when he annotates books, and using exactly the same marks,” said Dr Will Poole at New College Oxford.

      The discussion of the information theoretic idea of "hand" is interesting here, particularly as it relates to the "hand" of annotation and how it was done in other settings by the same person.

  42. Apr 2019
    1. Digital sociology needs more big theory as well as testable theory.

      Here I might posit that Cesar Hidalgo's book Why Information Grows (MIT, 2015) has some interesting theses about links between people and companies which could be extrapolated up to "societies of linked companies". What could we predict about how those will interact based on the underlying pieces? Is it possible that we see other emergent complex behaviors?

  43. Mar 2019
    1. Engelbart insisted that effective intellectual augmentation was always realized within a system, and that any intervention intended to accelerate intellectual augmentation must be understood as an intervention in a system. And while at many points the 1962 report emphasizes the individual knowledge worker, there is also the idea of sharing the context of one’s work (an idea Vannevar Bush had also described in “As We May Think”), the foundation of Engelbart’s lifelong view that a crucial way to accelerate intellectual augmentation was to think together more comprehensively and effectively. One might even rewrite Engelbart’s words above to say, “We do not speak of isolated clever individuals with knowledge of particular domains. We refer to a way of life in an integrated society where poets, musicians, dreamers, and visionaries usefully co-exist with engineers, scientists, executives, and governmental leaders.” Make your own list.
  44. Jan 2019
    1. By examining information as a product of people’s contingent choices, rather than as an impartial recording of unchanging truths, the critically information-literate student develops an outlook toward information characterized by a robust sense of agency and a heightened concern for justice.

      It seems like there's still a transfer problem here, though. There seems to be an assertion that criticality will be inherently cross-domain, but I'm not clear why that should be true. Why would the critical outlook not remain domain-specific? (To say "if it does, then it isn't critical" seems like a tautology.)

  45. Nov 2018
    1. I had begun to think of social movements’ abilities in terms of “capacities”—like the muscles one develops while exercising but could be used for other purposes like carrying groceries or walking long distances—and their repertoire of pro-test, like marches, rallies, and occupations as “signals” of those capacities.

      I find it interesting that she's using words from information theory like "capacities" and "signals" here. It reminds me of the thesis of Cesar Hidalgo's Why Information Grows and his ideas about links. While within the social milieu links may be easier to break with new modes of communication, what most protesters won't grasp, or have the time and patience for, is the recreation of new links to create new institutions for rule. As seen in many war-torn countries, this is the most difficult part. Similarly, campaigning is easy; governing is much harder.

      As an example: The US government's breaking of the links of military and police forces in post-war Iraq made their recovery process far more difficult because all those links within the social hierarchy and political landscape proved harder to reconstruct.

    1. Understanding Individual Neuron Importance Using Information Theory

      I also came across this paper a few days ago and immediately saved and downloaded it!

      Using information theory, it discusses the influence of mutual information, classification efficiency, and the like inside the network.

    2. Understanding Convolutional Neural Network Training with Information Theory

      A paper to read carefully: it uses an information-theoretic viewpoint to understand CNNs.
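      The quantity these papers study, e.g. the mutual information I(activation; label) for a single neuron, can be estimated for discrete data directly from the empirical joint distribution. A toy sketch with binarized activations and labels (an illustration, not the papers' actual estimator):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical I(X;Y) in bits for paired discrete samples."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c * n) / (px[x] * py[y]))
               for (x, y), c in joint.items())

# A neuron whose binarized activation perfectly tracks the label carries
# 1 bit about it; an independent neuron carries 0 bits.
labels      = [0, 0, 1, 1, 0, 0, 1, 1]
informative = [0, 0, 1, 1, 0, 0, 1, 1]
useless     = [0, 1, 0, 1, 0, 1, 0, 1]
```

      Ranking neurons by this quantity is one way to measure their individual importance for the classification task.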

  46. Sep 2018