9 Matching Annotations
  1. Nov 2022
    1. subspace topology

      This definition can be used to demonstrate why the following function is continuous:

      \(f: [0,2\pi) \to S^1\) where \(f(\phi)= (\cos\phi, \sin\phi)\) and \(S^1\) is the unit circle in the cartesian coordinate plane \(\mathbb{R}^2\).

      Intuition

      The preimage of open (in codomain) is open (in domain). Roughly, anything "close" in the codomain must have come from something "close" in the domain. Otherwise, stuff got split apart (think gaps, holes, jumps) on the way from our domain to our codomain.

      Formalism

      For some \(f: X \to Y\), \(f\) is continuous if, for any open set \(V \in \tau_Y\), the preimage \(f^{-1}(V)\) is open in \(X\). In math, \(\forall V \in \tau_Y, f^{-1}(V) \in \tau_X.\) (Note the direction: this is about preimages of open sets being open, not about finding a \(U\) whose image is \(V\).)
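
      To make the "preimage of open is open" idea concrete for \(f(\phi)=(\cos\phi,\sin\phi)\), here's a minimal Python sketch. The choice of open set \(V\) (the right half of the circle, \(x>0\)) and the sampling grid are my own for illustration; the expected preimage \([0,\pi/2)\cup(3\pi/2,2\pi)\) follows from where cosine is positive.

```python
import math

def f(phi):
    """Map an angle in [0, 2*pi) to a point on the unit circle S^1."""
    return (math.cos(phi), math.sin(phi))

def in_V(point):
    """Open right half of S^1: points with x > 0 (open in the subspace topology)."""
    return point[0] > 0

def in_expected_preimage(phi):
    """The preimage f^{-1}(V) should be [0, pi/2) U (3*pi/2, 2*pi)."""
    return 0 <= phi < math.pi / 2 or 3 * math.pi / 2 < phi < 2 * math.pi

# Sample the domain (offset by half a step so we never land exactly on a
# boundary angle) and check that membership agrees pointwise.
samples = [(k + 0.5) * 2 * math.pi / 1000 for k in range(1000)]
assert all(in_V(f(phi)) == in_expected_preimage(phi) for phi in samples)
```

      The sampled check isn't a proof, but it shows the preimage is exactly the union of a half-open interval at 0 and an open interval near \(2\pi\), the same shape as the relatively open sets discussed below.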

      Demonstration

      So for \(f: [0,2\pi) \to S^1\), we can see that \([0,2\pi)\) is open under the subspace topology. Why? Let's start with a different example.

      Claim 1: \(U_S=[0,1) \cup (2,2\pi)\) is open in \(S = [0,2\pi)\)

      We need to show that \(U_S = S \cap U_X\) for some open set \(U_X \subseteq \mathbb{R}\). So we can take any open set of \(\mathbb{R}\) whose overlap with our subspace is \(U_S\text{.}\)

      proof 1

      Consider \(U_X = (-1,1) \cup (2, 2\pi)\) and its intersection with \(S = [0, 2\pi)\). The overlap of \(U_X\) with \(S\) is precisely \(U_S\). That is,

      $$ \begin{align} S \cap U_X &= [0, 2\pi) \cap U_X \\ &= [0, 2\pi) \cap \bigl( (-1,1) \cup (2,2\pi) \bigr) \\ &= \bigl( [0, 2\pi) \cap (-1,1) \bigr) \cup \bigl( [0,2\pi) \cap (2,2\pi)\bigr) \\ &= [0, 1) \cup (2, 2\pi) \\ &= U_S \end{align} $$
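
      The set identity above can be spot-checked numerically. A small Python sketch (the sampling grid is an arbitrary choice of mine), where each predicate encodes membership in one of the sets from the proof:

```python
import math

def in_S(x):
    """The subspace S = [0, 2*pi)."""
    return 0 <= x < 2 * math.pi

def in_U_X(x):
    """The open set in R: (-1, 1) U (2, 2*pi)."""
    return -1 < x < 1 or 2 < x < 2 * math.pi

def in_U_S(x):
    """The claimed relatively open set: [0, 1) U (2, 2*pi)."""
    return 0 <= x < 1 or 2 < x < 2 * math.pi

# Check x in (S intersect U_X) iff x in U_S over a dense grid of reals.
samples = [-2 + k * 0.001 for k in range(10000)]  # covers [-2, 8)
assert all((in_S(x) and in_U_X(x)) == in_U_S(x) for x in samples)
```

      Again not a proof, just a mechanical check that the intersection computed in the align block has the claimed shape.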

    1. The random process has outcomes

      Notation of a random process that has outcomes

      The "universal set" aka "sample space" of all possible outcomes is sometimes denoted by \(U\), \(S\), or \(\Omega\): https://en.wikipedia.org/wiki/Sample_space

      Probability theory & measure theory

      From what I recall, the notation, \(\Omega\), was mainly used in higher-level grad courses on probability theory. ie, when framing probability theory as a special case of measure theory. eg, a probability space, \((\Omega, \cal{F}, P)\) where \(\cal{F}\) is a \(\sigma\text{-field}\) aka \(\sigma\text{-algebra}\) and \(P\) is a probability measure that assigns a probability to each element of \(\cal{F}\), with \(P(\Omega)=1.\)

      Somehow, the definition of a sigma-field captures the notion of what we want out of something that's measurable, but it's unclear to me why so let's see where writing through this takes me.

      Working through why a sigma-algebra yields a coherent notion of measurable

      A sigma-algebra \(\cal{F}\) on a set \(\Omega\) is defined somewhat like a topology \(\tau\) on some space \(X\). They're both collections of subsets of the set/space of reference (ie, \(\tau \subseteq 2^X\) and \(\cal{F} \subseteq 2^\Omega\)). Also, they're both defined to contain their underlying set/space (ie, \(X \in \tau\) and \(\Omega \in \cal{F}\)).

      Additionally, they both contain the empty set but for (maybe) different reasons, definitionally. For a topology, it's simply defined to contain both the whole space and the empty set (ie, \(X \in \tau\) and \(\emptyset \in \tau\)). In a sigma-algebra's case, it's defined to be closed under complements, so since \(\Omega \in \cal{F}\) the complement must also be in \(\cal{F}\)... but the complement of the universal set \(\Omega\) is the empty set, so \(\emptyset \in \cal{F}\).

      I think this might be where the similarity ends, since a topology need not be closed under complements (but probably has a special property when it is, although I'm not sure what; oh wait, the complement of open is closed in topology, so it'd be clopen! Not sure what this would really entail though 🤷‍♀️). Moreover, a topology is closed under arbitrary unions (which includes uncountable ones), but a sigma-algebra is only required to be closed under countable unions. Hmm... Maybe this restriction to countable unions is what gives a coherent notion of being measurable? I suspect it also has to do with the Banach-Tarski paradox. ie, cutting a sphere into 5 pieces and rearranging them in a clever way so that you get 2 spheres that each have the volume of the original; I mean, WTF, if 1 sphere's volume equals the volume of 2 spheres, then we're definitely not able to measure stuff any more.
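
      On a finite set, the sigma-algebra axioms can be checked mechanically (closure under countable unions reduces to closure under pairwise unions there). Here's a minimal Python sketch; the toy ground set and candidate collections are hypothetical examples of mine:

```python
def is_sigma_algebra(omega, F):
    """Check the sigma-algebra axioms for a collection F of frozensets
    over a finite ground set omega: contains omega, closed under
    complements, and closed under (pairwise, hence finite) unions."""
    if omega not in F:
        return False
    for A in F:
        if omega - A not in F:      # closed under complements
            return False
    for A in F:
        for B in F:
            if A | B not in F:      # closed under unions
                return False
    return True

omega = frozenset({1, 2, 3, 4})
trivial = {frozenset(), omega}                                # smallest sigma-algebra
split   = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), omega}
not_one = {frozenset(), frozenset({1, 2}), omega}             # missing the complement {3, 4}

assert is_sigma_algebra(omega, trivial)
assert is_sigma_algebra(omega, split)
assert not is_sigma_algebra(omega, not_one)
```

      The failing example shows exactly the "closed under complements" requirement in action: \(\{1,2\}\) is in the collection but its complement \(\{3,4\}\) is not.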

      And now I'm starting to vaguely recall that this is what sigma-fields essentially outlaw/ban from being possible. It's also related to something important in measure theory called the Lebesgue measure, although I'm not really sure what that is (something about doing a Riemann-style integral but picking the partition on the y-axis/codomain instead of on the x-axis/domain, maybe?)

      And with that, I think I've got some intuition about how fundamental sigma-algebras are to letting us handle probability and uncertainty.

      Back to probability theory

      So then events like \(E_1\) and \(E_2\) are elements of the collection of subsets, \(\cal{F}\), of the possibility space \(\Omega\). Like, maybe \(\Omega\) is the set of all possible outcomes of rolling 2 dice, but \(E_1\) could be a simple event (ie, just one outcome, like rolling a total of 2) while \(E_2\) could be a compound(?) event (ie, more than one, like rolling an even total). Notably, \(E_1\) & \(E_2\) are NOT elements of the sample space \(\Omega\); they're elements of the powerset of our possibility space (ie, the set of all possible subsets of \(\Omega\), denoted by \(2^\Omega\)). So maybe this explains why the "closed under complements" property is needed; if you can roll a 2, you should also be able to NOT roll a 2. And the property that a sigma-algebra must "contain the whole space" might be what's needed to give rise to a notion of a complete measure (conjecture about complete measures: everything in the measurable space can be assigned a value where that part of the measurable space does, in fact, represent some constitutive part of the whole).
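
      The two-dice example can be made concrete in a few lines of Python. This is a sketch of my own construction: outcomes are ordered pairs, and events are subsets of the sample space, not elements of it.

```python
from itertools import product

# Sample space for rolling two dice: 36 ordered outcomes (a, b).
omega = set(product(range(1, 7), repeat=2))
assert len(omega) == 36

# Events are SUBSETS of omega (elements of its power set), not outcomes.
E1 = {(1, 1)}                                          # simple event: total of 2
E2 = {(a, b) for (a, b) in omega if (a + b) % 2 == 0}  # compound event: even total

# "Closed under complements": if E1 is an event, so is "NOT rolling a 2".
not_E1 = omega - E1
assert E1 | not_E1 == omega and E1 & not_E1 == set()
assert len(E2) == 18  # both dice even, or both odd: 3*3 + 3*3 outcomes
```

      Note that `E1` is a one-element set, which is different from the bare outcome `(1, 1)` itself; that distinction is exactly the "elements of \(2^\Omega\), not of \(\Omega\)" point above.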

      But what about these "random events"?

      Ah, so that's where random variables come into play (and probably why in probability theory they prefer to use \(\Omega\) for the sample space instead of \(X\) like a base space in topology). There's a function, that is, a mapping from outcomes of this "random event" (eg, a roll of 2 dice) to a space in which we can associate (ie, assign) a sense of measure (via our sigma-algebra). What confuses me is that we see things like "\(P(X=x)\)" which we interpret as "the probability that our random variable, \(X\), ends up being some particular outcome \(x\)." But it's also said that \(X\) is a real-valued function, ie, it takes some arbitrary elements (eg, events like rolling an even number) and assigns them a real number (ie, some \(x \in \mathbb{R}\)).

      Aha! I think I recall the missing link: the notation "\(X=x\)" is really a shorthand for "\(X(\omega)=x\)" where \(\omega \in \Omega\). But something that still feels unreconciled is that our probability measure, \(P\), seems to just take some real value to another real value... So which one is our sigma-algebra, the inputs of \(P\) or the inputs of \(X\)? 🤔 Hmm... Well, \(X\text{'s}\) input is a small omega \(\omega\) (an element of big omega \(\Omega\), following the convention that small notation denotes elements of big notation), so \(X\text{'s}\) domain must be the sample space \(\Omega\) itself; then it must be the inputs of \(P\) — the events — that make up the sigma-algebra?

      Let's try to generate a plausible example of this in action... Maybe something with an inequality like "\(X\ge 1\)". Okay, yeah, how about \(X\) is a random variable for the random process of how long it takes a customer to get through a grocery line. So \(X\) is mapping the elements of our sample space (ie, what customers actually end up experiencing in the real world) into a subset of the reals, namely \([0,\infty)\) because their time in line could be 0 minutes or arbitrarily many minutes (geesh, 😬 what a life that would be, huh?). Okay, so then I can ask a question like "What's the probability that \(X\) takes on a value greater than or equal to 1 minute?" which I think translates to "\(P\left(X(\omega)\ge 1\right)\)", ie, the probability of the event \(\{\omega : X(\omega)\ge 1\}\), which is really attempting to model this whole "random event" of "What's gonna happen to a particular person on average?"
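
      The grocery-line example can be sketched with a tiny finite model. Everything here is hypothetical and mine: a five-customer sample space, made-up checkout times, and a uniform probability measure; the point is only that "\(P(X \ge 1)\)" unwinds to the probability of a set of outcomes.

```python
from fractions import Fraction

# Toy finite sample space: five hypothetical customers (the outcomes omega).
omega = ["ann", "bob", "cam", "dee", "eli"]

# The random variable X: Omega -> [0, inf), each customer's time in line (minutes).
X = {"ann": 0.5, "bob": 1.0, "cam": 2.5, "dee": 0.8, "eli": 3.0}

# A probability measure P on events (subsets of omega); here, uniform.
def P(event):
    return Fraction(len(event), len(omega))

# "P(X >= 1)" is shorthand for P({omega_i : X(omega_i) >= 1}).
event = {w for w in omega if X[w] >= 1}
assert event == {"bob", "cam", "eli"}
assert P(event) == Fraction(3, 5)
```

      Notice the division of labor: \(X\) consumes sample points, while \(P\) consumes events (sets of sample points), which matches the resolution of the sigma-algebra question above.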

      So this makes me wonder... Is this fact that \(X\) can model this "random event" (at all) what people mean when they say something is a stochastic model? That there's a probability distribution it generates which affords us some way of dealing with navigating the uncertainty of the "random event"? If so, then sigma-algebras seem to serve as a kind of gateway and/or foundation into specific cognitive practices (ie, learning to think & reason probabilistically) that affords us a way out of being overwhelmed by our anxiety or fear and can help us reclaim some agency and autonomy in situations with uncertainty.

  2. Jul 2022
    1. most people need to talk out an idea in order to think about it.

      D. J. Levitin, The organized mind: thinking straight in the age of information overload. New York, N.Y: Dutton, 2014. #books/wanttoread

      A general truism in my experience, but I'm curious what else Levitin has to say on this subject.

  3. Sep 2021
    1. One last resource for augmenting our minds can be found in other people’s minds. We are fundamentally social creatures, oriented toward thinking with others. Problems arise when we do our thinking alone — for example, the well-documented phenomenon of confirmation bias, which leads us to preferentially attend to information that supports the beliefs we already hold. According to the argumentative theory of reasoning, advanced by the cognitive scientists Hugo Mercier and Dan Sperber, this bias is accentuated when we reason in solitude. Humans’ evolved faculty for reasoning is not aimed at arriving at objective truth, Mercier and Sperber point out; it is aimed at defending our arguments and scrutinizing others’. It makes sense, they write, “for a cognitive mechanism aimed at justifying oneself and convincing others to be biased and lazy. The failures of the solitary reasoner follow from the use of reason in an ‘abnormal’ context’” — that is, a nonsocial one. Vigorous debates, engaged with an open mind, are the solution. “When people who disagree but have a common interest in finding the truth or the solution to a problem exchange arguments with each other, the best idea tends to win,” they write, citing evidence from studies of students, forecasters and jury members.

      Thinking in solitary can increase one's susceptibility to confirmation bias. Thinking in groups can mitigate this.

      How might keeping one's notes in public potentially help fight against these cognitive biases?

      Is having a "conversation in the margins" with an author using annotation tools like Hypothes.is a way to help mitigate this sort of cognitive bias?

      At the far end of the spectrum how do we prevent this social thinking from becoming groupthink, or the practice of thinking or making decisions as a group in a way that discourages creativity or individual responsibility?

  4. Oct 2020
    1. Digital texts embody the intersections between history and biography that Mills (1959) thought inherent to understanding social relations. Content from my blog is a ready example. I have access to the entire data set. I can track its macro discursive moments to action, space, and place. And I can consider it as a reflexive sociological practice. In this way, I have used my digital texts as methodologists use autoethnographies: reflexive, critical practices of social relationship.

      I wonder a bit about applying behavioral economics or areas like the System 1/System 2 framework of D. Kahneman and A. Tversky to social media as well. Some (a majority?) use Twitter for immediate knee-jerk reactions to content they're reading and interacting with, in a very System 1 sense, while others use the longer-form writing and analysis seen in the blogosphere to create a System 2 sort of social thinking.

      This naturally needs to be cross-referenced with people's time and ability to consume these things, and with the reactions and dopamine responses they provoke. Most people are apt to read the shorter-form writing because it's easier and takes less time and effort compared with longer-form writing, which requires far more cognitive load and time expenditure.

  5. Jun 2020
    1. But tagging, alone, is still not good enough. Even our many tags become useless if/when their meaning changes (in our minds) by the time we go retrieve the data they point to. This could be years after we tagged something. Somehow, whether manually or automatically, we need agents and tools to help us keep our tags updated and relevant.

      Search engines can usually surface information faster (with less cognitive load than recalling what and where you stored something) than you can retrieve it from your second brain (info is abundant, so you can always retrieve it from an external source in a JIT fashion).

  6. May 2020
    1. these words bring up all kinds of questions

      some thoughts when skimming through stream-of-consciousness journals like these

      if I want to absorb the information and "learn" faster, then reading faster or summarising the text is not the solution, because a text is already a compressed, lossy encoding of the initial thought. to compress it further before transferring it into my head would risk losing too many bits of information.

  7. Jul 2019
    1. A systematic analysis of my public writing makes the case that as academics are increasingly called to “publicly engage,” we have not fully conceptualized or counted the costs of public writing from various social locations.
  8. Jun 2019