19 Matching Annotations
  1. Dec 2022
    1. My freely downloadable Beginning Mathematical Logic is a Study Guide, suggesting introductory readings beginning at sub-Masters level. Take a look at the main introductory suggestions on First-Order Logic, Computability, Set Theory as useful preparation. Tackling mid-level books will help develop your appreciation of mathematical approaches to logic.

      This is a reference to a great book "Beginning Mathematical Logic: A Study Guide [18 Feb 2022]" by Peter Smith on "Teach Yourself Logic A Study Guide (and other Book Notes)". The document itself is called "LogicStudyGuide.pdf".

      It focuses on mathematical logic and can be a gateway into understanding Gödel's incompleteness theorems.

      I found this some time ago when looking for a way to grasp the difference between first-order and second-order logics. I recall enjoying his style of writing and his commentary on the books he refers to. Both recollections still remain true after rereading some of it.

      It serves both as an intro to and a recommended reading list for the following:

      - classical logics
        - first- & second-order
        - modal logics
        - model theory
      - non-classical logics
        - intuitionistic
        - relevant
        - free
        - plural
      - arithmetic, computability, and incompleteness
      - set theory (naïve and less naïve)
      - proof theory
      - algebras for logic
        - Boolean
        - Heyting/pseudo-Boolean
      - higher-order logics
        - type theory
        - homotopy type theory

    1. for settling in a finite number of steps, whether a relevant object has property P. Relatedly, the answer to a question Q is effectively decidable if and only if there is an algorithm which gives the answer, again by a deterministic computation, in a finite number of steps.

      Missing highlight from preceding page:

      A property \( P \) is effectively decidable if and only if there is an algorithm (a finite set of instructions for a deterministic computation) ...

      Isn't this related to the idea of left & right adjoints in category theory? iirc, there was something about the "canonical construction" of something X being the best solution to a particular problem Y (which had another framing like, "Problem Y is the most difficult problem for which X is a solution")

      Different thought: the Curry-Howard-Lambek correspondence connects intuitionistic logic, typed lambda calculus, and cartesian closed categories.
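      For instance (my own illustrative sketch, not from the Guide): primality is an effectively decidable property, since trial division is a finite set of instructions for a deterministic computation that settles, in a finite number of steps, whether any given \(n\) has the property.

```python
# Illustrative sketch (not from the Guide): primality as an effectively
# decidable property. Trial division is a deterministic procedure that is
# guaranteed to halt after finitely many steps for any input n.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:  # at most sqrt(n) iterations, so always finite
        if n % d == 0:
            return False
        d += 1
    return True
```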

  2. Nov 2022
    1. “In order to talk to each other, we have to have words, and that’s all right. It’s a good idea to try to see the difference, and it’s a good idea to know when we are teaching the tools of science, such as words, and when we are teaching science itself,” Feynman said.

      Maths, Logic, Computer Science, Chess, Music, and Dance

      A similar observation could be made about mathematics, logic, and computer science. Sadly, public education in the States seems to lose sight of the fact that the formalisms in these domains are merely the tools of the trade and not the trade itself (ie, developing an understanding of the fundamental/foundational notions, their relationships, their instantiations, and cultivating how one can develop the capacity to "move" in that space).

      Similarly, it's as if we tell children that they need merely memorize all the movements of chess pieces to appreciate the depth of the game.

      Or saying "Here, just memorize these disconnected contortions of the hand upon these strings along this piece of wood. Once you have that down, you've experienced all that guitar, (nay, music itself!) has to offer."

      Or "Yes, once, you internalize the words for these moves and recite them verbatim, you will have experienced all the depth and wonder that dance and movement have to offer."

      However, none of these examples are given so as to dismiss or ignore the necessity of (at least some level of) formalistic fluency within each of these domains of experience. Rather, their purpose is to highlight the parallels in other domains that may seem (at first) so disconnected from one's own experience, so far from one's fundamental way of feeling the world, that the only plausible reasons one can give to explain why people would waste their time engaging in such acts are:

      1. folly: they merely do not yet know their activities are absurd, but surely enough time will disabuse them of their foolish ways.
      2. madness: they cannot ever know the absurdity of their acts, for "the absurd" and "the astute" are but two names for one and the same thing in their world of chaos.
      3. apathy: they do in fact see the absurdity in continuing activities which give them no sense of meaning, yet their indifference insurmountably impedes them from changing course. For how could one resist the path of least resistance, a road born of habit, when one must expend energy to do so, and that energy can only come from one who cares?

      Or at least, these 3 reasons can surely seem like all there possibly could be to warrant someone continuing music, chess, dance, maths, logic, computer science, or any apparently alien craft. However, if one takes the time to speak with someone who earnestly pursues such "alien crafts", then one may start to perceive intimations of something beyond one's current impressions.

      The contorted clutching of the strings now seems... coordinated. The pensive placement of the pawns now appears... purposeful. The frantic flailing of one's feet now feels... freeing. The movements of one's mind now feel... marvelous.

      So the very activity that once seemed so clearly absurd becomes cognition and shapes perspectives beyond words.

    1. it became clear that Fermat's Last Theorem could be proven as a corollary of a limited form of the modularity theorem (unproven at the time and then known as the "Taniyama–Shimura–Weil conjecture"). The modularity theorem involved elliptic curves, which was also Wiles's own specialist area.[15][16]

      Elliptic curves are also used in Ed25519, which is purportedly more robust to side-channel attacks. Could there be some useful insight from Wiles and the modularity theorem?

    1. subspace topology

      This definition can be used to demonstrate why the following function is continuous:

      \(f: [0,2\pi) \to S^1\) where \(f(\phi)= (\cos\phi, \sin\phi)\) and \(S^1\) is the unit circle in the cartesian coordinate plane \(\mathbb{R}^2\).


      The preimage of open (in codomain) is open (in domain). Roughly, anything "close" in the codomain must have come from something "close" in the domain. Otherwise, stuff got split apart (think gaps, holes, jumps) on the way from our domain to our codomain.


      For some \(f: X \to Y\), continuity means: for any open set \(V \in \tau_Y\), the preimage \(f^{-1}(V)\) is open in \(X\). In math, \(\forall V \in \tau_Y, f^{-1}(V) \in \tau_X\)


      So for \(f: [0,2\pi) \to S^1\), we can see that \([0,2\pi)\) is open under the subspace topology. Why? Let's start with a different example.

      Claim 1: \(U_S=[0,1) \cup (2,2\pi)\) is open in \(S = [0,2\pi)\)

      We need to show that \(U_S = S \cap U_X\) for some open \(U_X \subseteq \mathbb{R}\). So we can take whatever open set of \(\mathbb{R}\) overlaps with our subspace to generate \(U_S\text{.}\)

      proof 1

      Consider \(U_X = (-1,1) \cup (2, 2\pi)\) and its intersection with \(S = [0, 2\pi)\). The overlap of \(U_X\) with \(S\) is precisely \(U_S\). That is,

      $$
      \begin{align}
      S \cap U_X &= [0, 2\pi) \cap U_X \\
      &= [0, 2\pi) \cap \bigl( (-1,1) \cup (2,2\pi) \bigr) \\
      &= \bigl( [0, 2\pi) \cap (-1,1) \bigr) \cup \bigl( [0,2\pi) \cap (2,2\pi) \bigr) \\
      &= [0, 1) \cup (2, 2\pi) \\
      &= U_S
      \end{align}
      $$
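      The claim can also be sanity-checked numerically (a quick sketch of my own; the set names match the proof above): sweep points across \([-1, 2\pi]\) and confirm that membership in \(S \cap U_X\) agrees with membership in \(U_S\) everywhere.

```python
import math

def in_S(x):    # S = [0, 2*pi)
    return 0 <= x < 2 * math.pi

def in_U_X(x):  # U_X = (-1, 1) ∪ (2, 2*pi), open in R
    return (-1 < x < 1) or (2 < x < 2 * math.pi)

def in_U_S(x):  # U_S = [0, 1) ∪ (2, 2*pi)
    return (0 <= x < 1) or (2 < x < 2 * math.pi)

# Membership in S ∩ U_X should agree with membership in U_S everywhere.
for k in range(1000):
    x = -1 + (1 + 2 * math.pi) * k / 999  # sweep [-1, 2*pi]
    assert (in_S(x) and in_U_X(x)) == in_U_S(x)
```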

    1. The random process has outcomes

      Notation of a random process that has outcomes

      The "universal set" aka "sample space" of all possible outcomes is sometimes denoted by \(U\), \(S\), or \(\Omega\): https://en.wikipedia.org/wiki/Sample_space

      Probability theory & measure theory

      From what I recall, the notation \(\Omega\) was mainly used in higher-level grad courses on probability theory. ie, when trying to frame things in probability theory as a special case of measure theory things/ideas/processes. eg, a probability space \((\Omega, \cal{F}, P)\) where \(\cal{F}\) is a \(\sigma\text{-field}\) aka \(\sigma\text{-algebra}\) and \(P\) is a probability measure defined on every element of \(\cal{F}\) with \(P(\Omega)=1.\)

      Somehow, the definition of a sigma-field captures the notion of what we want out of something that's measurable, but it's unclear to me why, so let's see where writing through this takes me.

      Working through why a sigma-algebra yields a coherent notion of measurable

      A sigma-algebra \(\cal{F}\) on a set \(\Omega\) is defined somewhat closely to the definition of a topology \(\tau\) on some space \(X\). They're both collections of subsets of the set/space of reference (ie, \(\tau \subseteq 2^X\) and \(\cal{F} \subseteq 2^\Omega\)). Also, they're both defined to contain their underlying set/space (ie, \(X \in \tau\) and \(\Omega \in \cal{F}\)).

      Additionally, they both contain the empty set but for (maybe) different reasons, definitionally. For a topology, it's simply defined to contain both the whole space and the empty set (ie, \(X \in \tau\) and \(\emptyset \in \tau\)). In a sigma-algebra's case, it's defined to be closed under complements, so since \(\Omega \in \cal{F}\) the complement must also be in \(\cal{F}\)... but the complement of the universal set \(\Omega\) is the empty set, so \(\emptyset \in \cal{F}\).

      I think this might be where the similarity ends, since a topology need not be closed under complements (but probably has a special property when it is, although I'm not sure what; oh wait, the complement of open is closed in topology, so it'd be clopen! Not sure what this would really entail though 🤷‍♀️). Moreover, a topology is closed under arbitrary unions (which includes uncountable ones), but a sigma-algebra is closed under countable unions. Hmm... Maybe this restriction to countable unions is what gives a coherent notion of being measurable? I suspect it also has to do with the Banach-Tarski paradox. ie, cutting a sphere into 5 pieces and rearranging them in a clever way so that you get 2 spheres that each have the volume of the original sphere; I mean, WTF, if 1 sphere's volume equals the volume of 2 spheres, then we're definitely not able to measure stuff any more.

      And now I'm starting to vaguely recall that this is what sigma-fields essentially outlaw/ban from being possible. It's also related to something important in measure theory called the Lebesgue measure, although I'm not really sure what that is (something about doing a Riemann-style integral but picking the partition on the y-axis/codomain instead of on the x-axis/domain, maybe?)

      And with that, I think I've got some intuition about how fundamental sigma-algebras are to letting us handle probability and uncertainty.

      Back to probability theory

      So then events like \(E_1\) and \(E_2\) are elements of the set of subsets, \(\cal{F}\), of the possibility space \(\Omega\). Like, maybe \(\Omega\) is the set of all possible outcomes of rolling 2 dice, but \(E_1\) could be a simple event (ie, just one outcome like rolling a total of 2) while \(E_2\) could be a compound event (ie, more than one, like rolling an even total). Notably, \(E_1\) & \(E_2\) are NOT elements of the sample space \(\Omega\); they're elements of the powerset of our possibility space (ie, the set of all possible subsets of \(\Omega\), denoted by \(2^\Omega\)). So maybe this explains why "closed under complements" is needed; if you can roll a 2, you should also be able to NOT roll a 2. And the property that a sigma-algebra must "contain the whole space" might be what's needed to give rise to a notion of a complete measure (conjecture about complete measures: everything in the measurable space can be assigned a value where that part of the measurable space does, in fact, represent some constitutive part of the whole).
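      The two-dice example can be made concrete (a quick sketch of my own, with a uniform measure assumed for fair dice):

```python
from fractions import Fraction
from itertools import product

# Sample space Ω for rolling two fair dice: all ordered pairs of faces.
omega = set(product(range(1, 7), repeat=2))

# Events are subsets of Ω, i.e., elements of the power set 2^Ω.
E1 = {(1, 1)}                                      # simple: rolling a total of 2
E2 = {w for w in omega if (w[0] + w[1]) % 2 == 0}  # compound: an even total

# Uniform probability measure: P(E) = |E| / |Ω|, so P(Ω) = 1.
def P(event):
    return Fraction(len(event), len(omega))

# Closure under complements: "NOT rolling a total of 2" is also an event.
not_E1 = omega - E1
assert P(E1) + P(not_E1) == 1
```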

      But what about these "random events"?

      Ah, so that's where random variables come into play (and probably why in probability theory they prefer to use \(\Omega\) for the sample space instead of \(X\) like a base space in topology). There's a function, that is, a mapping from outcomes of this "random event" (eg, a roll of 2 dice) to a space in which we can associate (ie, assign) a sense of distance (ie, our sigma-algebra). What confuses me is that we see things like "\(P(X=x)\)" which we interpret as "the probability that our random variable, \(X\), ends up being some particular outcome \(x\)." But it's also said that \(X\) is a real-valued function, ie, it takes some arbitrary elements (eg, events like rolling an even number) and assigns them a real number (ie, some \(x \in \mathbb{R}\)).

      Aha! I think I recall the missing link: the notation "\(X=x\)" is really a shorthand for "\(X(\omega)=x\)" where \(\omega \in \Omega\). But something that still feels unreconciled is that our probability measure, \(P\), is just taking some real value to another real value... So which one is our sigma-algebra, the inputs of \(P\) or the inputs of \(X\)? 🤔 Hmm... Well, \(X\text{'s}\) input is a small omega \(\omega\), an element of big omega \(\Omega\) (based on the convention of small notation being elements of big notation), so \(X\text{'s}\) domain has to be \(\Omega\) itself, and the sigma-algebra must be where \(P\text{'s}\) inputs live: events like \(\{\omega : X(\omega) = x\} \in \cal{F}\)?

      Let's try to generate a plausible example of this in action... Maybe something with an inequality like "\(X\ge 1\)". Okay, yeah, how about \(X\) is a random variable for the random process of how long it takes a customer to get through a grocery line. So \(X\) is mapping the outcomes in our sample space (ie, what customers actually end up experiencing in the real world) into a subset of the reals, namely \([0,\infty)\) because their time in line could be 0 minutes or infinitely many minutes (geesh, 😬 what a life that would be, huh?). Okay, so then I can ask a question like "What's the probability that \(X\) takes on a value greater than or equal to 1 minute?" which I think translates to "\(P\left(X(\omega)\ge 1\right)\)" and which is really attempting to model this whole "random event" of "What's gonna happen to a particular person on average?"
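      That grocery-line scenario can be simulated (my own sketch; the exponential waiting-time model and its 2-minute mean are assumptions purely for illustration, not anything from the source):

```python
import random

random.seed(0)

# Assumed model (illustrative only): X(ω) is a customer's wait in minutes,
# drawn from an exponential distribution with mean 2 minutes.
def X(omega):
    return random.expovariate(1 / 2)

# Estimate P(X >= 1) by sampling many outcomes ω.
samples = [X(None) for _ in range(100_000)]
p_est = sum(x >= 1 for x in samples) / len(samples)
# under this model the exact value is e^(-1/2) ≈ 0.6065
```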

      So this makes me wonder... Is this fact that \(X\) can model this "random event" (at all) what people mean when they say something is a stochastic model? That there's a probability distribution it generates which affords us some way of navigating the uncertainty of the "random event"? If so, then sigma-algebras seem to serve as a kind of gateway and/or foundation for specific cognitive practices (ie, learning to think & reason probabilistically) that afford us a way out of being overwhelmed by our anxiety or fear and can help us reclaim some agency and autonomy in situations with uncertainty.

    1. the moments of a function are quantitative measures related to the shape of the function's graph

      Vaguely recall these "uniquely determined" some (but not all) functions. Later on, the article says all moments from \(0\) to \(\infty\) do uniquely determine bounded functions. Guess you can't judge a book (or graph) by its cover; you have to wait moment by moment for it to reveal itself.
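      A quick numeric sketch of my own (midpoint-rule quadrature; the uniform density is just a convenient test case): the \(n\)th raw moment of a density \(f\) on \([a,b]\) is \(\int_a^b x^n f(x)\,dx\).

```python
# nth raw moment of a density f on [a, b], via a simple midpoint rule.
def moment(f, n, a, b, steps=100_000):
    h = (b - a) / steps
    return sum(((a + (k + 0.5) * h) ** n) * f(a + (k + 0.5) * h) * h
               for k in range(steps))

# Uniform density on [0, 1]: the nth raw moment is 1/(n + 1).
uniform = lambda x: 1.0
first = moment(uniform, 1, 0, 1)   # ≈ 1/2
second = moment(uniform, 2, 0, 1)  # ≈ 1/3
```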

  3. May 2022
  4. Mar 2022
    1. Pierre Bézier (Renault), a French engineer and one of the founders of the field of solid, geometric and physical modelling, and Paul de Casteljau (Citroën), a French mathematician and physicist, developed an algorithm to calculate a family of curves. These curves are named Bézier curves, while the algorithm is named after de Casteljau: De Casteljau's algorithm. The algorithm and the Bézier curves are used in almost all graphic tools. Before their invention, software could not understand a shape unless it was a circle, a parabola or a basic line. The availability of hardware that could machine complex 3-D shapes and the lack of software that could communicate the specifics of those shapes created a gap. Bézier curves solved this issue. They were used in creating the design of body parts of Renault and Peugeot cars as early as the 1960s.
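      The algorithm itself is short: repeatedly interpolate linearly between successive control points until a single point remains (a minimal sketch of my own, using a quadratic curve as the example):

```python
# De Casteljau's algorithm: evaluate a Bézier curve at parameter t by
# repeated linear interpolation between successive control points.
def de_casteljau(points, t):
    pts = [tuple(map(float, p)) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bézier with control points (0,0), (1,2), (2,0):
# at t = 0.5 the curve passes through its apex (1.0, 1.0).
```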
  5. Jan 2022
  6. Dec 2021
  7. Apr 2021
    1. Mathematical explanations are fundamentally different because no part of a mathematical system can be otherwise than it is given without changing the entire system as a whole.

      Why mathematical statements are not causal

  8. May 2019
  9. Nov 2016
    1. Online tutoring, with its growth in popularity all over the world, has become an excellent place for job seekers with mastery of almost any subject, academic or otherwise, such as music and arts. Click on the website of any online tutoring company in the UK and you will find that they require teachers for any subject you know. However, the demand for tutors varies from subject to subject.

  10. Feb 2016