31 Matching Annotations
  1. Jan 2024
    1. Christian Lawson-Perfect (@christianp@mathstodon.xyz): @liseo there are lots of ways of representing colours numerically. The most basic way that computers use is to use a number between 0 and 255 for each of the red, green and blue components, called RGB encoding. The problem with that is that colours that look close to each other don't necessarily have close RGB values. There are other colour spaces which try to get closer to the ideal of having similar colours close together. Oklab, which I use in this tool, is currently the best for that.

      https://mastodon.social/@christianp@mathstodon.xyz/111759984202211741

      Is there a way to mathematically encode colors, similar to RGB perhaps, such that the colors in nearby neighborhoods all have values close to each other?
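
      A minimal sketch of the contrast in Python. The Oklab conversion constants below are Björn Ottosson's published coefficients as I recall them, so treat the exact numbers as something to double-check rather than a definitive implementation:

      ```python
      import math

      def srgb_to_linear(c: float) -> float:
          """Undo the sRGB gamma curve (input scaled to 0..1)."""
          return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

      def rgb_to_oklab(r: int, g: int, b: int):
          """Convert 8-bit sRGB to Oklab (L, a, b)."""
          r, g, b = (srgb_to_linear(v / 255) for v in (r, g, b))
          l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
          m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
          s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
          l, m, s = (v ** (1 / 3) for v in (l, m, s))
          return (
              0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s,
              1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s,
              0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s,
          )

      # Two colour pairs with identical RGB distance (64) need not be equally
      # far apart perceptually; Euclidean distance in Oklab reflects that better.
      pairs = [((0, 0, 255), (0, 64, 255)), ((0, 255, 0), (64, 255, 0))]
      for c1, c2 in pairs:
          print(math.dist(c1, c2), math.dist(rgb_to_oklab(*c1), rgb_to_oklab(*c2)))
      ```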

  2. Oct 2023
  3. Sep 2023
    1. For note makers who find themselves creating an unwieldy amount of so-called "orphan notes," the folgezettel sounds the alarm. When faced with a sea of parents without children (9A 9B 9C 9D 9E, etc) it makes these "empty nesters" all the more apparent as the note gets added to the stack.

      There's an interesting dichotomy which seems to be arising here. It's almost as if he's defining a folgezettel note in opposition to orphaned notes, most often seen in digital settings when importing lots of "stuff" but which Doto indicates can happen in analog systems as well.

      Orphaned notes in an analog space, however, are still linked by proximity even though they're not as densely linked (even from a mathematical topology perspective).

  4. Aug 2023
    1. The books were, and in part still are, arranged according to his principle of »good neighborliness« (»gute Nachbarschaft«) and explicitly followed his subjective research interests.

      Warburg's zettelkasten does not appear to be a simple bibliographic classification system according to Steiner. He indicates that the books in Warburg's library are arranged according to Warburg's idea of »guten Nachbarschaft« or "good neighborliness" whereby they followed his subjective interests, an ordering that is reflected in the labels of his note boxes and the various tabs which subsection the notes within them.

    1. "But there's a very famous theorem in topology called the Jordan curve theorem. You have a plane and on it a simple curve that doesn't intersect and closes—in other words, a loop. There's an inside and an outside to the loop." As Riehl draws this, it seems obvious enough, but here's the problem: No matter how much your intuition tells you that there must be an inside and an outside, it's very hard to prove mathematically that this holds true for any loop that can be drawn.

      How does one concretely define "inside" and "outside"? This definition is part of the missing space between the intuition and the mathematical proof.
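
      One concrete computational stand-in for "inside" is the even-odd (ray casting) rule; a minimal sketch, handling polygonal loops only (arbitrary curves are exactly where the hard part of the theorem lives):

      ```python
      def inside(point, polygon):
          """Even-odd rule: cast a ray to the right and count edge crossings.

          `polygon` is a list of (x, y) vertices of a simple closed loop.
          An odd number of crossings means `point` is inside; even means outside.
          """
          x, y = point
          crossings = 0
          for i in range(len(polygon)):
              (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % len(polygon)]
              # Count the edge if it straddles the horizontal line through `point`
              # and the crossing lies strictly to the right of `point`.
              if (y1 > y) != (y2 > y):
                  x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                  if x_cross > x:
                      crossings += 1
          return crossings % 2 == 1

      square = [(0, 0), (4, 0), (4, 4), (0, 4)]
      print(inside((2, 2), square))   # True: the loop separates inside from outside
      print(inside((5, 2), square))   # False
      ```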

  5. May 2023
  6. Apr 2023
    1. Similarly, you must give up the assumption that there are privileged places, notes of special and knowledge-ensuring quality. Each note is just an element that gets its value from being a part of a network of references and cross-references in the system. A note that is not connected to this network will get lost in the Zettelkasten, and will be forgotten by the Zettelkasten.

      This section is almost exactly the same as Umberto Eco's description of a slip box practice:

      No piece of information is superior to any other. Power lies in having them all on file and then finding the connections. There are always connections; you have only to want to find them. -- Umberto Eco. Foucault's Pendulum

      See: https://hypothes.is/a/jqug2tNlEeyg2JfEczmepw


      Interestingly, these structures map reasonably well onto Paul Baran's 1964 graphs of centralized, decentralized, and distributed systems.

      The subject heading based filing system looks and functions a lot like a centralized system where the center (on a per topic basis) is the subject heading or topical category and the notes related to that section are filed within it. Luhmann's zettelkasten has the feel of a mixture of the decentralized and distributed graphs, but each sub-portion has its own topology. The index is decentralized in nature, while the bibliographical section/notes are all somewhat centralized in form.

      Cross reference:
      Baran, Paul. “On Distributed Communications: I. Introduction to Distributed Communications Networks.” Research Memoranda. Santa Monica, California: RAND Corporation, August 1964. https://doi.org/10.7249/RM3420.

  7. Feb 2023
    1. Folgezettel

      Do folgezettel in combination with an index help to prevent over-indexing behaviors? Or the scaling problem of categorization in a personal knowledge management space?

      Where do subject headings within a zettelkasten dovetail with the index? Where do they help relieve the idea of heavy indexing or tagging? How are the neighborhoods of ideas involved in keeping a sense of closeness while still allowing density of ideas and information?

      Having digital search views into small portions of neighborhoods like gxabbo suggested can be a fantastic affordance. see: https://hypothes.is/a/W2vqGLYxEe2qredYNyNu1A

      For example, consider an anthropology student who intends to spend a lifetime in the subject and its many sub-areas. If they begin smartly tagging things with anthropology as they start, the category, its tags, and the ideas within their index will eventually grow without bound, to the point that their value as a search affordance within the zettelkasten (digital or analog) becomes utterly useless. Let's say they fix part of the issue by sub-categorizing pieces into cultural anthropology, biological anthropology, linguistic anthropology, archaeology, etc. This works fine while they're in undergraduate or graduate school, but eventually, as they specialize, these areas too will become overwhelming in terms of search and the search results. This problem can continue ad infinitum for areas and sub-areas. So how can one solve it?

      Is a living and concatenating index the solution? The index can have anthropology with sub-areas listed with pointers to the beginnings of threads of thought in these areas which will eventually create neighborhoods of these related ideas.

      The solution is far easier when the ideas are done top-down after-the-fact like in the Dewey Decimal System when the broad areas are preknown and pre-delineated. But in a Luhmann-esque zettelkasten, things grow from the bottom up and thus present different difficulties from a scaling up perspective.

      How do we classify first, second, and third order effects which emerge out of the complexity of a zettelkasten?

      - Sparse indexing can be a useful long term affordance in the second or third order space.
      - Combinatorial creativity and ideas of serendipity emerge out of at least the third order.
      - Using ZK for writing is a second order affordance
      - Storage is a first order affordance
      - Memory is a first order affordance (related to storage)
      - Productivity is a second+ order (because solely spending the time to save and store ideas is a drag at the first order and doesn't show value until retrieval at a later date).
      - Poor organization can be non-affordance or deterrent which results in a scrap heap
      - lack of a reason why can be a non-affordance or deterrence as well
      - cross reference this list and continue on with other pieces and affordances

    1. One of the problems in approaching quantum gravity is the choice for how to best represent it mathematically. Most of quantum mechanics is algebraic in nature but gravity has a geometry component which is important. (restatement)


      This is similar to the early 20th century problem of how to best represent quantum mechanics: as differential equations or using group theory/Lie algebras?

      This prompts the question: what other potential representations might also work?

      Could it be better understood/represented using algebraic geometry or algebraic topology as perspectives?

      [handwritten notes from 2023-02-02]

  8. Dec 2022
    1. Simplify network topology by connecting only one end-device to the TP-Link device. DO NOT connect any other devices like modem or server because those may have an impact on the running of web-based management server of the TP-Link or your end-device.
  9. Nov 2022
    1. subspace topology

      This definition can be used to demonstrate why the following function is continuous:

      \(f: [0,2\pi) \to S^1\) where \(f(\phi)= (\cos\phi, \sin\phi)\) and \(S^1\) is the unit circle in the cartesian coordinate plane \(\mathbb{R}^2\).

      Intuition

      The preimage of open (in codomain) is open (in domain). Roughly, anything "close" in the codomain must have come from something "close" in the domain. Otherwise, stuff got split apart (think gaps, holes, jumps) on the way from our domain to our codomain.

      Formalism

      For some \(f: X \to Y\), \(f\) is continuous if for any open set \(V \in \tau_Y\), its preimage under \(f\) is an open set in \(\tau_X\). In math, \(\forall V \in \tau_Y,\ f^{-1}(V) \in \tau_X\).

      Demonstration

      So for \(f: [0,2\pi) \to S^1\), we can see that \([0,2\pi)\) is open under the subspace topology. Why? Let's start with a different example.

      Claim 1: \(U_S=[0,1) \cup (2,2\pi)\) is open in \(S = [0,2\pi)\)

      We need to show that \(U_S = S \cap U_X\) for some open set \(U_X \subseteq \mathbb{R}\). So we can take any open set of \(\mathbb{R}\) whose overlap with our subspace is \(U_S\text{.}\)

      proof 1

      Consider \(U_X = (-1,1) \cup (2, 2\pi)\) and its intersection with \(S = [0, 2\pi)\). The overlap of \(U_X\) with \(S\) is precisely \(U_S\). That is,

      $$
      \begin{align}
      S \cap U_X &= [0, 2\pi) \cap U_X \\
      &= [0, 2\pi) \cap \bigl( (-1,1) \cup (2,2\pi) \bigr) \\
      &= \bigl( [0, 2\pi) \cap (-1,1) \bigr) \cup \bigl( [0,2\pi) \cap (2,2\pi) \bigr) \\
      &= [0, 1) \cup (2, 2\pi) \\
      &= U_S
      \end{align}
      $$
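
      To close out the earlier claim (a small added step, using the same subspace-topology definition): the whole subspace is automatically open in its own subspace topology, since

      $$ [0, 2\pi) = [0, 2\pi) \cap \mathbb{R} \quad \text{and} \quad \mathbb{R} \in \tau_{\mathbb{R}}, $$

      so \(S = [0,2\pi)\) is open in \(S\), even though it is not open in \(\mathbb{R}\).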

    1. The random process has outcomes

      Notation of a random process that has outcomes

      The "universal set" aka "sample space" of all possible outcomes is sometimes denoted by \(U\), \(S\), or \(\Omega\): https://en.wikipedia.org/wiki/Sample_space

      Probability theory & measure theory

      From what I recall, the notation, \(\Omega\), was mainly used in higher-level grad courses on probability theory. ie, when trying to frame things in probability theory as a special case of measure theory things/ideas/processes. eg, a probability space, \((\Omega, \cal{F}, P)\) where \(\cal{F}\) is a \(\sigma\text{-field}\) aka \(\sigma\text{-algebra}\) and \(P\) is a probability measure defined on the elements of \(\cal{F}\) with \(P(\Omega)=1.\)

      Somehow, the definition of a sigma-field captures the notion of what we want out of something that's measurable, but it's unclear to me why, so let's see where writing through this takes me.

      Working through why a sigma-algebra yields a coherent notion of measurable

      A sigma-algebra \(\cal{F}\) on a set \(\Omega\) is defined somewhat close to the definition of a topology \(\tau\) on some space \(X\). They're both collections of subsets of the set/space of reference (ie, \(\tau \subseteq 2^X\) and \(\cal{F} \subseteq 2^\Omega\)). Also, they're both defined to contain their underlying set/space (ie, \(X \in \tau\) and \(\Omega \in \cal{F}\)).

      Additionally, they both contain the empty set but for (maybe) different reasons, definitionally. For a topology, it's simply defined to contain both the whole space and the empty set (ie, \(X \in \tau\) and \(\emptyset \in \tau\)). In a sigma-algebra's case, it's defined to be closed under complements, so since \(\Omega \in \cal{F}\) the complement must also be in \(\cal{F}\)... but the complement of the universal set \(\Omega\) is the empty set, so \(\emptyset \in \cal{F}\).

      I think this might be where the similarity ends, since a topology need not be closed under complements (but probably has a special property when it is, although I'm not sure what; oh wait, the complement of open is closed in topology, so it'd be clopen! Not sure what this would really entail though 🤷‍♀️). Moreover, a topology is closed under arbitrary unions (which includes uncountable), but a sigma-algebra is closed under countable unions. Hmm... Maybe this restriction to countable unions is what gives a coherent notion of being measurable? I suspect it also has to do with the Banach-Tarski paradox. ie, cutting a sphere into 5 pieces and rearranging them in a clever way so that you get 2 spheres that each have the volume of the original sphere; I mean, WTF, if 1 sphere's volume equals the volume of 2 spheres, then we're definitely not able to measure stuff any more.

      And now I'm starting to vaguely recall that this is what sigma-fields essentially outlaw/ban from being possible. It's also related to something important in measure theory called the Lebesgue measure, although I'm not really sure what that is (something about doing a Riemann integral but picking the partition on the y-axis/codomain instead of on the x-axis/domain, maybe?)

      And with that, I think I've got some intuition about how fundamental sigma-algebras are to letting us handle probability and uncertainty.

      Back to probability theory

      So then events like \(E_1\) and \(E_2\) are elements of \(\cal{F}\), the collection of subsets of the possibility space \(\Omega\). Like, maybe \(\Omega\) is the set of all possible outcomes of rolling 2 dice, but \(E_1\) could be a simple event (ie, just one outcome like rolling a 2) while \(E_2\) could be a compound(?) event (ie, more than one, like rolling an even number). Notably, \(E_1\) & \(E_2\) are NOT elements of the sample space \(\Omega\); they're elements of the powerset of our possibility space (ie, the set of all possible subsets of \(\Omega\), denoted by \(2^\Omega\)). So maybe this explains why the "closed under complements" property is needed; if you roll a 2, you should also be able to NOT roll a 2. And the property that a sigma-algebra must "contain the whole space" might be what's needed to give rise to a notion of a complete measure (conjecture about complete measures: everything in the measurable space can be assigned a value where that part of the measurable space does, in fact, represent some constitutive part of the whole).
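
      A tiny sketch of that dice example (illustrative only; here events are literally just subsets of \(\Omega\), and probabilities come from counting because the outcomes are equally likely):

      ```python
      from itertools import product

      # Sample space for rolling two dice: all ordered pairs of faces.
      omega = set(product(range(1, 7), repeat=2))          # |Ω| = 36

      # A "simple" event: the total is 2 (only one outcome, (1, 1)).
      E1 = {(a, b) for (a, b) in omega if a + b == 2}

      # A "compound" event: the total is even (many outcomes).
      E2 = {(a, b) for (a, b) in omega if (a + b) % 2 == 0}

      # Closure under complements: "NOT rolling an even total" is also an event.
      E2_complement = omega - E2

      # With equally likely outcomes, P(E) = |E| / |Ω|.
      print(len(E1) / len(omega))                          # 1/36
      print(len(E2) / len(omega))                          # 1/2
      print((E2 | E2_complement) == omega)                 # True: the union recovers Ω
      ```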

      But what about these "random events"?

      Ah, so that's where random variables come into play (and probably why in probability theory they prefer to use \(\Omega\) for the sample space instead of \(X\) like a base space in topology). There's a function, that is, a mapping from outcomes of this "random event" (eg, a roll of 2 dice) to a space in which we can associate (ie, assign) a sense of distance (ie, our sigma-algebra). What confuses me is that we see things like "\(P(X=x)\)" which we interpret as "probability that our random variable, \(X\), ends up being some particular outcome \(x\)." But it's also said that \(X\) is a real-valued function, ie, takes some arbitrary elements (eg, events like rolling an even number) and assigns them a real number (ie, some \(x \in \mathbb{R}\)).

      Aha! I think I recall the missing link: the notation "\(X=x\)" is really a shorthand for "\(X(\omega)=x\)" where \(\omega \in \cal{F}\). But something that still feels unreconciled is that our probability metric, \(P\), is just taking some real value to another real value... So which one is our sigma-algebra, the inputs of \(P\) or the inputs of \(X\)? 🤔 Hmm... Well, I guess it has to be the set of elements that \(X\) is mapping into \(\mathbb{R}\) since \(X\text{'s}\) input is a small omega \(\omega\) (which is probably an element of big omega \(\Omega\) based on the conventions of small notation being elements of big notation), so \(X\text{'s}\) domain must be the sigma-algebra?

      Let's try to generate a plausible example of this in action... Maybe something with an inequality like "\(X\ge 1\)". Okay, yeah, how about \(X\) is a random variable for the random process of how long it takes a customer to get through a grocery line. So \(X\) is mapping the elements of our sigma-algebra (ie, what customers actually end up experiencing in the real world) into a subset of the reals, namely \([0,\infty)\) because their time in line could be 0 minutes or infinite minutes (geesh, 😬 what a life that would be, huh?). Okay, so then I can ask a question like "What's the probability that \(X\) takes on a value greater than or equal to 1 minute?" which I think translates to "\(P\left(X(\omega)\ge 1\right)\)" which is really attempting to model this whole "random event" of "What's gonna happen to a particular person on average?"
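
      A rough numerical sketch of that checkout-line example (the exponential waiting-time model and the 2-minute mean are purely assumptions for illustration, not part of the note above):

      ```python
      import random

      random.seed(0)
      mean_wait = 2.0                     # assumed average time in line, in minutes
      trials = 100_000

      # X(ω): simulate one "customer experience" ω and record the wait in minutes.
      waits = [random.expovariate(1 / mean_wait) for _ in range(trials)]

      # Empirical estimate of P(X >= 1), ie, spending at least one minute in line.
      p_at_least_one_minute = sum(w >= 1 for w in waits) / trials
      print(p_at_least_one_minute)        # ≈ exp(-1/2) ≈ 0.61 under this assumed model
      ```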

      So this makes me wonder... Is this fact that \(X\) can model this "random event" (at all) what people mean when they say something is a stochastic model? That there's a probability distribution it generates which affords us some way of dealing with navigating the uncertainty of the "random event"? If so, then sigma-algebras seem to serve as a kind of gateway and/or foundation into specific cognitive practices (ie, learning to think & reason probabilistically) that affords us a way out of being overwhelmed by our anxiety or fear and can help us reclaim some agency and autonomy in situations with uncertainty.

  10. Oct 2022
    1. The question often asked: "What happens when you want to add a new note between notes 1/1 and 1/1a?"

      Thoughts on Zettelkasten numbering systems

      I've seen variations of the beginner Zettelkasten question:

      "What happens when you want to add a new note between notes 1/1 and 1/1a?"

      asked at least a dozen times in the Reddit fora related to note taking and zettelkasten, on zettelkasten.de, or in other places across the web.

      Dense Sets

      From a mathematical perspective, these numbering or alpha-numeric systems are, by both intent and design, underpinned by the mathematical idea of dense sets. In the areas of topology and real analysis, one considers a set dense when one can always find another of its points as close as one likes to any given point. For both library cataloging systems and numbering schemes for ideas in Zettelkasten this means that you can always juxtapose one topic or idea in between any other two.

      Part of the beauty of Melvil Dewey's original Dewey Decimal System is that regardless of how many new topics and subtopics one wants to add to their system, one can always fit another new topic between existing ones ad infinitum.

      Going back to the motivating question above, the equivalent question mathematically is "what number is between 0.11 and 0.111?" (Here we've converted the artificial "number" "a" to a 1 and removed the punctuation, which doesn't create any issues and may help clarify the orderings a bit.) The answer is that there is an infinite number of numbers between these!

      This is much more explicit by writing these numbers as:

      0.110
      0.111

      Naturally 0.1101 is between them (along with infinitely many others), so one could start there as a means of inserting new ideas. One either needs to count up sequentially (0, 1, 2, 3, ...) or add additional place values.
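
      As a minimal sketch of the dense-set idea in code (the function name and the midpoint choice are just one illustrative option; any value strictly between the two numbers works):

      ```python
      from decimal import Decimal

      def between(a: str, b: str) -> str:
          """Return one decimal number strictly between a and b (assumes a < b)."""
          lo, hi = Decimal(a), Decimal(b)
          return str((lo + hi) / 2)

      print(between("0.110", "0.111"))   # 0.1105, one of infinitely many choices
      ```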

      Decimal numbering systems in practice

      The problem most people face is that they're not thinking of these numbers as decimals, but as natural numbers or integers (or, broadly, numbers without any decimal portions). The real numbers are of course dense, but one needs their decimal portions to take advantage of that; the integers on their own are not.

      The tough question is what sorts of semantic meaning one might attach to adding additional place values or alphabetical characters. This meaning can vary from person to person and system to system, so I won't delve into it here.

      One may find it useful to logically chunk these numbers into groups of three as is often done using commas, periods, slashes, dashes, spaces, or other punctuation. This doesn't need to mean anything in particular, but may help to make one's numbers more easily readable as well as usable for filing new ideas. Sometimes these indicators can be confusing in discussion, so if ever in doubt, simply remove them and the general principles mentioned here should still hold.

      Depending on one's note taking system, however, when putting cards into some semblance of a logical sort-able order (perhaps within a folder for example), the system may choke on additional characters beyond the standard period to designate a decimal number. For example: within Obsidian, if you have a "zettelkasten" folder with lots of numbered and named files within it, you'll want to pad each number out to the same (maximum) number of decimal places so that when doing an alphabetic sort within the folder, all of the numbered ideas are properly sorted. As an example, if you give one file the name "0.510 Mathematics", another "0.514 Topology" and a third "0.5141 Dense Sets", they may not sort properly unless you pad the first two decimal expansions out to the ten-thousandths place at a minimum. If you change them to "0.5100 Mathematics" and "0.5140 Topology", then you're in good shape and the folder will alphabetically sort as you'd expect. Similarly some systems may or may not do well with including alphabetic characters mixed in with numbers.
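
      A small sketch of the padding principle (the file names here are made up, and the natural-sort key below is only an approximation of how many file browsers order names; exact behavior varies by tool):

      ```python
      import re

      def natural_key(name: str):
          """Split a name into text and integer chunks, the way many file browsers sort."""
          return [int(tok) if tok.isdigit() else tok for tok in re.split(r"(\d+)", name)]

      unpadded = ["0.52 Algebra", "0.514 Topology"]
      padded = ["0.520 Algebra", "0.514 Topology"]

      # Without padding, 52 is compared against 514 as a whole number and sorts first.
      print(sorted(unpadded, key=natural_key))  # ['0.52 Algebra', '0.514 Topology'] (but 0.514 < 0.52)
      print(sorted(padded, key=natural_key))    # ['0.514 Topology', '0.520 Algebra'] (padding restores decimal order)
      ```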

      If using chunked groups of three numbers, one might consider using the number 0.110.001 as the next level of idea between them and then continuing from there. This may help to spread some of the ideas out as surely one may have yet another idea to wedge in between 0.110.000 and 0.110.001?

      One can naturally choose almost any (decimal) number, so long as it is somewhat "near" the original behind which one places it. By going out further in the decimal expansion, one can always place any idea between two others and know that there will be a number that it can be given that will "work".

      Generally within numbers as we use them for mathematics, 0.100000001 is technically "closer" by distance measurement to 0.1 than 0.11 is (and by quite a bit!), but somehow when using numbers for zettelkasten purposes, we tend not to think of them as decimals the way the Dewey Decimal System does. We also tend to want to keep our numbers as short as possible when writing, so it seems more "natural" to follow 0.11 with 0.111, as it seems like we're "counting up" rather than "counting down".

      Another subtlety that one sees in numbering systems is the proper or improper use of the whole numbers in front of the decimal portions. For example, in Niklas Luhmann's system, he has a section of cards that start with 3.XXXX which are close to a section numbered 35.YYYY. This may seem a bit confusing, but he's doing a bit of mental gymnastics to artificially keep his numbers smaller. What he really means is 3000.XXX and 3500.YYY respectively; he's just truncating the trailing zeros. Alternatively, in a fully "decimal system" one would write these as 0.3000.XXXX and 0.3500.YYYY, where we've added additional periods to the numbers to make them easier to read. Using our original example in an analog system, the user may have been using foreshortened indicators for their system, and by writing 1/1a they may have really meant something of the form 001.001/00a, but were making the number shorter in a logical manner (at least to them).

      The close observer may have seen Scott Scheper adopt the slightly longer numbers in the thousands (like 3500.YYYY) as a means of remedying some of the numbering confusion many have when looking at Luhmann's system.

      Those who build their systems on top of existing ones like the Dewey Decimal Classification, or the Universal Decimal Classification may wish to keep those broad categories with three to four decimal places at the start and then add their own idea number underneath those levels.

      As an example, we can use the numbering for Finsler geometry from the Dewey Decimal Classification wikipedia page shown as:

      ```
      500 Natural sciences and mathematics
          510 Mathematics
              516 Geometry
                  516.3 Analytic geometries
                      516.37 Metric differential geometries
                          516.375 Finsler geometry
      ```

      So in our zettelkasten, we might add our first card on the topic of Finsler geometry as "516.375.001 Definition of Finsler geometry" and continue from there with some interesting theorems and proofs on those topics.

      Of course, while this is something one can do, it doesn't mean that one should. Going too far down the rabbit hole of "official" forms of classification can be a massive time-wasting exercise: in most private systems, you're never going to be comparing your individual ideas with the private zettelkasten of others, so in practice this sort of standardizing work is utterly useless. Beyond this, most personal zettelkasten are unique and idiosyncratic to their user. For example, my math section labeled 510 may have a lot more overlap with history, anthropology, and sociology hiding within it compared with others who may have all of their mathematics hiding amidst their social sciences section starting with the number 300. One of the benefits of Luhmann's numbering scheme, at least for him, is that it allowed his system to be much more interdisciplinary than a more complicated Dewey Decimal oriented system, which might have dictated moving some of his systems theory work out of his politics area, where it made more sense to him and where he was more productive on a personal level.

      Of course if you're using the older sort of commonplacing zettelkasten system that was widely in use before Luhmann's variation, then perhaps using a Dewey-based system may be helpful to you?

      A Touch of History

      Some of these loose ideas may have occurred tangentially to Gottfried Wilhelm Leibniz (1646 - 1716), who was both a mathematician working in the early days of real analysis and a librarian, though I'm currently unaware of any specific instances within his work. One must note, however, that some of the earliest work within library card catalogs as we know and use them today stemmed from 1770s Austria where governmental conscription needs overlapped with card cataloging systems (Krajewski, 2011). It's here that these sorts of numbering systems begin to come into use, well before Melvil Dewey's later work, which became much more broadly adopted.

      The German “file number” (Aktenzeichen) is a unique identifier for a file, commonly used in the German court system and its predecessors, as well as in public administration, since at least 1934. We know Niklas Luhmann studied law at the University of Freiburg from 1946 to 1949, when he obtained a law degree, before beginning a career in Lüneburg's public administration where he stayed in civil service until 1962. Given this fact, it's very likely that Luhmann had in-depth experience with these sorts of file numbers as location identifiers for files and documents. As a result, it's reasonably likely that a simplified version of these was at least part of the inspiration for his own numbering system.

      Your own practice

      At the end of the day, the numbering system you choose needs to work for you within the system you're using (analog, digital, other). I would generally recommend against using someone else's numbering system unless it completely makes sense to you and you're able to quickly and simply add cards to your system without extra work and cognitive dissonance about what number to give them. The more you simplify these small things, the easier and happier you'll be with your setup in the end.

      References

      Krajewski, Markus. Paper Machines: About Cards & Catalogs, 1548-1929. Translated by Peter Krapp. History and Foundations of Information Science. MIT Press, 2011. https://mitpress.mit.edu/books/paper-machines.

      Munkres, James R. Topology. 2nd ed. 1975. Reprint, Prentice-Hall, Inc., 1999.

  11. Feb 2022
    1. Only after aligning every single part of the delivery chain, from packaging to delivery, from the design of the ships to the design of the harbours, was the full potential of the container unleashed.

      Streamlining one's entire workflow from start to finish can unleash tremendous amounts of additional system-wide productivity. Starting out by tinkering with small things here and there is more likely to doom these smaller individual changes to failure without associated global changes.

      Once the overall system has been redesigned and reconfigured, then one can make and perfect smaller scale local changes.


      Link this to the idea of kelp and sailing/rowing from The West Wing.

  12. Dec 2021
  13. Jun 2021
    1. To put it succinctly, differential topology studies structures on manifolds that, in a sense, have no interesting local structure. Differential geometry studies structures on manifolds that do have an interesting local (or sometimes even infinitesimal) structure.

      Differential topology takes a more global view and studies structures on manifolds that have no interesting local structure, while differential geometry studies structures on manifolds that do have interesting local structure.

  14. May 2021
    1. In an individual model of privacy, we are only as private as our least private friend.

      So don't have any friends?

      Obviously this isn't a thing, but the implications of this within privacy models can be important.

      Are there ways to create this as a ceiling instead of as a floor? How might we use topology to flip this script?

  15. Jan 2021
    1. In a more recent paper, Michelle Feng and Mason Porter used a new technique called persistent homology to detect political islands — geographical holes in one candidate’s support that serve as spots of support for the other candidate — in California during the 2016 presidential election.
  16. Nov 2020
  17. Oct 2020
  18. Jun 2020
  19. May 2020
  20. Apr 2020
  21. Oct 2019
  22. Apr 2019
  23. Dec 2015
    1. All this time, however, category theory was consistently seen by much of the mathematical community as ridiculously abstract. But in the 21st century it has finally come to find healthy respect within the larger community of pure mathematics. It is the language of choice for graduate-level algebra and topology courses, and in my opinion will continue to establish itself as the basic framework in which mathematics is done