586 Matching Annotations
  1. May 2021
    1. Yet, we continue to depend upon something we might call the centralized trust paradigm, by which middlemen entities coordinate our monetary transactions and other exchanges of value. We trust banks to track and verify everyone’s account balances so that strangers can make payments to each other. We entrust our most sensitive health records to insurance firms, hospitals, and laboratories. We rely on public utilities to read our electricity meters, monitor our usage, and invoice us accordingly. Even our new, Internet-driven industries are led by a handful of centralized behemoths to which we’ve entrusted our most valuable personal data: Google, Facebook, Amazon, Uber, etc.

      Despite aggregators driving more decentralized economic exchanges, we continue to rely on a centralized trust paradigm.

      We trust banks to verify everyone's account balances; we entrust our health records to insurance firms; we rely on public utilities to read our electricity meters.

    2. Startups of all kinds are constantly pitching ideas for e-marketplaces and online platforms that would unlock new network effects by bypassing incumbent middlemen and letting people interact directly with each other. Although these companies are themselves centralized entities, the services they provide satisfy an increasing demand for more decentralized exchanges. This shift underpins social media, ride-sharing, crowdfunding, Wikipedia, localized solar microgrids, personal health monitoring, and everything else in the Internet of Things (IoT).

      Aggregators (ride-sharing, social media) have been driving an increase in decentralized economic exchanges, while being built on top of centralized network infrastructure.

      This has allowed them to capture a sizeable portion of the value that is generated by their platforms, but it has also burdened them with custody over large amounts of user data.

    3. At the heart of this failure lies the fact that the ongoing decentralization of our communication and business exchanges is in direct contradiction with the outdated centralized systems we use to secure them. Given that the decentralization trend is fueled by the distributed communications system of the Internet—one in which no central hub acts as information gatekeeper—what’s needed is a new approach to security that’s also based on a distributed network architecture.

      Michael Casey posits that at the heart of the colossal failure of securing the world's online commerce is the contradiction between two things:

      (1) The ongoing decentralization of our communication and business exchanges driven by the distributed communications system of the internet. (2) The centralized systems we use to secure them.

      Communication and economic exchanges are becoming increasingly decentralized, fueled by the distributed infrastructure of the internet. This requires a similar approach to security that is based on a distributed network architecture.

    4. Lloyd’s of London knows a thing or two about business losses—for three centuries, the world’s oldest insurance market has been paying out damages triggered by wars, natural disasters, and countless acts of human error and fraud. So, it’s worth paying attention when Lloyds estimates that cybercrime caused businesses to lose $400 billion between stolen funds and disruption to their operations in 2015. If that number seems weighty—and it ought to—try this one for size: $2.1 trillion. That’s Juniper Research’s total cybercrime loss forecast for the even more digitally interconnected world projected for 2019. To put that figure in perspective, at current economic growth rates, it would represent more than 2% of total world GDP. We are witnessing a colossal failure to protect the world’s online commerce.

      Lloyd's estimated that cybercrime cost businesses $400 billion in 2015, and Juniper Research forecasts $2.1 trillion in losses for 2019. This points to a "colossal failure to protect the world's online commerce".

    1. Storing any type of PII on a public blockchain, even encrypted or hashed, is dangerous for two reasons: 1) the encrypted or hashed data is a global correlation point when the data is shared with multiple parties, and 2) if the encryption is eventually broken (e.g., quantum computing), the data will be forever accessible on an immutable public ledger. So the best practice is to store all private data off-chain and exchange it only over encrypted, private, peer-to-peer connections.

      Storing sensitive information on a blockchain, whether encrypted or hashed, is a risk: the data becomes a global correlation point when shared with multiple parties, and if the encryption is ever broken, the data remains forever accessible on the immutable ledger.
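
      A minimal sketch of the correlation-point problem (hypothetical names and data): because a hash is deterministic, every party that hashes the same PII derives the same value, so separate databases can be joined on it.

      ```python
      import hashlib

      def naive_identifier(email: str) -> str:
          # Deterministic hash: every party hashing the same email
          # derives the same digest.
          return hashlib.sha256(email.lower().encode()).hexdigest()

      service_a = {naive_identifier("alice@example.com"): {"plan": "gold"}}
      service_b = {naive_identifier("alice@example.com"): {"visits": 3}}

      # The identical digest lets two unrelated databases be joined
      # without the user's consent:
      shared = service_a.keys() & service_b.keys()
      print(len(shared))  # 1 -> the hash acts as a global correlation point
      ```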

    2. For self-sovereign identity, which can be defined as a lifetime portable digital identity that does not depend on any centralized authority, we need a new class of identifier that fulfills all four requirements: persistence, global resolvability, cryptographic verifiability, and decentralization.

      The four requirements such an identifier must fulfill (see the sketch after this list):

      1. Persistence
      2. Global Resolvability
      3. Cryptographic Verifiability
      4. Decentralization
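
      A toy sketch of how these requirements can be met by deriving the identifier from a keypair, in the spirit of W3C DIDs. The `did:example` method and all values here are hypothetical; real methods (e.g. did:key) add multibase/multicodec encoding and resolve the DID to a document containing the keys.

      ```python
      import hashlib
      import secrets

      def make_did(public_key: bytes, method: str = "example") -> str:
          # Derived from the key itself: no central registry is needed
          # (decentralization), and anyone holding the key can re-derive
          # and check it (cryptographic verifiability).
          suffix = hashlib.sha256(public_key).hexdigest()[:32]
          return f"did:{method}:{suffix}"

      public_key = secrets.token_bytes(32)  # stand-in for a real verification key
      print(make_did(public_key))           # e.g. did:example:3b5d...
      ```

      Persistence and global resolvability come from the surrounding method: the identifier itself never changes, and a resolver maps it to a DID document wherever it is looked up.
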
    1. The models for online identity have advanced through four broad stages since the advent of the Internet: centralized identity, federated identity, user-centric identity, and self-sovereign identity.

      Online identity advanced through 4 stages:

      Centralized identity
      Federated identity
      User-centric identity
      Self-sovereign identity

    2. Identity in the digital world is even trickier. It suffers from the same problem of centralized control, but it’s simultaneously very balkanized: identities are piecemeal, differing from one Internet domain to another.

      Identity in the digital world suffers from the same muddling, but in addition it is balkanized: different internet domains have different identities.

    3. However, modern society has muddled this concept of identity. Today, nations and corporations conflate driver’s licenses, social security cards, and other state-issued credentials with identity; this is problematic because it suggests a person can lose his very identity if a state revokes his credentials or even if he just crosses state borders. I think, but I am not.

      Christopher Allen posits that modern society has muddled the concept of identity by equating it to a driver's license or national ID card, thereby implying that it is something that can be taken away.

      I would say that it is not society but the modern state that has not merely muddled, but corrupted, the concept of identity.

      This also reminds me of the idea of how to draw the line of definition for a component out of which greater complexity is built up.

  2. Apr 2021
    1. Acquiring viral drift sufficient to produce new influenza strains capable of escaping population immunity is believed to take years of global circulation, not weeks of local circulation.

      Experiencing enough viral drift to produce an influenza variant capable of escaping population immunity is believed to take years of global circulation (not weeks of local circulation).

  3. Mar 2021
    1. Private property rights are not absolute. The rule against the "dead hand" or the rule against perpetuities is an example. I cannot specify how resources that I own will be used in the indefinitely distant future. Under our legal system, I can only specify the use for a limited number of years after my death or the deaths of currently living people.

      Property rights are not absolute; our legal system does not support specifying how resources should be used indefinitely far into the future.

    2. Similarly, the set of resources over which property rights may be held is not well defined and demarcated. Ideas, melodies, and procedures, for example, are almost costless to replicate explicitly (near-zero cost of production) and implicitly (no forsaken other uses of the inputs). As a result, they typically are not protected as private property except for a fixed term of years under a patent or copyright.

      The set of resources over which property rights may be held is not well demarcated. Melodies and ideas, for instance, are virtually costless to replicate, and such resources tend not to be protected as private property.

    3. Depending upon circumstances certain actions may be considered invasions of privacy, trespass, or torts. If I seek refuge and safety for my boat at your dock during a sudden severe storm on a lake, have I invaded "your" property rights, or do your rights not include the right to prevent that use? The complexities and varieties of circumstances render impossible a bright-line definition of a person's set of property rights with respect to resources.

      In real-life property rights there are also many gray areas. In programmatic property rights, there are none.

    4. The cost of establishing private property rights—so that I could pay you a mutually agreeable price to pollute your air—may be too expensive. Air, underground water, and electromagnetic radiations, for example, are expensive to monitor and control. Therefore, a person does not effectively have enforceable private property rights to the quality and condition of some parcel of air. The inability to cost-effectively monitor and police uses of your resources means "your" property rights over "your" land are not as extensive and strong as they are over some other resources, like furniture, shoes, or automobiles. When private property rights are unavailable or too costly to establish and enforce, substitute means of control are sought. Government authority, expressed by government agents, is one very common such means. Hence the creation of environmental laws.

      For some types of property, and/or uses of that property, the costs of monitoring are too high. It is too expensive to monitor parcels of air for pollution or radiation.

      When a resource cannot be cost-effectively monitored and/or policed, your property rights over this resource become less strong.

      When property rights become too weak, alternative means of control are sought, e.g. government agents and environmental laws.

    5. The two extremes in weakened private property rights are socialism and "commonly owned" resources. Under socialism, government agents—those whom the government assigns—exercise control over resources. The rights of these agents to make decisions about the property they control are highly restricted. People who think they can put the resources to more valuable uses cannot do so by purchasing the rights because the rights are not for sale at any price. Because socialist managers do not gain when the values of the resources they manage increase, and do not lose when the values fall, they have little incentive to heed changes in market-revealed values. The uses of resources are therefore more influenced by the personal characteristics and features of the officials who control them. Consider, in this case, the socialist manager of a collective farm. By working every night for one week, he could make 1 million rubles of additional profit for the farm by arranging to transport the farm's wheat to Moscow before it rots. But if neither the manager nor those who work on the farm are entitled to keep even a portion of this additional profit, the manager is more likely than the manager of a capitalist farm to go home early and let the crops rot.

      Weakened property rights come in two forms (1) socialism and (2) the commons.

      If the socialist manager of a farm isn't entitled to any of the extra profit from arranging to transport the farm's wheat to Moscow before it rots, he likely won't put in the extra hours.

    6. The fundamental purpose of property rights, and their fundamental accomplishment, is that they eliminate destructive competition for control of economic resources. Well-defined and well-protected property rights replace competition by violence with competition by peaceful means.

      Well-defined property rights replace competition by violence with competition by peaceful means.

    7. Finally, a private property right includes the right to delegate, rent, or sell any portion of the rights by exchange or gift at whatever price the owner determines (provided someone is willing to pay that price). If I am not allowed to buy some rights from you and you therefore are not allowed to sell rights to me, private property rights are reduced. Thus, the three basic elements of private property are (1) exclusivity of rights to the choice of use of a resource, (2) exclusivity of rights to the services of a resource, and (3) rights to exchange the resource at mutually agreeable terms.

      There are three elements that constitute private property:

      (1) Exclusive rights to choose how the resource is used
      (2) Exclusive rights to the services of the resource
      (3) Rights to exchange the resource at mutually agreeable terms

    1. A normal household has to pay rent or make mortgage payments. To arbitrarily exclude the biggest expense to consumers from CPI is pretty misleading. When you create new money prices don't rise evenly. At the moment we have new money being created by central banks and given to privileged institutions who get access to free money. They use that to buy investments: real estate, stocks, etc. These are precisely the things getting really expensive. The last things to get more expensive during big cycles of inflation are employee wages. The world used gold/silver for its currency for most of human history until 1970 when we entered this period of worldwide fiat currencies. Our current situation is pretty remarkable. The whole argument for printing money being OK is dumb. If it's OK to print money to pay for some things why are you not doing it more? Why not make everyone a millionaire? I think that another deception is that we should ordinarily be experiencing price deflation. Every day our society is getting more efficient at making things. If prices for goods are staying the same then it may not be that their value has not changed, they may be less valuable goods, but they cost the same because you're also buying them with less valuable currency. If you have gone through years of moving everything to China to make it cheaper to manufacture, improved technology to make processes more efficient, etc. and I'm still paying the same amount for all of the stuff in my life, then again, maybe all these things are cheaper, but I'm also buying them with currency that's less valuable. Ultimately, printing money doesn't make anyone more productive or produce anything. All it does is redistribute wealth from those that were first to get the new free money away from those that were last to contact it.

      Solid HN comment on inflation

    1. For the evolutionary psychologists an explanation that humans do something for "the sheer enjoyment of it" is not an explanation at all – but the posing of a problem. Why do so many people find the collection and wearing of jewelry enjoyable? For the evolutionary psychologist, this question becomes – what caused this pleasure to evolve?

      For evolutionary psychologists an explanation that humans do something for their enjoyment is not an explanation at all. The question becomes: Why did this pleasure evolve?

    2. Collecting and making necklaces must have had an important selection benefit, since it was costly – manufacture of these shells took a great deal of both skill and time during an era when humans lived constantly on the brink of starvation[C94].

      Because evolution is ruthlessly energy-preserving and because our African ancestors lived continuously on the brink of starvation, the costly manufacture of ornamental shells must have conferred a large selection benefit on those doing it.

    3. It continued to be used as a medium of exchange, in some cases into the 20th century – but its value had been inflated one hundred fold by Western harvesting and manufacturing techniques, and it gradually went the route that gold and silver jewelry had gone in the West after the invention of coinage – from well crafted money to decoration.

      The value of wampum became inflated one hundred fold by Western harvesting and manufacturing techniques.

    4. The beginning of the end of wampum came when the British started shipping more coin to the Americas, and Europeans started applying their mass-manufacturing techniques. By 1661, British authorities had thrown in the towel, and decided it would pay in coin of the realm – which being real gold and silver, and its minting audited and branded by the Crown, had even better monetary qualities than shells. In that year wampum ceased to be legal tender in New England.

      Wampum stopped being considered legal tender in New England in 1661, after the British started shipping more gold and silver coin over from Europe.

    5. Once they got over their hangup about what constitutes real money, the colonists went wild trading for and with wampum. Clams entered the American vernacular as another way to say "money". The Dutch governor of New Amsterdam (now New York) took out a large loan from an English-American bank – in wampum. After a while the British authorities were forced to go along. So between 1637 and 1661, wampum became legal tender in New England. Colonists now had a liquid medium of exchange, and trade in the colonies flourished.[D94]

      The colonists of New England started trading for and with wampum and using it as money. It was accepted as legal tender from 1637 to 1661.

    6. Clams were found only at the ocean, but wampum traded far inland. Sea-shell money of a variety of types could be found in tribes across the American continent. The Iriquois managed to collect the largest wampum treasure of any tribe, without venturing anywhere near the clam's habitat.[D94] Only a handful of tribes, such as the Narragansetts, specialized in manufacturing wampum, while hundreds of other tribes, many of them hunter-gatherers, used it. Wampum pendants came in a variety of lengths, with the number of beads proportional to the length. Pendants could be cut or joined to form a pendant of length equal to the price paid.

      Wampum was traded by hundreds of tribes, but it was only "mined" by a handful living close to the shore.

    7. The colonists' solution was at hand, but it took a few years for them to recognize it. The natives had money, but it was very different from the money Europeans were used to. American Indians had been using money for millenia, and quite useful money it turned out to be for the newly arrived Europeans – despite the prejudice among some that only metal with the faces of their political leaders stamped on it constituted real money. Worse, the New England natives used neither silver nor gold. Instead, they used the most appropriate money to be found in their environment – durable skeleton parts of their prey. Specifically, they used wampum, shells of the clam venus mercenaria and its relatives, strung onto pendants.

      Native American Indians used the shells of clams as money, strung onto pendants. It was the best form of money in their environment.

    1. So when I’m searching for information in this space, I’m much less interested in asking “what is this thing?” than I am in asking “what do the people who know a lot about this thing think about it?” I want to read what Vitalik Buterin has recently proposed regarding Ethereum scalability, not rote definitions of Layer 2 scaling solutions. Google is extraordinarily good at answering the “what is this thing?” question. It’s less good at answering the “what do the people who know about the thing think about it?” question. Why? 

      According to Devin, Google is good at answering a question such as "what is this thing?", but not as good at answering a question such as "what do people who know a lot about this thing say about it?"

      This reminds me of social search

  4. Feb 2021
    1. But in credibly neutral mechanism design, the goal is that these desired outcomes are not written into the mechanism; instead, they are emergently discovered from the participants’ actions. In a free market, the fact that Charlie’s widgets are not useful but David’s widgets are useful is emergently discovered through the price mechanism: eventually, people stop buying Charlie’s widgets, so he goes bankrupt, while David earns a profit and can expand and make even more widgets. Most bits of information in the output should come from the participants’ inputs, not from hard-coded rules inside of the mechanism itself.

      This reminds me of Hayek worrying that the components/primitives of capitalism (e.g. property rights) were being corrupted by socialists.

      You could view the proper "pure" component of capitalism as credibly neutral property rights; it becomes corrupted if you make it non-credibly-neutral, e.g. by introducing preferences over outcomes.

    2. This is why private property is as effective as it is: not because it is a god-given right, but because it’s a credibly neutral mechanism that solves a lot of problems in society - far from all problems, but still a lot.

      Property rights are credibly neutral

    3. We are entering a hyper-networked, hyper-intermediated and rapidly evolving information age, in which centralized institutions are losing public trust and people are searching for alternatives. As such, different forms of mechanisms – as a way of intelligently aggregating the wisdom of the crowds (and sifting it apart from the also ever-present non-wisdom of the crowds) – are likely to only grow more and more relevant to how we interact.

      This is Jordan Hall's blue church vs. red church.

      Losing trust in institutions perhaps has more emphasis here.

    1. Finding clients. Finally, we were at the moment of truth. Luckily, from our user interviews we knew that companies were posting on forums like Reddit and Craigslist to find participants. So for 3 weeks we scoured the “Volunteers” and “Gigs” sections of Craigslist and emailed people who were looking for participants saying we could do it for them. Success! We were able to find 4 paying clients!

      UserInterviews found their first clients by replying to Craigslist and Reddit ads from companies seeking user-interview participants, with the pitch that they could find those participants for them.

  5. Jan 2021
    1. Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct produced by your brain. You want to get out of the way as fast as possible, with minimal mental clutter interfering with your actions. Likewise, if you are doing programming or math, you want to become at least partially fused together with your understanding of the domain, taking its axioms as objective facts so that you can focus on figuring out how to work with those axioms and get your desired results.

      Cognitive Fusion serves an important role

      When you are driving a car, you want to be fused with its internal logic, because it will allow you to respond in the quickest possible way to threats. (I'm not sure if this is the same thing though)

    2. Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person “fusing together” with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too). In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic. Another emotional example might be feelings of shame, where it’s easy to experience yourself as a horrible person and feel that this is the literal truth, rather than being just an emotional interpretation.

      Cognitive Fusion

      Cognitive Fusion is a term that comes from Acceptance and Commitment Therapy (ACT).

      CF happens when you identify so strongly with a thought or an emotion that its content is experienced as the objective way the world is.

      "She is the one" for example is a cognitive fusion.

      The cognitive fusion prevents you from stepping back and examining the construct.

      You experience everything in terms of the belief's internal logic.

    1. This brings me to the fourth pattern of oscillating tension: Shadow values. The pattern goes something like this: We have two values that (without proper planning) tend to be in tension with each other. One of them we acknowledge as right and good and ok. One of them we repress, because we think it's bad or weak or evil. Safety vs. Adventure. Independence vs. Love. Revenge vs. Acceptance. All common examples of value tensions, where one of the values is often in shadow (which one depends on the person). So we end up optimizing for the value we acknowledge. We see adventure as "good", so we optimize for it, hiding from ourselves the fact we care about safety. And man, do we get a lot of adventure. Our adventure meter goes up to 11. But all the while, there's that little safety voice, the one we try to ignore. Telling us that there's something we value that we're ignoring. And the more we ignore it, the louder it gets. And meanwhile, because we've gotten so much of it, our adventure voice is getting quieter. It's already up to 11, not a worry right now. Until suddenly, things shift. And where we were going on many adventures, now we just want to stay home, safe. Oscillating tension.

      Shadow Values

      Shadow Values are a pattern of Oscillating Tension.

      When we have two values in tension, one which we make explicit and acknowledge and one which we don't, we tend to optimize for the one we made explicit.

      This results in our behavior pursuing the maximization of that value, all the while ignoring the implicit one (the shadow value).

      Because this value is getting trampled on, the voice that corresponds to it will start to speak up. The more it gets ignored, the more it speaks up.

      At the same time, the voice corresponding to the value that is getting maximized, becomes quiet. It's satisfied where it is.

      We find ourselves in a place where all we want to do is tend to the value that is not being met.

    1. Volkswagen, the world’s largest car maker, has outspent all rivals in a global bid by auto incumbents to beat Tesla. For years, industry leaders and analysts pointed to the German company as evidence that, once unleashed, the old guard’s raw financial power paired with decades of engineering excellence would make short work of Elon Musk’s scrappy startup. What they didn’t consider: Electric vehicles are more about software than hardware. And producing exquisitely engineered gas-powered cars doesn’t translate into coding savvy.

      Many thought Volkswagen would crush Tesla as soon as they put their weight behind an electric car initiative. What they didn't consider was that an electric car is more about software than it is about hardware.

    1. Note that I have defined privacy in terms of the condition of others' lack of access to you. Some philosophers, for example Charles Fried, have claimed that it is your control over who has access to you that is essential to privacy. According to Fried, it would be ironic to say that a person alone on an island had privacy.[10] I don't find this ironic at all. But more importantly, including control as part of privacy leads to anomalies. For example, Fried writes that "in our culture the excretory functions are shielded by more or less absolute privacy, so much so that situations in which this privacy is violated are experienced as extremely distressing."[11] But, in our culture one does not have control over who gets to observe one's performance of the excretory functions, since it is generally prohibited to execute them in public.[12] Since prying on someone in the privy is surely a violation of

      Reiman defines privacy in terms of others' lack of access to you, and not, as Charles Fried does, in terms of your control over who has access to you.

      He argues this point by noting that watching someone go to the toilet is surely a violation of privacy, yet we have no control over who gets to observe us there, since the law already forbids performing these functions in public; hence privacy cannot be a matter of control.

      I think this argument is redundant. Full control would imply that you can deny anyone access to you at your discretion.

    2. It might seem unfair to IVHS to consider it in light of all this other accumulated information—but I think, on the contrary, that it is the only way to see the threat accurately. The reason is this: We have privacy when we can keep personal things out of the public view. Information-gathering in any particular realm may not seem to pose a very grave threat precisely because it is generally possible to preserve one's privacy by escaping into other realms. Consequently, as we look at each kind of information-gathering in isolation from the others, each may seem relatively benign.[2] However, as each is put into practice, its effect is to close off yet another escape route from public access, so that when the whole complex is in place, its overall effect on privacy will be greater than the sum of the effects of the parts. What we need to know is IVHS's role in bringing about this overall effect, and it plays that role by contributing to the establishment of the whole complex of information-gathering modalities.

      Reiman argues that we can typically achieve privacy by escaping into a different realm. We can avoid public eyes by retreating into our private houses. It seems we could avoid Facebook by, well, avoiding Facebook.

      If we treat information-gathering in each realm as separate, each instance may seem relatively benign.

      When these realms are connected, they close off our escape routes and the effect on privacy becomes greater than the sum of its parts.

    3. But notice here that the sort of privacy we want in the bedroom presupposes the sort we want in the bathroom. We cannot have discretion over who has access to us in the bedroom unless others lack access at their discretion. In the bathroom, that is all we want. In the bedroom, we want additionally the power to decide at our discretion who does have access. What is common to both sorts of privacy interests, then, is that others not have access to you at their discretion. If we are to find the value of privacy generally, then it will have to be the value of this restriction on others.

      The sort of privacy we want in the bedroom (control over who accesses us) presupposes the sort of privacy we want in the bathroom (others lack access to us at their discretion).

    4. In our bedrooms, we want to have power over who has access to us; in our bathrooms, we just want others deprived of that access.

      Reiman highlights two types of privacy.

      The privacy we want to have in the bathroom, which is the power to deprive others of access to us.

      And the privacy we want to have in the bedroom, which is the power to control who has access to us.

    5. By privacy, I understand the condition in which other people are deprived of access to either some information about you or some experience of you. For the sake of economy, I will shorten this and say that privacy is the condition in which others are deprived of access to you.

      Reiman defines privacy as the condition in which others are deprived of access to you (information (e.g. location) or experience (e.g. watching you shower))

    6. No doubt privacy is valuable to people who have mischief to hide, but that is not enough to make it generally worth protecting. However, it is enough to remind us that whatever value privacy has, it also has costs. The more privacy we have, the more difficult it is to get the information that

      Privacy is valuable to people who have mischief to hide. This is not enough to make it worth protecting, but it tells us that there is also a cost.

    7. As Bentham realized and Foucault emphasized, the system works even if there is no one in the guard house. The very fact of general visibility—being seeable more than being seen—will be enough to produce effective social control.[4] Indeed, awareness of being visible makes people the agents of their own subjection. Writes Foucault: He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection.

      The panopticon works as a system of social control even without someone in the guardhouse. It is being seeable, rather than being seen, which makes it effective.

      I don't understand what Foucault says here.

  6. Dec 2020
    1. The other complication is that the organizational techniques I described aren’t distinct. Hierarchies and links are a kind of relation; attributes can be seen as a type of hierarchy (just like songs can be “in” playlists, even though the implementation is a sort on a list) or a relation. All of these, in fact, can be coded using the same mathematical formalisms. What matters is how they differ when encountering each user’s cognitive peculiarities and workflow needs.

      Hierarchies, links and attributes are mathematically identical

      Hierarchies and links are a kind of relation. Attributes can be seen as a type of hierarchy (songs can be "in" a playlist). These things can be coded with the same mathematical formalisms. What's important is how they differ when seen through the mental model of the user.
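
      A minimal sketch of that shared formalism (hypothetical node names): hierarchy, links, and attributes all expressed as (subject, relation, object) triples, queried by one mechanism.

      ```python
      # One relation store covers all three organizational techniques.
      triples = {
          ("Chapter 1", "in", "Book"),             # hierarchy: containment
          ("Chapter 1", "links-to", "Chapter 7"),  # link
          ("Chapter 1", "status", "draft"),        # attribute/tag
      }

      def query(relation, obj):
          # The same lookup serves hierarchies, links, and attributes alike.
          return [s for (s, r, o) in triples if r == relation and o == obj]

      print(query("in", "Book"))       # ['Chapter 1']  (children of a container)
      print(query("status", "draft"))  # ['Chapter 1']  (everything tagged draft)
      ```

      The differences the annotation points to live in presentation and workflow, not in the underlying data model.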

    2. First, nearly every application uses some mix of these techniques. The Finder, for instance, has hierarchies but can display them spatially in columns while using metaphors and (soon) attributes as labels.

      Applications tend to use a mix of these structurizing techniques.

    3. You could almost think of these as parts of a larger language: roughly verbs, nouns, and adjectives for the first three. Poetry for the last.

      These structurizing techniques form a language.

      Links — verbs
      Relationships — nouns
      Attributes — adjectives
      Metaphors — poetry

    4. Types of Structure Outliners take advantage of what may be the most primitive of relationships, probably the first one you learned as an infant: in. Things can be in or contained by other things; alternatively, things can be superior to other things in a pecking order. Whatever the cognitive mechanics, trees/hierarchies are a preferred way of structuring things. But it is not the only way. Computer users also encounter: links, relationships, attributes, spatial/tabular arrangements, and metaphoric content. Links are what we know from the Web, but they can be so much more. The simplest ones are a sort of ad hoc spaghetti connecting pieces of text to text containers (like Web pages), but we will see many interesting kinds that have names, programs attached, and even work two-way. Relationships are what databases do, most easily imagined as “is-a” statements which are simple types of rules: Ted is a supervisor, supervisors are employees, all employees have employee numbers. Attributes are adjectives or tags that help characterize or locate things. Finder labels and playlists are good examples of these. Spatial/tabular arrangements are obvious: the very existence of the personal computer sprang from the power of the spreadsheet. Metaphors are a complex and powerful technique of inheriting structure from something familiar. The Mac desktop is a good example. Photoshop is another, where all the common tools had a darkroom tool or technique as their predecessor.

      Structuring Information

      Ted Goranson holds that there are only a handful of ways to structure information.

      In — Possibly the most primitive of relationships. Things can be in other things and things can be superior to other things.

      Links — Links are what we know from the web, but those are only one implementation. There are others, like links that have names, attached programs, or work bi-directionally.

      Relationships — This is what we typically use databases for and is most easily conceived as "is-a" statements.

      Attributes — Adjectives or tags that help characterize or locate things.

      Metaphors — A technique for inheriting structure from something familiar.

    5. Both of these products were abandoned as supported commercial products when the outlining paradigm was incorporated into other products, notably word processors, presentation applications, and personal information managers.

      MORE and Acta were abandoned as commercial pursuits once outliners were incorporated into other products such as word processors and presentation applications.

    6. Outlining is bred in the blood of Mac users. The Finder has an outliner. Nearly every mail client employs an outliner, as do many word processors and Web development tools.

      The outliner paradigm has seeped into many different applications such as the Mac Finder, development tools and word processors.

    1. Michael Jordan says that current hype around AI has focused mostly on human-imitative capabilities. This focus has hidden certain challenges and, according to Jordan, risks distracting us from major unsolved problems in AI that relate to our ability to make society-scale inference-and-decision-making systems.

      If we want such systems that actually work, we need to craft nothing short of a new engineering discipline (centered on the idea of "provenance") which builds on the building blocks of the past century (information, algorithms, data, uncertainty, computing, inference, optimization) but which also incorporates the human side and insights from the social sciences, cognitive sciences and the humanities. A true human-centric engineering discipline.

    2. In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.

      Michael Jordan refers to this as a human-centric engineering discipline.

    3. On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline.

      We want to build systems that actually work; for that, we need to figure out the provenance aspects. This field is still so nascent that it cannot yet be viewed as an engineering discipline.

    4. While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities.

      Michael Jordan says that academia may serve to help bring together researchers from fields that are needed to solve these challenges, such as social sciences, cognitive sciences and the humanities.

      Reminds me of that book on social sciences.

    5. It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms.

      There are many challenges in Intelligent Infrastructure (II) which are not central themes in AI research.

      They need to deal with coordinating distributed, incoherent repositories of information.

      This involves things like:

      Making decisions about where to host data to ensure timely delivery (cloud-edge interactions)

      Dealing with long-tail phenomena, where there's a lot of data on a few individuals and little about most.

      And they need to deal with the human and civil aspects of data such as sharing across administrative and competitive boundaries.

      Lastly, they need to incorporate economic ideas such as incentives and pricing into the realm of the computational infrastructures that link humans to each other. This amounts to creating markets (blockchain).

      Michael Jordan holds that fields such as music, literature and journalism are crying out for the emergence of such markets.

    6. We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges?

      Having defined some terms to divide up the field of AI, Michael Jordan asks if focusing on human-imitative AI is the most productive way to advance these other fields that have been "hidden" under the same label.

    7. Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits.

      Intelligent Infrastructure (II)

      Michael Jordan here coins the term Intelligent Infrastructure to refer to the discipline whereby a web of data, computation and physical entities exists that makes human environments more supportive, interesting and safe.

      We can already see this infrastructure in the fields of transportation, medicine, commerce and finance.

      This isn't captured by the Internet of Things, because IoT refers merely to getting "things" onto the internet, not to interacting with humans at higher levels of abstraction than mere bits.

    8. The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of.

      Intelligence Augmentation (IA)

      Computation and data are used to create services that augment human intelligence (e.g. a search engine augmenting human memory and factual knowledge).

    9. Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean).

      The phrase AI emerged to refer to the aspiration of creating a computer system which possessed human-level intelligence.

    10. The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.

      This emerging field is often hidden under the label AI, which makes it difficult to reason about.

    11. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

      Analogous to the collapse of early bridges and buildings, before the maturation of civil engineering, our early society-scale inference-and-decision-making systems break down, exposing serious conceptual flaws.

    12. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.

      Michael Jordan draws the analogy with the emergence of civil and chemical engineering, building with the building blocks of the century prior: physics and chemistry. In this case the building blocks are ideas such as: information, algorithm, data, uncertainty, computing, inference and optimization.

    13. I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills.

      This is the key point of the article.

      There is an emerging field, which relies heavily on the skill one might refer to as "provenance", which is necessary to build planetary-scale inference-and-decision-making systems.

    14. “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.

      Data Provenance

      The discipline of thinking about three questions (see the sketch after this list):

      (1) Where did the data arise?
      (2) What inferences were drawn from the data?
      (3) How relevant are those inferences to the present situation?
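
      A toy sketch of what carrying provenance alongside an inference could look like. The field names are hypothetical, not from the article, and the example anticipates the ultrasound story below.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Inference:
          claim: str     # what was inferred
          source: str    # where the data arose
          method: str    # how the inference was drawn
          context: dict  # conditions under which it holds

      finding = Inference(
          claim="white spots predict Down syndrome",
          source="UK study, 1990s ultrasound imaging",
          method="statistical association",
          context={"imaging_resolution": "legacy machine"},
      )

      # Before reusing the inference, check whether the present situation
      # matches the context in which it was derived:
      current = {"imaging_resolution": "high-resolution machine"}
      still_relevant = finding.context["imaging_resolution"] == current["imaging_resolution"]
      print(still_relevant)  # False -> the "markers" may be literal white noise
      ```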

    15. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”

      Example of where a global system for inference on healthcare data fails due to a lack of data provenance.

    1. these systems exist in isolation, cut off from the other systems of the city. The different systems overlap one another, and they overlap many other systems besides. The units, the physical places recognized as play places, must do the same. In a natural city this is what happens. Play takes place in a thousand places—it fills the interstices of adult life. As they play, children become full of their surroundings. How can children become filled with their surroundings in a fenced enclosure! They cannot.

      Tree thinking leads to treating recreation as a separate concept, for example by designing a separate area for children's play.

      This ignores the living reality that play crosses boundaries, changes contexts, and is a mechanism through which children acquaint themselves with the world.

      Putting play inside a designated area goes against the spirit of play.

    2. It must be emphasized, lest the orderly mind shrink in horror from anything that is not clearly articulated and categorized in tree form, that the idea of overlap, ambiguity, multiplicity of aspect and the semilattice are not less orderly than the rigid tree, but more so. They represent a thicker, tougher, more subtle and more complex view of structure.

      Semilattices are not less ordered than a tree. They simply provide more degrees of order than a tree does.

    3. In a traditional society, if we ask a man to name his best friends and then ask each of these in turn to name their best friends, they will all name each other so that they form a closed group. A village is made up of a number of separate closed groups of this kind. But today's social structure is utterly different. If we ask a man to name his friends and then ask them in turn to name their friends, they will all name different people, very likely unknown to the first person; these people would again name others, and so on outwards. There are virtually no closed groups of people in modern society. The reality of today's social structure is thick with overlap - the systems of friends and acquaintances form a semilattice, not a tree

      Relationships in modern society, unlike traditional society, form overlapping, open structures

    4. However, in every city there are thousands, even millions, of times as many more systems at work whose physical residue does not appear as a unit in these tree structures. In the worst cases, the units which do appear fail to correspond to any living reality; and the real systems, whose existence actually makes the city live, have been provided with no physical receptacle.

      The problem with the tree model is that (in the worst case) the units that appear do not correspond to any living reality and many of the actual systems do not have a physical receptacle.

    5. Each unit in each tree that I have described, moreover, is the fixed, unchanging residue of some system in the living city (just as a house is the residue of the interactions between the members of a family, their emotions and their belongings; and a freeway is the residue of movement and commercial exchange).

      Residue of human activity

      When a city is conceived of as a tree, each unit represents the fixed residue of some system in the living city. Similarly, a house is the residue of the interactions between members of a family, their emotions, their belongings. A freeway is the residue of movement and commercial exchange.

    6. The enormity of this restriction is difficult to grasp. It is a little as though the members of a family were not free to make friends outside the family, except when the family as a whole made a friendship.

      The limitation of a tree structure is as if you limited members of a family to only make friends when the family as a whole made a friendship.

    7. So that we get a really clear understanding of what this means, and shall better see its implications, let us define a tree once again. Whenever we have a tree structure, it means that within this structure no piece of any unit is ever connected to other units, except through the medium of that unit as a whole.

      Another definition of a tree is that no unit is connected to any other unit except through its parent unit.

    8. Still more important is the fact that the semilattice is potentially a much more complex and subtle structure than a tree. We may see just how much more complex a semilattice can be than a tree in the following fact: a tree based on 20 elements can contain at most 19 further subsets of the 20, while a semilattice based on the same 20 elements can contain more than 1,000,000 different subsets.

      The semilattice is potentially a much more complex structure than the tree: 20 elements admit 2^20 = 1,048,576 possible subsets, and a semilattice may contain a large fraction of them, while a tree on the same 20 elements can contain at most 19 further subsets.

    9. Since this axiom excludes the possibility of overlapping sets, there is no way in which the semilattice axiom can be violated, so that every tree is a trivially simple semilattice.

      Every tree is also a (simple) semilattice.

    10. The tree axiom states: A collection of sets forms a tree if and only if, for any two sets that belong to the collection either one is wholly contained in the other, or else they are wholly disjoint.

      The tree axiom

    11. The semilattice axiom goes like this: A collection of sets forms a semilattice if and only if, when two overlapping sets belong to the collection, the set of elements common to both also belongs to the collection.

      The semilattice axiom

      A collection of sets forms a semilattice if and only if, when two overlapping sets belong to the collection, the set of elements common to both also belongs to the collection.
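
      Both axioms are simple enough to state as executable checks. A minimal sketch over collections of frozensets (the example collection is hypothetical):

      ```python
      def is_semilattice(collection):
          # Whenever two members overlap, their intersection must also
          # belong to the collection.
          return all(
              not (a & b) or (a & b) in collection
              for a in collection for b in collection if a != b
          )

      def is_tree(collection):
          # Any two members are either nested or wholly disjoint.
          return all(
              a <= b or b <= a or not (a & b)
              for a in collection for b in collection if a != b
          )

      city = {frozenset("abc"), frozenset("bcd"), frozenset("bc")}
      print(is_tree(city))         # False: "abc" and "bcd" overlap partially
      print(is_semilattice(city))  # True: their intersection "bc" is included
      ```

      The checks also make Alexander's remark concrete: any collection passing is_tree passes is_semilattice, because nested or disjoint sets never produce a missing intersection.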

    12. As we see from these two representations, the choice of subsets alone endows the collection of subsets as a whole with an overall structure. This is the structure which we are concerned with here.

      When we draw a collection of subsets we can see that the choice of subsets endows the collection with a structure.

    13. Now, a collection of subsets which goes to make up such a picture is not merely an amorphous collection. Automatically, merely because relationships are established among the subsets once the subsets are chosen, the collection has a definite structure.

      A collection of subsets, seen as units, conveys a structure through the relationships between them.

      A collection of subsets, as seen by the viewer of a city, does not constitute an amorphous collection. By virtue of the relationships between the subsets, the collection has a definite structure.

    14. Of the many, many fixed concrete subsets of the city which are the receptacles for its systems and can therefore be thought of as significant physical units, we usually single out a few for special consideration. In fact, I claim that whatever picture of the city someone has is defined precisely by the subsets he sees as units.

      We think of cities as distinguished by the subsets that we see as units.

    15. When the elements of a set belong together because they co-operate or work together somehow, we call the set of elements a system.

      Definition of a System

      When a given set of elements co-operate or work together we call it a system.

    1. As the complexity of the topology underlying a hypermedia system increases, users have more ways to move from one information node to another, and thus can potentially find shorter paths to desired information. This very richness quickly leads to the problem of users becoming “lost in hyperspace,” reported as early as the ZOG work

      The Lost in Hyperspace Problem

      In more complex hypermedia there are more ways to navigate from one information node to another. This leads to the problem of a user becoming "lost in hyperspace".

    2. Hypermedia is a set of nodes of information (the “hyperbase”) and a mechanism for moving among them.

      Hypermedia consists of a set of nodes of information (hyperbase) and a mechanism for navigating between them.

      A book is the degenerate case where the nodes are the paragraphs and the topology is a linear chain.
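
      A sketch of this definition as data (the shapes are illustrative, not from the paper):

      ```ts
      // A hyperbase: nodes of information plus a mechanism for moving
      // among them, here a simple adjacency map.
      interface Hypermedia {
        nodes: Map<string, string>;   // node id -> content
        links: Map<string, string[]>; // node id -> ids reachable from it
      }

      // A book is the degenerate case: paragraphs as nodes, links forming
      // a linear chain.
      function bookFrom(paragraphs: string[]): Hypermedia {
        return {
          nodes: new Map(paragraphs.map((p, i): [string, string] => [`p${i}`, p])),
          links: new Map(
            paragraphs.map((_, i): [string, string[]] => [
              `p${i}`,
              i + 1 < paragraphs.length ? [`p${i + 1}`] : [],
            ])
          ),
        };
      }
      ```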

    1. Overview diagrams are one of the best tools for orientation and navigation in hypermedia documents [17]. By presenting a map of the underlying information space, they allow the users to see where they are, what other information is available and how to access the other information. However, for any real-world hypermedia system with many nodes and links, the overview diagrams represent large complex network structures. They are generally shown as 2D or 3D graphs and comprehending such large complex graphs is extremely difficult. The layout of graphs is also a very difficult problem [1].

      Overview diagrams are one of the best tools for orientation and navigation in hypermedia documents.

      For real-world hypermedia documents with many nodes, an overview diagram becomes cluttered and unusable.

    1. Treemaps are a visualization method for hierarchies based on enclosure rather than connection [JS91]. Treemaps make it easy to spot outliers (for example, the few large files that are using up most of the space on a disk) as opposed to parent-child structure.

      Treemaps visualize enclosure rather than connection. This makes them good for spotting outliers (e.g. large files on a disk) but not for understanding parent-child relationships.
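
      A toy "slice and dice" layout to make the enclosure idea concrete: each child gets a slice of its parent's rectangle proportional to its size, alternating split direction by depth. This is my own minimal version, not the algorithm from the cited paper:

      ```ts
      interface Tree { name: string; size: number; children?: Tree[] }
      interface Rect { x: number; y: number; w: number; h: number; name: string }

      function layout(t: Tree, r: Rect, depth = 0, out: Rect[] = []): Rect[] {
        out.push({ ...r, name: t.name });
        if (!t.children || t.children.length === 0) return out;
        const total = t.children.reduce((s, c) => s + c.size, 0);
        let offset = 0;
        for (const c of t.children) {
          const f = c.size / total;
          const slice = depth % 2 === 0
            ? { x: r.x + offset * r.w, y: r.y, w: f * r.w, h: r.h, name: c.name }
            : { x: r.x, y: r.y + offset * r.h, w: r.w, h: f * r.h, name: c.name };
          layout(c, slice, depth + 1, out);
          offset += f;
        }
        return out;
      }

      // layout(disk, { x: 0, y: 0, w: 1, h: 1, name: disk.name }) returns one
      // rectangle per node: a few large files dominate the area at a glance
      // (easy outlier spotting), while nesting depth is much harder to read.
      ```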

    1. Folding This is the one function whose name is confusing because many products use the term for what we called “collapsing” above. For this article, collapsing is the process of making whole headers and paragraphs invisible, tucking them up under a “parent.” Folding is a different kind of tucking under; it works on paragraphs and blocks to reduce them to a single line, hiding the rest. A simple case of folding might involve a long paragraph that is reduced to just the first line—plus some indication that it is folded; this shows that a paragraph is there and something about its content without showing the whole thing. Folding is most common in single-pane outline displays, and a common use is to fold everything so that every header and paragraph is reduced to a single line. This can show the overall structure of a huge document, including paragraph leaves in a single view. You can use folding and collapsing independently. At one time, folding was one of the basics of text editors, but it has faded somewhat. Now only about half of the full-featured editors employ folding. One of the most interesting of these is jEdit. It has a very strong implementation of folding, so strong in fact it subsumes outlining. Though intended as a full editor, it can easily be used as an outliner front end to TeX-based systems. jEdit is shown in the example screenshot in both modes. The view on the right shows an outline folded like MORE and NoteBook do it, where the folds correspond to the outline structure. But see on the left we have shifted to “explicit folding” mode where blocks are marked with triple brackets. Then these entire blocks can be folded independent of the outline. Alas, folding is one area where the Mac is weak, but NoteBook has an implementation that is handy. It is like MORE’s was, and is bound to the outline structure, meaning you can only fold headers, not arbitrary blocks. But it has a nice touch: just entering a folded header temporarily expands it.

      Folding is the affordance of reducing a block of text (e.g. a paragraph) to a single line.

      This is different from collapsing, which hides nested subordinate elements under a parent element.

    1. The only piece of note taking software on the market that currently supports this feature (that I’m aware of) is Microsoft OneNote.

      OneNote supported on-the-fly interlinking with Double Square Bracket Linking

    2. Cunningham first developed the ability to automatically create internal links (read: new notes) when typing text in CamelCase. This meant you could easily be typing a sentence while describing a piece of information and simply type a word (or series of words) in CamelCase which would create a link to another piece of information (even if its page hadn’t already been created).This was quickly superseded by the double square bracket links most wiki’s use today to achieve the same results, and its the staple creation method in both wiki’s and other premier information systems today.

      History of wiki-style linking, or [[Double Square Bracket Linking]]
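
      A sketch of the two conventions the quote describes (the regexes are my own approximation; real wiki engines differ in their exact rules):

      ```ts
      const CAMEL_CASE_LINK = /\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b/g; // e.g. WikiWord
      const BRACKET_LINK = /\[\[([^\[\]]+)\]\]/g;                 // e.g. [[My Page]]

      function extractLinks(text: string): string[] {
        const bracketed = [...text.matchAll(BRACKET_LINK)].map((m) => m[1]);
        // Strip bracketed spans first so their contents aren't double-counted.
        const camelCased =
          text.replace(BRACKET_LINK, "").match(CAMEL_CASE_LINK) ?? [];
        return [...bracketed, ...camelCased];
      }

      // extractLinks("See [[Double Square Bracket Linking]] and CamelCase.")
      // -> ["Double Square Bracket Linking", "CamelCase"]
      ```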

    3. Evernote had long been the gold standard of note taking, flexible, functional and best of all affordable. While its user interface was a little odd at times, the features were excellent, but they made the simple mistake of not enabling wiki style internal links. Instead, they required a user to copy a note link from one note and paste it into another.

      Evernote made the mistake of not allowing on-the-fly wiki-style internal linking.

    4. It needs wiki-like superpowersIf there is one feature that excels above all others in information software of the past two decades that deserves its place in the note taking pantheon, its the humble double bracketed internal link.We all recognise power to store and retrieve information at will, but when you combine this power with the ability to successfully create new knowledge trees from existing documents, to follow thoughts in a ‘stream of consciousness’ non-linear fashion then individual notes transform from multiple static word-silos into a living information system system.Sadly, this is the one major feature that is always neglected, or is piecemeal at best… and one time note taking king Evernote is to blame.

      Tim Kling posits that one of the most important features for a note taking app to have (which most lack at the time of writing) is the ability to link to other notes with the wiki-standard double bracket command.

    5. The hardest part for anyone remotely interested in a solution among this immense array of software is that each and every note taking app developer to date has decided to reinvent the wheel every time they’ve turned on their compiler. It gets even worse once you open the door on purpose-specific note taking applications.

      There seems to be a tendency among developers of note taking apps to reinvent the wheel.

    1. Among its many other features, Ecco Pro installed an icon (the "Shooter") into other programs so that you can add text highlighted in the other program to your Ecco Pro outline. And better yet, the information stored in Ecco Pro could be synchronized with the then nearly ubiquitous PalmPilot hardware PIMs.

      Ecco Pro had a clipper tool (the "Shooter") which allowed you to add highlighted text from other programs to your outline.

      It also synced with the PalmPilot.

    2. The demise of Ecco Pro was blamed by many (including the publishers of Ecco Pro themselves) on Microsoft's decision to bundle Outlook with Office at no extra charge. And while that was undoubtedly part of the problem, Ecco Pro also failed by marketing itself as merely a fancy PIM to lawyers and others then lacking technological sophistication sufficient to permit them to appreciate that the value and functionality of the product went so far beyond that of supposedly "free" Outlook that the two might as well have originated on different planets.

      Ecco Pro's demise was attributed to Microsoft's decision to bundle Outlook with Office at no extra charge.

      This, even though the two products could hardly have been more different.

    1. Guard fields proved invaluable for breaking cycles[5], a central anxiety of early hypertext [18][11].

      Storyspace used a scripting language to create what they called "Guard Fields" — boolean logic that makes a link clickable or not based on the pages the reader has already visited up to that point.

      What is interesting is that guard fields proved effective at breaking cycles (a central anxiety of early hypertext).
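
      A hypothetical sketch of the idea — a guard field as a predicate over the reader's history (Storyspace's real guard syntax differs; this only illustrates the mechanism):

      ```ts
      type Guard = (visited: Set<string>) => boolean;

      interface Link {
        from: string;
        to: string;
        guard: Guard;
      }

      // This link only opens once "Prologue" has been visited, and it
      // refuses to re-enter a node the reader has already seen — one way
      // a guard can break a cycle.
      const link: Link = {
        from: "Garden",
        to: "Maze",
        guard: (visited) => visited.has("Prologue") && !visited.has("Maze"),
      };

      const followable = (l: Link, visited: Set<string>) => l.guard(visited);

      followable(link, new Set(["Prologue"]));         // true
      followable(link, new Set(["Prologue", "Maze"])); // false: cycle broken
      ```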

    1. Jeff Sonnabend in the Ecco Yahoo forum: "I remember first trying to learn Ecco 1.0. It was tough until the proverbial light went on. Then it all made sense. For me, it was simply understanding that Ecco is just a data base. So called folders are nothing more than fields in a flat-file table (like a spreadsheet). The rest is interface and implementation of various users' work or management systems in Ecco. That learning curve, to me, is the primary Ecco "weakness", at least as far as new users go."

      There was a steep learning curve involved with using ECCO Pro. Reminds me of Roam, which also has a steep learning curve, but then it feels like it's worth it.

    2. Chris Thompson: "If your goals in using a PIM are mostly calendaring, todos, and a phonebook, then Maximizer, Outlook, and Time and Chaos all do a reasonable job. On an enterprise-level, Lotus Notes would be another good choice. If you're more interested in keeping track of notes or research, Lotus Agenda, Zoot, or InfoHandler are better choices. For keeping track of miscellaneous files, InfoSelect is pretty good. On the other hand, if you want to do a little of everything, and do it well, Ecco really has no rivals."

      ECCO Pro was loved for its ability to do a lot of different things versus being good at one narrow thing. Reminds me of Roam Research.

    1. Those were a few of the problems that could have brought down Ecco Pro. In the end, however, it was one massive problem: There was a company down the street that was developing a product that would make Ecco Pro obsolete. Microsoft would release Office ’97 on November 19th, 1996. Among the many components of the new suite of products was a personal information manager called Outlook. Eight months later, NetManage would release its last update of Ecco Pro, version 4.01. Development of the software effectively ceased after that.

      Price claims ECCO Pro was terminated because it couldn't compete with Microsoft Outlook, which shipped as part of Office ’97 (released in November 1996).

    2. One fundamental issue with Ecco Pro I gleaned from the many phone calls I answered from customers was that people didn’t really know who the product was for. Sales people wanted to use it as a contact manager. Small business owners wanted to use it as a database. Home users wanted to use it to make to-do lists and track appointments. The problem was that it tried to be all those things at once. As a consequence, it did none of them very well. The product was bloated with features and extremely difficult to use. Even seasoned users did not understand its advanced functionality very well. After a year and a half as a phone rep, I still couldn’t offer a good explanation as to who the product was for.

      According to Price, ECCO Pro's problem was that it tried to be everything at once — contact manager, database, to-do list — so even its users couldn't figure out who the product was for.

  7. Nov 2020
    1. At the same time, use of the web is now ubiquitous, and “Google” is a verb. With the advent of search engines, users have learned to find data by describing what they want (e.g., various characteristics of a photo) instead of where it lives (i.e., the full pathname of the photo in the filesystem). This can be seen in the popularity of search as a modern desktop paradigm in such products as Windows Desktop Search (WDS) [26]; MacOS X Spotlight [21], which fully integrates search with the Macintosh journaled HFS+ file system [7]; and the various desktop search engines for Linux [4, 27]. Indeed, MacOS X in particular goes one step further and exports APIs to developers allowing applications to directly access the meta-data store and content index.

      With the advent of search engines, search as a paradigm for retrieving files has become ubiquitous.

    2. Interaction with stable storage in the modern world is generally mediated by systems that fall roughly into one of two categories: a filesystem or a database. Databases assume as much as they can about the structure of the data they store. The type of any given piece of data is known (e.g., an integer, an identifier, text, etc.), and the relationships between data are well defined. The database is the all-knowing and exclusive arbiter of access to data. Unfortunately, if the user of the data wants more direct control over the data, a database is ill-suited. At the same time, it is unwieldy to interact directly with stable storage, so something light-weight in between a database and raw storage is needed. Filesystems have traditionally played this role. They present a simple container abstraction for data (a file) that is opaque to the system, and they allow a simple organizational structure for those containers (a hierarchical directory structure).

      Databases and filesystems are both systems which mediate the interaction between user and stable storage.

      The implicit aim of a database is to capture as much as it can about the structure of the data it stores. The database is the all-knowing and exclusive arbiter of access to data.

      If a user wants direct access to the data, a database isn't the right choice, but interacting directly with stable storage is too involved.

      A filesystem is a lightweight (container) abstraction in between a database and raw storage. Files are opaque to the system (it does not interpret their contents), and the filesystem allows for a simple, hierarchical organizational structure of directories.

    1. I've spent the last 3.5 years building a platform for "information applications". The key observation which prompted this was that hierarchical file systems didn't work well for organising information within an organisation.However, hierarchy itself is still incredibly valuable. People think in terms of hierarchies - it's just that they think in terms of multiple hierarchies and an item will almost always belong in more than one place in those hierarchies.If you allow users to describe items in the way which makes sense to them, and then search and browse by any of the terms they've used, then you've eliminated almost all the frustrations of a file system. In my experience of working with people building complex information applications, you need: * deep hierarchy for classifying things * shallow hierarchy for noting relationships (eg "parent company") * multi-values for every single field * controlled values (in our case by linking to other items wherever possible) Unfortunately, none of this stuff is done well by existing database systems. Which was annoying, because I had to write an object store.

      Impressed by this comment. It foreshadows what Roam would become:

      • People think in terms of items belonging to multiple hierarchies
      • If you allow users to describe items in a way that makes sense to them and allow them to search and browse by any of the terms they've used, you've solved many of the problems of existing file systems

      What you need to build a complex information system is (see the sketch after this list):

      • Deep hierarchies for classifying things (overlapping hierarchies should be possible)
      • Shallow hierarchies for noting relationships (Roam does this with a flat structure)
      • Multi-values for every single field
      • Controlled values (e.g. linking to other items when possible)
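
      A minimal sketch of such an item model (names and shapes are mine, not the commenter's):

      ```ts
      interface Item {
        id: string;
        title: string;
        // Deep hierarchies: an item can sit under several taxonomy paths.
        classifications: string[][]; // e.g. [["Books", "Survival"], ["Authors"]]
        // Shallow hierarchy: typed relationships to other items, with values
        // controlled by linking rather than free text.
        relations: { kind: string; target: string }[]; // e.g. { kind: "parent company", target: "acme" }
      }

      // Browsing "by any of the terms they've used" becomes a search over
      // classifications and relations rather than a walk down a single tree.
      function findByTerm(items: Item[], term: string): Item[] {
        return items.filter(
          (it) =>
            it.classifications.some((path) => path.includes(term)) ||
            it.relations.some((r) => r.kind === term || r.target === term)
        );
      }
      ```
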
    1. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing.

      What Bush called "associative indexing" is the key idea behind the memex. Any item can immediately select others to which it has been previously linked.

    2. Thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space.

      Once two items are linked, tapping a button would take you from one to the other.

    3. It is exactly as though the physical items had been gathered together from widely separated sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails.

      Although Bush envisioned associative trails as navigable sequences of original content and notes interspersed, what seems to make more sense when viewed through today's technology is a rich document of notes in which the relevant pieces from external documents are transcluded.

    4. And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

      I find this idea of saved associative trails very interesting. In Roam the equivalent would be that you can save a sequence of opened Pages.
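
      As a sketch, a saved trail might be nothing more than a shareable, ordered sequence of item ids plus side comments (shapes invented; the Turkish-bow trail is Bush's own example):

      ```ts
      interface TrailStop { itemId: string; note?: string }
      interface Trail { name: string; stops: TrailStop[] }

      const turkishBow: Trail = {
        name: "Resistance to innovation: the Turkish bow",
        stops: [
          { itemId: "encyclopedia/longbow" },
          { itemId: "encyclopedia/turkish-bow", note: "outperformed the longbow" },
          { itemId: "history/crecy", note: "side excursion" },
        ],
      };

      // "Photographing the whole trail out" for a friend is then just
      // serialization: JSON.stringify(turkishBow).
      ```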

    5. Selection by association, rather than indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage.

      It should be possible to surpass the mind in the permanence and clarity of what it stores. It might be more difficult to surpass it in the speed and flexibility with which it "follows an associative trail".

    6. The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.

      The human mind doesn't work according to the file-cabinet metaphor — it operates by association.

      "With one items in its gras, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain."

    7. The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path.

      Bush emphasises the importance of retrieval in the storage of information. He talks about technical limitations, but in this paragraph he stresses that retrieval is made more difficult by the "artificiality of systems of indexing", in other words, our default file-cabinet metaphor for storing information.

      Information in such a hierarchical architecture is found by tracing down from subclass to subclass. Moreover, the information we're looking for can be in only one place at a time (unless we introduce duplicates).

      Having found our item of interest, we need to emerge from the system and re-enter on a new path for the next search.

    8. So much for the manipulation of ideas and their insertion into the record. Thus far we seem to be worse off than before—for we can enormously extend the record; yet even in its present bulk we can hardly consult it. This is a much larger matter than merely the extraction of data for the purposes of scientific research; it involves the entire process by which man profits by his inheritance of acquired knowledge. The prime action of use is selection, and here we are halting indeed. There may be millions of fine thoughts, and the account of the experience on which they are based, all encased within stone walls of acceptable architectural form; but if the scholar can get at only one a week by diligent search, his syntheses are not likely to keep up with the current scene.

      Retrieval is the key activity we're interested in. Storage only matters in as much as we can retrieve effectively. At the time of writing (1945) large amounts of information could be stored (extend the record), but consulting that record was still difficult.

    9. There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.

      As scientific progress extends into increased specializations, efforts at integrating across disciplines are increasingly superficial.

    10. A record if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted.

      Bush emphasises the need for notes to not only be stored, but also to be queried (consulted).

    11. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.

      The rate at which we're generating new knowledge is increasing like never before (and this was written in 1945), but our ability to deal with that information has remained largely unimproved.

    12. Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling. Those who conscientiously attempt to keep abreast of current thought, even in restricted fields, by close and continuous reading might well shy away from an examination calculated to show how much of the previous month's efforts could be produced on call. Mendel's concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

      Specialization, although necessary, has rendered it impossible to stay up to date with the advances of a field.

    1. Semantically Annotated Content Opens Up Cost-Effective Opportunities: Search beyond keywords; Content aggregation beyond manual sifting through; Relationships discovery beyond human research.

      Benefits of semantic annotation

      1. Search beyond keywords
      2. Content aggregation
      3. Discovering relationships
    1. Knowledge graphs combine characteristics of several data management paradigms: Database, because the data can be explored via structured queries; Graph, because they can be analyzed as any other network data structure; Knowledge base, because they bear formal semantics, which can be used to interpret the data and infer new facts.

      Characteristics / benefits of a knowledge graph

    1. The ontology data model can be applied to a set of individual facts to create a knowledge graph – a collection of entities, where the types and the relationships between them are expressed by nodes and edges between these nodes. By describing the structure of the knowledge in a domain, the ontology sets the stage for the knowledge graph to capture the data in it.

      How ontologies and knowledge graphs relate.
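
      A toy illustration of that relationship (all identifiers invented for illustration):

      ```ts
      type Triple = { subject: string; predicate: string; object: string };

      // The ontology describes the structure of knowledge in the domain:
      // which relationship types exist and what they connect...
      const ontology: Record<string, { from: string; to: string }> = {
        authoredBy: { from: "Book", to: "Person" },
        taggedWith: { from: "Book", to: "Topic" },
      };

      // ...and the knowledge graph captures the individual facts as nodes
      // (entities) and edges (relationships).
      const graph: Triple[] = [
        { subject: "Emergency", predicate: "authoredBy", object: "NeilStrauss" },
        { subject: "Emergency", predicate: "taggedWith", object: "Prepping" },
      ];

      // Structured query ("search beyond keywords"): everything tagged Prepping.
      const prepping = graph
        .filter((t) => t.predicate === "taggedWith" && t.object === "Prepping")
        .map((t) => t.subject); // -> ["Emergency"]
      ```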

    1. You need to have a habit of tagging something as a to-do to synthesize the idea further, and then periodically go back and review those and write them in a more crisp language, or build up your evergreen notes so that you have this library of thoughts that you are able to get that compound interest on.

      You need a system inside Roam which helps you review notes that are not yet refined.

    2. We encourage people to use the daily notes and to brainstorm and brain dump, and just write all the things they’re thinking. I think that the first thing that we’re interested in is, how do you build systems so that it’s easy for you to take those and gradually refine them?

      Conor is asking himself: how do you get people to take (daily) notes, and how do you build systems that make it easy to gradually refine them?

    3. I think that you need to be able to get compound interest on your thoughts. Good ideas come from when ideas have sex: the intersection of different things that you’ve been reading or different things you’ve been seeing. So you can have better ideas faster if you are actually reviewing the old things and you are building up. You’re not throwing away work.

      Good ideas come from the intersection of ideas ("when ideas have sex"), so it pays to review old notes and build on them rather than throw away work.

    4. We’ve always wanted to build a layer on top of the web where every person can have their mental model of how the whole world works, and they can start to share ideas across everything.

      Conor's idea of Roam was a layer on top of the web where everyone can have their mental model of how the world works.

    5. I was originally interested in figuring out how you could figure out what’s actually true online.

      Conor was trying to figure out how to find out what's true online with Roam.

    1. With most mind mapping software something at the bottom of one branch cannot be elegantly linked to something that is categorized in a distant branch unless your mind map is really small. So “mind maps” essentially have the same linear limitation that your computer filing system does.

      Mind mapping runs into the same problem because it is also a hierarchy.

    2. Almost all interfaces today, with the exception of TheBrain visual user interface, are limited to organizing information into hierarchies, where a piece of information can only be categorized into one place. For simple applications this is fine, but for users engaging in more complex business processes, it is simply inadequate. A document will have a variety of different issues or people associated with it – with hierarchies one cannot show all these relationships without multiple copies of the information.

      Shelley Hayduk also identifies the issue that most information management software uses a file cabinet metaphor (i.e. hierarchy). This has the limitation that a piece of information can only be categorized in one place. For more complex things, this is inadequate.

    1. One major advantage of Lotus Notes is that it allows all the major information organization techniques to be used in one information space: outlines, graphics, hypertext links, relational databases, free (rich) text, expanding/collapsing reports, collapsing rich text sections, tabbed notebooks (like wizards) and tables. In other words, Lotus Notes is a hodgepodge of every information organization technique Lotus could think of, all thrown into one quirky product. As such, it is phenomenally satisfying and phenomenally frustrating at the same time.

      John Redmood claims that the advantage of Lotus Notes was that it brought together a wide range of information organization techniques: outlines, graphics, hypertext links, relational database, expanding/collapsing reports, collapsing rich text sections, tabbed notebooks and tables.

    2. Three panes: A three-pane outliner uses one pane for the table of contents, one pane for items in that "section" or "chapter", and a final pane for the currently highlighted document. I use three-pane outliners for shared projects, where there are many documents in a category that should be isolated from other items.

      A three pane interface introduces a third level of hierarchy that can be used in the organization. It can separate, for instance, the high level chapters in the first pane, the sections of those chapters in the second, and the content in the third.

    3. Two panes: When designing user interfaces for web-based software programs, or for designing web sites, I prefer two pane outliners. The category pane mimics a site map, or a navigation tree. For example, see my web interface for MailEngine: the left hand side lists all the possible interface pages, and clicking on a category page brings it up. This is the way two-pane outliners work, and so they work well for this kind of project. Steve Cohen writes: Re: Maple, Jot+, etc. tree-based (= 2-pane) PIMs. Yes, they're better for info storage & organizing, rather than composing/writing.

      In a two pane interface the left pane mimics a sitemap or navigation tree. The left hand side lists all possible parents to navigate to, and when clicked, the main pane will bring up the child content.

      This separates the content work from the organization work.

    4. One pane: With one pane outliners, the content is displayed immediately below the category. A printed legal document is an example of a one-pane document. A web site with a table-of-contents "frame" on the left hand side is similar to a two-pane outline. A Usenet news group is similar to a three pane outline. When writing documents, or organizing ideas for a project (such as a speech, or for software design) I much prefer one pane outlines. I find they are more conducive to collapsing ideas, because you can mix text with categories, rather than radically splitting the organizational technique from the content (as the two and three pane outlines do).

      In one pane outliners the text is displayed under its parent.

      This can be more conducive to writing because you're not splitting work on the organization from work on the content. In writing this separation is fuzzy anyway.

    5. With Lotus Notes, I can combine a hierarchically organized outline view of the documents, with full text searching, hypertext links and traditiona l relational database like reports (for example, a sorted view of items to do).

      What Lotus Notes allowed you to do is to combine a hierarchically organized overview, achieved through an outliner, with search, hyperlinks and relational-database-like reports. Lotus Notes also allowed you to organize different document formats (Word, emails, etc.)

    1. An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity [9]. Ontological representations allow semantic modeling of knowledge, and are therefore commonly used as knowledge bases in artificial intelligence (AI) applications, for example, in the context of knowledge-based systems. Application of an ontology as knowledge base facilitates validation of semantic relationships and derivation of conclusions from known facts for inference (i.e., reasoning) [9]

      Definition of an ontology

    2. A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge.

      Definition of a Knowledge Graph
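
      A minimal sketch of that definition: facts, plus one rule applied by a tiny forward-chaining "reasoner" (vocabulary invented for illustration):

      ```ts
      type Fact = [string, string, string]; // [subject, predicate, object]

      const facts: Fact[] = [
        ["alice", "worksFor", "acme"],
        ["acme", "basedIn", "berlin"],
      ];

      // Rule: worksFor(x, c) && basedIn(c, city) => locatedIn(x, city)
      function infer(input: Fact[]): Fact[] {
        const derived: Fact[] = [];
        for (const [x, p1, c] of input) {
          if (p1 !== "worksFor") continue;
          for (const [c2, p2, city] of input) {
            if (p2 === "basedIn" && c2 === c) derived.push([x, "locatedIn", city]);
          }
        }
        return derived;
      }

      // infer(facts) -> [["alice", "locatedIn", "berlin"]]:
      // new knowledge derived from, not stated in, the fact base.
      ```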

    1. If every site that linked to yours was visible on your page, and you had no control over who could and couldn't link to you, it is not hard to imagine the Trollish implications...

      This is an important point. This is why the internet doesn't have contextual backlinks.

    1. Generally it takes a week or two after a person has been infected before they start to produce IgG, and with covid, you’re generally only infectious for about a week after you start to have symptoms, so antibody tests are not designed to find active infections. Instead the purpose is to see if you have had an infection in the past.

      It takes a week or two for an infected person to start producing IgG, which is the type of antibody that typically gets tested for.

      [[Z: Antibody tests are only useful to see if you had an infection in the past]]

    2. In most clinical settings (including the one I work in), all the doctor is provided with is a positive or negative result. No mention is made of the number of cycles used to produce the positive result.

      [[Z: The number of PCR cycles has not been standardized, and is usually not even mentioned to the doctor]]

    3. If you get a positive PCR test and you want to be sure that what you’re finding is a true positive, then you have to perform a viral culture. What this means is that you take the sample, add it to respiratory cells in a petri dish, and see if you can get those cells to start producing new virus particles. If they do, then you know you have a true positive result. For this reason, viral culture is considered the “gold standard” method for diagnosis of viral infections. However, this method is rarely used in clinical practice, which means that in reality, a diagnosis is often made based entirely on the PCR test.

      [[Z: A positive PCR should be followed by a viral culture test to see if you're dealing with a live infection]]

      After a positive PCR test, you don't know if the virus is alive or not. To find out, you can add the sample to respiratory cells (in the case of a respiratory virus) and see if they start producing virus particles.

      [[Z: Viral culture tests are rarely used in clinical practice]]

      Positive diagnoses of COVID-19 are made based on PCR only.

    4. One thing that’s important to understand at this point is that PCR is only detecting sequences of the viral genome, it is not able to detect whole viral particles, so it is not able to tell you whether what you are finding is live virus, or just non-infectious fragments of viral genome.

      PCR only tells you if you're detecting sequences of the viral genome. It doesn't tell you that what you're finding is live virus or not.

    1. Finally, you gain the ability to reuse previously built packets for new projects. Maybe some research you did for an online marketing campaign becomes useful for a new campaign. Or some sketches that didn’t quite make it into an old design give you inspiration for a new one. Or some book notes you wrote down casually turn out to be very useful for an unforeseen challenge a year later.

      The Intermediate Packet approach allows you to reuse previously built packets for new projects

      By incorporating existing packets in new projects, you gain the ability to deliver new projects much faster.

    2. Fourth, big projects become less intimidating. Big, ambitious projects feel risky, because all the time you spend on it will feel like a waste if you don’t succeed. But if your only goal is to create an intermediate packet and show it to someone — good notes on a book, a Pinterest board of design inspirations, just one module of code — then you can trick yourself into getting started. And even if that particular Big Project doesn’t pan out, you’ll still have the value of the packets at your disposal!

      The Intermediate Packet approach makes big projects less intimidating.

      Big projects feel risky because the time you spend on them feels like a waste if you don't succeed. Intermediate Packets let you finish smaller chunks, and you can use this to trick yourself into getting started on bigger things.

    3. By always having a range of packets ready to work on, each one pre-prepared to work on at any time, you can be productive under any circumstances – waiting in the airport before a flight, the doctor’s waiting room, 15 minutes in between meetings.

      If you have a range of packet sizes available to work on, you can use any time block size to deliver value.

    4. Third, you can create value in any span of time. If we see our work as creating these intermediate packets, we can find ways to create value in any span of time, no matter how short. Productivity becomes a game of matching each available block of time (or state of mind, or mood, or energy level) with a corresponding packet that is perfectly suited to it.

      The Intermediate Packet approach ensures you are delivering value after every iteration, regardless of size

      You no longer need to rely on large blocks of uninterrupted time if you focus on delivering something of value at the end of each block of time.

    5. Second, you have more frequent opportunities to get feedback. Instead of spending weeks hammering away in isolation, only to discover that you made some mistaken assumptions, you can get feedback at each intermediate stage. You become more adaptable and more accountable, because you are performing your work in public.

      Intermediate Packets give you more opportunities to get feedback

    6. The first benefit of working this way is that you become interruption-proof. Because you rarely even attempt to load the entire project into your mind all at once, there’s not much to “unload” if someone interrupts you. It’s much easier to pick up where you left off, because you’re not trying to juggle all the work-in-process in your head.

      The Intermediate Packet approach makes you more resilient to interruptions

      Because you're not loading an entire project in your mind at once, you're not losing as much context when you get interrupted.

    1. Bringing this back to filtering, not only am I saving time and preserving focus by batch processing both the collection and the consumption of new content, I’m time-shifting the curation process to a time better suited for reading, and (most critically) removed from the temptations, stresses, and biopsychosocial hooks that first lured me in.I am always amazed by what happens: no matter how stringent I was in the original collecting, no matter how certain I was that this thing was worthwhile, I regularly eliminate 1/3 of my list before reading. The post that looked SO INTERESTING when compared to that one task I’d been procrastinating on, in retrospect isn’t even something I care about.What I’m essentially doing is creating a buffer. Instead of pushing a new piece of info through from intake to processing to consumption without any scrutiny, I’m creating a pool of options drawn from a longer time period, which allows me to make decisions from a higher perspective, where those decisions are much better aligned with what truly matters to me.

      Using read-it later apps helps you separate collection from filtering.

      By time-shifting the filtering process to a time better suited for reading, and removed from temptations, you will end up dropping a substantial share of the content you saved (the author regularly eliminates 1/3 of his list before reading).

      This allows you to "make decisions from a higher perspective"

    1. There are different schools of thought in the realm of productivity.

      The energy school focuses on optimizing your energy levels. The focus school is all about getting into and staying in flow. The efficiency school is obsessed with the logistics of work.

      Tiago positions his philosophy as the value school: making sure you deliver value after every block of work by shipping what Tiago calls Intermediate Packets.

      He draws parallels to Just In Time production from Toyota and Continuous Integration in software development.

      The Intermediate Packet approach is continuous integration for knowledge work.

    1. Alexander proposes homes and offices be designed and built by their eventual occupants. These people, he reasons, know best their requirements for a particular structure. We agree, and make the same argument for computer programs. Computer users should write their own programs. Kent Beck & Ward Cunningham, 1987 [7]

      Users should write their own programs because they know their requirements best.

      [7]: Beck, K. and Cunningham, W. Using pattern languages for object-oriented programs. Tektronix, Inc. Technical Report No. CR-87-43 (September 17, 1987), presented at OOPSLA-87 workshop on Specification and Design for Object-Oriented Programming. Available online at http://c2.com/doc/oopsla87.html (accessed 17 September 2009)

    2. Before the publication of the ‘Gang of Four’ book that popularised software patterns [4], Richard Gabriel described Christopher Alexander’s patterns in 1993 as a basis for reusable object‐oriented software in the following way: Habitability is the characteristic of source code that enables programmers, coders, bug-fixers, and people coming to the code later in its life to understand its construction and intentions and to change it comfortably and confidently.

      An interesting concept for how easy a piece of software is to maintain.

    1. Connected to this are Andy Matuschak’s comments about contextual backlinks bootstrapping new concepts before explicit definitions come into play.

      What Joel says here about Contextual Backlinks is that they allow you to "bootstrap" a concept (i.e. start working with it) before explicit definitions come into play (or, as Andy would say, while the node's content is still empty).

    2. Easily updated pages: don’t worry about precisely naming something at first. Let the meaning emerge over time and easily change it (propagating through all references).

      Joel highlights a feature here of Roam and ties it to incremental formalisms.

      In Roam you can update a page name and it propagates across all references.

    3. Cognitive Overhead (aka Cognitive Load): often the task of specifying formalism is extraneous to the primary task, or is just plain annoying to do.

      This is the task that you're required to do when you want to save a note in Evernote or Notion. You need to choose where it goes.

    4. The basic intuition is described well by the Shipman & Marshall paper: users enter information in a mostly informal fashion, and then formalize only later in the task when appropriate formalisms become clear and also (more) immediately useful.

      Incremental formalism

      Users enter information in an informal fashion. They only formalize later when the appropriate formalism becomes clear and/or immediately useful.

    5. It’s important to notice something about these examples of synthesis representations: they go quite a bit further than simply grouping or associating things (though that is an important start). They have some kind of formal semantic structure (otherwise known as formality) that specifies what entities exist, and what kinds of relations exist between the entities. This formal structure isn’t just for show: it’s what enables the kind of synthesis that really powers significant knowledge work! Formal structures unlock powerful forms of reasoning like conceptual combination, analogy, and causal reasoning.

      Formalisms enable synthesis to happen.

    6. I understand synthesis to be fundamentally about creating a new whole out of components (Strike & Posner, 1983).

      A definition for synthesis.

    1. Systems which display backlinks to a node permit a new behavior: you can define a new node extensionally (rather than intensionally) by simply linking to it from many other nodes—even before it has any content.

      Nodes in a knowledge management system can be defined extensionally, rather than intensionally, through their backlinks and their respective context.

    2. This effect requires Contextual backlinks: a simple list of backlinks won’t implicitly define a node very effectively. You need to be able to see the context around the backlink to understand what’s being implied.

      Bi-Directional links, or backlinks, only help define the node being linked to if the context in which the links occur is also provided.
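
      A sketch of what "contextual" means in data terms — the backlink index stores the surrounding block, not just the source page (shapes are mine, not Andy's):

      ```ts
      interface Backlink {
        fromPage: string;
        context: string; // the block in which the link occurred
      }

      const backlinks = new Map<string, Backlink[]>();

      function addLink(fromPage: string, toPage: string, block: string): void {
        const list = backlinks.get(toPage) ?? [];
        list.push({ fromPage, context: block });
        backlinks.set(toPage, list);
      }

      // The "Evergreen notes" page may have no content of its own yet, but
      // its meaning accumulates from the contexts that point at it:
      addLink("Daily note 2020-11-12", "Evergreen notes",
        "Refine [[Evergreen notes]] weekly so ideas compound.");
      addLink("Book notes", "Evergreen notes",
        "Permanent notes resemble [[Evergreen notes]].");

      // backlinks.get("Evergreen notes") now implicitly — extensionally —
      // defines the node.
      ```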

    1. Using Next's special getStaticProps hook and glorious dynamic imports, it's trivial to import a Markdown file and pass its contents into your React components as a prop. This achieves the holy grail I was searching for: the ability to easily mix React and Markdown.

      Colin has perhaps found an alternative to MDX: instead of embedding JS in Markdown files, he imports Markdown content into React components.
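
      A minimal sketch of the approach (the file path and component names are my own; Colin's post uses dynamic imports, which achieve the same build-time inclusion):

      ```tsx
      import fs from "fs";
      import path from "path";
      import type { GetStaticProps } from "next";

      interface Props {
        markdown: string;
      }

      // Runs at build time: read the Markdown file and pass its contents
      // into the page component as a prop.
      export const getStaticProps: GetStaticProps<Props> = async () => {
        const markdown = fs.readFileSync(
          path.join(process.cwd(), "content", "post.md"),
          "utf8"
        );
        return { props: { markdown } };
      };

      export default function Post({ markdown }: Props) {
        // Render with any Markdown component (e.g. react-markdown).
        return <pre>{markdown}</pre>;
      }
      ```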

    1. In such cases it is important to capture the connections radially, as it were, but at the same time also by right away recording back links in the slips that are being linked to. In this working procedure, the content that we take note of is usually also enriched

      By adding a backlink for every link we make, we are also enriching the content we are linking to.

    2. 2. Possibility of linking (Verweisungsmöglichkeiten). Since all papers have fixed numbers, you can add as many references to them as you may want. Central concepts can have many links which show on which other contexts we can find materials relevant for them. Through references, we can, without too much work or paper, solve the problem of multiple storage. Given this technique, it is less important where we place a new note. If there are several possibilities, we can solve the problem as we wish and just record the connection by a link [or reference].

      Since a note has a unique identifier, you can link to it.

      Since we can link to notes, it doesn't matter where we place a note.

    1. The future increasingly looks like one where companies use very specific apps to solve their jobs to be done. And collaboration is right where we work. And that makes sense, of course. Collaboration *should* be where you work.

      Collaboration, increasingly, happens where we work.

    2. As it becomes more clear what are specific functional jobs to be done, we see more specialized apps closely aligned with solving for that specific loop. And increasingly collaboration is built in natively to them. In fact, for many reasons collaboration being natively built into them may be one of the main driving forces behind the venture interest and success in these spaces.

      As it becomes more clear what the functional job to be done is, we see more specialized apps aligned with solving that specific loop. Collaboration is increasingly built natively into them.

    3. To understand this is to understand that there is no distinction between productivity and collaboration. But we’re only now fully appreciating it.

      This is perhaps Kwok's central claim in the article. We used to think of productivity and collaboration as separate things when in reality they are inseparable.

    1. This is why social media services are free to use. The added signaling value is solely captured by the physical products that are being shared.

      Social media offers signalling distribution and amplification. But because the signalling value is captured by the physical products being shared, not by the service itself, social media is free to use.

    2. Fortnite’s monetization model is based on cosmetics: The skin your character wears; the looks of your glider and the tools you use; the way your character dances (emotes) – all of these are signaling amplifiers with different signal messages to uniquely express yourself in the game. And you have to purchase them

      Julian posits that Fortnite's revenue model is also based on signalling. People buy cosmetic upgrades to their character — skins, gliders, tools, emotes.

    3. Luckily, Tinder offers a variety of additional signal amplifiers that help you to stand out. The sole purpose of features like Tinder Boost and Super Likes is to outcompete status rivals by giving you preferential signaling treatment. And guess what – they come with a price tag.

      Julian claims Tinder is monetizing on signal amplifiers like Boost and Super Like.

    4. If membership isn’t scarce, the membership loses its signal message. The same applies to physical products: Apple will never offer a cheap iPhone to compete with low-end Android devices – it would destroy the company’s signal message that the iPhone is a luxury product.

      If a high-end brand comes out with a low-end offering, it is diluting the high-end part of their brand message. Apple will never come out with a low-end version of the iPhone because it would dilute the message of being a premium phone.

    5. Digital products have one crucial disadvantage over atom-based products and services: Intangibility. Apps live on your phone or computer. No one can see them except for you. The signal message of a fitness app is the same as that of a gym membership or athletic wear (strength & fitness display), but the signal is much weaker because you can’t distribute it to anyone.

      One of Julian's central claims is that although the signalling message of software ownership is the same as the ownership of a physical product, because it's intangible, it's much less effective as a signalling tool.

    6. The app that comes closest to a luxury service that I can think of is Superhuman, which charges its users $30 a month for an email client (which you could also get for free by just using Gmail). But there’s a difference to other software products: Superhuman has signal distribution built in. Every time you send an email via Superhuman, your recipient will notice a little “Sent via Superhuman” in your signature.

      Superhuman is the closest thing Julian can think of to a luxury software product. One reason might be that Superhuman has some signalling built in: it adds a little "Sent via Superhuman" to your signature.

    7. Another point of evidence is the lack of luxury software products. People spend absurd amounts of money on jewellery, handbags and cars, but I can’t think of a piece of software with an even remotely similar price tag. Sure, people have tried to sell $999 apps but those never took off.

      Julian Lehr posits that because software purchases are less visible, their signalling power is reduced. This is why, for instance, you don't see any luxury software products: Because you cannot signal you're in on it.

    1. The GUI was initially developed as one of many innovative new research projects at Xerox's Palo Alto Research Center. Silicon Valley being a small place back then, Steve Jobs got himself a tour one day, and just flat out fell in love with their GUI.

      The GUI was first developed at Xerox's Palo Alto Research Center (PARC). Silicon Valley being a small place at the time, people around Steve Jobs kept prodding him to take a tour, which he eventually did. When he first saw the GUI they were working on, he knew it would be the future.

    1. In 1995 Steve Jobs could still remember it exactly. In an interview with Robert X. Cringely for the PBS show “Triumph of the nerds” he said:I had three or four people (at Apple) who kept bugging that I get my rear over to Xerox PARC and see what they are doing. And, so I finally did. I went over there. And they were very kind. They showed me what they are working on. And they showed me really three things. But I was so blinded by the first one that I didn’t even really see the other two. One of the things they showed me was object oriented programming – they showed me that but I didn’t even see that. The other one they showed me was a networked computer system… they had over a hundred Alto computers all networked using email etc., etc., I didn’t even see that. I was so blinded by the first thing they showed me, which was the graphical user interface. I thought it was the best thing I’d ever seen in my life. Now remember it was very flawed. What we saw was incomplete, they’d done a bunch of things wrong. But we didn’t know that at the time but still thought they had the germ of the idea was there and they’d done it very well. And within – you know – ten minutes it was obvious to me that all computers would work like this some day. It was obvious. You could argue about how many years it would take. You could argue about who the winners and losers might be. You could’t argue about the inevitability, it was so obviousSteve Jobs about his visit to Xerox PARC – Clip from Robert Cringley’s TV documentation “Triumph of the Nerds“.

      Steve Jobs, when given a tour of Xerox PARC in 1979, was so struck by the GUI they were developing that he could not even process the other things he was shown (Object Oriented Programming and Networked Computing).

      "And within - you know - ten minutes it was obvious to me that all computers would work like this some day. It was obvious. You could argue about how many years it would take. You could argue about who the winners or losers might be. You couldn't argue about the inevitability, it was obvious."

      This reminds me of the moment Roam first clicked for me.

    1. This whole system is much, much better than having to manually update some CRM like in Airtable. Since you're naturally tagging people as you interact with them, you can create an easy record of your relationship with them and compile any useful notes on them as you go.

      If you use Roam as a CRM, in your daily note you can simply tag a person you just had a meeting with and log some notes. Those notes will then show up under that person in the linked references under a block for the current date.

      So in one sentence, using only your keyboard, you've created a meeting note linked to a person and linked to a specific date.

      With any other solution you'd have to navigate to a person, create an entry, set a date and write the note.

      This "decide where to put it" step is completely replaced with "what entities does this pertain to".

    2. The references also have a really robust filtering tool. For example, I could filter all the references to Mindfulness to only include pages that also reference Books:

      You can also filter the references by Tags/Pages.

    3. This is the best feature I’ve found for discovering new relationships between information.

      The unlinked references section is a great way to discover new relationships between information.

      It's also an area where a digital Zettelkasten outperforms an analog one.
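      A digital system can do this because finding unlinked references is just a full-text scan of every block for known page titles; a rough sketch (hypothetical data, not Roam's code):

      import re

      pages = ["Mindfulness", "Deep Work"]
      block = "Practicing mindfulness before a stretch of deep work helps me focus."

      # An unlinked reference is a page title that appears in the text
      # without ever having been wrapped in [[...]] or tagged.
      unlinked = [title for title in pages
                  if re.search(rf"\b{re.escape(title)}\b", block, re.IGNORECASE)]
      print(unlinked)  # ['Mindfulness', 'Deep Work']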

    4. This is another area where Roam really stands out from Evernote and Notion. Have you tried to link to another page in either of them? It’s a nightmare of right clicks or slash commands, it takes way too long. In Roam it’s so seamless that you can do it without interrupting your typing flow.

      A big benefit of Roam is the speed with which you can make a link to another page.

    5. This removes all the decision making about where to put things that you frequently run into with Evernote, Notion, etc. When everything can be everywhere, you don’t have to worry about the filing structure. You just keep adding links. 

      Nat's conclusion is correct, but his reason for arriving at that conclusion is wrong.

      You're not faced with the question of where to put things with Roam because you can do the following (sketched in code below):

      1. You can tag a new entry on the fly, in-line, CLI style.
      2. If the tag exists, it will autocomplete; if it doesn't, you can create it with no extra effort.
      3. Any tags you add are links to their respective pages, which lets you (a) navigate there as soon as you've typed the tag/page name and (b) get a backlink on those pages, so your new entry is automatically linked to from there.
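      A minimal sketch of that mechanism (illustrative only, not Roam's actual implementation): pages come into existence on first use, and every tag records a backlink.

      from collections import defaultdict

      # Maps a page title to the blocks that reference it. defaultdict means a
      # page "exists" the moment it is first tagged -- no explicit creation step.
      backlinks = defaultdict(list)

      def add_block(text, tags):
          for tag in tags:
              backlinks[tag].append(text)  # the backlink is created as a side effect

      add_block("Met with Jane about the roadmap", tags=["Jane Doe", "CRM"])
      print(backlinks["Jane Doe"])  # ['Met with Jane about the roadmap']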

    6. By structuring information in this way, Roam makes it super easy to move laterally across your information, while retaining vertical references. The book Emergency by Neil Strauss can live in my Book Notes page, my Prepping page, and my Neil Strauss page, without having to be moved. 

      I think Nat touches on an important use case here, but I wouldn't call it "moving laterally while retaining vertical references."

      He's referring to a link to the book Emergency, not to the book's content itself. Each page being able to link to the book is not novel.

      What is novel is that when entering the book into your Roam database you can tag it with Prepping and Neil Strauss, and it will show up under those pages automatically.

    7. This also highlights a big difference between Roam and other note taking tools: tags are both everything and nothing. Every page is a tag, and every tag is a page.

      Nat says that tags are everything and nothing, but I don't agree with that.

      Pages consist of blocks.

      A reference to a page is treated in exactly the same way as a tag.

      A block, however, is not treated that way: a block is not a tag.

    8. Evernote’s is based on three levels: Stacks, Notebooks, and notes. Each note lives in one notebook, which lives in one stack. Notion, Workflowy, and a few others allow infinite nesting. A note lives in a note lives in a note and so on. 

      Two top-down approaches to note-taking.

      In Evernote, each note lives in a notebook, which lives in a stack.

      In Notion and Workflowy, you've got blocks that can be infinitely nested.
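      The structural difference is easy to see in types (an illustrative sketch, not either product's actual schema):

      from dataclasses import dataclass, field

      # Evernote: a fixed three-level hierarchy -- a note cannot contain a note.
      @dataclass
      class Note:
          title: str

      @dataclass
      class Notebook:
          notes: list[Note] = field(default_factory=list)

      @dataclass
      class Stack:
          notebooks: list[Notebook] = field(default_factory=list)

      # Notion/Workflowy: one recursive type, so nesting can go arbitrarily deep.
      @dataclass
      class Block:
          text: str
          children: list["Block"] = field(default_factory=list)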

    1. A better definition I've been using since then, thanks to Jason Hwang, is "fixed output." A company that delivers the same thing to all customers is going to be organized differently than one that does things made-to-order.

      Fixed output companies

      A company that delivers the same thing to all customers. These companies are going to be organized differently from ones that build things to order for each customer.

    1. What this all means is that persuading people that there is no fraud is an integral part of committing fraud: controlling the social epistemological consensus becomes a weapon. That doesn’t necessarily mean that fraud claims are always true/decisive, it just means that claims against fraud should be treated with a healthy dose of skepticism.

      Persuading people that no fraud took place is an essential part of committing fraud.

      This doesn't mean fraud claims are always true. It only means we should treat claims against fraud with a healthy dose of skepticism.

    2. Election fraud is even more interesting, because if you (the fraudster) can win the election then the victim of fraud has an incentive to back down and let you get away with it in order to preserve the general public’s belief in democracy. Democracy has both a practical function (effective government via peaceful transfer of power) and a “spiritual” function (keeping the peace by persuading people that they are being represented). Overturning an election seriously undermines the spiritual function of democracy as it confirms to people that elections do get rigged and fraud does happen and it does sometimes determine election outcomes.

      Roko talks about democracy having a practical function (effective government via peaceful transfer of power) and a spiritual function (keeping the peace by persuading people that they are being represented).

      Overturning an election would undermine the spiritual function. This creates an incentive for the loser to swallow the loss, even if he has been cheated, so as to preserve the spiritual function of democracy.

    1. Now in fairness, one significant point that fraud claimers can make is that even if the phenomenon is modest in scale in the US, it can still be sufficient to overturn the results of elections given the peculiarities of its election system, in which outcomes are sometimes decided by a few hundred votes in a key state. But while valid for some elections – most notably, 2000 – it is most certainly not the case during this election, where even a reversal of the Georgia and Pennsylvania results will not be sufficient to give Trump victory.

      Even though fraud claimers argue that a small number of votes can sway an election, that isn't the case in this one: even swinging both Georgia and Pennsylvania to Trump would still result in a Biden win.

    2. One way of looking at it is that Trump was simply “lucky” in 2016, winning the crucial states of PA/WI/MI by <1%, and unlucky in 2020, losing those same states by higher though still modest margins.

      Anatoly offers this way of looking at the 2020 election:

      Trump was simply “lucky” in 2016, winning the crucial states of PA/WI/MI by <1%, and unlucky in 2020, losing those same states by higher though still modest margins.

    3. And you also need the conspiracy to be competent. This is outright impossible. Even a reasonably effective and high IQ semi-authoritarian regime such as Russia hasn’t learned how to hide electoral fraud from statistical analysts over 20 years and counting for the banal reason that you can’t expect much in the way of conscientiousness or even intelligence from people who accede to participating in electoral fraud. You people seriously expect that level of competence from… inner city Dems?

      Anatoly here makes the argument that to pull off large-scale voter fraud, you need both a large-scale conspiracy and large-scale competence. That last requirement, in particular, is unlikely to be met.

      Russia, which can be considered a high-IQ semi-authoritarian regime, still hasn't figured out how to hide electoral fraud from statistical analysts.
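      To make the "statistical analysts" point concrete, here is one classic screen (a sketch with made-up numbers; Karlin doesn't specify a method): in clean precinct-level vote counts the last digits should be roughly uniform, and fabricated tallies often fail that test.

      from collections import Counter
      from scipy.stats import chisquare

      # Hypothetical precinct-level vote counts.
      counts = [10234, 8457, 12980, 6621, 9402, 11875, 7348, 10019, 8863, 12547]

      # Tally last digits and test them against a uniform distribution.
      digits = Counter(n % 10 for n in counts)
      observed = [digits.get(d, 0) for d in range(10)]
      stat, p = chisquare(observed)  # expected defaults to uniform across 0-9

      print(f"chi2={stat:.2f}, p={p:.3f}")  # a very small p-value flags an anomaly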

    4. Instead, I want to make a seemingly obvious game theoretical point. In a country with a balance of power between two or more parties, nobody but the most cavalier ideologues are going to stick their necks out for “The Resistance” when they know that there is a high probability that a Trumpist DoJ could subsequently prosecute them. (For that matter, several Chicago poll workers were convicted and went to jail in 1962). To enact large-scale fraud, you need to convince underlings to collude, but this only happens if they can be sure that they will not be put out to grass later. The GOP can’t credibly offer such guarantees, so there won’t be many people rushing to stick out their neck out for Trump. This also works in reverse, which is why back in August, I similarly dismissed Resistance fantasies that the Bad Orange Man will orchestra mass electoral fraud to keep himself in power:

      Here Anatoly Karlin makes a game-theoretical argument that in a system with two or more adversarial, equally powerful parties, there's a self-preservationist incentive not to take a risk on something like voter fraud: the other party might find out and prosecute you.

      To be able to pull it off you need to be able to make guarantees that the colluders won't be prosecuted, and neither party can make such guarantees.

    1. Oh, and from a language/design perspective, you can actually turn regular words in a sentence into channels, just as many people do with @replies. For example: I’m coming to #barcamp later today.

      Because hashtag use is inline and you can turn regular words into hashtags (and therefore channels), there is no friction in doing so.
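      That frictionlessness shows up on the parsing side too: pulling channels out of a message is a one-line pattern match (a sketch, not Twitter's actual parser).

      import re

      tweet = "I'm coming to #barcamp later today"
      # Any word prefixed with '#' becomes a channel -- no registration step.
      print(re.findall(r"#(\w+)", tweet))  # ['barcamp']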

    2. It also enforces actual use in the wild of tags, since no evidence of a tag will exist without it first being used in conversation. This means that representing channels in tagclouds across the site that grow and fade over time, and are contextual to all of Twitter or to a single user, is the ideal interface for displaying this information.

      Hashtags have the added benefit that a tag only comes into existence once it has actually been used in conversation.

      If you look at which hashtags are being used (trending), you get a taxonomy of micro-contexts, ranked by popularity, with which you can navigate Twitter. All from the bottom up.
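      A sketch of that bottom-up taxonomy (hypothetical tweets): counting tag usage across the stream yields a popularity ranking, and tags that stop being used simply fade out.

      import re
      from collections import Counter

      tweets = ["heading to #barcamp", "#barcamp is packed", "new photos on #flickr"]
      tags = Counter(t for tw in tweets for t in re.findall(r"#(\w+)", tw))
      print(tags.most_common())  # [('barcamp', 2), ('flickr', 1)]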

    3. I also like that the folksonomic approach (as in, there are no “pre-established groups”) allows for a great deal of expression, of negotiation (I imagine that #barcamp will be a common tag between events, but that’s fine, since if there is a collision, say between two separate BarCamps on the same day, they’ll just have to socially engineer a solution and probably pick a new tag, like #barcampblock) and of decay (that is, over time, as tags are used less frequently, other people can reuse them — no domain squatting!).

      The folksonomic approach (user-generated tagging) is beneficial because it allows complexity to emerge bottom-up.

    4. Every time someone uses a channel tag to mark a status, not only do we know something specific about that status, but others can eavesdrop on the context of it and then join in the channel and contribute as well. Rather than trying to ping-pong discussion between one or more individuals with daisy-chained @replies, using a simple #reply means that people not in the @reply queue will be able to follow along, as people do with Flickr or Delicious tags. Furthermore, topics that enter into existing channels will become visible to those who have previously joined in the discussion. And, perhaps best of all, anyone can choose to leave or remove topics that don’t interest them.

      Twitter's hashtags serve a dual purpose. They label a status with a certain tag, telling us something about the intended context of that tweet.

      But they also equip an interested eavesdropper with the ability to follow along with a conversation, and make it frictionless for anyone to jump in. This (at the time it was being discussed at Twitter) was already happening with Flickr and Delicious tags.

    5. This is how it works in IRC, and how it needed to work in Twitter.

      The idea that using a hashtag whose channel doesn't yet exist simply creates that channel came from IRC.

    6. Now, in thinking about implementing channels, it was imperative that I not introduce any significant changes into the way that I currently use Twitter any more than I have for other features that have been added to Twitter (for example, @replies or direct messages). Channels would need to be a command-line-friendly addition, and one that would require absolutely zero web-based management to make the most of it (to draw a distinction, Pownce fails this test with its Friend Sets, since it requires use of their website to take advantage of this feature).

      The requirements [[Chris Messina]] laid out for a concept of "channels" on Twitter were that:

      1. It shouldn't add any friction to his current use
      2. It shouldn't require any web-based management to make the most of

      Twitter of 2020 satisfies these requirements. You just type #something, and you can click on that hash or search for it to see results.

    7. Jaiku comes closest with their channels implementation, making it extremely easy to create new channels (simply post a message that begins with a hash (#) and your intended channel name — and if the channel doesn’t exist, it’ll be created for you):

      [[Chris Messina]] details an example from [[Jaiku]] where you can create a channel by simply posting a message that starts with a hash (#). If the channel doesn't exist, it will be created for you.

    8. I’m more interested in simply having a better eavesdropping experience on Twitter.

      [[Chris Messina]]'s reason for suggesting the hashtag was his interest in having "a better eavesdropping experience on Twitter".

    1. Hashtags are widely used on microblogging and photo-sharing services such as Twitter and Instagram as a form of user-generated tagging that enables cross-referencing of content sharing a subject or theme.

      Hashtags are a form of user-generated tagging that enables cross-referencing.

    1. Long term keys are almost never what you want. If you keep using a key, it eventually gets exposed. You want the blast radius of a compromise to be as small as possible, and, just as importantly, you don’t want users to hesitate even for a moment at the thought of rolling a new key if there’s any concern at all about the safety of their current key.

      You want the blast radius of a compromise to be as small as possible.

      Therefore a long-term key is almost never what you want. You don't want users to hesitate about rolling out a new key if they suspect theirs is compromised.
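      A sketch of the short-lived-key alternative, using the Python cryptography library (illustrative only, not a complete protocol): each session gets fresh ephemeral keys, so compromising one key exposes one session rather than years of traffic.

      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF

      # Fresh ephemeral key pairs, generated per session and then discarded.
      alice = X25519PrivateKey.generate()
      bob = X25519PrivateKey.generate()

      # Both sides arrive at the same shared secret via Diffie-Hellman.
      shared = alice.exchange(bob.public_key())
      assert shared == bob.exchange(alice.public_key())

      # Derive the actual session key; nothing here is worth keeping long term.
      session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                         salt=None, info=b"session v1").derive(shared)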

    2. PGP begs users to keep a practically-forever root key tied to their identity. It does this by making keys annoying to generate and exchange, by encouraging “key signing parties”, and by creating a “web of trust” where keys depend on other keys.

      PGP encourages users to keep long-term keys tied to their identity. It does this by making it annoying to generate and exchange keys.

    3. We can’t say this any better than Ted Unangst: There was a PGP usability study conducted a few years ago where a group of technical people were placed in a room with a computer and asked to set up PGP. Two hours later, they were never seen or heard from again.

      The UX problems with PGP/GPG.

    4. There are, as you’re about to see, lots of problems with PGP. Fortunately, if you’re not morbidly curious, there’s a simple meta-problem with it: it was designed in the 1990s, before serious modern cryptography. No competent crypto engineer would design a system that looked like PGP today, nor tolerate most of its defects in any other design. Serious cryptographers have largely given up on PGP and don’t spend much time publishing on it anymore (with a notable exception). Well-understood problems in PGP have gone unaddressed for over a decade because of this.

      The meta-problem with PGP is that it was designed in the 90s, before serious modern cryptography. It is horribly outdated, yet, due to its federated architecture, difficult to update.

    1. In 1997, at the dawn of the internet’s potential, the working hypothesis for privacy enhancing technology was simple: we’d develop really flexible power tools for ourselves, and then teach everyone to be like us. Everyone sending messages to each other would just need to understand the basic principles of cryptography. GPG is the result of that origin story. Instead of developing opinionated software with a simple interface, GPG was written to be as powerful and flexible as possible. It’s up to the user whether the underlying cipher is SERPENT or IDEA or TwoFish. The GnuPG man page is over sixteen thousand words long; for comparison, the novel Fahrenheit 451 is only 40k words. Worse, it turns out that nobody else found all this stuff to be fascinating. Even though GPG has been around for almost 20 years, there are only ~50,000 keys in the “strong set,” and less than 4 million keys have ever been published to the SKS keyserver pool ever. By today’s standards, that’s a shockingly small user base for a month of activity, much less 20 years.

      The failure of GPG

    1. A long term key is as secure as the minimum common denominator of your security practices over its lifetime. It's the weak link.

      Good phrasing of the idea: "you're only as secure as your weakest link".

      You are only as secure as the minimum common denominator of your security practices.

    2. Then, there's the UX problem. Easy crippling mistakes. Messy keyserver listings from years ago. "I can't read this email on my phone". "Or on the laptop, I left the keys I never use on the other machine".

      UX issues in GPG

    1. So while it’s nice that I’m able to host my own email, that’s also the reason why my email isn’t end-to-end encrypted, and probably never will be. By contrast, WhatsApp was able to introduce end-to-end encryption to over a billion users with a single software update.

      Although the option to host your own email offers you freedom, it's precisely this freedom that makes change more difficult and the reason why email isn't yet end-to-end encrypted.

      Centralized architectures, like WhatsApp's, allow you to roll out end-to-end encryption to the entire network with a single software update.

    2. That has taken us pretty far, but it’s undeniable that once you federate your protocol, it becomes very difficult to make changes. And right now, at the application level, things that stand still don’t fare very well in a world where the ecosystem is moving.

      Because the ecosystem around software applications is quickly evolving, you need to be able to adapt in order to stay competitive.

      Once you federate your technology, however, you lose this ability to adapt quickly, as is evidenced by the relative stagnation of federated standards such as IP, SMTP, IRC, DNS etc.

    3. This reduced user friction has begun to extend the implicit threat that used to come with federated services into centralized services as well. Where as before you could switch hosts, or even decide to run your own server, now users are simply switching entire networks.

      The implicit threat of federated architectures is also emerging in centralized services. It emerges there because the core of the social network, the address book, is saved locally (i.e. federated). This makes it easy for users to switch networks, and this ease keeps the providers honest.

    4. Given that federated services always seem to coalesce around a provider that the bulk of people use, federation becomes a sort of implicit threat. Nobody really wants to run their own servers, but they know that it might be possible if their current host does something egregious enough to make it worth the effort.

      The implicit threat of federation

      In a federated architecture, most users tend to coalesce around one provider. Few actually want to run their own server, but the fact that that option exists acts as an implicit threat that keeps the current host honest.