383 Matching Annotations
  1. Nov 2022
    1. "This is a job market that just won't quit. It's challenging the rules of economics," said Becky Frankiewicz,  chief commercial officer of hiring company ManpowerGroup in an email after the data was released. "The economic indicators are signaling caution, yet American employers are signaling confidence."

      This article explains the state of the labor market. Creating 528,000 jobs is an outstanding result for the American people, but the piece also needs to explain the downsides of creating jobs in this situation. A market that challenges the rules of economics does not automatically make for a better situation; there are also high risks.

    1. That could create even more burdens for businesses because hiking interest rates tends to create higher rates on consumer and business loans, which slows the economy by forcing employers to cut back on spending.

      This passage describes the disadvantages of high interest rates. Although there are facts here we need to be concerned about, high interest rates also have advantages, and more information about those advantages would give a fuller picture.

  2. Oct 2022
    1. An adviser should have their students explicitly practice decisions 25 and 26, test their solutions, and try to come up with the ways their decisions could fail, including alternative conclusions that are not the findings that they were hoping for. Thinking of such failure modes is something that even many experienced physicists are not very good at, but our research has shown that it can be readily learned with practice.

      To help fight cognitive bias, one should actively think about potential failure modes of one's decisions and consider alternative conclusions which aren't part of the findings one might have hoped for. Watching out for these failure modes can dramatically expand the solution space and keep one alert to innovative alternatives or even better solutions.

    2. The third and probably most serious difficulty in making good reflective decisions is confirmation bias.

      Confirmation bias can be detrimental when making solid reflective decisions.

    1. You do not really have to study a topic you are working on; once you are into it, it is everywhere. You are sensitive to its themes; you see and hear them everywhere in your experience, especially, it always seems to me, in apparently unrelated areas. Even the mass media, especially bad movies and cheap novels and picture magazines and night radio, are disclosed in fresh importance to you.
    2. To be able to trust one's own experience, even if it often turns out to be inadequate, is one mark of the mature workman. Such confidence in one's own experience is indispensable to originality in any intellectual pursuit, and the file is one tool by which I have tried to develop and justify such confidence.

      The function of memory served by having written notes is what allows the serious researcher or thinker to have greater confidence in their work, potentially more free from cognitive bias as one idea can be directly compared and contrasted with another by direct juxtaposition.

    3. whether he knows it or not, the intellectual workman forms his own self as he works towards the perfection of his craft.

      Here Mills seems to be defining (in 1952) an "intellectual workman" as an academic, but he doesn't go as broad as a more modern "knowledge worker" (2022) which includes those who broadly do thinking in industry as well as in academia. His older phrase also has a more gendered flavor to it that knowledge worker doesn't have now.

  3. Sep 2022
    1. But most of humanity—not just medieval people—lacked the ability to fight infections or even understand how they spread for much of history. England during the Renaissance suffered regular deadly outbreaks of plague, smallpox, syphilis, typhus, malaria, and a mysterious illness called “sweating sickness.” Upon contact with Europeans, upwards of 95 per cent of the Indigenous peoples of the Americas were killed by European diseases. Plagues even ravaged the twentieth century: from 1918–1920, half a billion people were infected with the Spanish Flu global pandemic, which killed between 50 and 100 million people. And let’s not forget that we are currently living with the global pandemic of HIV/AIDS

      Maybe people in the future will see today as a dark age because of the COVID-19 pandemic. So it is biased to call the Middle Ages the “dark ages” simply because the science of the period could not heal people or prevent infection by plagues such as the Black Death.

    1. The problem is that if one player finds a way to undermine orcircumvent the rules and gets away with it then the others have no choicebut to follow. If they don’t they’ll lose out.

      !- for : race to the bottom
      !- for : conformity bias - spiraling destructive entrainment


    1. https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/

      Good overview article of some of the psychology research behind misinformation in social media spaces including bots, AI, and the effects of cognitive bias.

      Probably worth mining the story for the journal articles and collecting/reading them.

    2. In a recent laboratory study, Robert Jagiello, also at Warwick, found that socially shared information not only bolsters our biases but also becomes more resilient to correction.
    3. We confuse popularity with quality and end up copying the behavior we observe.

      Popularity ≠ quality in social media.

    4. Even our ability to detect online manipulation is affected by our political bias, though not symmetrically: Republican users are more likely to mistake bots promoting conservative ideas for humans, whereas Democrats are more likely to mistake conservative human users for bots.
    5. Unable to process all this material, we let our cognitive biases decide what we should pay attention to.

      In a society consumed with information overload, it is easier for our brains to allow our well evolved cognitive biases to decide not only what to pay attention to, but what to believe.

    1. https://thehill.com/homenews/senate/3641225-mcconnell-throws-shade-on-grahams-proposed-national-abortion-ban/

      I've recently run across a few examples of a pattern that should have a name because it would appear to dramatically change the outcomes. I'm going to term it "decisions based on possibilities rather than realities". It's seen frequently in economics and politics and seems to be a form of cognitive bias. People make choices (or votes) about uncertain futures, often when there is a confluence of fear, uncertainty, and doubt, and these choices are dramatically different than when they're presented with the actual circumstances in practice.

      A recent example was a story about a woman who was virulently pro-life but who, when presented with an actual situation, was forced to switch her position to pro-choice.

      Another relates to choices that people want to make about where their children might go to school versus where they actually send them, and the damage this does to public education.

      Let's start collecting examples of these quandaries at all levels of making choices in the real world.


      What is the relationship to this with the mental exercise of "descending into the particular"?

      Does this also potentially cause decision fatigue in cases of voting spaces when constituents are forced to vote for candidates on thousands of axes which they may or may not agree with?

  4. Aug 2022
    1. The point is to write bug-free code.

      With this comment, the anti-JS position is becoming increasingly untenable. The author earlier suggested C as an alternative. So their contention is that it's easier to write bug-free code in C than it is in JS. This is silly.

      C hackers like Fabrice Bellard don't choose C for the things they do because it's easier to write bug-free code in C.

  5. Jul 2022
    1. While Brave Search does not have editorial biases, all search engines have some level of intrinsic bias due to data and algorithmic choices. Goggles allows users to counter any intrinsic biases in the algorithm.
    1. We also tend to prefer information we have seen more recently to information we learned a long time ago.

      Does this effect have a name? references?


      Apparently called the recency bias: https://en.wikipedia.org/wiki/Recency_bias which may be entangled with availability bias or heuristic.


      Are both recency and availability biases the foundations for causing the Baader–Meinhof phenomenon or frequency bias?

    1. Even though human existence in such a bare state may seem inconceivable, it is there nevertheless: every time a baby is born, a new, not yet programmed, prepersonal human is looking into somebody’s eyes ([27]: p. 133). This undeniable prepersonal presence we already call human leads us to logically infer that humans do happen to exist prior to their personware [20, 25, 28]. It is therefore our fundamental point of departure that humans are marvellous, intelligent, living cognitive agents in themselves that can be said to exist prior to and independently of any particularly determined social persona. The point of acknowledging a prior prepersonal platform is not made towards arguing that a human can exist without any personware.

      !- for : altricial, feral children, mOTHER as the significant OTHER

      * The bare state of zero culture, zero social context is what each and every neonate starts with in life
      * The mOTHER is the most significant OTHER that begins the process of socializing and enculturating the neonate into a social system
      * Altriciality forces the human parent into a role of strong socialization
      * Without culture, the neonate born into the world outside the womb can become a feral child: https://www.zmescience.com/other/feature-post/feral-children/
      * The state of human ferality can tell us an enormous amount about the perspective of virtually every modern, encultured person; we have a bias towards a cultural perspective because almost no one has seen from a feral perspective
      * Language is the gateway into the symbolosphere, where enculturated, modern humans spend a significant portion of their lives immersed in this ubiquitous, constructed, symbolic reality

    1. It feels like « removing spring » is one of those unchallenged truths like « always remove Turbolinks » or « never use fixtures ». It also feels like a confirmation bias when it goes wrong.

      "unchallenged truths" is not really accurate. More like unchallenged assumption.

  6. Jun 2022
    1. If we overlay the four steps of CODE onto the model ofdivergence and convergence, we arrive at a powerful template forthe creative process in our time.

      The way that Tiago Forte overlaps the idea of C.O.D.E. (capture/collect, organize, distill, express) with the divergence/convergence model points out some primary differences between his system and some of the more refined methods of maintaining a zettelkasten.

      A flattened diamond shape which grows from a point on the left so as to indicate divergence from a point to the diamond's wide middle which then decreases to the right to indicate convergence  to the opposite point. Overlapping this on the right of the diamond are the words "capture" and "organize" while the converging right side is overlaid with "distill" and "express". <small>Overlapping ideas of C.O.D.E. and divergence/convergence from Tiago Forte's book Building a Second Brain (Atria Books, 2022) </small>

      Forte's focus on organizing is dedicated solely to putting things into folders, which is a light-touch way of indexing them. However, it only indexes them on one axis—that of the folder into which they're being placed. This precludes them from being indexed from the start on a variety of other axes where they might also be used in the future. His method requires additional work and effort to revisit and re-arrange them (move them into other folders) or index them later.
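      To make the one-axis limitation concrete, here is a minimal sketch (invented note titles and tags, not any particular tool's API) contrasting a folder, which gives each note exactly one retrieval path, with a tag index, which lets the same note surface along several axes at capture time:

```python
# Hypothetical sketch: folders index a note on one axis only, while a
# tag index lets the same note be retrieved along many axes at once.
from collections import defaultdict

notes = {
    1: {"title": "Mere-exposure effect", "tags": ["memory", "bias", "learning"]},
    2: {"title": "Darwin's golden rule", "tags": ["bias", "method"]},
    3: {"title": "Diamond of divergence", "tags": ["creativity", "method"]},
}

# Folder model: each note lives in exactly one place.
folders = {1: "psychology", 2: "psychology", 3: "productivity"}

# Index model: every tag becomes an axis pointing back at the note.
index = defaultdict(set)
for note_id, note in notes.items():
    for tag in note["tags"]:
        index[tag].add(note_id)

print(sorted(index["bias"]))    # → [1, 2]: both bias notes, regardless of folder
print(sorted(index["method"]))  # → [2, 3]: cuts across the folder boundary
```

      Retrieval by "bias" or "method" crosses the folder boundaries with no re-filing, which is the front-loaded indexing that the older commonplacing and zettelkasten traditions emphasize.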

      Most historical commonplacing and zettelkasten techniques place a heavier emphasis on indexing pieces as they're collected.

      Commonplacing creates more work for the user between organizing and distilling because it depends on the user's memory, or on regular re-reading and revisiting of pieces they may only vaguely remember exist. Most commonplacing methods (particularly the older historic forms of collecting and excerpting sententiae) also don't focus on or rely on one writing out one's own ideas in longer form as one goes along, so there is generally a larger amount of work at the expression stage.

      Zettelkasten techniques as imagined by Luhmann and Ahrens smooth the process between organization and distillation by creating tacit links between ideas. This additional piece of the process makes distillation far easier because the linking work has been done along the way, so one only need edit out ideas that don't add to the overall argument or piece. All that remains is light editing.

      Ahrens' instantiation of the method also focuses on writing out and summarizing other's ideas in one's own words for later convenient reuse. This idea is also seen in Bruce Ballenger's The Curious Researcher as a means of both sensemaking and reuse, though none of the organizational indexing or idea linking seem to be found there.


      This also fits into the diamond shape that Forte provides as the height along the vertical can stand in as a proxy for the equivalent amount of work that is required during the overall process.

      This shape could be reframed for a refined zettelkasten method as an indication of work


      Forte's diamond shape provided gives a visual representation of the overall process of the divergence and convergence.

      But what if we change that shape to indicate the amount of work that is required along the steps of the process?!

      Here, we might expect the diamond to relatively accurately reflect the amounts of work along the path.

      If this is the case, then what might the relative workload look like for a refined zettelkasten? First we'll need to move the express portion between capture and organize where it more naturally sits, at least in Ahrens's instantiation of the method. While this does take a discrete small amount of work and time for the note taker, it pays off in the long run as one intends from the start to reuse this work. It also pays further dividends as it dramatically increases one's understanding of the material that is being collected, particularly when conjoined to the organization portion which actively links this knowledge into one's broader world view based on one's notes. For the moment, we'll neglect the benefits of comparison of conjoined ideas which may reveal flaws in our thinking and reasoning or the benefits of new questions and ideas which may arise from this juxtaposition.

      Graphs of commonplace book method (collect, organize, distill, express) versus zettelkasten method (collect, express, organize (index/link), and distill (edit)) with work on the vertical axis and time/methods on the horizontal axis. While there is similar work in collection the graph for the zettelkasten is overall lower and flatter and eventually tails off, the commonplace slowly increases over time.

      This sketch could be refined a bit, but overall it shows that frontloading the work has the effect of dramatically increasing the efficiency and productivity for a particular piece of work.

      Note that when compounded over a lifetime's work, this diagram also neglects the productivity increase over being able to revisit old work and re-using it for multiple different types of work or projects where there is potential overlap, not to mention the combinatorial possibilities.

      --

      It could be useful to better and more carefully plot out the amounts of time, work/effort for these methods (based on practical experience) and then regraph the resulting power inputs against each other to come up with a better picture of the efficiency gains.
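      As a starting point for that exercise, here is a toy model (my own illustrative per-note costs, not measured data) of the cumulative effort in each method:

```python
# Toy model of cumulative effort: commonplacing defers the linking and
# summarizing work to expression time, while a zettelkasten pays a small
# extra cost per note up front. All cost constants are invented.

def commonplace_effort(n_notes, capture=1.0, final_synthesis=3.0):
    # Cheap capture per note, but each note must be re-read, recalled,
    # and connected at writing time, so expression cost scales with the pile.
    return capture * n_notes + final_synthesis * n_notes

def zettelkasten_effort(n_notes, capture=1.0, link_now=1.5, light_edit=0.3):
    # Summarize and link at capture time; expression is mostly light editing.
    return (capture + link_now) * n_notes + light_edit * n_notes

for n in (10, 100, 1000):
    print(n, commonplace_effort(n), zettelkasten_effort(n))
```

      Under these assumed costs the zettelkasten's total effort stays lower because linking is paid in small increments per note rather than all at once at expression time; real numbers drawn from practice would of course reshape the curves.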

      Is some of the reason that people are against zettelkasten methods that they don't see the immediate gains in return for the upfront work, and thus abandon the process? Is this a form of the misinterpreted-effort hypothesis at work? The problem may also be compounded by not being able to see the compounding effects of the upfront work.

      What does research indicate about how people are able to predict compounding effects over time in areas like money/finance? What might this indicate here? Humans definitely have issues seeing and reacting to probabilities in this same manner, so one might expect the same intellectual blindness based on system 1 vs. system 2.


      Given that indexing things, especially digitally, requires so little work and effort upfront, it should be done at the time of collection.


      I'll admit that it only took a moment to read this highlighted sentence and look at the related diagram, but the amount of material I was able to draw out of it by reframing it, thinking about it, having my own thoughts and ideas against it, and then innovating based upon it was incredibly fruitful in terms of better differentiating amongst a variety of note taking and sense making frameworks.

      For me, this is a great example of what reading with a pen in hand, rephrasing, extending, and linking to other ideas can accomplish.

    2. If you ignore that inner voice of intuition, over time it will slowly quiet down and fade away. If you practice listening to what it is telling you, the inner voice will grow stronger. You’ll start to hear it in all kinds of situations. It will guide you in what choices to make and which opportunities to pursue. It will warn you away from people and situations that aren’t right for you. It will speak up and take a stand for your convictions even when you’re afraid. I can’t think of anything more important for your creative life—and your life in general—than learning to listen to the voice of intuition inside. It is the source of your imagination, your confidence, and your spontaneity

      While we have evolved a psychological apparatus that often gives us good “gut feelings” (an actual physical “second brain”), and we should listen carefully to them, we should also learn to think about, analyze, and verify these feelings so we don't fall prey to potential cognitive biases.

    1. It is now impossible for the world’s leaders to say that they “didn’t know” that this was going on, and that we didn’t have the power to prevent it all along. We scientists have been working hard, collecting evidence, writing reports, and presenting it all to the world’s leaders and the broader public. No one can honestly say that we haven’t been warning the world for decades.

      And therein lies the great mystery. How is it that, with this specific way of knowing, we can still ignore the overwhelming science? It's not just a small minority either, but the majority of the elites. As research on climate communications from Yale and other leading research institutions has discovered, it is not so much a knowledge-deficit problem as it is a sociological/psychological ingroup/outgroup conformity bias problem.

      This would suggest that the scientific community must rapidly pivot and place more resources on studying this important area to find the leverage points for penetrating conformity bias.

    1. but suppressed through a series of abandonments made in a vain effort to conform with societal expectations.

      Our propensity for conformity bias is extremely powerful; the same dynamic is destroying political discourse. However, an antidote to conformity bias is awe:

      https://hyp.is/go?url=http%3A%2F%2Fdocdrop.org%2Fvideo%2F17D5SrgBE6g%2F&group=world

    1. if you accept that this is pretty widespread and we we can talk about all the evidence for that the question is then why like how why are we so susceptible to being spectacularly wrong about the 00:04:13 group and then end up like making something true that never was true and it's really like two underlying mechanisms right so the first is this conformity bias which that's not very novel like we know we've known for a 00:04:25 long time that is a species humans are a conforming species

      Two mechanisms behind collective illusions. The first is conformity bias.

  7. May 2022
    1. The student doesn’t have a strong preference for any of these archetypes. Their notes serve a clear purpose that’s often based on a short-term priority (e.g, writing a paper or passing a test), with the goal to “get it done” as simply as possible.

      The typical student note taking method of transcribing, using (or often not using at all), and keeping notes is doomed to failure.

      Many students make the mistake of not making their own actual notes. By this I don't mean they're not writing information down. In fact many are writing information down, but we can't really call these notes. Notes by definition ought to transform something seen or heard into one's own words. Without the transformation, these students think that they're taking notes, but in reality they're focusing their efforts on being transcriptionists. They're attempting to capture something for later consumption. This is a deadly trap! By only transcribing, they're not taking advantage of transforming information by putting ideas down in their own words to test their understanding. Often worse, even if they do transcribe notes, they don't revisit them. If they do revisit them, they're simply re-reading them and not actively working with them. Only re-reading them will lead to the illusion that they're learning something when in fact they're falling into the mere-exposure effect.

      Students who are acting as transcriptionists would be better off simply reading a textbook and taking notes directly from that.

      A note that isn't revisited or revised, may as well be a note not taken. If we were to consider a spectrum of useful, valuable, and worthwhile notes, these notes would be at the lowest end of the spectrum.

      link to: https://hypothes.is/a/QgkL6IkIEeym7OeN9v9New

  8. Apr 2022
    1. Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms.Shortly after its “Like” button began to produce data about what best “engaged” its users, Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well. Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.

      The Firehose versus the Algorithmic Feed

      See related from The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning, except with more depth here.

    1. Algorithms in themselves are neither good nor bad. And they can be implemented even where you don’t have any technology to implement them. That is to say, you can run an algorithm on paper, and people have been doing this for many centuries. It can be an effective way of solving problems. So the “crisis moment” comes when the intrinsically neither-good-nor-bad algorithm comes to be applied for the resolution of problems, for logistical solutions, and so on in many new domains of human social life, and jumps the fence that contained it as focusing on relatively narrow questions to now structuring our social life together as a whole. That’s when the crisis starts.

      Algorithms are agnostic

      As we know them now, algorithms—and [[machine learning]] in general—do well when confined to the domains in which they started. They come apart when dealing with unbounded domains.

    1. The way technologies like fMRI are applied is a product of our brainbound orientation; it has not seemed odd or unusual to examine the individual brain on its own, unconnected to others.

      In part because of modalities of studying the brain using methods like fMRI where the images are of an individual's head, we focus too much and too exclusively on single brains bound to individuals rather than on brains working in concert.

      Greater flexibilities in tools and methods should help do studies of humans working in concert.


      Link this to the anecdote:

      I recall a radiology test within a medical school setting in which students were asked to diagnose an x-ray of a human patient's skull. Most either guessed small hairline fractures in the skull or that there was nothing wrong with the patient.

      Can you diagnose the patient?

      Almost all the students failed the question, and worse felt like idiots when the answer was revealed: the patient must be dead because the spinal column and the rest of the body are not attached. Compare:

  9. Mar 2022
    1. computers might therefore easily outperform humans at facial recognition and do so in a much less biased way than humans. And at this point, government agencies will be morally obliged to use facial recognition software since it will make fewer mistakes than humans do.

      Banning it now because it isn't as good as humans leaves little room for a time when the technology is better than humans. A time when the algorithm's calculations are less biased than human perception and interpretation. So we need rigorous methodologies for testing and documenting algorithmic machine models as well as psychological studies to know when the boundary of machine-better-than-human is crossed.
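      One small piece of such a testing methodology could be checking a model's error rates per demographic group rather than only in aggregate. A minimal sketch with invented match decisions (1 = "same person") and ground-truth labels:

```python
# Hypothetical sketch: compare a classifier's false-positive rate across
# two groups before declaring it "less biased than humans". All data is
# invented for illustration.

def false_positive_rate(predictions, labels):
    # Fraction of true negatives (label 0) the model wrongly flags as 1.
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

# Invented match decisions and ground truth for two demographic groups.
group_a = ([1, 0, 1, 0, 0, 1], [1, 0, 0, 0, 0, 1])
group_b = ([1, 1, 1, 0, 1, 0], [1, 0, 0, 0, 0, 0])

fpr_a = false_positive_rate(*group_a)
fpr_b = false_positive_rate(*group_b)
print(fpr_a, fpr_b)  # a large gap between groups flags disparate impact
```

      Disparity metrics like this (and their analogues for false negatives) are the kind of documentation one would want before claiming an algorithm is "less biased than humans."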

    1. In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible.

      Although the model was driven “towards compounds such as the nerve agent VX,” it designed not only VX but also many other known chemical warfare agents, as well as many new molecules that “looked equally plausible.”

      AI is the tool. The parameters by which it is set up makes something "good" or "bad".

    1. The study’s authors suggest that this discrepancy may emerge from differences in boys’ and girls’ experience: boys are more likely to play with spatially oriented toys and video games, they note, and may become more comfortable making spatial gestures as a result. Another study, this one conducted with four-year-olds, reported that children who were encouraged to gesture got better at rotating mental objects, another task that draws heavily on spatial-thinking skills. Girls in this experiment were especially likely to benefit from being prompted to gesture.

      The gender-based disparity of spatial thinking skills between boys and girls may result from the fact that at an early age boys are more likely to play with spatially oriented toys and video games. Encouraging girls to do more spatial gesturing at an earlier age can dramatically close this spatial thinking gap.

    1. Newton arranged an experiment in which one person — a “tapper” — was asked to tap out the melody of a popular song, while another person — the “listener” — was asked to identify it. The tappers assumed that their listeners would correctly identify about 50% of their melodies; they were amazed to learn that the listeners only got about one out of 40 songs correct. To the tappers, their melodies sounded perfectly clear and obvious, but the listeners heard no music, no instrumentation in their heads — only the muffled noise of a finger tapping on a table.

      An example of the curse of knowledge effect.

  10. Feb 2022
    1. The velocity of social sharing, the power of recommendation algorithms, the scale of social networks, and the accessibility of media manipulation technology has created an environment where pseudo events, half-truths, and outright fabrications thrive.

      As Daniel Kahneman has stated, we are all “cognitively lazy.” This is a very telling statement that helps reveal why we are in a world full of “half-truths” and, deeper than that, why we all continue to accept these half-truths. Often we do not want to take the time necessary to evaluate information, and instead just accept things as true.

    1. Deepti Gurdasani. (2022, January 10). Lots of people dismissing links between COVID-19 and all-cause diabetes. An association that’s been shown in multiple studies- whether this increase is due to more diabetes or SARS2 precipitating diabetic keto-acidosis allowing these to be diagnosed is not known. A brief look👇 [Tweet]. @dgurdasani1. https://twitter.com/dgurdasani1/status/1480546865812840450

    1. Read for Understanding

      Ahrens goes through a variety of research on teaching and learning as they relate to active reading, escaping cognitive biases, creating understanding, progressive summarization, elaboration, revision, etc. as a means of showing and summarizing how these all dovetail nicely into a fruitful long term practice of using a slip box as a note taking method. This makes the zettelkasten not only a great conversation partner but an active teaching and learning partner as well. (Though he doesn't mention the first part in this chapter or make this last part explicit.)

    2. Reading, especially rereading, can easily fool us into believing we understand a text. Rereading is especially dangerous because of the mere-exposure effect: The moment we become familiar with something, we start believing we also understand it. On top of that, we also tend to like it more (Bornstein 1989).

      The mere-exposure effect can be dangerous when rereading a text because we are more likely to falsely believe we understand it. Robert Bornstein's research from 1989 indicates that we will tend to like the text more, which can pull us into confirmation bias.

      Bornstein, Robert F. 1989. “Exposure and Affect: Overview and Meta-Analysis of Research, 1968-1987.” Psychological Bulletin 106 (2): 265–89.

    3. The linear process promoted by most study guides, which insanely starts with the decision on the hypothesis or the topic to write about, is a sure-fire way to let confirmation bias run rampant.

      Many study and writing guides suggest starting one's writing or research work with a topic or hypothesis. This is a recipe for succumbing to confirmation bias, as one is more likely to search out confirming evidence rather than counterarguments. Better to start with an interesting area and collect ideas from there which can be pitted against each other.

    4. “I had [...] during many years followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favorable ones. Owing to this habit, very few objections were raised against my views, which I had not at least noticed and attempted to answer.” (Darwin 1958, 123)

      Charles Darwin fought confirmation bias by writing down contrary arguments and criticisms and addressing them.

    5. psychologists call the mere-exposure effect: doing something many times makes us believe we have become good at it – completely independent of our actual performance (Bornstein 1989). We unfortunately tend to confuse familiarity with skill.

      The mere-exposure effect leads us to confuse familiarity with a process with actual skill.

    6. Our brains work not that differently in terms of interconnectedness. Psychologists used to think of the brain as a limited storage space that slowly fills up and makes it more difficult to learn late in life. But we know today that the more connected information we already have, the easier it is to learn, because new information can dock to that information. Yes, our ability to learn isolated facts is indeed limited and probably decreases with age. But if facts are not kept isolated nor learned in an isolated fashion, but hang together in a network of ideas, or “latticework of mental models” (Munger, 1994), it becomes easier to make sense of new information. That makes it easier not only to learn and remember, but also to retrieve the information later in the moment and context it is needed.

      Our natural memories are limited in their capacities, but it becomes easier to remember facts when they have an association to other things in our minds. The building of mental models makes it easier to acquire and remember new information. The downside is that it may make it harder to dramatically change those mental models and re-associate knowledge to them without additional work.


      The mental work involved here may be one of the reasons for some cognitive biases and the reason why people are more apt to stay stuck in their mental ruts. An example would be not changing their minds about ideas of racism and inequality, because it's easier to keep their pre-existing ideas and biases than to do the necessary work to change their minds. Similar things come into play with respect to tribalism and political party identifications as well.

      This could be an interesting area to explore more deeply. Connect with George Lakoff.

    7. Just follow your interest and always take the path that promises the most insight.

      What specific factors does one evaluate for determining what particular paths will provide actual (measurable) insight?

      Most people have a personal gut reaction about which directions to go in heuristically, but can these heuristics be broken down explicitly to enable better evaluating them? How can they be used to avoid cognitive biases?

    1. Deepti Gurdasani. (2022, January 30). Have tried to now visually illustrate an earlier thread I wrote about why prevalence estimates based on comparisons of “any symptom” between infected cases, and matched controls will yield underestimates for long COVID. I’ve done a toy example below here, to show this 🧵 [Tweet]. @dgurdasani1. https://twitter.com/dgurdasani1/status/1487578265187405828

  11. Jan 2022
    1. An over-reliance on numbers often leads to bias and discrimination.

      By their nature, numbers can create an air of objectivity which doesn't really exist and may be hidden by the cultural context one is working within. Be careful not to create an over-reliance on numbers. Particularly in social and political situations this reliance on numbers and related statistics can create dramatically increased bias and discrimination. Numbers may create a part of the picture, but what is being left out or not measured? Do the numbers you have with respect to your area really tell the whole story?

    2. Current approaches to improving digital well-being also promote tech solutionism, or the presumption that technology can fix social, cultural, and structural problems.

      Tech solutionism is the presumption that technology (usually by itself) can fix a variety of social, cultural, and structural problems.

      It fits into a category of problem that when one's tool is a hammer then every problem looks like a nail.

      Many tech solutionism problems are likely ill-defined to begin with. Many are also incredibly complex and difficult which also tends to encourage bikeshedding, which is unlikely to lead us to appropriate solutions.

    1. Most of us simply take it for granted that ‘Western’ observers, even seventeenth-century ones, are simply an earlier version of ourselves;

      It is likely a fair generality that, from a historical perspective, those looking at people of the past tend to consider them simply earlier versions of themselves.

      This sort of isocultural cognitive bias is something to be very cognizant of particularly in cases without extensive context as it is likely to cause massive context collapse.

    1. many people accept the scientific consensus on, say, vaccine effectiveness not because they value peer-reviewed research but because they are impressed by people in lab coats who use big words
    2. the fact that many Bitcoin enthusiasts say bizarre things does not, in itself, mean that cryptocurrencies are a bad idea

      Is this some kind of attribution bias?

    1. In the new film, she has been in the city for years, caring for her father (it’s hinted that he died), and she expresses, in a single line, a desire to go to college. Bernardo is now a boxer just beginning his career. Chino, an undefined presence in the original, is now in night school, studying accounting and adding-machine repair. But nothing comes of these new practical emphases; the characters have no richer inner lives, cultural substance, or range of experience than they do in the first film. Maria still has little definition beyond her relationship with Tony; she remains as much of a cipher as she was in the 1961 film.

      The writer purposely makes these characters seem quite different while ignoring that the new movie was made in a completely different era, to relate more to today's problems rather than those of 1961. The speaker fails to recognize that the movie is going to have a different look because it has a new producer.

    1. Always listen to your patients before running tests—they will tell you their diagnosis

      bias


  12. Dec 2021
    1. When we simply guess as to what humans in other times and places might be up to, we almost invariably make guesses that are far less interesting, far less quirky – in a word, far less human than what was likely going on.

      Definitely worth keeping in mind, even for my own work. Providing an evidential structure for claims will be paramount.

      Is there a well-named cognitive bias for the human tendency to see everything as nails when one has a hammer in their hand?

    2. ‘What is it about the ancients,’ Pinker asks at one point, ‘that they couldn’t leave us an interesting corpse without resorting to foul play?’

      Part of their point here seems to be that Pinker is suffering from a form of bias related to the most sensational cases which will tend to heighten the availability bias. (Is there a name for this sort of sensationalism effect?)

      Is there also some survivorship bias at play here as well?

      We don't have access to a wide statistical survey of dead bodies from a large swath of times and places which makes it difficult to determine actual numbers.

    3. Now, this may seem counter-intuitive to anyone who spends much time watching the news, let alone who knows much about the history of the twentieth century.

      Are they suffering from the availability heuristic (a cognitive bias) here? Are they encouraging it in us? Just because we see violence on the news every day doesn't mean it's ubiquitous.

      Apparently we'll need real evidence here to provide actual indications.

      Does Steven Pinker provide archaeological evidence in his book? What are the per capita rates of violence and/or death over time?

    1. In a nutshell, then, there was never a time when humans uniformly lived in small, simple egalitarian hunter-gatherer societies, and a time when they started to switch to agriculture – thus inevitably switching to a sedentary, hierarchical, and more complex life style. This is not because the correct trajectory is a different one, but because there was never a linear trajectory to begin with.

      Is there a reason or cognitive bias we've got that would tend to make us think that there's a teleological outcome in these cases?

      Why should it seem like there would be a foregone conclusion to all of human life or history? Why couldn't/shouldn't it just keep evolving from its current context to the next?

    1. A sharp rise in reported active volcanoes immediately post-WW II was followed by another steep increase in the early 1950s that has no obvious relationship to historic events.

      'No obvious relationship to historic events' is blatantly inaccurate here. The US military was active in the Pacific for the entirety of this time frame reestablishing the power in the Pacific US colonies. It naturally would follow that volcanic activity would be reported at higher rates as military vessels were combing the area.

    1. Sean Phelan. (2021, November 26). Striking how some media coverage is assuming (without caveats) that the Belgian case brought the new variant “from” Egypt or Turkey.There’s no chance they picked it up after returning to Belgium of course. How could that happen..we only have a 7-day average of 17,000 cases a day [Tweet]. @seanphelan8. https://twitter.com/seanphelan8/status/1464252432033136659

  13. Nov 2021
    1. I know a number of my subs and viewers are in India and I've noticed on Twitter and on Abhijit Chavda's channel that there's quite a bit of controversy about the way Indian History is taught to Indian students. That interests me a lot, but what I'm PARTICULARLY interested in is, how World History surveys throughout the world cover world history. If part of this involves continuing the narratives introduced by colonizers, like the Aryan Invasion myth, that's relevant to my question.
  14. Oct 2021
    1. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.

      One of the flaws of Mark Zuckerberg's spectrum disorder is that he either has no sense of shame or his confirmation bias and loss aversion biases are incredibly large.

    1. There are many other more subtle biases of the evolved human brain—its tendency to focus on the thing that changes rather than the thing that’s constant,

      Is there a name for this bias?

    1. 02:18 So we gave people information and as a result it caused polarization, it didn’t cause people to come together.
  15. Sep 2021
    1. One last resource for augmenting our minds can be found in other people’s minds. We are fundamentally social creatures, oriented toward thinking with others. Problems arise when we do our thinking alone — for example, the well-documented phenomenon of confirmation bias, which leads us to preferentially attend to information that supports the beliefs we already hold. According to the argumentative theory of reasoning, advanced by the cognitive scientists Hugo Mercier and Dan Sperber, this bias is accentuated when we reason in solitude. Humans’ evolved faculty for reasoning is not aimed at arriving at objective truth, Mercier and Sperber point out; it is aimed at defending our arguments and scrutinizing others’. It makes sense, they write, “for a cognitive mechanism aimed at justifying oneself and convincing others to be biased and lazy. The failures of the solitary reasoner follow from the use of reason in an ‘abnormal’ context’” — that is, a nonsocial one. Vigorous debates, engaged with an open mind, are the solution. “When people who disagree but have a common interest in finding the truth or the solution to a problem exchange arguments with each other, the best idea tends to win,” they write, citing evidence from studies of students, forecasters and jury members.

      Thinking in solitary can increase one's susceptibility to confirmation bias. Thinking in groups can mitigate this.

      How might keeping one's notes in public potentially help fight against these cognitive biases?

      Is having a "conversation in the margins" with an author using annotation tools like Hypothes.is a way to help mitigate this sort of cognitive bias?

      At the far end of the spectrum how do we prevent this social thinking from becoming groupthink, or the practice of thinking or making decisions as a group in a way that discourages creativity or individual responsibility?

  16. Aug 2021
    1. The Attack on "Critical Race Theory": What's Going on?

      https://www.youtube.com/watch?v=P35YrabkpGk

      Lately, a lot of people have been very upset about “critical race theory.” Back in September 2020, the former president directed federal agencies to cut funding for training programs that refer to “white privilege” or “critical race theory,” declaring such programs “un-American propaganda” and “a sickness that cannot be allowed to continue.” In the last few months, at least eight states have passed legislation banning the teaching of CRT in schools and some 20 more have similar bills in the pipeline or plans to introduce them. What’s going on?

      Join us for a conversation that situates the current battle about “critical race theory” in the context of a much longer war over the relationship between our racial present and racial past, and the role of culture, institutions, laws, policies and “systems” in shaping both. As members of families and communities, as adults in the lives of the children who will have to live with the consequences of these struggles, how do we understand what's at stake and how we can usefully weigh in?

      Hosts: Melissa Giraud & Andrew Grant-Thomas

      Guests: Shee Covarrubias, Kerry-Ann Escayg,

      Some core ideas of critical race theory:

      • racial realism
        • racism is normal
      • interest convergence
        • racial equity only occurs when white self interest is being considered (Brown v. Board of Education as an example to portray US in a better light with respect to the Cold War)
      • Whiteness as property
        • Cheryl Harris' work
        • White people have privilege in the law
        • myth of meritocracy
      • Intersectionality

      People would rather be spoon-fed than do the work themselves. Sadly this is being encouraged in the media.

      Short summary of CRT: How laws have been written to institutionalize racism.

      Culturally Responsive Teaching (also has the initials CRT).

      KAE tries to use an anti-racist critical pedagogy in her teaching.

      SC: Story about the book Something Happened in Our Town.

      • Law enforcement got upset with the school district
      • Response video of threat, intimidation, emotional blackmail by local sheriff's department.
      • Intent versus impact - the superintendent may not have had a bad intent when providing an apology, but the impact was painful

      It's not really a battle about or against CRT, it's an attempt to further whitewash American history. (synopsis of SC)

      What are you afraid of?

    1. Named after Soviet psychologist Bluma Zeigarnik, in psychology the Zeigarnik effect occurs when an activity that has been interrupted may be more readily recalled. It postulates that people remember unfinished or interrupted tasks better than completed tasks. In Gestalt psychology, the Zeigarnik effect has been used to demonstrate the general presence of Gestalt phenomena: not just appearing as perceptual effects, but also present in cognition.

      People remember interrupted or unfinished tasks better than completed tasks.

      Examples: I've had friends remember where we left off on conversations months/years later and we picked right back up.

      I wonder what things affect these memories/abilities? Context? Importance? Other?

  17. Jul 2021