33 Matching Annotations
  1. Jul 2023
    1. weakly informative approach to Bayesian analysis

      In [[Statistical Rethinking]], [[Richard McElreath]] defines [[weakly informative priors]] (aka [[regularizing priors]]) as

      priors that gently nudge the machine usually improve inference. Such priors are sometimes called regularizing or weakly informative priors. They are so useful that non-Bayesian statistical procedures have adopted a mathematically equivalent approach, [[penalized likelihood]]. (p. 35, 1st ed.)

    2. Science is not described by the falsification standard, as Popper recognized and argued. In fact, deductive falsification is impossible in nearly every scientific context. In this section, I review two reasons for this impossibility. (1) Hypotheses are not models. The relations among hypotheses and different kinds of models are complex. Many models correspond to the same hypothesis, and many hypotheses correspond to a single model. This makes strict falsification impossible. (2) Measurement matters. Even when we think the data falsify a model, another observer will debate our methods and measures. They don't trust the data. Sometimes they are right. For both of these reasons, deductive falsification never works. The scientific method cannot be reduced to a statistical procedure, and so our statistical methods should not pretend.

      Seems consistent with how Popper used the terms [[falsification]] and [[falsifiability]], as noted here.

    3. So where do priors come from? They are engineering assumptions, chosen to help the machine learn. The flat prior in Figure 2.5 is very common, but it is hardly ever the best prior. You'll see later in the book that priors that gently nudge the machine usually improve inference. Such priors are sometimes called regularizing or weakly informative priors. They are so useful that non-Bayesian statistical procedures have adopted a mathematically equivalent approach, penalized likelihood. These priors are conservative, in that they tend to guard against inferring strong associations between variables.

      p. 35, where [[Richard McElreath]] defines [[weakly informative priors]] (aka [[regularizing priors]]) in [[Bayesian statistics]] and notes that non-Bayesian methods have a mathematically equivalent approach called [[penalized likelihood]].
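
      A minimal sketch of the equivalence the note mentions (a standard result, not McElreath's own derivation or notation): putting a Gaussian prior on a parameter and maximizing the posterior is the same as maximizing a likelihood with an L2 (ridge) penalty, with the penalty weight set by the prior's width.

      ```latex
      % Bayes' rule in logs: posterior = likelihood + prior (up to a constant)
      \log p(\theta \mid y) = \log p(y \mid \theta) + \log p(\theta) + \text{const}
      % With a regularizing prior \theta \sim \mathrm{Normal}(0, \sigma^2):
      \log p(\theta) = -\frac{\theta^2}{2\sigma^2} + \text{const}
      % So maximizing the posterior is equivalent to minimizing the
      % penalized negative log-likelihood:
      -\log p(y \mid \theta) + \lambda\,\theta^2, \qquad \lambda = \frac{1}{2\sigma^2}
      ```

      A tighter prior (smaller sigma) gives a larger lambda, i.e., a stronger nudge toward zero, which is exactly the conservative behavior described above; as the prior flattens, lambda goes to zero and plain maximum likelihood is recovered.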

    4. Andrew Gelman’s

      Per Andrew Gelman's wiki:

      Andrew Eric Gelman (born February 11, 1965) is an American statistician and professor of statistics and political science at Columbia University.

      Gelman received bachelor of science degrees in mathematics and in physics from MIT, where he was a National Merit Scholar, in 1986. He then received a master of science in 1987 and a doctor of philosophy in 1990, both in statistics from Harvard University, under the supervision of Donald Rubin.

  2. Sep 2021
    1. One of the Indians that came from Medfield fight, had brought some plunder, came to me, and asked me, if I would have a Bible, he had got one in his basket. I was glad of it, and asked him, whether he thought the Indians would let me read? He answered, yes. So I took the Bible,

      They treat captives with more kindness than the Englishmen do. They still let her have access to religious texts.

    2. All was gone, my husband gone (at least separated from me, he being in the Bay; and to add to my grief, the Indians told me they would kill him as he came homeward), my children gone, my relations and friends gone, our house and home and all our comforts—within door and without—all was gone (except my life), and I knew not but the next moment that might go too.

      Alluding to how the Englishmen killed the Natives and took them as slaves for profit.
