8 Matching Annotations
  1. Jul 2023
    1. weakly informative approach to Bayesian analysis

      In [[Richard McElreath]]'s [[Statistical Rethinking]], he defines [[weakly informative priors]] (aka [[regularizing priors]]) as

      priors that gently nudge the machine [which] usually improve inference. Such priors are sometimes called regularizing or weakly informative priors. They are so useful that non-Bayesian statistical procedures have adopted a mathematically equivalent approach, [[penalized likelihood]]. (p. 35, 1st ed.)
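
      A toy illustration (my own sketch, not the book's code, and in Python rather than the book's R): grid approximation of a binomial proportion under a flat Uniform(0, 1) prior versus a weakly informative Beta(2, 2) prior. With only a few observations, the regularizing prior gently pulls the estimate away from the extreme value the flat prior allows.

      ```python
      # Toy sketch (not from Statistical Rethinking): compare a flat prior with a
      # weakly informative / regularizing prior using simple grid approximation.
      import numpy as np
      from scipy.stats import binom, beta

      k, n = 3, 3                        # 3 successes in 3 trials: sparse data
      grid = np.linspace(0, 1, 1001)     # candidate values for the proportion p

      likelihood = binom.pmf(k, n, grid)
      flat_prior = np.ones_like(grid)    # Uniform(0, 1), a flat prior
      reg_prior  = beta.pdf(grid, 2, 2)  # weakly informative: nudges p away from 0 and 1

      def posterior(prior):
          unnorm = likelihood * prior
          return unnorm / unnorm.sum()

      print("Posterior mean, flat prior:        ", np.sum(grid * posterior(flat_prior)))  # ~0.80
      print("Posterior mean, regularizing prior:", np.sum(grid * posterior(reg_prior)))   # ~0.71
      ```

      The regularizing prior is conservative in exactly the sense of the quote: it guards against inferring an extreme proportion from three data points.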

    2. Science is not described by the falsification standard, as Popper recognized and argued.4 In fact, deductive falsification is impossible in nearly every scientific context. In this section, I review two reasons for this impossibility. (1) Hypotheses are not models. The relations among hypotheses and different kinds of models are complex. Many models correspond to the same hypothesis, and many hypotheses correspond to a single model. This makes strict falsification impossible. (2) Measurement matters. Even when we think the data falsify a model, another observer will debate our methods and measures. They don’t trust the data. Sometimes they are right. For both of these reasons, deductive falsification never works. The scientific method cannot be reduced to a statistical procedure, and so our statistical methods should not pretend.

      Seems consistent with how Popper used the terms [[falsification]] and [[falsifiability]], as noted here

    3. Statistical Rethinking: A Bayesian Course with Examples in R and Stan, by Richard McElreath

      A companion book to [[Richard McElreath]]'s phenomenal lecture course [[Statistical Rethinking]], which he made freely available here.

      Note that this is the 1st ed. of the book (2015).

      source

    4. Statisticians, for their part, can derive pleasure from scolding scientists, which just makes the psychological battle worse.

      Note to self: don't do this.

    5. So where do priors come from? They are engineering assumptions, chosen to help the machine learn. The flat prior in Figure 2.5 is very common, but it is hardly ever the best prior. You’ll see later in the book that priors that gently nudge the machine usually improve inference. Such priors are sometimes called regularizing or weakly informative priors. They are so useful that non-Bayesian statistical procedures have adopted a mathematically equivalent approach, penalized likelihood. These priors are conservative, in that they tend to guard against inferring strong associations between variables.

      p. 35 where [[Richard McElreath]] defines [[weakly informative priors]] aka [[regularizing priors]] in [[Bayesian statistics]]. Notes that non-Bayesian methods have a mathematically equivalent approach called [[penalized likelihood]].
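
      A sketch of the equivalence (my own working, not the book's notation): with independent Normal priors \(\beta_j \sim \text{Normal}(0, \tau)\) on the parameters, the log posterior is the log likelihood plus a quadratic term,

      \[
      \log p(\beta \mid y) = \log p(y \mid \beta) - \frac{1}{2\tau^2} \sum_j \beta_j^2 + \text{const},
      \]

      so the MAP estimate maximizes a penalized likelihood with penalty weight \(\lambda = 1/(2\tau^2)\): a tighter (more regularizing) prior corresponds to a stronger penalty, which is exactly the ridge-style penalized likelihood used in non-Bayesian practice.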

    6. The other imagines instead that population size fluctuates through time, which can be true even when there is no selective difference among alleles.

      McElreath is referring to \(\text{P}_{0\text{B}}\) (process model zero-B).

    7. one assumes the population size and structure have been constant long enough for the distribution of alleles to reach a steady state

      The population size & structure being "constant" is what [[Richard McElreath]] means by "equilibrium" in \(\text{P}_{0\text{A}}\) (process model zero-A), which corresponds to the null hypothesis

      \(\text{H}_0: \text{``Evolution is neutral''}\)
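
      A minimal simulation sketch (my own toy code, not McElreath's, with made-up population sizes): neutral Wright-Fisher drift of a single allele under a constant population size (the equilibrium model \(\text{P}_{0\text{A}}\)) versus a fluctuating population size (\(\text{P}_{0\text{B}}\)). Selection is absent in both, so both process models express the same null hypothesis that evolution is neutral, yet they generate different allele-frequency dynamics.

      ```python
      # Toy Wright-Fisher sketch (not from the book): neutral drift of one allele
      # with a constant population size vs. a population size that fluctuates
      # through time. Neither model includes selection.
      import numpy as np

      rng = np.random.default_rng(0)

      def neutral_drift(pop_sizes, p0=0.5):
          """Binomial resampling of an allele frequency, one generation per N."""
          p, freqs = p0, [p0]
          for N in pop_sizes:
              p = rng.binomial(2 * N, p) / (2 * N)   # 2N gene copies per generation
              freqs.append(p)
          return np.array(freqs)

      generations = 200
      constant_N    = np.full(generations, 1_000)                   # P0A-style: steady state
      fluctuating_N = rng.integers(100, 2_000, size=generations)    # P0B-style: boom and bust

      print("Final frequency, constant N:   ", neutral_drift(constant_N)[-1])
      print("Final frequency, fluctuating N:", neutral_drift(fluctuating_N)[-1])
      ```

      Drift is much noisier whenever the population crashes, which is why the two process models can support different conclusions about the same data even though both correspond to the one null hypothesis.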

    8. Andrew Gelman’s

      Per Andrew Gelman's wiki:

      Andrew Eric Gelman (born February 11, 1965) is an American statistician and professor of statistics and political science at Columbia University.

      Gelman received bachelor of science degrees in mathematics and in physics from MIT, where he was a National Merit Scholar, in 1986. He then received a master of science in 1987 and a doctor of philosophy in 1990, both in statistics from Harvard University, under the supervision of Donald Rubin.[1][2][3]