1 Matching Annotation
  1. Jul 2018
    1. On 2013 Oct 25, Gregory Francis commented:

      I fear the science in this study does not back up the conclusions. An important part of the scientific process is to check whether the statistical data are consistent with the hypothesized theory. One way to do this is to compute the statistical power of the experiments. This analysis supposes that an experiment accurately measured the reported effect and then computes the probability that a randomly selected sample of data would satisfy the statistical criteria used to justify the scientific claims. For this study, these estimated probabilities are 0.56, 0.52, 0.48, and 0.57. That the values are close to one-half reflects the fact that the data were typically just to one side of the statistical criterion for finding an effect. Due to natural variation, data from some samples should fall on the other side of the criterion.
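
      For readers who want to reproduce this kind of calculation, here is a minimal sketch in Python using statsmodels. The effect size and sample sizes below are illustrative placeholders, not the study's actual values; the actual per-experiment numbers are derived in the Excel file linked below:

      ```python
      # Post-hoc power for a two-sample t-test at alpha = .05 (two-sided).
      # NOTE: effect_size and nobs1 are illustrative placeholders; the
      # actual per-experiment values are computed in the linked Excel file.
      from statsmodels.stats.power import TTestIndPower

      power = TTestIndPower().power(
          effect_size=0.5,  # hypothetical observed Cohen's d
          nobs1=30,         # hypothetical per-group sample size
          ratio=1.0,        # equal group sizes
          alpha=0.05,
      )
      print(f"estimated power: {power:.2f}")
      ```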

      The power analysis suggests that, at best, the odds of showing the effect in each experiment are roughly those of a coin flip. As such, the repeated success of the reported experiments is rather unbelievable (multiplying the four power values gives 0.08). Scientists should doubt the veracity of the reported experiments; they are too good to be true.
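
      That arithmetic is easy to check: assuming the experiments are independent, the probability that all four reach significance is the product of their individual power values:

      ```python
      # Joint probability that all four experiments succeed, assuming
      # independence: the product of the reported power estimates.
      from math import prod

      powers = [0.56, 0.52, 0.48, 0.57]
      print(f"joint probability: {prod(powers):.2f}")  # ~0.08
      ```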

      An Excel file that computes the effect sizes used in the power analysis can be found at http://www1.psych.purdue.edu/~gfrancis/Publications/VohReddenRahinel2013.xls


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
