On 2014 Apr 02, Daniel J Simons commented:
I wrote an extensive post-publication "HI-BAR" (Had I Been A Reviewer) review of this paper on my blog. You can access it at http://blog.dansimons.com/2014/04/hi-bar-benefits-of-lumosity-training.html
I posted a list of my concerns about the paper as a comment on the article at PLoS, and I've duplicated that list below. The blog post gives a more detailed discussion and explanation of each point. If the authors respond on PLoS, I'll update my comment and add a link to their response here as well.
In short, I do not think the paper permits the conclusion that game training produced any reliable benefits on the reported outcome measure.
List of questions and concerns:
1) The sample sizes of 15 in the training group and 12 in the control group are problematically small, especially for correlational analysis, but also for the primary analyses (see the rough power sketch after this list).
2) The "limited-contact" control group does not permit an inference that anything specific to the training led to the transfer effects. See http://pps.sagepub.com/content/8/4/445.full
3) The paper includes no corrections for multiple tests, and the core findings likely would not be significant with correction (see the correction sketch after this list, which uses the p-values from point 8).
4) The paper does not report the means and variability for the accuracy data, leaving open the possibility of a speed-accuracy tradeoff.
5) The choices of response-time cutoffs and exclusions were somewhat arbitrary, so it's not clear how robust these effects would be to other cutoffs.
6) The contrasts used to measure alertness and distraction were not defined. Which conditions were compared?
7) The alertness and distraction tests do not include a test of the difference between the training and control groups. The fact that the training group's difference was significant (but see below) and the control group's was not does not mean that the difference between the groups was significant (a toy numeric sketch after this list illustrates the point).
8) The training improvements for the alertness and distraction outcome measures were reported to be p=.05 and p=.04. But, they were truncated from p=.0565 and p=.0451. The first was not significant, and truncating the p-values is inappropriate. (Note that neither would be significant after correcting for multiple tests.)
9) The paper reports 20 correlations (each of the two outcome measures with each of the 10 games in the training condition), but does not correct for multiple tests. And correlations based on N=15 are of questionable reliability anyway (see the confidence-interval sketch after this list). Moreover, correlations between training improvements and improvements on an outcome measure do not provide evidence for the efficacy of training.
10) The conclusion claims support for the idea that training improved "attention filtering," but the study does not test that mechanism (and the evidence that anything improved at all is uncertain).
11) The clinicaltrials.gov registration linked from the paper was posted after the paper was first submitted for publication. It is not a pre-registration.
12) The clinicaltrials.gov registration mentions a number of outcome measures that were not reported in the paper and were not mentioned in the PLoS Protocol and CONSORT Checklist (in the supplementary materials). If these measures were collected, they should be reported in the paper and in the supplementary materials. It is unclear whether these outcome measures simply were not significant or were withheld for other reasons. In either case, the presence of unreported outcome measures makes it impossible to interpret the p-values for the one outcome measure reported in the paper.
13) The clinicaltrials.gov registration also lists a 24-week testing session that wasn't mentioned in the paper. Was the reported testing session an interim one?
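To make point 1 concrete, here is a minimal power sketch. It is my own illustration, not an analysis from the paper: it assumes a simple two-sample t-test and a medium effect of d = 0.5.

```python
# Approximate power for groups of n1 = 15 and n2 = 12, assuming a
# two-sample t-test and a medium effect (d = 0.5). Both assumptions are
# mine; the paper's design is more complex, but the point stands.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(
    effect_size=0.5,  # assumed Cohen's d
    nobs1=15,         # training group
    ratio=12 / 15,    # control group of 12
    alpha=0.05,
)
print(f"power = {power:.2f}")  # roughly 0.2, far below the conventional 0.8
```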
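On points 3 and 8, a minimal sketch of what even a lenient correction does to the two p-values the paper reports (.0565 and .0451). This ignores the paper's other uncorrected tests, which would only make the correction stricter.

```python
# Holm-Bonferroni correction applied to just the two reported p-values.
from statsmodels.stats.multitest import multipletests

reject, p_adj, _, _ = multipletests([0.0565, 0.0451], alpha=0.05, method="holm")
print(p_adj)   # [0.0902 0.0902] -- neither survives
print(reject)  # [False False]
```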
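Point 7 is the familiar "a difference in significance is not a significant difference" problem. A toy numeric sketch with invented numbers (not values from the paper):

```python
# One group's change can be "significant" while the other's is not, even
# though the between-group difference is nowhere near significant.
# All numbers below are invented for illustration.
from math import sqrt
from scipy.stats import norm

m1, se1 = 25.0, 12.5  # training group change +/- SE: z = 2.0, p ~ .046
m2, se2 = 10.0, 12.5  # control group change +/- SE:  z = 0.8, p ~ .42

z_diff = (m1 - m2) / sqrt(se1**2 + se2**2)
p_diff = 2 * norm.sf(abs(z_diff))
print(f"z = {z_diff:.2f}, p = {p_diff:.2f}")  # z = 0.85, p = 0.40
```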
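And for point 9, a quick sketch of how little a correlation based on N = 15 constrains the population value. The r = .5 is hypothetical; the width of the Fisher-z interval is the point.

```python
# 95% confidence interval for a hypothetical r = .5 with n = 15,
# via the Fisher z transformation.
from math import atanh, sqrt, tanh

r, n = 0.5, 15
z, se = atanh(r), 1 / sqrt(n - 3)
lo, hi = tanh(z - 1.96 * se), tanh(z + 1.96 * se)
print(f"r = {r}, 95% CI = [{lo:.2f}, {hi:.2f}]")  # about [-0.02, 0.81]
```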
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.