  1. Jul 2018
    1. On 2016 Nov 16, Joe G. Gitchell commented:

      Readers may find my post and the subsequent discussion of interest at this link (which includes relevant disclosures): http://conscienhealth.org/2016/11/finding-a-way-for-healthier-generations/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 29, Andrew Brown commented:

      The ability to evaluate and compare this study is limited by missing methodological details about the outcome phenotype: various measurements of obesity and adiposity. Please excuse me if I missed the details somehow, and I hope the authors will consider adding details to better help the scientific community evaluate the authors' contribution to microbiome-obesity research.

      The authors state that they evaluated three measures of abdominal adiposity, including subcutaneous fat mass (SFM). SFM is not a measurement of abdominal adiposity unless restricted to the abdomen. The methods do not make such a distinction, and instead only indicate that the data were collected from DXA. The authors cite two articles in the methods with respect to adiposity phenotypes; neither describes how SFM was defined.

      The authors state, "Visceral fat mass was calculated from one cross section of the whole body at L4–L5, the typical location of a CT slice;" no reference is provided to defend the reliability or appropriateness of such a method. For instance, some methods have used different lumbar positions and some people insist that DXA is inappropriate for visceral adipose tissue estimation.

      The authors also dichotomize 'high' and 'low' phenotypes of adiposity measurements without explanation in Figure 2. The methods of dichotomizing are also not clear: "For each phenotype, individuals who were more than 1.5 standard deviations from the mean of the phenotype were assigned to high and low phenotype groups respectively." Does this mean those more than 1.5 SD below the mean were 'low' and those more than 1.5 SD above it were 'high'? This would eliminate a large portion of the sample (removing everything within +/- 1.5 SD of the middle of the distribution would leave <15% of the sample if it were normally distributed). If, on the other hand, the sample was dichotomized at a single cut point 1.5 SD above the mean (for instance), this has severe limitations, because such cutoffs can be slid along the continuum to produce very different results. Thus, it is typically best to provide a theoretical basis for classification or to have a confirmation set (e.g., see Ivanescu AE, 2016).
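      The "<15%" figure above can be checked directly. The sketch below assumes a standard normal distribution (it is not tied to the study's actual data) and computes the fraction of a sample lying beyond +/- 1.5 SD of the mean:

      ```python
      # Fraction of a normally distributed sample retained if only individuals
      # more than 1.5 SD from the mean are kept (both tails combined).
      # Assumes normality; illustrative only, not the study's actual data.
      import math

      def two_tailed_fraction(z: float) -> float:
          """Fraction of a standard normal distribution lying beyond +/- z."""
          return math.erfc(z / math.sqrt(2))

      print(round(two_tailed_fraction(1.5), 4))  # about 0.1336, i.e. < 15% of the sample
      ```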

      It is also unclear if these dichotomized values were used throughout the rest of the manuscript (e.g., they appear to be in figure 4). This impairs the reader's ability to evaluate results and compare to new or old findings.



    1. On 2016 Dec 05, Donald Forsdyke commented:

      SELECTIVE PRESSURE TO CONSERVE VIRUS SPECIES IDENTITY

      The authors correctly note that "the most obvious parameter associated with G + C content is the strength of molecular hybridization of polynucleotide duplexes" (1). Such hybridization controls recombination, which is favored when there is close sequence resemblance between different co-infecting viruses ("complete alignment conserved"), and is impeded when there is less sequence resemblance ("complete alignment variable"). The latter anti-recombination activity can be considered in relation to speciation mechanisms that initiate and retain taxonomic differentiations. As recently noted by Meyer et al., allied species of "viruses that infect the same [host] species and cell types are thought to have evolved mechanisms to limit recombination." Without such limitations the genomes would blend and co-infectants would lose their independence as distinct viral species. Mechanisms overcoming this selective disadvantage include "divergences in nucleotide composition and RNA structure that are analogous to pre-zygotic barriers in plants and animals" (2).

      Thus, a nucleic acid region may be "conserved," not only because it encodes a protein (i.e. there is "protein pressure" on the sequence), but because it has a specific nucleotide composition (e.g. "GC-pressure"). While protein pressure mainly affects the first and second codon positions, GC-pressure can affect all codon positions. Indeed, at first and second codon positions there may be conflict between pressures, especially when protein pressure is high (i.e. in regions where amino acid conservation is high); then GC-pressure is constrained to vary only at the more flexible third codon position. In contrast, when protein pressure is low (i.e. in regions where amino acid conservation is low), then GC-pressure has greater freedom to affect all codon positions.

      If, to avoid recombination, there is selective pressure on one branch of a diverging line to decrease its GC%, then it would be predicted that "the GC% of nucleotides encoding conserved amino acid (AA) residues" would be "consistently higher than that of nucleotides encoding variable AAs," where the pressure to decrease GC% has fuller rein to encompass all three codon positions (1). Conversely, when there is pressure on a diverging line to increase its GC%, it would be predicted that the GC% corresponding to conserved codons would be consistently lower than that of non-conserved codons (e.g. Ebolavirus).

      For flavivirus "the mean G% of the core conserved AA residues is higher (35%) than that of the variable AA residues (28%), but the mean G3% of the core conserved AA residues (28%) is similar to that of the variable AA residues (29%)" (1). While consistent with the above views, there is need for information on C3% and relative frequencies of synonymous codons (e.g. the two cysteine codons correspond either to low or high GC%). More details of selective anti-recombination pressures are presented elsewhere (3, 4). Similar considerations may apply to codon biases and GC% among mycobacteriophages (5).
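      For readers wishing to reproduce the kinds of quantities discussed above (overall GC% and third-codon-position GC3%), a minimal sketch follows; the coding sequence is a toy example, not taken from the paper:

      ```python
      # Overall GC% and third-codon-position GC% (GC3%) of an in-frame coding
      # sequence. Toy sequence for illustration only.
      def gc_percent(seq: str) -> float:
          """GC% over all positions of the sequence."""
          seq = seq.upper()
          return 100.0 * sum(base in "GC" for base in seq) / len(seq)

      def gc3_percent(seq: str) -> float:
          """GC% restricted to the third base of each in-frame codon."""
          thirds = seq.upper()[2::3]
          return 100.0 * sum(base in "GC" for base in thirds) / len(thirds)

      cds = "ATGGCCTGTTGCAAATAA"  # hypothetical 6-codon sequence
      print(round(gc_percent(cds), 1), round(gc3_percent(cds), 1))  # 38.9 50.0
      ```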

      1. Klitting R, Gould EA, de Lamballerie X (2016) G + C content differs in conserved and variable amino acid residues of flaviviruses and other evolutionary groups. Infection, Genetics and Evolution 45: 332-340.

      2. Meyer JR, Dobias DT, Medina SJ, Servilio L, Gupta A, Lenski RE (2016) Ecological speciation of bacteriophage lambda in allopatry and sympatry. Science 354: 1301-1304.

      3. Forsdyke DR (2014) Implications of HIV RNA structure for recombination, speciation, and the neutralism-selectionism controversy. Microbes and Infection 16: 96-103.

      4. Forsdyke DR (2016) Evolutionary Bioinformatics, 3rd edition. Springer, New York.

      5. Esposito LA, Gupta S, Streiter F, Prasad A, Dennehy JJ (2016) Evolutionary interpretations of mycobacteriophage biodiversity and host-range through the analysis of codon usage bias. Microbial Genomics 2(10), doi: 10.1099/mgen.0.000079. See arXiv preprint.



    1. On 2017 Jun 30, Seán Turner commented:

      Traore et al. (2015) [PMID 27257486] and Vicino et al. (2016) [PMID 27660714] both claim DSM 29078 to be the type strain of Bacillus andreraoultii and Murdochiella massiliensis, respectively. DSM 29078 is not listed in the DSMZ online catalog of strains as of 30 June 2017.



    1. On 2016 Dec 08, Ole Jakob Storebø commented:

      Spin and double spin – the Letter to the Editor by Romanos et al. (2016) is indeed spinning.

      Response to "Check and Double Check – the Cochrane review by Storebø et al. is indeed flawed"

      (This letter was rejected by the editors of Zeitschrift für Kinder- und Jugendpsychiatrie und Psychotherapie.)

      Romanos et al. 2016 (Romanos M, 2016) continue to publish disagreements with the findings of our Cochrane systematic review (Storebø OJ, 2015) that do not have any meaningful effect on our estimate of effect size regarding methylphenidate for children and adolescents with ADHD. Our main point is that due to the very low quality of all the evidence one cannot state anything for sure about the true magnitude of the effect.

      It is correct that a post-hoc exclusion of the four trials with co-interventions in both MPH and control groups and the one trial of preschool children will change the effect size from 0.77 to 0.89. We have responded several times to this group of authors (Storebø OJ, 2015, Storebø OJ, 2016, Storebø OJ, 2016, Storebø OJ, 2016, OJ Storebø et al, 2016: doi:10.1136/eb-2016-102499) regarding these trials, which were included in keeping with our protocol, which was published a priori (Storebø OJ, 2015).

      We agree that there might be an effect of both clonidine and behavioral therapy. However, these effects are balanced by their use as add-on therapies in both arms of the trials, i.e. the methylphenidate and no-methylphenidate arms. Excluding these trials would be a post hoc decision made purely to increase the effect size and would be in conflict with our reviewed protocol (Storebø OJ, 2015).

      There is no evidence supporting a valid cut-off score for the standardized mean difference (SMD) effect size that can be used by clinicians. When reporting an SMD, one of the challenges facing researchers is to determine the significance of any differences observed and to communicate this to the clinicians who will apply the results of the systematic review in clinical practice.

      The use of a Minimal Clinically Relevant Difference (MIREDIF) is a valid way to express the minimum clinically important improvement considered worthwhile by clinicians and patients (Copay AG, 2007). The variability of the MIREDIF is also important, which is why we reported the 95% confidence intervals of the transformed mean value in our review. Even with a difference in means below the MIREDIF, a proportion of the patients will have a value above the MIREDIF. Similarly, a proportion of the patients will have a value below the MIREDIF.
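      The distributional point above can be illustrated numerically. The sketch below uses assumed numbers (a mean improvement of 0.3 SMD, a between-patient SD of 1.0, and a threshold of 0.5 SMD; none of these are taken from the review) to show that even when the mean difference falls below a clinically relevant threshold, a sizable proportion of patients still exceed it:

      ```python
      # Fraction of patients exceeding a clinical threshold when outcomes are
      # normally distributed around the group mean. All numbers are assumed
      # for illustration, not taken from the Cochrane review.
      import math

      def fraction_above(threshold: float, mean: float, sd: float) -> float:
          """Fraction of a normal outcome distribution exceeding `threshold`."""
          z = (threshold - mean) / sd
          return 0.5 * math.erfc(z / math.sqrt(2))

      # Mean improvement 0.3 SMD, SD 1.0, threshold (MIREDIF) 0.5 SMD:
      print(round(fraction_above(0.5, mean=0.3, sd=1.0), 2))  # about 0.42
      ```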

      The use of end-of-period data in cross-over trials is problematic due to the risk of "carry-over effect" (Cox DJ, 2008) and "unit of analysis errors" (http://www.cochrane-handbook.org). In addition, we have tested for the risk of "carry-over effect" by comparing trials with first-period data to trials with end-of-period data in a subgroup analysis. This showed no significant subgroup difference, but the analysis has sparse data, so the risk cannot be ruled out. Even though there was no statistical difference in our subgroup analysis comparing parallel-group trials to end-of-period data in cross-over trials, heterogeneity was high, and this could mean that the risks of "unit of analysis error" and "carry-over effect" were in fact real.

      We have continued to argue that the well-known adverse events of methylphenidate, such as loss of appetite and disturbed sleep, can be detected by teachers. We highlighted this in our review (Storebø OJ, 2015) and answered this point in several replies to these authors (Storebø OJ, 2015, Storebø OJ, 2016, Storebø OJ, 2016, Storebø OJ, 2016, OJ Storebø et al, 2016: doi:10.1136/eb-2016-102499). It is not about controlling the amount of food children eat in the schoolyard or assessing their sleep quality at night. The well-known adverse events of "loss of appetite" and "disturbed sleep" are easily observable by teachers as uneaten food left on lunch plates, yawning, general tiredness, and even weight loss.

      There is considerable evidence that trials sponsored by industry overestimate benefits and underestimate harms (Lundh A, 2012, other citations). We did receive a table for the Coghill 2013 trial from the authors. We did, however, not ask for information about funding as it was clearly stated in Coghill 2013 that this trial was funded by Shire Development LLC (Coghill D, 2013).

      It is true that some participants in the MTA study (10%) allocated to the methylphenidate treatment were titrated to dextroamphetamine (Anonymous, 1999). We wanted to conduct a reanalysis of the data excluding the participants who did not receive methylphenidate. We contacted Dr. Swanson and he provided several helpful comments. He also enclosed published articles, but we did not receive additional data, in part because of the time frame of our review (Storebø OJ, 2015). A sensitivity analysis excluding the MTA study does not significantly change the effect estimate.

      We have seriously considered the persistent, repeated criticisms by Romanos et al. published in a number of different journals; however, none of these have provided evidence that justifies changing our conclusions about the effects of MPH and the very low quality of the evidence from methylphenidate trials. We had no preconceptions about the findings of this review and followed the published protocol; any manipulations of the data proposed by this group of authors would therefore be in contradiction to the accepted methods of high-quality meta-analysis. As it is unlikely that any further criticisms from these authors will change our conclusions, and as we feel we have repeatedly responded clearly to each of these criticisms, we propose to agree to disagree.

      Ole Jakob Storebø, Psychiatric Research Unit, Psychiatric Department, Region Zealand, Denmark

      Morris Zwi, Islington CAMHS, Whittington Health, London, UK

      Carlos Renato Moreira-Maia, Federal University of Rio Grande do Sul, Porto Alegre, Brazil

      Camilla Groth, Pediatric Department, Herlev University Hospital, Herlev, Denmark

      Donna Gillies, Western Sydney Local Health District; Mental Health, Parramatta, Australia

      Erik Simonsen, Psychiatric Research Unit, Psychiatric Department, Region Zealand, Denmark

      Christian Gluud, Copenhagen Trial Unit, Centre for Clinical Intervention Research, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark



    1. On 2017 Feb 04, Misha Koksharov commented:

      It's quite interesting that this type of cyclization doesn't prevent the necessary C-domain rotation and luciferase remains fully active.

      Fig. 8 needs some corrections: 1) The pH values are swapped in the figure caption [(A) pH 5.5 and (B) pH 7.8]. 2) There is probably a problem with the pH adjustment for pH 5.5: P. pyralis luciferase should give a nearly monomodal red spectrum at pH 5.5, perhaps with a faint green shoulder (Branchini BR, 2007, Oba Y, 2012, Riahi-Madvar A, 2009). Here the red appears only as a faint shoulder.



    1. On 2016 Oct 22, Lydia Maniatis commented:

      As is well-known, perceived color and lightness are contingent on the distribution of luminance/chromaticity in the proximal stimulus and on the corresponding structure of the percept, via the visual process and the principles instantiated therein.

      This being the case, any conclusions regarding perceived color contrast that do not explicitly include structure in the discussion will not generalize but will be strictly ad hoc. The results of the present study apply to "plaids", and surely not all plaids, as a category, given the relevance of particular chromaticity values and distributions. The title of the paper, however, implies generality.

      The authors also inappropriately leave structure out of the conversation when they speculate that:

      "Superimposed luminance and color contrast without co-alignment commonly occurs in natural scenes when shadows or shading fall on a colored surface, whereas co-aligned color and luminance borders are indicative of object and material boundaries, suggesting these two situations activate different color-luminance interactions. "

      As descriptions, the terms "superimposed luminance and color contrast without co-alignment" and "co-aligned color and luminance borders" are structure-blind and thus not specific enough to establish whether and where "shadows" and "object boundaries" will result in the percept. Easy examples, always to hand, are the amodally-completed boundaries in the Kanizsa triangle (which occur in addition to the subjective contours), the appearance of overlap within figures without a luminance step, and situations in which the color spectrum or intensity range is shifted so as to produce the impression of colored illumination or shading. Whether two areas are interpreted as overlapping ("superimposed" is a description of the percept, not of the proximal stimulus or the stimulus presented on a screen) depends on relative surface properties and their shapes, not on simple "alignments."

      So to refer to two different "situations" on the basis of the result (the percept), and to assume different mechanisms ("different color-luminance interactions") without adequately specifying a priori how these two situations are differentiated with respect to the stimulus structure, is putting the cart before the horse.



    1. On 2016 Oct 11, Alem Matthees commented:

      References for the above comment

      1) Hawkes N. Freedom of information: can researchers still promise control of participants' data? BMJ. 2016 Sep 21;354:i5053. doi: 10.1136/bmj.i5053. PMID: 27654128. http://www.bmj.com/content/354/bmj.i5053

      2) White PD, Goldsmith KA, Johnson AL, Potts L, Walwyn R, DeCesare JC, Baber HL, Burgess M, Clark LV, Cox DL, Bavinton J, Angus BJ, Murphy G, Murphy M, O'Dowd H, Wilks D, McCrone P, Chalder T, Sharpe M; PACE trial management group. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet. 2011 Mar 5;377(9768):823-36. doi: 10.1016/S0140-6736(11)60096-2. Epub 2011 Feb 18. PMID: 21334061. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3065633/

      3) Confidentiality: NHS Code of Practice. November 2003. https://www.gov.uk/government/publications/confidentiality-nhs-code-of-practice

      4) General Medical Council (2009). Confidentiality guidance: Research and other secondary uses. http://www.gmc-uk.org/guidance/ethical_guidance/confidentiality_40_50_research_and_secondary_issues.asp

      5) Queen Mary University of London. PACE trial protocol: Final version 5.2, 09.03.2006. A1.6-A1.7 [Full Trial Consent Forms] https://www.whatdotheyknow.com/request/203455/response/508208/attach/3/Consent forms.pdf

      6) Appeal to the First-tier Tribunal (Information Rights), case number EA/2015/0269: http://informationrights.decisions.tribunals.gov.uk/DBFiles/Decision/i1854/Queen Mary University of London EA-2015-0269 (12-8-16).PDF

      7) Tracking switched outcomes in clinical trials. http://compare-trials.org/

      8) White PD, Chalder T, Sharpe M, Johnson T, Goldsmith K. PACE trial authors' reply to letter by Kindlon. BMJ. 2013 Oct 15;347:f5963. doi: 10.1136/bmj.f5963. PMID: 24129374. http://www.bmj.com/content/347/bmj.f5963

      9) Goldsmith KA, White PD, Chalder T, Johnson AL, Sharpe M. The PACE trial: analysis of primary outcomes using composite measures of improvement. Queen Mary University of London. 8 September 2016. http://www.wolfson.qmul.ac.uk/images/pdfs/pace/PACE_published_protocol_based_analysis_final_8th_Sept_2016.pdf

      10) Wikipedia. Multiple comparisons problem. Accessed 02 October 2016. https://en.wikipedia.org/wiki/Multiple_comparisons_problem

      11) Matthees A, Kindlon T, Maryhew C, Stark P, Levin B. A preliminary analysis of ‘recovery’ from chronic fatigue syndrome in the PACE trial using individual participant data. Virology Blog. 21 September 2016. http://www.virology.ws/wp-content/uploads/2016/09/preliminary-analysis.pdf

      12) Tuller D. Trial By Error, Continued: Questions for Dr. White and his PACE Colleagues. Virology Blog. 4 January 2016. http://www.virology.ws/2016/01/04/trial-by-error-continued-questions-for-dr-white-and-his-pace-colleagues/

      13) Walwyn R, Potts L, McCrone P, Johnson AL, DeCesare JC, Baber H, Goldsmith K, Sharpe M, Chalder T, White PD. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan. Trials. 2013 Nov 13;14:386. doi: 10.1186/1745-6215-14-386. PMID: 24225069. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4226009/

      14) White PD, Goldsmith K, Johnson AL, Chalder T, Sharpe M. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychol Med. 2013 Oct;43(10):2227-35. doi: 10.1017/S0033291713000020. PMID: 23363640. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3776285/

      15) Beth Smith ME, Nelson HD, Haney E, et al. Diagnosis and Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. Rockville (MD): Agency for Healthcare Research and Quality (US); 2014 Dec. (Evidence Reports/Technology Assessments, No. 219.) July 2016 Addendum. Available from: http://www.ncbi.nlm.nih.gov/books/NBK379582/

      16) Matthees A. Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. Ann Intern Med. 2015 Dec 1;163(11):886-7. doi: 10.7326/L15-5173. PMID: 26618293. http://www.ncbi.nlm.nih.gov/pubmed/26618293

      17) Kennedy A. Authors of our own misfortune?: The problems with psychogenic explanations for physical illnesses. CreateSpace Independent Publishing Platform. 4 September 2012. ISBN-13: 978-1479253951. https://www.amazon.com/Authors-our-own-misfortune-explanations/dp/1479253952

      18) Sharpe M, Goldsmith KA, Johnson AL, Chalder T, Walker J, White PD. Rehabilitative treatments for chronic fatigue syndrome: long-term follow-up from the PACE trial. Lancet Psychiatry. 2015 Dec;2(12):1067-74. doi: 10.1016/S2215-0366(15)00317-X. Epub 2015 Oct 28. PMID: 26521770. https://www.ncbi.nlm.nih.gov/pubmed/26521770

      19) Higgins JPT, Altman DG, Sterne JAC; on behalf of the Cochrane Statistical Methods Group and the Cochrane Bias Methods Group. Chapter 8: Assessing risk of bias in included studies. Version 5.1.0 [updated March 2011]. http://handbook.cochrane.org/chapter_8/8_assessing_risk_of_bias_in_included_studies.htm

      20) Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet. 2002 Feb 23;359(9307):696-700. PMID: 11879884. http://www.who.int/rhl/LANCET_696-700.pdf

      21) Boot WR, Simons DJ, Stothart C, Stutts C. The Pervasive Problem With Placebos in Psychology: Why Active Control Groups Are Not Sufficient to Rule Out Placebo Effects. Perspect Psychol Sci. 2013 Jul;8(4):445-54. doi: 10.1177/1745691613491271. PMID: 26173122. http://pps.sagepub.com/content/8/4/445.long

      22) Button KS, Munafò MR. Addressing risk of bias in trials of cognitive behavioral therapy. Shanghai Arch Psychiatry. 2015 Jun 25;27(3):144-8. doi: 10.11919/j.issn.1002-0829.215042. PMID: 26300596. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4526826

      23) Wood L, Egger M, Gluud LL, Schulz KF, Jüni P, Altman DG, Gluud C, Martin RM, Wood AJ, Sterne JA. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ 336 (7644): 601–5. DOI:10.1136/bmj.39465.451748.AD. PMID 18316340. PMC 2267990. http://www.bmj.com/cgi/content/full/336/7644/601

      24) Kindlon TP. Objective measures found a lack of improvement for CBT & GET in the PACE Trial: subjective improvements may simply represent response biases or placebo effects in this non-blinded trial. BMJ Rapid Response. 18 January 2015. http://www.bmj.com/content/350/bmj.h227/rr-10

      25) Knoop H, Wiborg J. What makes a difference in chronic fatigue syndrome? Lancet Psychiatry. 2015 Feb;2(2):113-4. doi: 10.1016/S2215-0366(14)00145-X. Epub 2015 Jan 28. PMID: 26359736. https://www.ncbi.nlm.nih.gov/pubmed/26359736

      26) johnthejack (Peters J). Using public money to keep publicly funded data from the public. 29 June 2016. https://johnthejack.com/2016/06/29/using-public-money-to-keep-publicly-funded-data-from-the-public/

      27) Coyne JC. Further insights into war against data sharing: Science Media Centre’s letter writing campaign to UK Parliament. 31 January 2016. https://jcoynester.wordpress.com/2016/01/31/further-insights-into-the-war-against-data-sharing-the-science-media-centres-letter-writing-campaign-to-uk-parliament/



    2. On 2016 Oct 11, Alem Matthees commented:

      On 4 October 2016 I submitted a BMJ Rapid Response to this article but one week later it has not been posted on the BMJ website ( http://www.bmj.com/content/354/bmj.i5053/rapid-responses ). I am not sure whether this delay is normal or if it has been rejected. So I will post it on PubMed Commons. The following is as previously submitted, except for the addition of one word to help clarify a sentence, and the addition of a PMID for reference 22 (and removing a stray character). I also had to move the references to a separate PubMed Commons comment below this one:

      The PACE trial investigators never had total control over the data to begin with

      Thank you for covering this issue. Some comments:

      a) This article states that it was “not possible” to contact me[1]. However, my email address shows up on the first page of results of a Google search for Alem Matthees.

      b) Regarding the modification of consent forms for future trials to address FOIA data releases. The FOIA was implemented in January 2005, before PACE trial participants were recruited[2]; under the legislation, trial data that is unlikely to identify participants is not personal data, and it was always QMUL’s responsibility to be aware that trial data is within scope of the FOIA, but they failed to inform participants of this possibility. Similarly, confidentiality guidelines from the NHS[3] and GMC[4] state that consent is not necessary to release de-identified data. The trial consent form promised that identities will be protected[5], and they have been. The Information Commissioner and Information Tribunal considered and rejected the assertions that FOIA data releases would significantly affect recruitment in future studies[6].

      c) The lesson here is not about ‘controlling’ data; it is that if data are not analysed or published in a fair and transparent way, people will seek to acquire and re-analyse them, particularly when debatable claims have been made that affect the lives of millions of patients. The major deviations from the published trial protocol, the recovery criteria in particular, were what motivated me. Outcome switching is recognised as a major problem in the research community[7].

      While it was important to find out the protocol-specified primary outcomes that were abandoned, the changes to the recovery criteria (a secondary analysis) were the most problematic. I sought the data after QMUL refused to release the protocol-specified outcomes for improvement and recovery. It is misleading to promote ‘recovery’ rates of 22% when based on indefensible criteria, such as thresholds of ‘normal’ fatigue and physical function that overlap with trial eligibility criteria for severe disabling fatigue, and where one-third still met Oxford CFS criteria.

      d) White et al. previously downplayed the changes to the primary outcomes as the primary measures “were the same as those described in the protocol”[8]. Now that the results for the protocol-specified primary outcomes are known and people are comparing them with the post-hoc equivalents, Professor White is arguing that “They’re not comparing like with like […] They are comparing one measure with a completely different one—it’s apples and pears”.[1]

      Professor White also stated that going back to the protocol makes no difference, as adjunctive CBT and GET are still statistically significantly better than specialist medical care alone[1]. However, statistical significance is not the same as clinical significance, and going back to the protocol decreases the response rates in the CBT and GET groups from approximately 60% down to 20% (compared to 45% down to 10% for SMC alone)[9].

      While it may be argued that the above does not change the conclusion that adjunctive CBT and GET are superior to SMC alone, the trial investigators conducted many analyses without correcting for multiple comparisons[10]; based on a quick look at the p values, some of the differences reported may not be statistically significant when using a more conservative approach.
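      The effect of correcting for multiple comparisons can be sketched as follows. The p values below are hypothetical (not taken from the trial); the point is simply that a raw p value under 0.05 can fail a Bonferroni-adjusted threshold once the number of comparisons is accounted for:

      ```python
      # Bonferroni correction: with m comparisons, each raw p value is tested
      # against alpha/m instead of alpha. Hypothetical p values for illustration.
      def bonferroni_significant(p_values, alpha=0.05):
          """Return, per raw p value, whether it survives the adjusted threshold."""
          m = len(p_values)
          return [p < alpha / m for p in p_values]

      raw_p = [0.02, 0.04, 0.001, 0.03]     # hypothetical raw p values
      print(bonferroni_significant(raw_p))  # only p = 0.001 beats alpha/4 = 0.0125
      ```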

      Moreover, the data has been re-analysed: going back to the protocol not only decreased the 'recovery' rates from 7-22% down to 2-7%, but the differences between the adjunctive therapy groups and SMC alone are not significant[11]. There appears to be a consistent pattern of outcome switching and major changes to thresholds that inflate the results by several times over.

      e) The article states that White et al. “had answered critics who have made legitimate scientific points”. However, there are numerous legitimate questions or problems that are unaddressed[12].

      f) The article states that out of 37 FOIA requests made, “many” were rejected as vexatious. But only 3 or so have been rejected under S.14 (e.g. see whatdotheyknow.com), the first one was in relation to details about the timing and nature of the changes to the protocol, the other two or so by others, relating to trial data. Asking for trial data or for details about methodology is not harassment.

      g) It remains unclear whether all the changes to the trial protocol were made and independently approved before analysing data. Statements about this issue appear to relate to the 2011 Lancet paper only, but there is no mention of change to the recovery criteria in the statistical analysis plan that was finalised shortly before the unmasking of data[13]. The ‘normal range’ is described in the 2011 Lancet paper as a post-hoc analysis[2], and this ‘normal range’ then formed part of the revised recovery criteria published in Psychological Medicine in 2013[14] without any mention of approval. I urge the trial investigators to clarify once and for all whether the changes to the recovery criteria were made after the unmasking of any trial data and whether these were independently approved.

      h) Patients want to get better but many are simply not impressed with the methodology or results of PACE: 80% of candidates definitely or provisionally diagnosed with CFS were excluded from the trial[2]. The CFS and ME case criteria used were problematic[15-17]. Only a small minority of broadly defined CFS patients reported benefit from CBT or GET (around 10-15% over SMC). That benefit was modest and transient, with no significant advantages at 2.5 year follow-up[18].

      Subjective self-reports are important, but modest improvements are difficult to separate from a placebo response and other reporting biases when a trial is non-blinded and tests therapies that aim to change patients’ perceptions about their illness[19-23]. This issue becomes more relevant given that there was a complete absence of meaningful improvements to multiple objective outcomes[24] (the small improvement in walking distance for the GET group has been attributed by CBT/GET proponents to participants pushing themselves harder on the test rather than being fitter[25]).

      i) QMUL spent £245,745 on legal fees trying to prevent release of the requested data[26], and were also part of a failed lobbying attempt to be removed from the FOIA[27].

      References

      [continued below...]


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 26, Peter Hajek commented:

      The title asserts that smokers who switch to vaping start to drink more. The paper, however, shows no such thing: it merely reports that ex-smokers who vape drink more than ex-smokers who do not vape. Heavier smokers are more likely to seek nicotine maintenance and are also heavier drinkers, so the difference almost certainly predated quitting. The title does not reflect the study findings and misleads casual readers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 30, Wasim Maziak commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 26, Peter Hajek commented:

      The letter has a misleading title: it identifies no 'unsubstantiated claims'. It merely notes that some ex-smokers would have quit anyway and that surveys have a margin of error. It raises no material issues that would suggest that the article's finding of a huge number of smokers who claim to have stopped smoking with the help of e-cigarettes is inaccurate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 27, James Yeh commented:

      Editor's Comment: Obesity and Management of Weight Loss — Polling Results

      James Yeh, M.D., M.P.H., and Edward W. Campion, M.D.

      Obesity is increasingly prevalent worldwide, and about 40% of Americans meet the diagnostic criteria for obesity.[1] The goal of weight loss is to reduce the mortality and morbidity risks associated with obesity. Patients with a body-mass index (BMI) in the range that defines obesity (>30) have a risk of death that is more than twice that of persons with a normal BMI.[2] Obesity is also associated with increased risks of cardiovascular disease, diabetes, and several cancers. A recent study suggests that being overweight or obese during adolescence is strongly associated with increased cardiovascular mortality in adulthood.[3] Studies suggest that even a 5% weight loss may reduce the complications associated with obesity.[4]

      In September 2016, we presented the case of Ms. Chatham, a 29-year-old woman with class I obesity (BMI, 32) who leads a fairly sedentary lifestyle, with frequent reliance on takeout foods and with infrequent physical activity.[5] Readers were invited to vote on whether to recommend initiating treatment with one of the FDA-approved drugs for weight loss along with lifestyle modifications or to recommend only nonpharmacologic therapies and maximizing lifestyle changes. The patient has no coexisting medical conditions, but her blood pressure is slightly elevated (144/81 mm Hg). In the past, Ms. Chatham has tried to lose weight using various diets, each time losing 10 to 15 lb (4.5 to 6.8 kg), but she has never been able to successfully maintain weight loss.

      Over 85,000 readers viewed the Clinical Decisions vignette during the polling period, and 905 readers from 91 countries voted in the informal poll. The largest group of respondents (366) was from the United States or Canada, representing nearly 40% of the votes. A large majority of the readers (80%) voted against prescribing one of the FDA-approved medications for weight loss and instead recommended maximizing lifestyle modification and nonpharmacologic therapies first.

      A substantial proportion of the 64 Journal readers who submitted comments expressed concern about the absence of efficacy data on long-term follow-up and about the side effects associated with current FDA-approved medications for weight loss. Some suggested that simply treating obesity with a prescription medication is shortsighted and that it is important to uncover patients’ motivations for existing lifestyle choices and for weight loss. The commenters emphasized the need for a multifaceted approach to obesity management that includes nutritional and psychological support, as well as stress management, with the goal of long-lasting improvement in exercise and eating habits that will lead to weight reduction and maintenance of a healthier weight.

      Some commenters, noting the difficulty of lifestyle changes, felt that pharmacotherapy can be a complementary and reasonable part of a multidisciplinary treatment plan. Some wrote that obesity should be managed as a chronic disease is managed and that an inability to lose weight should not be seen as a disciplinary issue, especially given the importance of genetic and physiological factors. These commenters argued that the use of pharmacotherapy as part of the treatment plan to achieve weight loss should not be stigmatized.

      Overall, the results of this informal Clinical Decisions poll indicate that a majority of the respondents think physicians should not initially recommend the use of an FDA-approved drug as part of a weight-loss strategy, at least not for a patient such as Ms. Chatham, and that many respondents were troubled by the current uncertainties about the long-term efficacy and safety of weight-loss drugs.

      REFERENCES

      1. Flegal KM, Kruszon-Moran D, Carroll MD, Fryar CD, Ogden CL. Trends in obesity among adults in the United States, 2005 to 2014. JAMA 2016;315:2284-91.

      2. Global BMI Mortality Collaboration. Body-mass index and all-cause mortality: individual-participant-data meta-analysis of 239 prospective studies in four continents. Lancet 2016;388:776-86.

      3. Twig G, Yaniv G, Levine H, et al. Body-mass index in 2.3 million adolescents and cardiovascular death in adulthood. N Engl J Med 2016;374:2430-40.

      4. Kushner RF, Ryan DH. Assessment and lifestyle management of patients with obesity: clinical recommendations from systematic reviews. JAMA 2014;312:943-52.

      5. Yeh JS, Kushner RF, Schiff GD. Obesity and management of weight loss. N Engl J Med 2016;375:1187-9.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 11, Richard Kellermayer commented:

      Great work!

      It would have been nice, however, to see our work on the mucosal mycobiome in pediatric Crohn disease patients discussed and referenced:

      Microbiota separation and C-reactive protein elevation in treatment-naïve pediatric granulomatous Crohn disease. Kellermayer R, Mir SA, Nagy-Szakal D, Cox SB, Dowd SE, Kaplan JL, Sun Y, Reddy S, Bronsky J, Winter HS. J Pediatr Gastroenterol Nutr. 2012 Sep;55(3):243-50. doi: 10.1097/MPG.0b013e3182617c16. PMID: 22699834


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 03, Atanas G. Atanasov commented:

      "Many "rules" for writing good science abstracts associate with fewer citations":

      https://twitter.com/mattjhodgkinson/status/594865422840762368 ...and... http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004205

      ...another study pointing in the same direction as our work.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 21, Murat Büyükşekerci commented:

      In my opinion there is a problem with the title of this article. The term "mediator" refers to intracellular proteins that enhance and activate the functions of other proteins, whereas thiol/disulphide homeostasis is a term used to describe the redox state of the milieu; it therefore cannot be defined as a novel mediator. Thank you for your consideration.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 16, Thomas Langer commented:

      Loss of m-AAA proteases increases mitochondrial Ca2+ influx at low cytosolic [Ca2+]

      We demonstrate in our paper that the m-AAA protease (AFG3L2/SPG7) degrades EMRE, an essential subunit of the mitochondrial Ca2+ uniporter MCU. Loss or decrease of m-AAA protease activity, as observed in SCA28, impairs the assembly of MCU with the gatekeeper subunits MICU1/2 and results in the formation of unregulated, open MCU. This causes an increased mitochondrial Ca2+ influx at low cytosolic [Ca2+] and renders neurons more susceptible to Ca2+ overload, opening of the mitochondrial permeability transition pore (MPTP) and cell death. Thus, we do not propose in our manuscript that the formation of deregulated MCU causes an increase in cytosolic [Ca2+], as suggested in the comment by Casari et al. Our findings explain the striking observation by the Casari group that reduced Ca2+ influx into AFG3L2-deficient neurons (by pharmacological inhibition or genetic ablation of mGluR1) protects against neuronal death (Maltecca et al., 2015): lowered cytosolic [Ca2+] in these settings results in decreased mitochondrial Ca2+ influx via deregulated MCU lacking gatekeeper subunits in AFG3L2-deficient neurons, thus preventing mitochondrial Ca2+ overload. Of note, our findings are also in agreement with two recent studies in MICU1-deficient mice demonstrating that deregulated Ca2+ influx causes MPTP opening-induced cell death (Antony et al., Nat. Com., 2016) and ataxia by specifically affecting Purkinje cells (Liu et al., Cell Reports, 2016). Strikingly, reduced EMRE expression was found to suppress ataxia (Liu et al., Cell Reports, 2016).

      Casari et al. have suggested that other (yet poorly understood) functions of the m-AAA protease lower the mitochondrial membrane potential (Maltecca et al., 2015) and impair mitochondrial morphology (Maltecca et al., 2012), resulting in decreased mitochondrial Ca2+ influx. Our results do not support a major role of disturbed mitochondrial morphology (Fig. 7), but we agree (and confirm in Fig. S6) that lowering the mitochondrial membrane potential decreases mitochondrial Ca2+ influx after histamine stimulation. We therefore have assessed mitochondrial Ca2+ influx upon mild increase of cytosolic [Ca2+] and observed an increased Ca2+ influx into m-AAA protease-deficient mitochondria (Fig. 6). The rationale of this protocol relies on the sigmoidal relationship between mitochondrial Ca2+ influx and extramitochondrial [Ca2+]. In resting conditions, mitochondrial Ca2+ accumulation is negligible when cytosolic [Ca2+] is below a threshold (~500 nM). Inhibition of SERCA leads to ER Ca2+ leaks, thus causing a slow and small increase of cytosolic [Ca2+]. In this experimental setup (low cytoplasmic [Ca2+]), mitochondrial Ca2+ influx is less hampered by a reduced mitochondrial membrane potential and indeed we observed an increased mitochondrial Ca2+ influx in AFG3L2-deficient mitochondria. We therefore suggest (and discuss in our manuscript) that m-AAA protease-deficient mitochondria show increased Ca2+ influx at resting [Ca2+] but decreased Ca2+ influx at high Ca2+ concentrations (due to the lowered membrane potential).

      Casari et al. also raise doubts about the relative role of Ca2+ and mtROS for MPTP opening. We demonstrate a reduced Ca2+ retention capacity of AFG3L2-deficient mitochondria in vitro and in vivo, which correlates with the increased mitochondrial Ca2+ influx (observed upon SERCA inhibition) and the increased ROS levels in AFG3L2-deficient mitochondria. Increased mitochondrial Ca2+ influx under resting conditions is known to trigger MPTP opening (Antony et al., Nat. Com., 2016) and to cause increased mtROS production (Hoffman et al., Cell Reports, 2013; Mallilankaraman et al., Cell, 2012). Thus, both events are interdependent and their relative contribution to MPTP opening is difficult to dissect. We have not addressed this issue in the present manuscript and, by no means, exclude a contribution of mtROS to MPTP opening.

      Together, our results provide compelling evidence that m-AAA protease deficiency causes the accumulation of MCU-EMRE complexes lacking gatekeeper subunits and impairs mitochondrial Ca2+ handling, sensitizing neurons for MPTP opening. The relative contribution of deregulated mitochondrial Ca2+ influx and lowered mitochondrial membrane potential to disease pathogenesis is currently difficult to assess and certainly warrants further studies in appropriate mouse models. However, we would like to point out that other mitochondrial diseases affecting respiration and the formation of the mitochondrial membrane potential do not show the striking vulnerability of Purkinje cells seen in SCA28. At the same time, MCU-dependent mitochondrial Ca2+ influx is a crucial determinant of excitotoxicity in neurons (Qui et al., Nat. Com., 2013). This study also demonstrates that synaptic activity transcriptionally suppresses MCU expression, thereby counteracting mitochondrial Ca2+ overload at high cytosolic [Ca2+] and preventing induction of excitotoxicity. Our results thus open up the attractive possibility that increased Ca2+ influx under resting conditions, and the accompanying mild stress, progressively increases the vulnerability of Purkinje cells, causing late-onset neurodegeneration in SCA28 patients, who are only heterozygous for mutations in AFG3L2.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 13, Giorgio Casari commented:

      Increased or decreased calcium influx?

      In this elegant paper the authors propose that loss of m-AAA (i.e. the depletion of both SPG7 and AFG3L2) facilitates the formation of active MCU complexes through the increased availability of EMRE, thus (i) increasing calcium influx into mitochondria, (ii) triggering MPTP opening and (iii) causing the consequent increase of neuronal cytoplasmic calcium leading to neurodegeneration. We previously reported that loss or reduction of AFG3L2 causes (i) decreased mitochondrial potential and fission, thus (ii) decreased calcium entry and (iii) the consequent augmented neuronal cytoplasmic calcium leading to neurodegeneration. While the functional link of m-AAA with MAIP and MCU-EMRE represents a new milestone in the characterization of the roles of this multifaceted protease complex, we would like to comment on the conclusions pertaining to the calcium dynamics.

      1. In SPG7/AFG3L2 knock-down HeLa cells (Figure S6A), mitochondrial matrix calcium is dramatically reduced (approx. from 100 to 50 microM) following histamine stimulation, which triggers IP3-mediated calcium release from the ER. This reduction is in complete agreement with the one we previously detected in Afg3l2 ko MEFs (Maltecca et al., 2012), and that we also confirmed in Afg3l2 knock-out primary Purkinje neurons (the cells that are primarily affected in SCA28) upon challenge with KCl (Maltecca et al., 2015). The decreased mitochondrial calcium uptake correlates with the 40% reduction of mitochondrial membrane potential in SPG7/AFG3L2 knock-down cells (Figure S6B), as expected, since the mitochondrial potential is the major component of the driving force for calcium uptake by MCU. Accordingly, these data are in line with the decreased mitochondrial membrane potential observed in Afg3l2 knock-out Purkinje neurons (Maltecca et al., 2015). We think that this aspect is central, because the respiratory defect is the primary event associated with m-AAA deficiency and neurodegeneration. So, the data of König et al. agree with our own findings that mitochondrial matrix calcium is reduced after m-AAA depletion.

      2. By a different protocol (SERCA pump inhibition and ER calcium leakage; Figure 6C-F), the authors detected a small increase of mitochondrial calcium concentration in SPG7/AFG3L2 knock-down HeLa cells (from approx. 3 to 6 microM). The huge difference in calcium concentration detected in the two experiments (100 to 50 microM in Figure S6A vs. 3 to 6 microM in Figure 6C) possibly reflects the stimulated (histamine) vs. unstimulated (calcium leakage) conditions, the latter being more difficult to relate to the physiological situation in neurons.

      3. The authors show increased sensitivity to MPTP opening in the absence of m-AAA, and they propose the consequent calcium release as the cause of calcium deregulation and neuronal cell death. ROS are strong sensitizers of MPTP to calcium and thus favor its opening. It is well known that m-AAA loss massively increases intramitochondrial ROS production. Thus, higher ROS levels, rather than high calcium concentrations, can be the trigger of MPTP opening.

      Taking all this into consideration, we think that mitochondrial depolarization (as shown in Figure S6B) and decreased mitochondrial calcium entry (Figure S6A), even in the presence of an increased amount of MCU-EMRE complexes, may lead to inefficient mitochondrial calcium buffering and, finally, to cytoplasmic calcium deregulation. ROS-dependent MPTP opening, which may occur irrespective of a low matrix calcium concentration, may additionally contribute to this final event.

      Minor comment: On page 7 we read: “Notably, these experiments likely underestimate the effect on mitochondrial Ca2+ influx observed upon loss of the m-AAA protease, since the loss of the m-AAA protease also decreases ΔΨ (i.e., the main force driving mitochondrial Ca2+ influx), as revealed by the significant impairment of mitochondrial Ca2+ influx triggered by histamine stimulation (Maltecca et al., 2015) (Figures S6A–S6E)”. The reference is not appropriate: in Maltecca et al., 2015 the reduced mitochondrial calcium uptake was demonstrated in Afg3l2 knock-out Purkinje neurons upon challenge with KCl, not with histamine. We used histamine stimulation, which triggers IP3-mediated calcium release from the ER, in Afg3l2 ko MEFs in a previous publication (Maltecca F, De Stefani D, Cassina L, Consolato F, Wasilewski M, Scorrano L, Rizzuto R, Casari G. Respiratory dysfunction by AFG3L2 deficiency causes decreased mitochondrial calcium uptake via organellar network fragmentation. Hum Mol Genet. 2012;21:3858-70. doi: 10.1093/hmg/dds214).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 30, Gwinyai Masukume commented:

      According to this 2016 Maternal Health Lancet Series paper, unlike for most African countries, no data were available on the number of obstetricians and gynaecologists and midwives in South Africa, precluding the calculation of the ratio of these practitioners per 1000 pregnancies. This deficit of South African data also applies to the global Caesarean section rate estimates from the World Health Organization, published this year in another journal Betrán AP, 2016.

      The apparent lack of data from South Africa suggests that it has ‘fallen off’ the international maternal health map. However, the Health Professions Council of South Africa Holmer H, 2015 and the South African Nursing Council have contemporary and historical data on the number of obstetricians and gynaecologists and midwives respectively. Caesarean delivery rates are available from the Health Systems Trust.

      Because information from The Lancet and the World Health Organization has a global reach and influences key policy makers, this high level lack of visibility of pertinent South African maternal health data is concerning. Maternal health metrics are “essential to guide intervention research, set implementation priorities, and improve quality of care, particularly for women and babies most at risk” Koblinsky M, 2016.

      Efforts to quickly address this lack of visibility are warranted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 06, GARRET STUBER commented:

      *This review was completed as part of a graduate level circuits and behavior course at UNC-Chapel Hill. The critique was written by students in the class and edited by the instructor, Garret Stuber.

      Comments and critique

      Written by Li et al., this paper investigated a class of oxytocin receptor interneurons (OxtrINs) that the same group first characterized in 2014 [1]. OxtrINs are a subset of somatostatin-positive interneurons in the medial prefrontal cortex (mPFC) that seem to be important for sociosexual behaviors in females, specifically during estrus and not diestrus. To complement their previous story, here the authors concluded that OxtrINs in males regulate anxiety-related behaviors through the release of corticotropin releasing hormone binding protein (Crhbp). While we agree that these neurons could be mediating sexually dimorphic behaviors, it is unclear how robust these differences really are.

      We had some technical issues with this paper. First, it is unclear exactly how many mice were allotted to each experimental group, and it would have been useful to see individual data points in each of the behavioral experiments, so that we could better understand some of the variability in the authors’ graphs. Even among different experiments, the sizes of n varied (e.g. Fig. 5F-H, “n = 8-14 mice per group”). There was also no mention of how many cells per animal were tested in each brain slice experiment; instead, only total numbers of cells tested per group were given. The paper did not include the female data complementary to Fig. 4F-G and Fig. 5A-B, the experiments pairing blue light with the Crhr1 antagonist or the Crhbp antagonist; we would have appreciated seeing those data adjacent to the data for the males. In addition, there was no mention of a control for the optogenetic experiments: the authors only compared responses between light-on and light-off trials. Typically in optogenetic approaches, a set of control mice are also implanted with optic fibers and flashed with blue light in the absence of virus, to test whether the light alone influences behavior. Incidentally, there is evidence that blue light influences blood flow, which may affect neuronal activity [2]. It was also unclear during the sociosexual behavioral testing whether the males were exposed to females in estrus or diestrus. In all, the lack of detailed sample sizes and controls made it difficult to assess how prominent these sex differences were.

      These issues aside, knocking out endogenous Oxtr in their targeted interneuron population was a key experiment, as it demonstrated that oxytocin signaling in OxtrINs is important in anxiety-related behaviors in males, but not in females regardless of the estrus stage. They did this using a floxed Oxtr mouse and deleted Oxtr using a Cre-inducible virus, allowing for temporal and cell-type-specific control of this deletion, and subsequently measured the resulting phenotype using an elevated plus maze and open field task. The authors also validated that changes in exploration were not due to hyperactivity. We think these experiments are convincing.

      TRAP profiling, which the same research group pioneered in 2014 [3], provided a set of genes enriched in OxtrINs. TRAP targets RNAs while they are translated into proteins, so we think their results here are particularly relevant. Moreover, the authors provided a list of genes enriched in sex-specific OxtrINs, a useful resource for those interested in gene expression differences in males and females. Once they identified Crhbp, an inhibitor of Crh, they hypothesized that OxtrINs were releasing Crhbp to modulate anxiogenic behaviors in males. The authors next measured Crh levels in the paraventricular nucleus of the hypothalamus and found that Crh levels are higher in females than males. They thus concluded Crh levels were driving sex differences associated with OxtrINs. We wonder whether Crh levels are also higher in the female mPFC, but we agree here too.

      To demonstrate that Crhbp expressed by OxtrINs is important in modulating anxiety-like behaviors in males, the authors targeted Crhbp mRNA using Cre-inducible viral delivery of an shRNA construct and subsequently tested anxiety-related behaviors. They found that knocking down Crhbp was anxiogenic in males and not in females. This was a critical experiment, but the shRNA constructs targeting Crhbp were validated solely in a cell line. It would have been more appropriate to perform a western blot on mPFC punches of adult mice, showing whether this lentiviral construct knocked down Crhbp expression in the mouse brain prior to behavioral testing. In fact, it also would have been useful to see a quantification of the shRNA transfection rate, as well as its specificity in vivo. As stated above, we also do not know the distribution of behavioral responses here either. Without these pieces of information, it is difficult to assess how reliable or robust their knockdown was.

      The authors concluded that sexually dimorphic hormones act through the otherwise sexually monomorphic OxtrINs to regulate anxiety-related behaviors in males and sociosexual behaviors in females. We agree that OxtrINs interact with oxytocin and Crh to bring about sex-specific phenotypes, but we also think that using additional paradigms testing anxiety and social behaviors, such as a predator odor, novelty-suppressed feeding or social grooming, could shed more light on the nuances of mPFC circuitry. In addition, the authors suggested that OxtrINs are sexually monomorphic because they are equally abundant in males and females. The authors’ TRAP data however suggested that OxtrINs of males and females have different gene expression profiles (Table S2), thus indicating that these interneurons may form different connections in each sex that mediate the electrophysiological and behavioral differences we see in this study.

      It would be interesting to overexpress Crhbp in female mice, preferably in a cell-type-specific manner, to see whether female mice would demonstrate the anxiety-like behavior seen in males. If the Crh:Crhbp balance is in fact mediating this sexually dimorphic behavior through OxtrINs, we would expect that doing these manipulations may “masculinize” the females’ behavior. Regardless, we believe that this study opens opportunities for future work into how oxytocin and Crh release from the hypothalamus may act together to coordinate behavior. It will also be interesting to see if single-cell RNA sequencing could provide insight into whether OxtrINs can be further divided into sexually dimorphic subtypes. As the authors pointed out, understanding the dynamics of Crh and oxytocin in the mPFC will be important for gender-specific therapy and treatment.

      [1] Nakajima, M. et al. Oxytocin modulates female sociosexual behavior through a specific class of prefrontal cortical interneurons. Cell. 159, 295-305 (2014).

      [2] Rungta, R. L. et al. Light controls cerebral blood flow in naïve animals. Nature Communications. 8, 14191 (2017).

      [3] Heiman, M. et al. Cell-type-specific mRNA purification by translating ribosome affinity purification (TRAP). Nature Protocols. 9, 1282-1291 (2014).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 08, Christopher Tench commented:

      Could you possibly provide the coordinates analysed? Otherwise it is difficult to interpret the results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 21, Lydia Maniatis commented:

      According to Kingdom: "A longstanding issue in vision research concerns whether the internal noise involved in contrast transduction is fixed or variable in relation to contrast magnitude."

      This statement is precisely analogous to saying: A longstanding problem in chemistry is whether phlogiston is evenly or unevenly distributed in relation to object density.

      The notion of "internal noise" is crude, lumping together every element of the visual process between light hitting the retina and the conscious percept. It is flatly inconsistent with perceptual experience, which is in no way "noisy," yet most proponents of this view would have us accept that the conscious percept directly reflects "low-level" and noisy spiking activity of individual or sets of neurons. In any event, no attempt has ever been made to corroborate the noise assumption.

      It is not even clear what the criteria would be for corroboration on the basis of measurements at the physiological level. It would have to be shown, presumably, that identical "sensory inputs" produce a range and distribution of neural responses, this range and distribution being somewhat predictable; however, "inputs" to brain activity don't come only from the external receptor organs, no matter how well we might be able to control these. Even if we could (inconceivably) control inputs perfectly, and even if we were able to say that (as is often claimed) at V1 neural responses are noisy, we would have to explain why this noise doesn't affect the conscious percept (which, again, is very stable) and yet is detectable on the basis of conscious experience. Graham (1992; 2011) has adopted the hypothesis that under certain conditions the brain becomes "transparent," so that the activities at lower levels of the processing hierarchy act directly on the percept. It should be reasonably clear that such a view isn't worth entertaining, but if one wants to entertain it there are massive theoretical difficulties to overcome. It seems to imply that feedback and feedforward processes for some reason are frozen and some alternative, direct pathway to consciousness exists, all while other pathways are still active (because the inference generally applies to a discontinuity on a screen in a room, all of which are maintained in perception).

      Not surprisingly given the concept's vagueness, the case for "internal noise" has never been credibly made. But it is widely accepted.

      Those who simply accept the internal noise assumption "measure internal noise" by analyzing simple "detection and discrimination" datasets on the basis of multiple layers of untested, untestable, or empirically untenable assumptions rolled into "computational models" including indispensable, multiple free parameters. (For a detailed examination of this technique, see PubPeer comments on Pelli (1985)).

      In the absence of clear and explicit assumptions, relevant confounds remain unspecified, and tests, as here, are always ad hoc, hinging on particular datasets, and counting up "successes" as though by adding these together, they can outweigh unexplained failures. But failures are dispositive, of course, when we are aiming at a general explanation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 20, Daniel Weiss commented:

      The central problem with current standards of Lyme disease diagnosis and treatment is the absence of a highly sensitive and specific test, a gold standard, that can prove the presence of disease and/or demonstrate cure. When a patient complains of unremitting neurological and/or musculoskeletal symptoms after completing a treatment, persistent infection is the most logical interpretation. Recent microbiological research has uncovered persistence mechanisms in virtually every prokaryotic organism (Conlon, Rowe, & Lewis, 2015, Advances in Experimental Medicine and Biology; Harms, Maisonneuve, & Gerdes, 2016, Science; Lewis & Shan, 2016, Molecular Cell). It is not surprising that a bacterial species adapted to survive in multiple vertebrate hosts, and through multiple stages of the three-year cycle of its invertebrate host, might persist after antibiotic treatment (Feng, Shi, Zhang, & Zhang, 2015, Emerging Microbes & Infections). Borrelia species are characterized by immense plasticity in their expression of morphology, antigens, and genes.

      In the laboratory, and in infected humans, antibiotics predictably induce the persister phenotype (Sharma, Brown, Matluck, Hu, & Lewis, 2015, Emerging Microbes & Infections).

      Exposed to antibiotics, Borrelia burgdorferi rapidly loses the morphology of “active” motile, dividing spirochetes (Sapi et al., 2016, Int J Med Sci; Timmaraju et al., 2015, FEMS Microbiol Lett). The organism settles into dormancy in biofilm or as round bodies (Merilainen, Herranen, Schwarzbach, & Gilbert, 2015). Yet it retains antigenicity and the genetic capacity to return to the spirochete form (Merilainen, Brander, Herranen, Schwarzbach, & Gilbert, 2016, Microbiology).

      The failure to respond to a particular antibiotic regimen does not equate to the absence of infection. The more appropriate conclusion may be that the antibiotic is ineffective against this infection. It remains entirely unclear whether these patients with presumed autoimmune disorders have persistent infection with B. burgdorferi.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 21, Misha Koksharov commented:

      To clarify which of the thermostable luciferase mutants was used here (I've made a lot of them in L. mingrelica & P. pyralis Flucs - some are published, some are not):

      LMLucR = the mutant G216N,A217L,S398M in Luciola mingrelica firefly luciferase (Koksharov MI, 2011).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 09, K Hollevoet commented:

      Intramuscular antibody gene transfer as a means for prolonged in vivo antibody expression continues to gain traction as an alternative to conventional antibody production and delivery. The study by Kim et al. presents yet another elegant example of this approach, specifically expressing the anti-HER2 4D5 monoclonal antibody (mAb) in mice via plasmid-based gene electrotransfer. The authors report an average 4D5 peak concentration of up to 152 μg ml<sup>−1</sup> in BALB/c mice, two weeks after intramuscular electrotransfer of the 4D5-encoding plasmid DNA (pDNA). mAb levels remained above 120 μg ml<sup>−1</sup> for at least a month, as depicted in Figure 3d of the manuscript.<sup>1</sup> These results raise some questions, which, in our opinion, are not sufficiently addressed in the manuscript discussion.

      Firstly, plasmid-based antibody gene electrotransfer in mice typically results in mere single-digit μg ml<sup>−1</sup> mAb serum levels.<sup>2</sup> After careful consideration, we found no novelties in pDNA design, optimization or delivery in Kim et al.<sup>1</sup> that could explain their quantum leap in attained mAb titers – up to two logs higher than the currently available literature.

      Secondly, the data presented by Kim et al.<sup>1</sup> surpass the expression levels of viral-based anti-HER2 antibody gene transfer studies in mice, with reported peak 4D5 and trastuzumab concentrations of 30 to 40 μg ml<sup>−1</sup>.<sup>3,4</sup> This further adds to the surprise, given that viral vectors consistently outperform plasmid electrotransfer in terms of transgene expression.<sup>2</sup>

      Thirdly, Figure 4f shows an average 4D5 serum concentration of 3.8 μg ml<sup>−1</sup> in athymic nude mice, 22 days after tumor cell injection and, so we assume, approximately two weeks after pDNA delivery.<sup>1</sup> mAb titers in these tumor-bearing mice are thus about 40-fold lower than those in the BALB/c mice. Given that identical dosing and delivery conditions were applied, the reason for this discrepancy is unclear. The difference in mAb titers appears too large to attribute to, e.g., inter-experiment or mouse strain variability. Target-mediated drug disposition in the tumor-bearing mice, i.e. the binding of 4D5 to HER2, is also unlikely to have such an impact, given the continuous and robust in vivo mAb production the authors found.

      In conclusion, prolonged in vivo mAb expression above 100 μg ml<sup>−1</sup> is unprecedented in non-viral antibody gene transfer. The impact of these findings, however, is undermined by the lack of explanation Kim et al. provide for the obvious differences with the available literature and with their own subsequent results, as outlined above. To allow these remarkable findings to advance the field, we respectfully invite the authors to address the above concerns and provide additional support for their data.

      References:

      1. Kim H, Danishmalik SN, Hwang H, Sin JI, Oh J, Cho Y, et al. Gene therapy using plasmid DNA-encoded anti-HER2 antibody for cancers that overexpress HER2. Cancer Gene Ther 2016. doi: 10.1038/cgt.2016.37.

      2. Suscovich TJ, Alter G. In situ production of therapeutic monoclonal antibodies. Expert Rev Vaccines 2015; 14: 205-19.

      3. Jiang M, Shi W, Zhang Q, Wang X, Guo M, Cui Z, et al. Gene therapy using adenovirus-mediated full-length anti-HER-2 antibody for HER-2 overexpression cancers. Clin Cancer Res 2006; 12: 6179-6185.

      4. Wang G, Qiu J, Wang R, Krause A, Boyer JL, Hackett NR, et al. Persistent expression of biologically active anti-HER2 antibody by AAVrh.10-mediated gene transfer. Cancer Gene Ther 2010; 17: 559-570.

      EDIT: A correction of the manuscript by the authors is ongoing based on the above comments. While awaiting the revision, our comments remain posted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 27, Michael Gorn commented:

      Thank you for creating this great video tutorial of our lumbar puncture (LP) technique. Video 1 is the most accurate representation of the way we performed the sonography for the study. You have captured the pertinent landmarks, especially the vascular supply of the anterior spinal space that we postulated was the main reason behind a bloody LP. This is how the concept of the Maximum Safe Depth (MSD) was developed. As you clearly demonstrated, the MSD measurements are very close in both transverse and longitudinal views, thus we used longitudinal views for convenience. Once the MSD measurement is obtained, it is marked on the needle as a safe entry depth while performing the LP. The MSD may be exceeded with caution if the needle entry level is shallow as justified by the Pythagorean equation demonstrated in the paper. However, if the entry angle is close to 90 degrees, we recommend redirecting the needle or reattempting the LP. Thank you again for your contribution.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 26, James Tsung commented:

      Link to Video: http://bit.ly/2gK9osv


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 16, Virginia Barbour commented:

      As the Chair of COPE, I am writing to respond to Dr Horton's recent remarks on the role and actions of COPE in his Offline column<sup>1</sup> highlighting the statin review by Professor Collins and colleagues, both published in the issue of 10th September. I also submitted this letter to The Lancet directly on 12th September.

      COPE is an international interdisciplinary organisation, not just a UK one, whose remit is the provision of education and advice to members on questions of publication ethics. We do have a process whereby an individual can bring complaints about journal processes to our attention, but we cannot interfere in editorial decisions, nor can we investigate the underlying issues of a complaint, as we have neither the resources nor, more importantly, the appropriate subject-specific expertise.

      Dr Horton states that “COPE declined to act further”. This is incorrect. COPE did request details of processes at the BMJ, in accordance with our remit (http://publicationethics.org/contact-us). The guidance issued from COPE's review (I was not part of this final part of the process, having recused myself during the process because of the development of a potential conflict of interest) offered constructive criticism about how the BMJ had managed the peer review process. The BMJ had already addressed those issues following their own independent review and COPE was satisfied with the procedural changes that were implemented. 

      As it is certainly not appropriate for COPE to make any specific judgment about effects on public health, COPE also recommended that Professor Collins and colleagues engage in open dialogue on the specific issues in the medical literature. We note this has now happened with the publication of their review in The Lancet.

      Putting the correction of Dr Horton's record of events to one side, and instead looking for useful lessons, COPE would be interested in discussing Dr Horton's suggestion of an independent tribunal. It seems reasonable to assume that such a tribunal would need public funding and the ability to apply sanctions, becoming, to a degree, a regulator for the research community. This is not COPE's remit, but we would be interested in being part of the discussion on such an approach.

      Virginia Barbour
      Chair, COPE

      Competing interest: I am the Chair of COPE. I recused myself from handling this issue midway through because a colleague at PLOS (where I worked when the issue was brought to COPE) joined the BMJ.

      Committee on Publication Ethics (COPE) cope_chair@publicationethics.org www.publicationethics.org


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 26, Valter Silva commented:

      This article discusses Brazilian science in light of a new scientometric resource, the Nature Index, as well as the SJR. From 2012 to 2015, Brazil's main Nature Index metric increased by 18.9%, and the country currently ranks 24th globally. From 1996 to 2015 (SJR), Brazilian science produced more than 600 thousand citable papers, obtained more than 5 million citations, yielded over 400 papers with at least 400 citations each, and accounted for half of Latin America's publication output. Despite these numbers, there are flaws in its internationalization. Much of Brazilian science is produced by graduate students and gazetted professors, since the profession of scientist is not a position recognized by Brazil's Ministry of Labor and Employment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 15, Erick H Turner commented:

      In addition to the two cited papers, close variations on this idea have been proposed in the following papers:

      1 Walster GW, Cleary TA. A Proposal for a New Editorial Policy in the Social Sciences. The American Statistician 1970;24:16–9. doi:10.1080/00031305.1970.10478884

      2 Newcombe RG. Towards a reduction in publication bias. Br Med J (Clin Res Ed) 1987;295:656–9.

      3 Sridharan L, Greenland P. Editorial policies and publication bias: the importance of negative studies. Arch Intern Med 2009;169:1022–3. doi:10.1001/archinternmed.2009.100

      4 Colom F, Vieta E. The need for publishing the silent evidence from negative trials. Acta Psychiatr Scand. 2011;123:91–4. doi:10.1111/j.1600-0447.2010.01650.x

      5 Mirkin JN, Bach PB. Outcome-blinded peer review. Arch Intern Med 2011;171:1213–4; author reply 1214. doi:10.1001/archinternmed.2011.56

      6 Turner EH. Publication bias, with a focus on psychiatry: causes and solutions. CNS Drugs 2013;27:457–68. doi:10.1007/s40263-013-0067-9

      7 Smulders YM. A two-step manuscript submission process can reduce publication bias. J Clin Epidemiol Published Online First: July 2013. doi:10.1016/j.jclinepi.2013.03.023


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 26, David Keller commented:

      Thank you. Your findings are consistent with the CALM-PD study [1], which found that "For subjects who consumed >12 ounces of coffee/day, the adjusted hazard ratio for the development of dyskinesia was 0.61 (95% CI, 0.37-1.01) compared with subjects who consumed <4 ounces/day" in patients at an early stage of PD. Longer follow-up should indeed help assess whether the benefits of increased caffeine ingestion are durable, at what cost in side effects, and whether higher doses of caffeine provide correspondingly higher benefits.

      Reference

      1: Wills AM, Eberly S, Tennis M, Lang AE, Messing S, Togasaki D, Tanner CM, Kamp C, Chen JF, Oakes D, McDermott MP, Schwarzschild MA; Parkinson Study Group. Caffeine consumption and risk of dyskinesia in CALM-PD. Mov Disord. 2013 Mar;28(3):380-3. doi: 10.1002/mds.25319. PubMed PMID: 23339054; PubMed Central PMCID: PMC3608707.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 25, Marcello Moccia commented:

      We agree with the need to evaluate the incidence and severity of dyskinesia in the long-term assessment of drug efficacy in PD. However, studies including de novo PD patients have to consider early markers of motor progression, which are indeed associated with the development of dyskinesia in the long term. In view of this, we showed that the habitual consumption of caffeine-containing products is associated with a reduced need for levodopa and with reduced accrual of motor symptoms. Of course, a longer follow-up will possibly confirm the positive impact of caffeine use on dyskinesia as well.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 09, David Keller commented:

      What effect did caffeine consumption have on the incidence or severity of dyskinesia?

      In this study, higher caffeine consumption was associated with a lower rate of starting levodopa treatment and reduced motor and non-motor disability. Any treatment for Parkinson disease (PD) which delays or decreases the need for levodopa therapy should be evaluated for its propensity to hasten the onset of dyskinesia, or to worsen established dyskinesia. If the motor and non-motor benefits of caffeine are accompanied by a risk of developing dyskinesia equal to that of a levodopa regimen with equivalent benefits, then it is unclear why ingesting caffeine on a pharmacologic basis is preferable to simply initiating levodopa when it is needed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 15, Gary Goldman commented:

      Civen et al report an average HZ incidence of 12.8 cases/100,000 among children aged <10 years during 2007 to 2010.<sup>1</sup> Since two different ascertainment sources are available for the reporting of HZ cases—schools (including preschools), and public and private healthcare providers (including hospitals)—capture-recapture techniques could have been employed to show that the Antelope Valley project had approximately 50% case ascertainment, and thus that the true HZ figure is approximately double that reported. Interestingly, 25.6 cases/100,000, or twice the reported rate, closely compares with the rate of 27.4 cases/100,000 (95% C.I. 22.7-32.7) based on 172,163 vaccinated children with overall follow-up of 446,027 person-years among children aged <12 years during 2007-2008, reported by Tseng et al.<sup>2</sup>
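      The two-source ascertainment correction described above can be sketched with the Lincoln-Petersen estimator. The source counts below are purely illustrative assumptions (not the surveillance project's data); the final lines simply apply the comment's own ~50% ascertainment figure to the reported rate:

      ```python
      def lincoln_petersen(n1, n2, m):
          """Estimate the true number of cases from two overlapping
          ascertainment sources (Lincoln-Petersen estimator)."""
          if m == 0:
              raise ValueError("sources share no cases; estimate is undefined")
          return n1 * n2 / m

      # Illustrative counts only (NOT the project's data):
      schools, providers, both = 60, 50, 30
      estimated_total = lincoln_petersen(schools, providers, both)    # 100.0
      ascertainment = (schools + providers - both) / estimated_total  # 0.8

      # Applying the ~50% ascertainment figure cited in the comment:
      reported_rate = 12.8            # cases per 100,000, as reported
      true_rate = reported_rate / 0.5 # ~25.6 per 100,000
      ```

      With roughly 50% overall ascertainment, the corrected incidence is about double the reported figure, which is the adjustment applied above to reach 25.6/100,000.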

      Civen et al report that among 10- to 19-year-olds a 63% increasing trend in HZ incidence was documented from 2000 to 2006; however, “the increased incidence could not be confidently explained.” The authors concede, “the possibility persists that children infected by wild-type VZV experienced increased rates of HZ because they were having fewer opportunities to be exposed to exogenous VZV, leading to reduced immune control of HZ.”<sup>1</sup> The reason this 63% increasing trend has not been confidently explained is that the methodology utilized by Civen et al did not stratify HZ incidence rates by two widely different cohorts: those vaccinated and those with a history of wild-type (natural) varicella. Computing a single mean HZ incidence rate over a bimodal distribution is statistically invalid and conceals the reality that the HZ incidence rate among children and adolescents with a history of wild-type varicella shows an increasing trend.<sup>3</sup> By performing such a stratified analysis, Civen et al could have tested the hypothesis that individuals with a prior history of varicella are experiencing increasing HZ incidence due to fewer exogenous exposures, and thus reduced opportunities for boosting cell-mediated immunity to VZV.<sup>4,5</sup>

      Civen et al state, “The case for this hypothesis has weakened as studies have found no acceleration in rates of HZ among adults in the United States since the varicella vaccination was introduced, despite the fact that opportunities for varicella exposure have plummeted.” Interestingly, the same Antelope Valley surveillance project did collect HZ cases during 2000-2001 and 2006-2007 that showed statistically significant increases among adults. The Antelope Valley annual summary to the CDC demonstrates that in 2000 and 2001, HZ cases (not ascertainment corrected) reported to the project either maintained or increased in every adult 10-year age category (20–29, 30–39, . . . , 60–69 years), yielding a statistically significant difference. Reported HZ cases among adults aged 20–69 years increased 28.5%—from 158 in 2000 to 203 in 2001 (p <0.042; t = 2.95, df = 4).<sup>6</sup> Again, HZ incidence rates among adults aged 50 years and older increased from 390/100,000 p-y in 2006 to 470/100,000 p-y in 2007 with a statistically significant rate ratio of 1.2 (95% CI: 1.04–1.40).<sup>7</sup> A Canadian study by Marra et al concludes that "the incidence of zoster and PHN is increasing with time" and suggests "recent studies have shown an increasing incidence of herpes zoster infection, which may be related to the introduction of varicella vaccination programs in children."<sup>8</sup>
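      The two increases cited in the preceding paragraph can be checked directly from the quoted figures:

      ```python
      # Adult HZ cases reported to the project, 2000 vs 2001 (figures as quoted).
      cases_2000, cases_2001 = 158, 203
      pct_increase = (cases_2001 - cases_2000) / cases_2000 * 100  # ~28.5%

      # Incidence among adults aged 50+, per 100,000 person-years, 2006 vs 2007.
      rate_2006, rate_2007 = 390, 470
      rate_ratio = rate_2007 / rate_2006                           # ~1.2
      ```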

      The United States has traded a dramatic reduction in varicella disease, which in the prevaccine era accounted for only 25% of VZV medical costs (i.e., 75% of VZV medical costs were attributed to cases of HZ), for a disproportionate increase in HZ costs associated with increasing HZ incidence among adults with a history of wild-type varicella. It is an unfortunate fact that 20 years after the introduction of the varicella vaccine in the US, healthcare officials are still claiming that the mechanism of exogenous boosting “is not well understood” and that “the case for this hypothesis has weakened,” when in reality the data already exist to understand this biological mechanism, first proposed in 1965 by Dr. Robert Edgar Hope-Simpson.<sup>4</sup> "Rather than eliminating varicella in children as promised, routine vaccination against varicella has proven extremely costly and has created continual cycles of treatment and disease."<sup>3</sup>

      References

      [1] Civen R, Marin M, Zhang J, Abraham A, Harpaz R, Mascola L, Bialek S. Update on incidence of herpes zoster among children and adolescents after implementation of varicella vaccination, Antelope Valley, CA, 2000 to 2010. Pediatr Infect Dis J. 2016 Oct; 35(10):1132-1136.

      [2] Tseng HF, Smith N, Marcy SM, Sy LS, Jacobsen SJ. Incidence of herpes zoster among children vaccinated with varicella vaccine in a prepaid health care plan in the United States, 2007, 2008. Pediatr Infect Dis J 2009; 28(12):1069–72.

      [3] Goldman GS, King PG. Review of the United States universal varicella vaccination program: Herpes zoster incidence rates, cost-effectiveness, and vaccine efficacy based primarily on the Antelope Valley Varicella active surveillance project data. Vaccine 2013; 31(13): 1680–1694.

      [4] Hope-Simpson RE. The nature of herpes zoster: a long term study and a new hypothesis. Proc R Soc Med 1965; 58: 9–20.

      [5] Guzzetta G, Poletti P, Del Fava E, Ajelli M, Scalia Tomba GP, Merler S, et al. Hope-Simpson’s progressive immunity hypothesis as a possible explanation for Herpes zoster incidence data. Am J Epidemiol 2013; 177(10): 1134–1142.

      [6] Maupin T, Peterson C, Civen R, Mascola L. Varicella Active Surveillance Project (VASP). 2000, 2001 Annual Summary. Antelope Valley, County of Los Angeles Department of Health Services (LADHS), Acute Communicable Disease Control, Centers for Disease Control and Prevention (CDC) Cooperative Agreement No. U66/CCU911165-10.

      [7] Maupin T, Peterson C, Civen R, Mascola L. Varicella Active Surveillance Project (VASP). 2006, 2007 Annual Summary. Antelope Valley, County of Los Angeles Department of Health Services (LADHS), Acute Communicable Disease Control, Centers for Disease Control and Prevention (CDC) Cooperative Agreement No. 5U01 IP000020-02/5U01 IP000020-04.

      [8] Marra F, Chong M, Najafzadeh M. Increasing incidence associated with herpes zoster infection in British Columbia, Canada. BMC Infect Dis. 2016 Oct 20; 16(1):589.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 31, Sergio Uribe commented:

      Timely and necessary reflection.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 10, Arnaud Chiolero MD PhD commented:

      These findings were, unfortunately, expected. They suggest that systematic reviews should not always be regarded as the highest level of evidence; it is evident that they can be seriously biased, like any other study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 22, Natalie Parletta commented:

      Good point re null results being less likely to be published, Matthew Romo. I think we also need to consider that in some instances there is a vested interest in publishing null findings, such as the systematic review on omega-3 fatty acids and cardiovascular disease (BMJ 2006; 332, doi: http://dx.doi.org/10.1136/bmj.38755.366331.2F), which did not include positive studies from before 2000 (studies that had led to recommendations to eat fish/take fish oil for CVD) and has been critiqued for serious methodological flaws (https://www.cambridge.org/core/journals/british-journal-of-nutrition/article/pitfalls-in-the-use-of-randomised-controlled-trials-for-fish-oil-studies-with-cardiac-patients/65DDE2BD0B260D1CF942D1FF9D903239; http://www.issfal.org/statements/hooper-rebuttable). Incidentally, I learned that the journal that published one of the null studies sold 900,000 reprints to a pharmaceutical company (one that presumably sells statins).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Nov 04, Matthew Romo commented:

      Thank you for this very thought-provoking paper. With the skyrocketing number of systematic reviews (and meta-analyses) published, I wonder how many did not identify any evidence for their research question. If systematic reviews are research, shouldn’t we expect null results, at least once in a while? Quantifying the relative number of systematic reviews with null results (which seem to be very few) might help in further understanding the degree of bias in published systematic reviews. After all, research should be published based on the importance of the question it seeks to answer and its methodological soundness, rather than its results (Greenwald, 1993).

      "Null" systematic reviews that find no evidence can be very informative for researchers, clinicians, and patients, provided that the systematic review authors leave no stone unturned in their search, as they ought to for any systematic review. For researchers, they scientifically identify important gaps in knowledge where future research is needed. For clinicians and patients, they can provide an understanding of practices that don’t have a reliable evidence base. As stated quite appropriately by Alderson and Roberts in 2000, “we should be willing to admit that ‘we don’t know’ so the evidential base of health care can be improved for future generation.”

      Matthew Romo, PharmD, MPH
      Graduate School of Public Health and Health Policy, City University of New York

      Alderson P, Roberts I. Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research. BMJ. 2000;320:376-377.

      Greenwald AG. Consequences of prejudice against the null hypothesis. In: Keren G, Lewis C, eds. A Handbook for Data Analysis in the Behavioural Sciences. Hillsdale, NJ: Lawrence Erlbaum, 1993, pp. 419–448.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Dec 30, Arturo Martí-Carvajal commented:

      What is the degree of responsibility of a journal's editor-in-chief, or of its peer reviewers, in the publication of systematic reviews of doubtful quality?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2016 Sep 18, Hilda Bastian commented:

      Thanks, John - that's as close as we'll get, and we do agree on far more than we disagree, as ever.

      I agree we should face the data, and be meticulous about it. I just don't agree that indexing has the same effect on a tagged category as it has for a filter: especially not when the filter is so broad that it encompasses the variety of terms people use to describe their work. I remain convinced that the appropriate time trend comparators are filter to filter, with triangulation of sources. I don't think it's highly likely that 90% of the RCTs are in the first 35% of tagged literature.

      I don't think people should hold off publishing a systematic review that was done before deciding to fund or run a trial until a report of the trial or its methods is published - and ideally, they would be done by different people. Intellectual conflicts of interest can be as powerful as any other. And I don't think that trialists interpreting what their trial means in the context of other evidence meets the criterion of being unconflicted. Nor do I think the only systematic reviews we need are those of RCTs.

      I don't think Cochrane reviews are all good quality and unconflicted - in fact, the example of a conflicted review with quality issues in my comment was a Cochrane review. I agree there is no prestigious name that guarantees quality. (It's a long time since I left the Cochrane Collaboration, by the way.) My comments aren't because I disagree that there is a flood of bad quality "systematic" reviews and meta-analyses: the title of your article is one of the many things I agree with. See for example here, here, and quite a few of my comments on PubMed Commons.

      But the main reason for this reply is to add into this stream the reason I feel some grounds for optimism about something else we would both fervently agree on: the need to chip away at the problem of extensive under-reporting of clinical trials. As of January 2017, the mechanisms and incentives for reporting a large chunk of trials - those funded by NIH and affected by the FDA's scope - will change (NIH, 2016). Regardless of what happens with synthesis studies, any substantial uptick in trial reporting would be great news.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2016 Sep 18, John Ioannidis commented:

      Dear Hilda,

      thank you for all these wise thoughts. Based on prior experience, at this point in time (mid-September) the numbers for "Type of Article" meta-analysis, systematic reviews and randomized controlled trial for 2015 are likely to increase by about 10% with more complete indexing. I have taken this into account in my calculations.

      I fully agree with Iain Chalmers that every trial should start and finish with a systematic review. I fervently defend this concept upfront in my paper when I say that "it is irrational not to systematically review what is already known before deciding to perform any new study. Moreover, once a new study is completed, it is useful to update the cumulative evidence", even specifically citing Iain's work. But the publication of these systematic reviews is (and should be) integral to the publication of the specific new studies. I have not counted separately the systematic reviews that are embedded within trial publications. If I were to do so, the numbers of systematic reviews would be even higher. My proposal goes a step further in arguing that systematic reviews and meta-analyses should be even more tightly integrated with the primary studies. Meta-analyses should become THE primary studies par excellence.

      So, the answer to your question "But in an ideal world, isn't a greater number of systematic reviews than RCTs just the way it should be?" is clearly "No", if we are talking about the dominant paradigm of low-quality systematic reviews that are done in isolation from the primary evidence and represent a parallel universe serving mostly its own conflicts. The vast majority of currently published systematic reviews are not high-quality, meticulous efforts, e.g. Cochrane reviews, and they are entirely disjoint from primary studies. Cochrane reviews unfortunately represent less than 5% of this massive production. While I see that you and some other Cochrane friends have felt uneasy with the title of my paper and this has resulted in some friendly fire, I ask you to please look more carefully at the threatening pandemic which is evolving in the systematic review and meta-analysis world. Even though I trust that Cochrane is characterized by well-intentioned, non-conflicted and meticulous efforts, this bubble, which is 20-50 times larger than Cochrane, is growing next door. Let us please face the data, recognize this major problem and not try to defend ANY systematic reviews and meta-analyses as if they have value no matter what, just because they happen to carry such a prestigious name.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2016 Sep 18, Hilda Bastian commented:

      Thanks, John, for taking this so seriously - that's extremely helpful and I certainly agree with you that the rate of publication of systematic reviews is growing faster than RCTs. So the point you are talking about may well be reached at some point. Unless the rate of growth equalizes, and unless the rate of RCTs that are unpublished drops substantially: and both of those remain possible.

      This comparison is much better, but it can't solve the underlying issues. Human indexing resources did not increase exponentially along with the exponential increase of the literature. As of today, searching for PubMed records with a 2015 entry date [EDAT], only 35% also have a 2015 date for completed indexing [DCOM] (which, from PubMed Help, looks to me like the way you would check for that - but an information specialist may correct me here). That's roughly what I would expect to see: individually indexing well over a million records a year is a colossal undertaking. Being finished with 2015 in just a few months while 2016 priorities are pouring in would be amazing. And we know that no process of prioritizing journals will solve this problem for trials, because the scatter across journals is so great (Hoffmann T, 2012).

      So any comparison between a tagged set (RCTs) and a search based on a filter with text words (which includes systematic review or meta-analysis in the title or abstract), could generate potentially very biased estimates, no matter how carefully the results are analyzed. And good systematic reviews of non-randomized clinical trials, and indeed, other methodologies - such as systematic reviews of adverse events, qualitative studies, and more - are valuable too. Many systematic reviews would be "empty" of RCTs, but that doesn't make them useless by definition.

      I couldn't agree with you more enthusiastically, though, that we still need more, not fewer, well-done RCTs, systematic reviews, and meta-analyses by non-conflicted scientists. I do add a caveat though, when it comes to RCTs. RCTs are human experimentation. It is not just that they are resource-intensive: unnecessary RCTs and some of the ways that RCTs can be "bad", can cause direct harm to participants, in a way that an unnecessary systematic review cannot. The constraints on RCTs are greater: so they need to be done on questions that matter the most and where they can genuinely provide better information. If good enough information can come from systematically reviewing other types of research, then that's a better use of scarce resources. And if only so many RCTs can be done, then we need to be sure we do the "right" ones.

      For over 20 years, Iain Chalmers has argued that an RCT should not be done without a systematic review to show the RCT is justified - and that there should be an update afterwards. Six years ago, he, Mike Clarke, and Sally Hopewell concluded that we were nowhere near achieving that (Clarke M, 2010). The point you make about the waste in systematic reviewing underscores that point, too. But in the ideal world, isn't a greater number of systematic reviews than RCTs just the way it should be?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    9. On 2016 Sep 17, John Ioannidis commented:

      Dear Hilda,

      Thank you for your follow-up comment on my reply, I always cherish your insights. I tried to get a more direct answer to the question on which we both have some residual uncertainty, i.e. whether currently published systematic reviews of trials outnumber new randomized controlled trials. So, I collected more data.

      First, while we can disagree on some minor technical details, it is very clear that the annual rate has been increasing extremely fast for “meta-analyses” and very fast for “systematic reviews”, while it is rising slowly for “randomized controlled trials” types of articles. In a search as of today, the numbers per year between 2009 and 2015 using the “type of article” searches (with all their limitations) are:

      Year                          2009   2010   2011   2012   2013   2014   2015
      Meta-analysis                 3243   3934   4858   6570   8192   9632   9745
      Systematic review            15085  17353  19378  22575  25642  29261  31609
      Randomized controlled trial  17879  18907  20451  22339  24538  24459  22066

      The data are not fully complete for 2015, given that “type of article” assignments may have some delay. But comparing 2014 versus 2009, where the data are unlikely to change meaningfully with more tags, within 5 years the rate of publication of meta-analyses tripled, the rate of publication of systematic reviews doubled, while the rate of publication of randomized trials increased by only 36% (almost perfectly tracking the 33% growth of total PubMed items in the same period).
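      A quick arithmetic check of the 2014-versus-2009 comparison, as a sketch (the counts are the ones quoted in this comment):

```python
# 2014 vs 2009 publication counts by "type of article", as quoted above.
meta_analysis_growth = 9632 / 3243        # ~2.97: roughly tripled
systematic_review_growth = 29261 / 15085  # ~1.94: roughly doubled
rct_growth = 24459 / 17879                # ~1.37: up ~37%, close to the
                                          # 33% growth of all PubMed items

print(round(meta_analysis_growth, 2),
      round(systematic_review_growth, 2),
      round(rct_growth, 2))
```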

      Type of article is of course not perfectly sensitive or specific in searching. So, I took a more in-depth look at a sample of 111 articles that are Type of article=“randomized controlled trial” among the 22066 published in 2015 (in the order of being retrieved by a 2015 [DP] search, selecting the first and every 200th afterwards, i.e. 1, 201, 401, etc). Of the 111, 17 represent secondary analyses (the majority of secondary analyses of RCTs are not tagged as “randomized controlled trial”), 5 are protocols without results, 6 are non-human randomized studies (on cattle, barramundi etc), and 12 are not randomized trials, leaving a maximum of 71 new randomized controlled trials. I say “maximum”, because some of those 71 may actually not be randomized (e.g. there is a substantial number of “randomized” trials from China, and past in-depth evaluations have shown that many/most are not really randomized even if they say they are) and some others may also be secondary or duplicate publications, but this is not easy to decipher based on this isolated sampling. Even if 71/111 are new RCTs, this translates to (71/111)x22066=14114 new RCTs (or articles masquerading as new RCTs) in 2015. Allowing for some missed RCTs and not yet tagged ones, it is possible that the number of new RCTs published currently is in the range of 15,000 per year. Of the 71 studies that were new RCTs or masquerading as such, only 25 had over 100 randomized participants and only 1 had over 1000 randomized participants. Clinically informative RCTs are sadly very few.

      I also examined the studies tagged as Type of Article “meta-analysis” or “systematic review” or “review” published in 2015 [DP], combined with (trial* OR treatment* OR randomi*). Of the 49,166 items, I selected 82 for in-depth scrutiny (the first and every 600th afterwards, i.e. 1, 601, 1201, etc). Overall, 30 of the 82 were systematic reviews and/or meta-analyses of trials or might be masquerading as such to the average reader, i.e. had some allusion to search databases and/or search strategies and/or systematic tabulation of information. None of these 30 are affected by any of the potential caveats you raised (protocols, ACP Journal Club, split reviews, etc). Extrapolating to the total 49166, one estimates 17988 systematic reviews and/or meta-analyses of trials (or masquerading as such) in 2015. Again, allowing for missed items (e.g. pooled analyses of multiple trials conducted by the industry are not tagged as such Types of Articles), for those not yet tagged, and for a more rapid growth for such studies in 2016 than for RCTs, it is likely that the number of systematic reviews and/or meta-analyses of trials published currently is approaching 20,000 per year. If the criteria for “systematic review of trials” become more stringent (as in Page et al, 2016), this number will be substantially smaller, but will still be quite competitive against the number of new RCTs. Of course, if we focus on both stringent criteria and high quality, the numbers drop precipitously, as it happens also with RCTs.

      I am sure that these analyses can be done in more detail. However, the main message is unlikely to change. There is a factory of RCTs and a far more rapidly expanding factory of systematic reviews and meta-analyses. The majority of the products of both factories are useless, conflicted, misleading or all of the above. The same applies to systematic reviews and meta-analyses for most other types of study designs in biomedical research. This does not mean that RCTs, systematic reviews, and meta-analyses are not a superb idea. If well done by non-conflicted scientists, they can provide the best evidence. We need more, not fewer, such studies that are well done and non-conflicted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    10. On 2016 Sep 16, Hilda Bastian commented:

      Thanks, John, for the reply - and for giving us all so much to think about, as usual!

      I agree that there are meta-analyses without systematic reviews, but the tagged meta-analyses are included in the filter you used: they are not additional (NLM, 2016). It also includes meta-analysis in the title, guidelines, validation studies, and multiple other terms that add non-systematic reviews, and even non-reviews, to the results.

      In Ebrahim S, 2016, 191 primary trials in only high-impact journals were studied. Whether they are typical of all trials is not clear: it seems unlikely that they are. Either way, hundreds of reports for a single trial is far from common: half the trials in that sample had no secondary publications, only 8 had more than 10, and none had more than 54. Multiple publications from a single trial can sometimes be on quite different questions, which might also need to be addressed in different systematic reviews.

      The number of trials has not been increasing as fast as the number of systematic reviews, but the number has not reached a definite ongoing plateau either. I have posted an October 2015 update to the data using multiple ways to assess these trends in the paper by me, Paul Glasziou, and Iain Chalmers from 2010 (Bastian H, 2010) here. Trials have tended to fluctuate a little from year to year, but the overall trend is growth. As the obligation to report trials grows more stringent, the trend in publication may be materially affected.

      Meanwhile, "systematic reviews" in the filter you used have not risen all that dramatically since February 2014. For the whole of 2014, there were 34,126 and in 2015 there were 36,017 (with 19,538 in the first half of 2016). It is not clear without detailed analysis what part of the collection of types of paper are responsible for that increase. The method used to support the conclusion here about systematic reviews of trials overtaking trials themselves was to restrict the systematic review filter to those mentioning trials or treatment - “trial* OR randomi* OR treatment*”. That does not mean the review is of randomized trials only: no randomized trial need be involved at all, and it doesn't have to be a review.

      Certainly, if you set the threshold for a "sizable" randomized trial high, there will be fewer of them than of all possible types of systematic review: but then, there might not be all that many very sizable, genuinely systematic reviews either - and not all systematic reviews are influential (or even noticed). And yes, there are reviews that are called systematic that aren't: but there are RCTs called randomized that aren't as well. What's more, an important response to the arrival of a sizeable RCT may well be an updated systematic review.

      Double reports of systematic reviews are fairly common in the filter you used too, although far from half - and not more than 10. Still, the filter will be picking up protocols as well as their subsequent reviews, systematic reviews in both the article version and coverage in ACP Journal Club, the full text of systematic reviews via PubMed Health and their journal versions (and the ACP Journal Club coverage too), individual patient data analyses based on other systematic reviews, and splitting a single systematic review into multiple publications. The biggest issue remains, though, that as it is such a broad filter, casting its net so very wide across the evidence field, it's not an appropriate comparator for tagged sets, especially not in recent years.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    11. On 2016 Sep 16, John Ioannidis commented:

      Dear Hilda,

      Thank you for the very nice and insightful commentary on my article. I think that my statement "Currently, probably more systematic reviews of trials than new randomized trials are published annually" is probably correct. The quote of 8,000 systematic reviews in the Page et al. 2016 article uses very conservative criteria for systematic reviews, and there are many more systematic reviews and meta-analyses: e.g. there is a factory of meta-analyses (even meta-analyses of individual-level data) done by the industry, combining data from several trials but with no explicit mention of a systematic literature search. While many papers may fail to satisfy stringent criteria of being systematic in their searches or other methods, they still carry the title of "systematic reviews" and most readers other than a few methodologists trust them as such. Moreover, the 8,000 quote was from February 2014, i.e. over 2.5 years ago, and systematic reviews' and meta-analyses' publication rates rise geometrically. Conversely, there is no such major increase in the annual rate of published randomized controlled trials. Furthermore, the quote of 38,000 trials in the Cochrane database is misleading, because it includes both randomized and non-randomized trials, and the latter may be the majority. Moreover, each randomized controlled trial may have anywhere up to hundreds of secondary publications. On average, within less than 5 years of a randomized trial publication, there are 2.5 other secondary publications from the same trial (Ebrahim et al. 2016). Thus the number of published new randomized trials per year is likely to be smaller than the number of published systematic reviews and meta-analyses of randomized trials. 
Actually, if we also consider the fact that the large majority of randomized trials are small/very small and have little or no impact, while most systematic reviews are routinely surrounded by the awe of the "highest level of evidence", one might even say that the number of systematic reviews of trials published in 2016 is likely to be several times larger than the number of sizable randomized trials published in the same time frame.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    12. On 2016 Sep 16, Hilda Bastian commented:

      There are many important issues raised in this paper on which I strongly agree with John Ioannidis. There is a lot of research waste in meta-analyses and systematic reviews, and a flood of very low quality, and he points out the contributing factors clearly. However, there are some issues to be aware of in considering the analyses in this paper on the growth of these papers, and their growth in comparison with randomized and other clinical trials.

      Although the author refers to PubMed's "tag" for systematic reviews, there is no tagging process for systematic reviews, as there is for meta-analyses and trials. Although "systematic review" is available as a choice under "article types", that option is a filtered search using Clinical Queries (PubMed Help), not a tagging of publication type. Comparing filtered results to tagged results is not comparing like with like in 2 critical ways.

      Firstly, the proportion of non-systematic reviews in the filter is far higher than the proportion of non-meta-analyses and non-trials in the tagged results. And secondly, full tagging of publication types for MEDLINE/PubMed takes considerable time. When considering a recent year, the gulf between filtered and tagged results widens. For example, as of December 2015 when Ioannidis' searches were done, the tag identified 9,135 meta-analyses. Today (15 September 2016), the same search identifies 11,263. For the type randomized controlled trial, the number tagged increased from 23,133 in December to 29,118 today.

      In the absence of tagging for systematic reviews, the more appropriate comparisons are using filters for both systematic reviews and trials as the base for trends, especially for a year as recent as 2014. Using the Clinical Queries filter for both systematic reviews and therapy trials (broad), for example, shows 34,126 for systematic reviews and 250,195 trials. Page and colleagues estimate there were perhaps 8,000 actual systematic reviews according to a fairly stringent definition (Page MJ, 2016) and the Centre for Reviews and Dissemination added just short of 9,000 systematic reviews to its database in 2014 (PubMed Health). So far, the Cochrane Collaboration has around 38,000 trials in its trials register for 2014 (searching on the word trial in CENTRAL externally).

      The number of systematic reviews/meta-analyses has increased greatly, but not as dramatically as this paper's comparisons suggest, and the data do not tend to support the conclusion in the abstract here that "Currently, probably more systematic reviews of trials than new randomized trials are published annually".

      Ioannidis suggests some bases for some reasonable duplication of systematic reviews - these are descriptive studies, with many subjective choices along the way. However, there is another critical reason that is not raised: the need for updates. This can be by the same group publishing a new version of a systematic review or by others. In areas with substantial questions and considerable ongoing research, multiple reviews are needed.

      I strongly agree with the concerns raised about conflicted systematic reviews. In addition to the issues of manufacturer conflicts, it is important not to underestimate the extent of other kinds of bias (see for example my comment here). Realistically, though, conflicted reviews will continue, building in a need for additional reviewers to tackle the same ground.

      Systematic reviews have found important homes in clinical practice guidelines, health technology assessment, and reimbursement decision-making for both public and private health insurance. But underuse of high quality systematic reviews remains a more significant problem than is addressed here. Even when a systematic review does not identify a strong basis in favor of one option or another, that can still be valuable for decision making - especially in the face of conflicted claims of superiority (and wishful thinking). However, systematic reviews are still not being used enough - especially in shaping subsequent research (see for example Habre C, 2014).

      I agree with Ioannidis that having collaborations work prospectively to keep a body of evidence up-to-date is an important direction to go - and it is encouraging that the living cumulative network meta-analysis has arrived (Créquit P, 2016). That direction was also highlighted in Page and Moher's accompanying editorial (Page MJ, 2016). However, I'm not so sure how much of a solution this is going to be. The experience of the Cochrane Collaboration suggests this is even harder than it seems. And consider how excited people were back in 1995 at the groundbreaking publication of the protocol for a prospective, collaborative meta-analysis of statin trials (Anonymous, 1995) - and the continuing controversy that swirls, tornado-like, around it today (Godlee, 2016).

      We need higher standards, and skills in critiquing the claims of systematic reviews and meta-analyses need to spread. Meta-analysis factories are a serious problem. But I still think the most critical issues we face are making systematic reviews quicker and more efficient to do, and to use good ones more effectively and thoroughly than we do now (Chalmers I, 2009, Tsafnat G, 2014).

      Disclosure: I work on projects related to systematic reviews at the NCBI (National Center for Biotechnology Information, U.S. National Library of Medicine), including some aspects that relate to the inclusion of systematic reviews in PubMed. I co-authored a paper related to issues raised here several years ago (Bastian H, 2010), and was one of the founding members of the Cochrane Collaboration.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 03, Andrey Alexeyenko commented:

      The counts for the 2x2 table seem to be wrong. The last two terms should be:

      n12 | k - x

      n22 | N - [ k + d ] + x

      so that n11 + n12 + n21 + n22 sum to all the genes in the genome. That is why the table is called a "contingency table".
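      For illustration, a minimal sketch of the full table this correction implies (assuming n11 = x, the overlap, and n21 = d - x, which the comment leaves implicit; the variable names follow the comment, and the example counts are arbitrary):

```python
def contingency_table(x, k, d, N):
    """2x2 contingency table for gene-set enrichment.

    x: overlap between the query gene set and the annotation term (n11)
    k: total genes carrying the annotation term
    d: total genes in the query set
    N: all genes in the genome
    """
    n11 = x
    n21 = d - x
    n12 = k - x
    n22 = N - (k + d) + x
    return n11, n12, n21, n22

# The four cells partition the genome, so they must sum to N:
cells = contingency_table(x=30, k=200, d=150, N=20000)
assert sum(cells) == 20000
```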


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 22, Peter Hajek commented:

      Thank you, Laurie, for looking into this and confirming that the data do not suggest that vaping undermines quitting. Regarding a possible benefit of vaping, a better test would be including participants who were smoking at 3M, as you did, but comparing those who did and those who did not try vaping BETWEEN the 3M and 6M follow-up. This is because those still smoking and reporting using EC prior to the 3M f-u are self-selected for not benefiting from vaping (up to that point, anyway). Doing it the way suggested above avoids some of that problem - but the result would remain affected by self-selection and by uncertainty about the purpose and intensity of e-cig use. Thanks again, Peter


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 21, Laurie Zawertailo commented:

      We thank Dr. Hajek for his constructive criticism of our paper and his suggested alternate analysis. We agree that smokers reporting e-cigarette use at follow-up may have been less likely to be able to quit using the standard treatment offered, and so may have resorted to e-cigarettes to aid in their quit attempt. We were able to conduct the suggested analysis by looking at smokers who were not quit at the 3-month follow-up time point (i.e. failed on the initial treatment, n=1626). At the 6-month follow-up we assessed whether or not they reported being quit and whether or not they had used e-cigarettes. At 6-month follow-up, 11.4% of e-cigarette non-users reported being quit (7-day PPA), compared to 9.2% of e-cigarette users (p=0.24, NS). Therefore, there is no evidence to support Dr. Hajek’s hypothesis that e-cigarette use will increase quit rates among those who do not quit smoking using standard evidence-based treatment (NRT plus counselling). Again, these data are limited due to the lack of information regarding dose and duration of e-cigarette use and due to bias caused by self-selection.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Sep 27, Peter Hajek commented:

      The conclusion is mistaken. People who failed in their initial attempt to quit smoking with NRT would be much more likely to try alternatives than those who quit successfully. The finding that non-EC use group did better is an artifact of this - treatment successes were concentrated there. It would be more informative to look at people who failed with the initial treatment and compare those who did and those who did not try e-cigarettes during the follow-up period. Such a comparison may well show that e-cigarette use had a positive effect. Self-selection would remain a problem, but perhaps the authors could check this?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 10, Shaun Khoo commented:

      Green Open Access: The accepted manuscript of this review paper is available from the UNSW Institutional Repository.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 10, Seyed Moayed Alavian commented:

      I read this publication with interest. The study was done in treatment-resistant cases: 40.2% were null-responders and 56.9% had liver cirrhosis. The SVR of 99.0% is very interesting for scientists. It is critical for us to understand how well patients tolerate these regimens, especially patients with liver cirrhosis. Also, did the authors include cirrhotic patients in Child-Pugh class B and C in their study or not?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 19, Gustav van Niekerk commented:

      We (van Niekerk G, 2016) have recently argued that sickness-associated anorexia (SAA) may represent a strategy to maintain high levels of autophagic flux on a systemic level. (Also see van Niekerk G, 2016 for an evolutionary perspective.)

      An upregulation of autophagy during an infection may be critical for a number of reasons:

      • Serum and AA starvation induce autophagy in macrophages and protect against TB infection (Gutierrez MG, 2004).

      • We speculate that hepatic autophagy may play a critical role in clearing LPS and bacteria from circulation.

      • Pathogens entering a cell must quickly subvert host processes to prevent being degraded by autophagy. In this regard, upregulation of autophagic flux would confront pathogens with a narrower window of opportunity to modulate host machinery. Thus, autophagy enhances cell-autonomous defence.

      • Autophagy processes ribosomal components into antimicrobial peptides (Ponpuak M, 2010). Note that all nucleated cells have ribosomes and are capable of autophagy, suggesting that autophagy may again enhance cell-autonomous defence.

      • Autophagy is also involved in the non-canonical expression of epitopes on MHC II by non-professional antigen-presenting cells such as adipocytes, muscle cells, and endothelial cells.

      Autophagy may also be important in cell survival. As an example, tissue ischemia, the release of biocidal agents from immune cells, and the increase in misfolded proteins resulting from a febrile response may all lead to the generation of toxic protein aggregates. Here, autophagy may promote cell survival by processing the ‘overflow’ of damaged protein aggregates when the proteasome pathway is overwhelmed.

      Fasting-induced autophagy may thus promote host tolerance and resistance.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 26, Kenneth Witwer commented:

      This article has now been retracted:

      https://www.nature.com/articles/srep46826

      The authors also repeated some of their experiments with appropriate methods and reported, "we were unable to confirm specific amplification of these miRNAs in human blood. Thus, we were not able to validate the central hypothesis of this paper."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 02, Kenneth Witwer commented:

      Following the previous comments, tweets on this subject raise a few more perceived issues:

      https://twitter.com/ProfParrott/status/792472109834498049

      https://twitter.com/ProfParrott/status/792472735427461120

      Importantly, the manufacturer of the qPCR kit used in this study states that it is not for use with plant miRNAs:

      http://bit.ly/2f0QWYL


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 26, Kenneth Witwer commented:

      It appears to me that this report includes PCR design errors that may invalidate the findings. In the hopes that I had made a mistake or overlooked something, I consulted with two colleagues at different academic institutions who also came to the conclusion that there are consequential errors in the assay designs. I would encourage the authors and editors to double-check what I say below and take appropriate steps if the observations are correct.

      To amplify a mature miRNA, the miScript universal reverse primer used in this study must be paired with a forward primer with identity to part or all of the mature miRNA sequence. A forward primer that is the reverse complement of the mature miRNA would not amplify a specific product. However, all but one of the mature miRNA forward primers reported in the supplement, including the human miR-21 primers, are reverse complementary to the indicated miRNAs (or do not match known miRNAs, e.g., the putative MIR1508, MIR917, and MIR477 primers). Therefore, any signal obtained from these reactions would have been non-specific. The exception is MIR824; however, this miRNA does not appear to have contributed to the conclusions of the article, namely, that plant miRNAs are taken up into human circulation and affect gene expression.

      Proof of the PCR design error is supplied by Supplementary Table 5, showing the sequences of PCR products of two reactions (MIR160 and MIR2673) that were cloned into a sequencing vector. Had the reactions amplified actual miScript cDNA, two features of the sequenced product should be evident: 1) mature miRNA sequence and reverse primer sequence would be separated by a poly(A) sequence (or poly(T), depending on the sequenced strand); 2) the mature miRNA sequence would come before (5' to) the poly(A) and reverse primer. In Supplementary Table 5, there is no intervening poly(A) or (T) sequence, and the mature miRNA sequence of both MIR160 and MIR2673 follows the reverse primer. It is thus clear that these sequences are not products of specific amplification of mature miRNA sequences, but rather the result of spurious amplification or cloning of the incorrectly stranded mature miRNA primers and the kit reverse primer.
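      The orientation point can be sketched in a few lines (the sequence used here is illustrative - the hsa-miR-21-5p mature sequence as listed in miRBase, written in DNA form - not one of the primers from the paper):

```python
def reverse_complement(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp[base] for base in reversed(seq))

# In the miScript system, the cDNA is: mature miRNA + poly(A) + universal tag.
# A specific forward primer must therefore match the mature miRNA sequence
# itself; its reverse complement would find no template strand to prime on.
mir21 = "TAGCTTATCAGACTGATGTTGA"  # hsa-miR-21-5p, DNA form

specific_forward = mir21                         # correct orientation
nonspecific_forward = reverse_complement(mir21)  # the error described above

assert specific_forward != nonspecific_forward
```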

      Incidentally, other, less important PCR primer design and reporting errors are apparent in the supplement. The human ACTB primers do match the ACTB transcript, but are also not ideally specific, as they would amplify sequences on numerous human chromosomes. Also, several primers are designed to the minus genomic strand, not a transcript, and thus seem to have the forward and reverse labels switched.

      It should also be noted that, even if the miRNA qPCR assays had been correctly designed, miR2673 is not specific to Brassica or plants, and matches low-complexity sequences in organisms from human to yeast. miRBase indexes MIR2673 sequences of Medicago truncatula derived from hairpins designated MIR2673 and MIR2673a that are transcribed from chromosomes 3 and 5. The 22-nt mature sequence for both is CCUCUUCCUCUUCCUCUUCCAC, a low-complexity sequence beginning with three repeats of "CCUCUU". MIR2673 has previously been reported in pineapple (Yusuf NH, 2015), potato (Yang J, 2013), and cucumber (Wen CL, 2016). Additionally, at least one 100% match to the mature MIR2673 sequence is found on every human chromosome...along with many human transcriptome matches of 100% identity for stretches of 20 of 22 consecutive bases. To complicate matters, the curated miRBase miR2673 sequences are not used; instead, the report relies on two predicted 21-nt mature miRNA sequences at the "miRNEST 2.0" site, a site that emphasizes that neither of these putative miRNAs is supported by miRBase.

      Quite possibly, transfecting massively non-physiologic amounts of plant miRNA mimics into human cells, as done in Figure 3 of this study and in another cited study (Chin AR, 2016), will elicit effects. However, these effects should not be taken as evidence of physiologic function of xenomiRs, which, assuming they are not contaminants (Tosar JP, 2014, Witwer KW, 2015), appear to reach only subhormonal (zeptomolar to attomolar) levels in humans (Witwer KW, 2016).

      In sum, I would conclude from these observations that no plant mature miRNA sequences were amplified from human blood, and that there was therefore no basis for the nonphysiologic transfection or gene expression studies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 19, Josh Bittker commented:

      The compound names of the format BRD#### can now be searched directly in Pubchem.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 13, Josh Bittker commented:

      @Christopher Southan- Thanks, we're going to try to resolve the PubChem links by adding aliases in Pubchem of the short names used in the paper (BRD####); the compounds are registered in Pubchem with their full IDs but not the shortened IDs.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Sep 10, Christopher Southan commented:

      This post resolves the PubChem links https://cdsouthan.blogspot.se/2016/09/structures-from-latest-antimalarial.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 09, Cristiane N Soares commented:

      The question regarding CHIK tests mentioned by Thomas Jeanne is really relevant in this case. In fact, we were concerned about co-infections, and after the paper's acceptance we performed IgM and IgG CHIK tests in serum and CSF. All samples were negative for CHIKV.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 08, Thomas Jeanne commented:

      In their case report, Soares et al. do not mention testing for chikungunya virus (CHIKV), which has considerable overlap with Zika virus (ZIKV) in both epidemiologic characteristics and clinical presentation. Brazil experienced a large increase in chikungunya cases in early 2016 (Collucci C, 2016), around the time of this patient's illness, and recent case series in Ecuador (Zambrano H, 2016) and Brazil (Sardi SI, 2016) have demonstrated coinfection with ZIKV and CHIKV. Moreover, a recently published study of Nicaraguan patients found that 27% of those who tested positive for any of ZIKV, CHIKV, or DENV (dengue virus) with multiplex RT-PCR also tested positive for one or both of the other viruses (Waggoner JJ, 2016). CHIKV itself has previously been linked to encephalitis, including fatal encephalitis (Gérardin P, 2016), and some have speculated that adverse interactions could result from coinfection with two or more arboviruses (Singer M, 2017). Coinfection with chikungunya as a contributing factor in this case cannot be ruled out without appropriate testing.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 08, Clive Bates commented:

      This paper does not actually address the question posed in the title: Should electronic cigarette use be covered by clean indoor air laws?

      The question it does address is more like "how much discomfort do vapers say they experience when the same law is applied to them as to smokers?".

      The authors do not show that this is the foundation on which a justification for applying clean indoor air laws to vaping should rest. There is no basis to assume that it is.

      Addressing the question set in the title is ultimately a matter of property rights. The appropriate question is: "what is the rationale for the state to intervene using the law to override the preferred vaping policy of the owners or managers of properties?".

      The authors cannot simply assume that everyone and at all times shares their preference for 'clean indoor air'. Vapers may prefer a convivial vape and a venue owner may be pleased to offer them a space to do it. Unless this is creating some material hazard to other people, why should the law stop this mutually agreed arrangement? Simply arguing that it doesn't cause that much discomfort among that many vapers isn't a rationale. If the law stops them doing what they would like to do there is a welfare or utility loss to consider.

      It is likely that many places will not allow vaping - sometimes for good reasons. But consider the following cases:

      1. A bar wants to have a vape night every Thursday

      2. A bar wants to dedicate one room where vaping is permitted

      3. In a town with three bars, one decides it will cater for vapers, two decide they will not allow vaping

      4. A bar manager decides on balance that his vaping customers prefer it and his other clientele are not that bothered – he’d do better allowing it

      5. A hotel wants to allow vaping in its rooms and in its bar, but not in its restaurant, spa, and lobby

      6. An office workplace decides to allow vaping breaks near the coffee machine to save on wasted smoking break time and encourage smokers to quit by switching

      7. A care home wants to allow an indoor vaping area to encourage its smoking elderly residents to switch during the coming winter instead of going out in the cold

      8. A vape shop is trying to help people switch from smoking and wants to demo products in the shop…

      9. A shelter for homeless people allows it to make its clients welcome

      10. A day centre for refugees allows it instead of smoking

      These are all reasonable accommodations of vaping for good reasons. But the law is much too crude to manage millions of micro-judgments of this nature. It can only justify overruling them with a blanket prohibition if it is preventing harm to bystanders or workers who are exposed to hazardous agents at a level likely to cause a material risk.

      A much better role for the state is to advise owners and managers on how to make these decisions in an informed way. This is what Public Health England has done [1], and that, in my view, is a more enlightened and liberal philosophy. Further, I suspect it is more likely to help convert more smokers to vaping, yielding a public health dividend too.

      [1] Public Health England, Use of e-cigarettes in public places and workplaces, July 2016.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 07, Anita Bandrowski commented:

      This paper is the basis of part of an example Authentication of Key Biological Resources document that we and the UCSD library have put together.

      Please find it here: http://doi.org/10.6075/J0RB72JC


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 20, Huang Nan commented:

      This paper reports a very peculiar observation: that 18% and 42% of rural and urban Beijing females, respectively, with an average age in the early 60s, are sunbed users (Table 1). This is highly counter-intuitive, as sunbed use would be expected in less than 1% of any Chinese population. Despite this observation, the authors claimed in the text that: "Only a few individuals had a sunburn history or used sunbeds."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 08, Christopher Tench commented:

      Could you possibly provide the coordinates analysed? Otherwise it is difficult to interpret the results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 26, Atanas G. Atanasov commented:

      Excellent work, many thanks to the authors for the great overview.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 03, Thomas Hünig commented:

      Thank you for bringing this up in the Commons. Yes, it is unfortunate that somewhere in the production process, the "µ" symbols were converted to "m", which sometimes happens when fonts are changed. Fortunately, the mistake becomes obvious by its sheer magnitude (1000x off), and the corresponding paper in Eur. J. Immunol. with the original, correct data is referenced. My apologies that we did not spot this mistake before publication.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Dec 31, Mark Milton commented:

      The article includes several unfortunate typos in a vital piece of information. The article states that "These encouraging findings led to the design of a new healthy volunteer trial, which started at 0.1 mg/kg, i.e. a 1000- fold lower dose than the one applied in the ill-fated trial of 2006 (Clinical trials identifier: NCT01885624). After careful monitoring of each patient, the dose was gradually increased to a maximum of 7 mg/kg, still well below what had been applied in the first HV trial." The units listed for the dose are mg/kg but should have been µg/kg. The starting dose was 0.1 µg/kg and the highest dose evaluated was 7 µg/kg (Tabares et al 2014). The dose administered in the TGN1412 FIH study was 100 µg/kg. Although this typo does not detract from the overall conclusions from the study, it is sad to see that this error was not noticed by the authors or reviewers given the near tragic circumstances of the FIH clinical trial for TGN1412.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 02, Pierre Fontana commented:

      Tailored antiplatelet therapy in patients at high cardiovascular risk: don’t prematurely throw the baby out with the bathwater

      The clinical impact of a strategy based on a platelet function assay to adjust antiplatelet therapy has been intensively investigated. However, large prospective interventional studies failed to demonstrate the benefit of personalizing antiplatelet therapy. One of the concerns was that the interventions were delayed and partially effective, contrary to earlier smaller trials that employed incremental clopidogrel loading doses prior to PCI Tantry US, 2013 Bonello L, 2009.

      Cayla and co-workers should be commended for their efforts in the ANTARCTIC trial. Although the trial is pragmatic, important limitations may account for the neutral effect of the intervention, including an antiplatelet adjustment performed between D14 and D28 after randomization. Early personalization is also supported by data from the TRITON-TIMI38 trial where half of the ischemic events (4.7/9.9%) of the prasugrel-treated arm occurred 3 days after randomization. Stratifying the analysis on the timing of events before and after D28 may provide some insight, though underpowered for a definitive conclusion.

      The prognostic value of the platelet function assay and cut-off used would also be of great interest in the control group. If the assay and cut-off values were not prognostic in this elderly population, personalization would be bound to fail.

      Finally, the results of ANTARCTIC restricted to the subgroup of patients with hypertension (73% of patients), who thus accumulate 3 of the risk factors related to the clinical relevance of high platelet reactivity Reny JL, 2016, would also be very interesting. Further research should not only evaluate other pharmacological approaches but also early personalization and measurement of platelet reactivity in the control group.

      J.-L. Reny, MD, PhD and P. Fontana, MD, PhD


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 16, Leon van Kempen commented:

      RNA in FFPE tissue is commonly degraded. NanoString profiling will still yield reliable results when RNA is degraded to 200 nt fragments.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 03, Adrian Barnett commented:

      I should have cited this paper, which shows how random funding can de-centralise funding away from ingrained ideas and hence increase overall efficiency: Shahar Avin, "Funding Science by Lottery", European Studies in Philosophy of Science, volume 1.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 19, G L Francis commented:

      I have read your publication in PNAS titled ‘Metabolic features of chronic fatigue syndrome’ with much interest. This significant contribution has at last provided a definitive publication of a realistic, evidence-based diagnostic test based on a panel of blood metabolites, which could provide a more robust diagnostic base for future rational treatment studies in ‘CFS’.

      Although there are many more complex and critical questions to be asked, I will keep mine simple. I took particular note of the authors' comment: “When MTHFD2L is turned down in differentiated cells, less mitochondrial formate is produced and one-carbon units are directed through Methylene-THF toward increased SAM synthesis and increased DNA methylation” (from the legend of Figure S6, Mitochondrial Control of Redox, NADPH, Nucleotide, and Methylation Pathways). I recently read the paper 'Association of Vitamin B12 Deficiency with Homozygosity of the TT MTHFR C677T Genotype, Hyperhomocysteinemia, and Endothelial Cell Dysfunction' (Shiran A et al., IMAJ 2015; 17: 288–292), and wondered whether the gene variations in the individuals described in that publication could be over-represented in your subjects. Mind you, the size of your study population probably answers my own question; and no doubt many mechanisms that lead to a perturbation of this pathway exist, of which this could conceivably be just one, even if a minor contributor. Moreover, there does seem to be a difference between the two papers in terms of the effect of the particular perturbations on the incidence of cardiovascular disease and outcomes.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 19, james brophy commented:

      The authors conclude that prophylactic ICD implantation "was not associated with a significantly lower long-term rate of death from any cause than was usual clinical care”. Given that the observed hazard ratio for death was 0.87 (95% confidence interval [CI], 0.68 to 1.12; P=0.28), this conclusion is quite simply wrong, unless a potential 32% reduction in death is considered clinically unimportant. The aphorism "Absence of proof is not proof of absence" is worth recalling.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 29, David Keller commented:

      If the investigators were not blinded, how was bias excluded from their scoring of the balance, stability and TUG tests?

      The placebo, nocebo, Pygmalion and other expectation effects can be substantial in Parkinson's disease. Unblinded investigators can transmit cues regarding their own expectations to patients, thereby affecting the patients' response to therapy. In addition, unblinded investigators are affected by bias in their subjective evaluations of patient response to therapy, and even in their measurement of patient performance on relatively objective tests. What was done to minimize these sources of bias from contaminating the results of this single-blinded study? Were the clinicians who scored the BBS, TUG and LOS tests aware of the randomization status of each patient they tested?

      In addition, I question whether the results reported for the LOS test in the Results section of the abstract are statistically significant. The patients assigned to exergaming scored 78.9 +/- 7.65 %, which corresponds to a Confidence Interval of [71.25 - 86.55] %, while the control patient scores of 70.6 +/- 9.37 % correspond to a confidence interval of [61.23 - 79.97] %. These two confidence intervals overlap from 71.25 % to 79.97 %, a range which includes the average score of the exergaming patients.
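      The interval arithmetic above can be checked in a few lines of Python. This is a sketch only: the numbers are the means and half-widths quoted in the comment, and `interval` is a hypothetical helper, not part of the study's analysis.

```python
def interval(mean, half_width):
    """Return the (low, high) range mean +/- half_width, rounded to 2 decimals."""
    return (round(mean - half_width, 2), round(mean + half_width, 2))

exergaming = interval(78.9, 7.65)  # LOS score, exergaming group
control = interval(70.6, 9.37)     # LOS score, control group

# Overlap of the two ranges: larger lower bound to smaller upper bound.
overlap = (max(exergaming[0], control[0]), min(exergaming[1], control[1]))

print(exergaming)  # (71.25, 86.55)
print(control)     # (61.23, 79.97)
print(overlap)     # (71.25, 79.97) -- a range that includes the exergaming mean of 78.9
```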

      If a follow-up study is planned, the blinding of investigators, especially those who score the patients' test performances, would reduce bias and expectation effects. Increasing the number of subjects assigned to active treatment and to control treatment would improve the statistical significance of the results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 13, Craig Brown commented:

      I would welcome more research in this matter, also. As a clinician, for many years I have found this to yield significant visual and cognitive functional improvement in many patients with mild to moderate cerebral atrophy and vascular dementia, particularly those with MTHFR polymorphisms.

      Over several years, the benefit has been reliable. Patients who quit it often come back and restart it, because they notice a loss of the functional improvements, which return upon resumption.

      Because folic acid does not cross the blood-brain barrier, it can build up, blocking active L-methylfolate transport into the brain and retina and impairing dihydrofolate reductase (DHFR). This in turn impairs methylation and BH4 recycling, which are essential for serotonin, dopamine, and norepinephrine production, and hence for mood, attention, sleep, and memory.

      I find it works optimally when combined with folic acid avoidance (to reduce BBB blockade), riboflavin (to enhance MTHFR methylation), and vitamin D (to enhance folate absorption). It has a long record of safety, with few serious side effects, for a condition that has few effective treatments. All this is to say, more research is surely a good thing here, but excessive skepticism deprives patients of a chance to try a low-risk, frequently helpful, but not magic option.

      It is classified as a Medical Food, in part, because our FDA does not encourage formulating drug products that have multiple active ingredients, particularly ingredients that occur naturally in foods and in human metabolism. Medical Foods were implemented by the FDA specifically for higher concentrations of natural, food-based substances important for addressing genetic metabolic impairments, in this case impairment of DHFR and MTHFR, which may contribute to cerebral ischemia, atrophy, and dementia.

      A final thought: double-blind experiments are the ideal gold standard; however, the elderly and the demented are considered high-risk populations and have such strong protections in place at the NIH that placebo studies are difficult to justify to, or get approval from, any Institutional Review Board (IRB) when previous benefit has been shown, because that amounts to knowingly withholding treatment. We may have to content ourselves with non-placebo trials.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 09, Gayle Scott commented:

      This company-sponsored study (the sponsor being Nestle Health Science - Pamlab, Inc) was neither randomized nor blinded. Tests of cognitive function and QOL were not included. The study will certainly be cited in advertising for CerefolinNAC.

      It is important to note that CerefolinNAC is a "medical food," an FDA designation for products designed to meet the nutritional needs of patients whose needs cannot be met through foods alone, such as in inborn errors of metabolism, e.g., PKU (patients must avoid phenylalanine) and maple syrup urine disease (patients must avoid branched-chain amino acids).

      Another medical food for Alzheimer's disease is Axona, caprylic triglyceride, a medium chain triglyceride found in coconut oil. Unlike dietary supplements, medical foods can be labeled for medical conditions such as Alzheimer’s disease. Dietary supplements must be labeled for so-called “structure and function claims” and cannot make claims to treat or prevent disease. For example, ginkgo may be labeled “supports memory function,” but not “for treatment of dementia.” A drug or medical food could be labeled “for treatment of dementia associated with Alzheimer’s disease.”

      Think of medical foods as hybrids of prescription drugs and dietary supplements, more closely resembling dietary supplements in terms of regulation. Packaging for medical foods is similar to that of prescription products, with package inserts, NDC numbers, and usually "Rx only" on the labels. But like dietary supplements, medical foods are not required to be evaluated for safety or efficacy, and the FDA does not require approval before marketing. "Caution: Federal law prohibits dispensing without prescription" is not required on product labeling. The FDA specifies only that these products are for use with medical supervision; however, a medical food manufacturer may market a product to be dispensed only on physician request.

      Message to patients regarding CerefolinNAC: much more research is needed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 16, Michael Stillman commented:

      I read this article with great interest. And with significant concern.

      A sweeping review by the Department of Health and Human Services' Office for Human Research Protections of Dr. Harkema's spinal cord injury research program (https://www.hhs.gov/ohrp/compliance-and-reporting/determination-letters/2016/october-17-2016-university-louisville/index.html accessed May 16, 2017) documented numerous instances of sloppy methodologies and potential frank scientific misconduct. This report included evidence of: a) missing source documents, leading to an inability to verify whether protocols had been followed or captured data was valid; b) multiple instances of unapproved deviations from experimental protocols; c) participants having been injured while participating in translational research experiments; d) a failure to document and adjudicate adverse events and to report unanticipated problems to the IRB; and e) subjects being misled about the cost of participating in research protocols. Dr. Harkema's conduct was so concerning that the National Institute of Disability, Independent Living, and Rehabilitation Research (NIDILRR) prematurely halted and defunded one of her major research projects (http://kycir.org/2016/07/11/top-u-of-l-researcher-loses-federal-funding-for-paralysis-study/ accessed May 26, 2017).

      I approached the editors of "Journal of Neurotrauma" with reports from both Health and Human Services (above) and University of Louisville's IRB and asked them three questions: a) were they adequately concerned with this study's integrity to consider a retraction; b) were they adequately concerned to consider publishing a "concerned" letter to the editor questioning the study's integrity and reliability; and c) were they interested in reviewing adverse events associated with the experiments. Their response: "no," "no," and "no."

      I call on the editorial board of "Journal of Neurotrauma" to carefully inspect all documents and data sets related to this work. I would further expect them to review all adverse event reports, and to demand evidence that they have been reviewed and adjudicated by an independent medical monitor or study physician. Short of this, the work remains specious.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 07, Donald Forsdyke commented:

      PATERNITY OF INNATE IMMUNITY?

      The accolades cast on scientists we admire include that of paternity. Few will dispute that Gregor Mendel was the father of the science we now call genetics. At the outset, this paper (1) hails Metchnikoff (1845-1916) as “the father of innate immunity.” However, an obituary of US immunologist Charles Janeway (1943-2003) hails him similarly (2). Can a science have two fathers? Well, yes. But not if an alternative of Mendelian stature is around. While paternity is not directly ascribed, a review of the pioneering studies on innate immunity of Almroth Wright (1861-1947) will perhaps suggest to some that he is more deserving of that accolade (3).

      1. Gordon S (2016) Phagocytosis: the legacy of Metchnikoff. Cell 166:1065-1068 Gordon S, 2016

      2. Oransky I (2003) Charles A Janeway Jr. Lancet 362:409.

      3. Forsdyke DR (2016) Almroth Wright, opsonins, innate immunity and the lectin pathway of complement activation: a historical perspective. Microbes & Infection 18:450-459. Forsdyke DR, 2016


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 10, Shawn McGlynn commented:

      With the trees from this phylogeny paper now available, we can resolve the discussion between myself and the authors (below) and conclude that there is no evidence that nitrogenase was present in the LUCA as the authors claimed in their publication.

      In their data set, the authors identified two clusters of proteins which they refer to as NifD: clusters 3058 and 3899. NifD binds the metal cluster of nitrogenase and is required for catalysis. In the authors' protein groups, cluster 3058 comprises 30 sequences, and 3899 comprises 10 sequences. Inspection of these sequences reveals that neither cluster contains any actual NifD sequences. This can be said with certainty since biochemistry has demonstrated that the metal cofactor coordinating residues Cys<sup>275</sup> and His<sup>442</sup> (using the numbering scheme of the Azotobacter vinelandii NifD sequence) are absolutely required for activity. NONE of the 40 sequences analyzed by the authors contain these residues. Therefore, NONE of these sequences can have the capability to bind the nitrogenase metal cluster, and it follows that none of them would have the capacity to reduce di-nitrogen. The authors have not analyzed a single nitrogenase sequence and are therefore disqualified from making claims about the evolution of the protein; the claims made in this paper about nitrogenase cannot be substantiated with the data that have been analyzed. The sequences contained in the authors' "NifD" protein clusters are closely related homologs involved in nitrogenase cofactor biosynthesis, within a large family of related proteins (which includes real NifD proteins, but also proteins involved in bacteriochlorophyll and Ni porphyrin F430 biosynthesis). While the authors' analyzed proteins are more related to nitrogen metabolism than to F430 or bacteriochlorophyll biosynthesis, they are not nitrogenases; they are nitrogenase homologs that complete assembly reactions.
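      The kind of residue check described above can be sketched in a few lines of Python. This is illustrative only: the sequences and positions below are toy examples, not real NifD data, and in practice each position must first be mapped through a multiple-sequence alignment to the A. vinelandii numbering.

```python
def has_required_residues(seq, required):
    """True if `seq` carries every required amino acid at its (1-based) position,
    e.g. {275: 'C', 442: 'H'} for the NifD metal-cluster ligands in
    A. vinelandii numbering (positions must first be mapped via an alignment)."""
    return all(len(seq) >= pos and seq[pos - 1] == aa
               for pos, aa in required.items())

# Toy positions and sequences, chosen only to show the logic.
required = {3: 'C', 7: 'H'}
print(has_required_residues("AKCDEFHIK", required))  # True: Cys at 3, His at 7
print(has_required_residues("AKADEFGIK", required))  # False: ligands absent
```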

      Other than not having looked at any sequences capable of catalyzing nitrogen reduction, the presentation of two "NifD" clusters highlights important problems with the methods used in this paper, which affect the entire analysis and conclusions. First, two clusters were formed for one homologous group, which should not have occurred if the goal was to investigate ancestry. Second, by selecting small clusters from whole trees, the authors were able to prune the full tree until they recovered small subtrees which show monophyly of archaea and bacteria. However, it was incorrect to ignore the entire tree of homologs and present only two small clusters from a large family. This is "cherry" picking to the extreme - in this case it is "nitrogenase" picking - and it is very likely that this problem of pruning until the desired result is reached sullies many if not all of the protein families and conclusions in the paper; for example, the radical SAM tree was likely pruned in this same way, with the incorrect conclusion being reached (like nitrogenase, a full tree of radical SAM homologs does not recover the archaea-bacteria split in protein phylogenies either). Until someone does a complete analysis with full trees, the claims of this paper will remain unproven and misleading, since they are based on selective sampling of information. It would seem that the authors have missed the full trees whilst being lost in mere branches of their phylogenetic forest of 286,514 protein clusters.

      In a forthcoming publication, I will discuss in detail the branching position of the NifD homologs identified by the authors, as well as the possible evolutionary trajectory of the whole protein family with respect to the evolution of life and the nitrogen cycle on this planet in more detail, including bona fide NifD proteins which I have already made comment on below in this PubMed Commons thread.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 20, Madeline C Weiss commented:

      The trees and also the alignments for all 355 proteins are available on our resources website:

      http://www.molevol.de/resources/index.html?id=007weiss/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Dec 25, Shawn McGlynn commented:

      Unfortunately, the points raised by Professor Martin do not address the problem I raised in my original comment, which I quote from below: "nitrogenase protein phylogeny does not recover the monophyly of the archaea and bacteria." As I wrote, the nitrogenase protein is an excellent example of violating the authors' criterion for judging a protein to be present in the LUCA, namely that "its tree should recover bacterial and archaeal monophyly" (quoted from Weiss et alia). Therefore it should not be included in this paper's conclusions.

      Let's be more specific about this and look at a phylogenetic tree of the nitrogenase D peptide (sometimes referred to as the alpha subunit). This peptide binds the catalytic metal-sulfur cluster and its phylogeny can be viewed on my google site https://sites.google.com/site/simplyshawn/home.

      I colored archaeal versions red and bacterial versions black. You can see that this tree does not recover the monophyly of the archaea and bacteria, and therefore the protein should not be included in the authors' set of LUCA proteins.

      Is what I display the result of a tree-construction error? Probably not: this tree looks pretty much the same as every other tree published by various methods, so it seems to correctly reflect sequence evolution as we understand it today. The tree I made just has more sequences; it can be compared directly with Figure 1 in Leigh 2000, Figure 2 in Raymond et alia 2004, and Figure 2 in Boyd et alia 2011. Unfortunately, Weiss and others do not include any trees in their paper, so it is impossible to know what they derived their conclusions from, but it would be very difficult to imagine that they constructed a tree different from all of these.

      Could it be that all these archaea obtained nitrogenase by horizontal gene transfer after the enzyme emerged in bacteria? Possibly, although this would imply that it was not in the LUCA as the authors claim.

      Could it be that the protein developed in methanogens and was then transferred into the bacterial domain? Yes, and Boyd and others suggested just this in their 2011 Geobiology paper. This would also mean that the protein was not in the LUCA.

      Could it be that the protein was present in the LUCA as Weiss and co-authors assert? Based on phylogenetic analysis, no.

      As Prof. Martin writes, there certainly is more debate to be had about nitrogenase age than was visible in my first comment. However, we can be sure that the protein does not recover the archaea-bacteria monophyly, and should not have been included in the authors' paper.

      Prof. Martin might counter my arguments here by saying something about metal dependence and treating different sequences separately (for example Anf, Vnf, and MoFe types). However, let us remember that the sequences are all homologous. Metal binding is one component of the nitrogenase phenotype, but all nitrogenases are homologous and descend from a common ancestor.

      Now that we can be sure that nitrogenase does not conform to the authors' second criterion for judging presence in the LUCA, let us examine whether the protein conforms to the first criterion: "the protein should be present in at least two higher taxa of bacteria and archaea". In fact, all nitrogenases in archaea that are found in the NCBI and JGI databases occur only within the methanogenic euryarchaeota. Unfortunately, Weiss and coauthors do not define what "higher taxa" means to them in their article, but it should be questioned whether having a gene represented by members of a single phylum actually constitutes being present within "two higher taxa". Archaea are significantly more diverse than what is observed in the methanogenic euryarchaeota. Surely, if a protein were present in the LUCA, it would be a bit more widely distributed, and it would be easy to argue that the presence of nitrogenase in only one phylum provides evidence that it does not conform to the authors' criterion number one. Thus, the picture that emerges from a closer look at nitrogenase phylogeny and distribution is that the protein violates both of the authors' criteria for inclusion in the LUCA protein set.

      Let me summarize:

      1) Nitrogenase does not recover the bacterial and archaeal monophyly and therefore violates the authors' criterion number 2.

      2) Nitrogenase in archaea is found only within the methanogenic euryarchaeota and is not broadly distributed, and therefore also seems to violate the authors' criterion number 1.

      3) From a phylogenetic perspective, the nitrogenase protein should not be included as a candidate to be present in the LUCA.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Oct 12, William F Martin commented:

      There is an ongoing debate in the literature about the age of nitrogenase.

      In his comment, McGlynn favours published interpretations that molybdenum nitrogenase arose some time after the Great Oxidation Event 2.5 billion years ago (1). A different perspective on the issue is provided by Stüeken et al. (2), who found evidence for Mo-nitrogenase before 3.2 billion years ago. Our recent paper (3) traced nitrogenase to LUCA, but also suggested that methanogens are the ancestral forms of archaea, in line with both phylogenetic (4) and isotope (5) evidence for the antiquity of methanogens, and with a methanogen origin of nitrogenase (6).

      Clearly, there had to be a source of reduced nitrogen at life’s origin before the origin of nitrogenase or any other enzyme. Our data (3) are consistent with the view that life arose in hydrothermal vents and independent laboratory studies show that dinitrogen can be reduced to ammonium under simulated vent conditions (7,8). There is more to the debate about nitrogenase age, methanogen age, and early sources of fixed nitrogen than McGlynn’s comment would suggest.

      1. Boyd, E. S., Hamilton, T. L., and Peters, J. W. (2011). An alternative path for the evolution of biological nitrogen fixation. Front. Microbiol. 2:205. doi:10.3389/fmicb.2011.00205

      2. Stüeken EE, Buick R, Guy BM, Koehler MC. Isotopic evidence for biological nitrogen fixation by molybdenum-nitrogenase from 3.2 Gyr. Nature 520, 666–669 (2015)

      3. Weiss MC, Sousa FL, Mrnjavac N, Neukirchen S, Roettger M, Nelson-Sathi S, Martin WF: The physiology and habitat of the last universal common ancestor. Nat Microbiol (2016) 1(9):16116 doi:10.1038/nmicrobiol.2016.116

      4. Raymann, K., Brochier-Armanet, C. & Gribaldo, S. The two-domain tree of life is linked to a new root for the Archaea. Proc. Natl Acad. Sci. USA 112, 6670–6675 (2015).

      5. Ueno, Y., K. Yamada, N. Yoshida, S. Maruyama, and Y. Isozaki. 2006. Evidence from fluid inclusions for microbial methanogenesis in the early archaean era. Nature 440:516-519.

      6. Boyd, E. S., Anbar, A. D., Miller, S., Hamilton, T. L., Lavin, M., and Peters, J. W. (2011). A late methanogen origin for molybdenum-dependent nitrogenase. Geobiology 9, 221–232.

      7. Smirnov A, Hausner D, Laffers R, Strongin DR, Schoonen MAA. Abiotic ammonium formation in the presence of Ni-Fe metals and alloys and its implications for the Hadean nitrogen cycle. Geochemical Transactions 9:5 (2008) doi:10.1186/1467-4866-9-5

      8. Dörr M, Kassbohrer J, Grunert R, Kreisel G, Brand WA, Werner RA, Geilmann H, Apfel C, Robl C, Weigand W: A possible prebiotic formation of ammonia from dinitrogen on iron sulfide surfaces. Angew Chem Int Ed Engl 2003, 42(13):1540-1543.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Mar 09, Tanai Cardona commented:

      I agree with Shawn that "Nitrogenase does not recover the bacterial and archaeal monophyly and therefore violates the authors' criterion number 2."

      I have a different explanation for why nitrogenase was recovered in LUCA, and it has to do with the tetrapyrrole biosynthesis enzymes related to nitrogenases that, in fact, do recover monophyly for Archaea and Bacteria: namely, the enzyme involved in the synthesis of the Ni-tetrapyrrole cofactor, Cofactor F430, required for methanogenesis in archaea, and the enzymes involved in the synthesis of Mg-tetrapyrroles in photosynthetic bacteria. To this day, the subunits of the nitrogenase-like enzyme required for Cofactor F430 synthesis are annotated as nitrogenase subunits.

      So, what Weiss et al interpreted as a nitrogenase in LUCA, might actually include proteins of the tetrapyrrole biosynthesis enzymes.

      Bill, I think you should make all the trees for each one of the 355 proteins available online. That would be really useful for all of us interested in early evolution! Thank you.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2016 Oct 08, Shawn McGlynn commented:

      This paper uses a phylogenetic approach to "illuminate the biology of LUCA" and applies two criteria to assess whether a given protein-encoding gene was present in the LUCA:

      "the protein should be present in at least two higher taxa of bacteria and archaea, respectively, and (2) its tree should recover bacterial and archaeal monophyly"

      The authors later conclude that "LUCA accessed nitrogen via nitrogenase"; however, the nitrogenase protein is an excellent example of a violation of the authors' criterion (2) above, and therefore it cannot be included in the LUCA protein set based on the authors' own criterion.

      Upon phylogenetic analysis, the nitrogenase alpha subunit protein, which ligates the active site, branches into five clusters. One of these clusters is not well resolved, yet four of the five contain both archaea and bacteria; therefore, a nitrogenase protein phylogeny does not recover the monophyly of the archaea and bacteria.

      Other claims in this paper may deserve scrutiny as well.

      Suggested reading below; if there are others to add, please feel free:

      Raymond, J., Siefert, J. L., Staples, C. R., and Blankenship, R. E. (2004). The natural history of nitrogen fixation. Mol. Biol. Evol. 21, 541–554

      Boyd, E. S., Anbar, A. D., Miller, S., Hamilton, T. L., Lavin, M., and Peters, J. W. (2011a). A late methanogen origin for molybdenum-dependent nitrogenase. Geobiology 9, 221–232.

      Boyd, E. S., Hamilton, T. L., and Peters, J. W. (2011b). An alternative path for the evolution of biological nitrogen fixation. Front. Microbiol. 2:205. doi: 10.3389/fmicb.2011.00205


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 19, DP Zhou commented:

      China's fragile health insurance system cannot support the health system. Patients are forced to spend all their savings to buy medicines in cash with no insurance coverage. For most families, one cancer patient means the bankruptcy of the whole family. In such despair, many patients choose to extort the doctors and the hospitals as a last option to recover some of the cost of medicines.

      The health insurance system in China is a sensitive issue. The state-provided insurance does not cover major illnesses. Private insurance is of poor quality and mostly abused by financial institutions for real estate investment and other speculation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 30, Peter Hajek commented:

      The length of exposure would be relevant if the dosing were comparable, but the damage to mice lungs was caused by doses of nicotine many times above anything a human vaper could possibly get. It is the dose that makes the poison. Many chemicals produce damage at large enough doses, while lifetime exposure to small enough doses is harmless.

      To justify the conclusions about toxicity of vaping, the toxic effect would need to be documented with realistic dosing, and then shown to actually apply to humans (who have much better nicotine tolerance than mice).

      I agree that mice studies with realistic dosing could be useful, though data on changes in lung function in human vapers would be much more informative; and I do appreciate that the warnings of risks in the paper were phrased with caution.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 30, Robert Foronjy commented:

      The study is NOT reassuring to e-cigarette consumers. On the contrary, it shows that nicotine exposure reproduced the lung structural and physiologic changes present in COPD. These changes occurred after only four months of exposure. Even adjusting for the differences in lifespans, this exposure in mice is much briefer than that of a lifelong e-cigarette consumer. I do agree, however, that carefully conducted studies are needed to determine whether there is a threshold effect of nicotine exposure.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Aug 29, Peter Hajek commented:

      Thank you for the explanation. The exposure however was not equivalent. Mice have much faster nicotine metabolism than humans which means that nicotine exposure in mice must be many times higher than in humans to produce the same blood cotinine levels. See the reference below that calculated that mice with comparable cotinine levels were exposed to an equivalent of at least 200 cigarettes per day. In addition to this, mice also have much lower tolerance to nicotine than humans which means that their organs would be much more severely affected even if the levels were comparable.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Aug 29, Robert Foronjy commented:

      Mortality was not reported in the manuscript since no deaths occurred. The exposure was well tolerated by the mice and no abnormal behavior or physiologic stress was noted. At the time of euthanasia, all the internal organs were grossly normal on exam. Cotinine levels in the mice were provided in the study and they are similar to what has been documented in humans who vape electronic cigarettes. We agree that both the mice and human consumers are exposing their lungs to toxic concentrations of nicotine. This is one of the essential points that is expressed by the data presented in the manuscript.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Aug 26, Peter Hajek commented:

      The authors propose a hypothesis that deserves attention, but the study findings need to be interpreted with caution.

      The mice were severely overdosed with nicotine, up to lethal levels for mice and far above what any human vaper would get; see this comment on a previous such study:

      http://journals.plos.org/plosone/article/comment?id=info:doi/10.1371/annotation/5dfe1e98-3100-4102-a425-a647b9459456

      The report does not say how many mice were involved, whether any died during the experiment, or whether effects of nicotine poisoning were detected in other organ systems. This could perhaps be clarified.

      Regarding the relevance to human health, nicotine poisoning normally poses no risk to vapers or smokers because, if nicotine concentrations start to rise above their usual moderate levels, there is an advance warning in the form of nausea, which makes people stop nicotine intake long before any dangerous levels can accrue. (Mice in these types of experiments do not have that option.)

      The study actually provides a reassurance for vapers, to the extent that mice outcomes have any relevance for humans, in that in the absence of nicotine overdose, chronic dosing with the standard ingredients of e-cigarette aerosol (PG and VG) had no adverse effects on mice lungs.

      Peter Hajek


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 04, Egon Willighagen commented:

      This paper raises a number of interesting points about contemporary research. The choice of the word "selfies" is a bit misleading, IMHO, particularly because the article also discusses the internet.

      The problem of selfies partly stems from the liberal idea that research is a free market where researchers have to sell their research and compete for funding. Indeed, I was trained to do so by the generation of researchers above me, and I learned what role conferences (talks, posters) and publication lists (number, venue, etc.) play in this. Using the Internet is just an extension of this, and nothing special; this idea of selfies was introduced before the internet, not after.

      Unfortunately, the Internet is used more for these selfies (publication lists, CVs, announcements) than for actual research: the exchange of research data is still very limited. That is indeed a shame and must change. But I guess it can only really change after the current way research is funded has changed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 13, Andrew R Kniss commented:

      There is a correction to this article that includes corrected yield data for haylage, and an updated overall estimate for the organic yield gap (updated figure is 67%, rather than the originally reported 80%). Correction is here: https://www.ncbi.nlm.nih.gov/pubmed/27824908

      A pdf of the article with corrections made in-line (in blue font) can be downloaded here: https://figshare.com/articles/journal_pone_0161673-CORRECTED_PDF/4234037


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 28, Jaime A. Teixeira da Silva commented:

      There are at least two extremely serious, and possibly purposefully misleading, errors in the terms used in this paper. Or perhaps, as I argue, they are not errors, but reflect a seismic shift in the "predatory" publishing ideology eschewed by Jeffrey Beall.

      Beall refers to such "deceitful" publishers as "predatory" publishers. He even refers to his original paper incorrectly [1]. The original term that Beall coined in 2010 was "predatory" open-access scholarly publishers, referring specifically to open access (OA) publishers. His blog, named "scholarlyoa", also reflects this exclusive focus on OA.

      His purposeful omission of the term OA in this paper published by J Korean Med Sci not only removes OA from the entire title and text, and even from the original definition; it also reflects very lax editorial oversight in the review of this paper. For the past 6 years, Beall has focused exclusively on OA, and has indicated, on multiple occasions on his blog, that he does not consider traditional (i.e., print or non-OA) journals or publishers.

      Why then has Beall purposefully omitted the term OA?

      Why has there been an apparent seismic shift in this paper, and in Beall's apparent position in 2016, in the definition of "predatory"? By purposefully removing the OA limit (it is inconceivable that such an omission by Beall, a widely praised scholar, could have been accidental) and allowing any journal or publisher to be considered "predatory", Beall is no longer excluding the large publishers. These include Elsevier, SpringerNature, Nature Publishing Group, Taylor & Francis / Informa, and Wiley, among the oligopolistic publishers that dominate publishing today [2].

      Does this shift in definition also reflect a shift in Beall's stance regarding traditional publishers? Or does it mean that several of these publishers, who publish now large fleets of OA journals, can no longer be excluded from equal criticism if there is evidence of their “predatory” practices, as listed by Beall [3]?

      The second misleading aspect is that Beall no longer refers to such OA journals as simply "predatory". His definition evolved (the precise date is unclear) to characterize such publishers as "Potential, possible, or probable predatory scholarly open-access publishers" [4] and journals as "Potential, possible, or probable predatory scholarly open-access journals" [5]. Careful examination of this list of words shows that almost any journal or publisher could be classified as "predatory", provided that it fulfilled at least one of the criteria on the Beall list of "predatory" practices.

      So, is Beall referring exclusively to the lists in [4] and [5] in his latest attack on select members of the OA industry, or does his definition also include other publishers that also publish print journals, i.e., non-OA journals?

      Beall needs to explain himself carefully to scientists and to the public, because his warnings and radical recommendations [6] have to be carefully considered in the light of his flexible definitions and swaying lists.

      The issue of deceitful publishers and journals affects all scientists, and all of us are concerned. But we should also be extremely concerned about the inconsistency in Beall's lists and definitions, and the lack of clear definitions assigned to them, because many are starting to call for the use of those lists as "blacklists" to block or ban the publication of papers in such publishers and journals. I stand firmly against this level of discriminatory action until crystal-clear definitions for each entry are provided.

      We should also view the journals that have approved these Beall publications with caution and ask what criteria were used to approve the publication of these papers with faulty definitions.

      Until then, these "warnings" by Beall may in fact represent a danger to freedom of speech and to academics' choice to publish wherever they please, with or without the explicit permission or approval of their research institutes, even though the Beall blog provides some entertainment value and serves as a crude "warning system".

      [1] Beall J. "Predatory" open-access scholarly publishers. Charleston Advis 2010;11:10–17.

      [2] Larivière V, Haustein S, Mongeon P (2015) The Oligopoly of Academic Publishers in the Digital Era. PLoS ONE 10(6): e0127502. doi:10.1371/journal.pone.0127502

      [3] https://scholarlyoa.files.wordpress.com/2015/01/criteria-2015.pdf

      [4] https://scholarlyoa.com/publishers/

      [5] https://scholarlyoa.com/individual-journals/

      [6] Beall J. Predatory journals: Ban predators from the scientific record. Nature 534, 326. doi:10.1038/534326a (also read some pertinent criticism in the comments section of that paper)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 01, Lydia Maniatis commented:

      The logic of this study hinges on the following statement:

      "The perceptual distance between colors was calculated using the receptor-noise limited model of Vorobyev and Osorio (1998; see also Table 1; Supplementary Figure S1), which has recently been validated experimentally (Olsson, Lind, & Kelber, 2015)."

      In no sense is it legitimate to say that Olsson, Lind & Kelber have validated any models, as their own conclusions rest on unvalidated and implausible assumptions, specifically the assumption that the relevant discrimination thresholds "are set by photoreceptor noise, which is propagated into higher order processing."

      This idea (versions of which Teller, 1984, described as the "nothing mucks it up" proviso) is not only untested, it is bizarre, as it leaves open the questions of (a) how and why this "noise" is propagated unchanged through a highly complex feedback and feedforward system whose outcomes (e.g. lightness constancy, first demonstrated by W. Kohler to exist in chicks) resemble logical inference (and which are not noisy in experience), and (b) on what basis, even if we wanted to concede that the visual system is "noisy" (which is a bad idea), we decide, using behavioral data, that this noise originates at the photoreceptor level, and only at that level. Many psychophysicists (equally illegitimately) prefer to cite V1 in describing their results.

      The concept of "noise" is related to the also-illegitimate ideas that neurons act as "detectors" of specific stimuli and that complex percepts are formed by summing up simpler ones.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 31, Thomas Ferguson commented:

      Other than VEGF, there are no solid data to support the idea that the other cytokines tested in this study are involved in human AMD. When the levels of the cytokines were lowered by ranibizumab plus dexamethasone treatment, there was no effect on disease course. It seems that the conclusion of this study should be the opposite: that inflammatory proteins (other than VEGF) are not involved in the pathogenesis of chronic macular edema due to AMD.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 31, Lydia Maniatis commented:

      I think the authors have overlooked a major confound in their stimuli: the structure of the collection of items. If we have three items, for example, they will always form a triangular structure, unless they are in a line; even in a line, they still have a structure, with a middle item and two flanking items. Our visual system is sensitive to structure, including that of a collection of items; Gestalt experiments have shown that this is clearly the case even with much "lower" animals, such as birds. I do not think the authors can discuss this issue meaningfully without taking this factor into account.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 13, Andy Collings commented:

      Jeffrey Friedman and colleagues' response to Markus Meister's paper, Physical limits to magnetogenetics, is available here, https://elifesciences.org/content/5/e17210#comment-2948691685, and is reproduced below:

      On the Physical Limits of Magnetogenetics

      In a recent paper, Markus Meister comments on data published by our groups (and a third employing a different approach) [1] showing that cells can be engineered to respond to an electromagnetic field [2-4]. Based on a set of theoretical calculations, Meister asserts that neither the heat transfer nor mechanical force created by an electromagnetic field interacting with a ferritin particle would be of sufficient magnitude to gate an ion channel and then goes on to question our groups’ findings altogether.

      One series of papers (from the Friedman and Dordick laboratories) employed four different experimental approaches in cultured cells, tissue slices and animals in vivo to show that an electromagnetic field can induce ion flow in cells expressing ferritin tethered to the TRPV1 ion channel [2,3]. This experimental approach was validated in vitro by measuring calcium entry, reporter expression and electrophysiological changes in response to a magnetic field. The method was validated in vivo by assaying magnetically induced changes in reporter expression, blood glucose and plasma hormones levels, and alterations in feeding behavior in mice.

      These results are wholly consistent with those in an independent publication (from the Guler and Deppmann laboratories) in which the investigators fused ferritin in frame to the TRPV4 ion channel [4]. In this report, magnetic sensitivity was validated in vitro using calcium entry and electrophysiological responses as outputs. Additionally, in vivo validation was demonstrated by analyzing magnetically induced behaviors in zebrafish and mice, and through single unit electrophysiological recordings.

      In his paper, Meister incorrectly states our collective view on the operative mechanism [1]. While we are considering several hypotheses, we agree that the precise mechanism is undetermined. Lastly, although mathematical calculations can often be used to model biologic phenomena when enough of the relevant attributes of the system are known, the intrinsic complexity of biologic processes can in other instances limit the applicability of purely theoretical calculations [5]. It is our view that mathematical theory needs to accommodate the available data, not the other way around. We are thus surprised that Meister would stridently question the validity of an extensive data set published by two independent groups (and a third using a different method) without performing any experiments. However, we too are interested in defining the operative mechanism(s) and welcome further discussion and experimentation to bring data and theory into alignment.

      Jeffrey Friedman, Sarah Stanley, Leah Kelly, Alex Nectow, Xiaofei Yu, Sarah F Schmidt, Kaamashri Latcha

      Department of Molecular Genetics, Rockefeller University

      Jonathan S Dordick, Jeremy Sauer

      Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute

      Ali D Güler, Aarti M Purohit, Ryan M Grippo

      Christopher D Deppmann, Michael A Wheeler

      Sarah Kucenas, Cody J Smith

      Department of Biology, University of Virginia

      Manoj K Patel, Matteo Ottolini, Bryan S Barker, Ronald P Gaykema

      Department of Anesthesiology, University of Virginia

      (Laboratory Heads in Bold Lettering)

      References

      1) Meister, M, Physical limits to magnetogenetics. eLife, 2016. 5. http://dx.doi.org/10.7554/eLife.17210

      2) Stanley, SA, J Sauer, RS Kane, JS Dordick, and JM Friedman, Corrigendum: Remote regulation of glucose homeostasis in mice using genetically encoded nanoparticles. Nat Med, 2015. 21(5): p. 537. http://dx.doi.org/10.1038/nm0515-537b

      3) Stanley, SA, L Kelly, KN Latcha, SF Schmidt, X Yu, AR Nectow, J Sauer, JP Dyke, JS Dordick, and JM Friedman, Bidirectional electromagnetic control of the hypothalamus regulates feeding and metabolism. Nature, 2016. 531(7596): p. 647-50. http://dx.doi.org/10.1038/nature17183

      4) Wheeler, MA, CJ Smith, M Ottolini, BS Barker, AM Purohit, RM Grippo, RP Gaykema, AJ Spano, MP Beenhakker, S Kucenas, MK Patel, CD Deppmann, and AD Guler, Genetically targeted magnetic control of the nervous system. Nat Neurosci, 2016. 19(5): p. 756-61. http://dx.doi.org/10.1038/nn.4265

      5) Laughlin, RB and D Pines, The theory of everything. Proc Natl Acad Sci U S A, 2000. 97(1): p. 28-31. http://dx.doi.org/10.1073/pnas.97.1.28


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 05, Alan Roger Santos-Silva commented:

      The spectrum of oral squamous cell carcinoma in young patients

      We read with interest the recent narrative review published by Liu et al. [1] in Oncotarget. The article itself is interesting; however, the authors appear to have misunderstood our article [2], because they seem to believe that there was a cause-effect relationship between orthodontic treatment and tongue squamous cell carcinoma (SCC) at a young age. This idea might provide anecdotal support for the notion that orthodontic treatment can cause persistent irritation of the oral mucosa and lead to oral SCC. Thus, we believe it is relevant to clarify that the current understanding of the spectrum of oral SCC in young patients points to three well-known groups according to demographic and clinicopathologic features: (1) patients aged 40-45 years, highly exposed to alcohol and tobacco, diagnosed with keratinizing oral cavity SCC; (2) patients younger than 45 years, predominantly non-smoking males, diagnosed with HPV-related non-keratinizing oropharyngeal SCC; and (3) patients younger than 40 years, mainly non-smoking and non-drinking females, diagnosed with keratinizing oral tongue SCC (HPV seems not to be a risk factor in this group) [3-5]. Therefore, chronic inflammation triggered by persistent trauma of the oral mucosa must not be considered an important risk factor in young patients with oral cancer.

      References:

      1. Liu X, Gao XL, Liang XH, Tang YL. The etiologic spectrum of head and neck squamous cell carcinoma in young patients. Oncotarget. 2016 Aug 12. doi: 10.18632/oncotarget.11265. [Epub ahead of print].

      2. Santos-Silva AR, Carvalho Andrade MA, Jorge J, Almeida OP, Vargas PA, Lopes MA. Tongue squamous cell carcinoma in young nonsmoking and nondrinking patients: 3 clinical cases of orthodontic interest. Am J Orthod Dentofacial Orthop. 2014; 145: 103-7.

      3. Toner M, O'Regan EM. Head and neck squamous cell carcinoma in the young: a spectrum or a distinct group? Part 1. Head Neck Pathol. 2009; 3: 246-248.

      4. de Castro Junior G. Curr Opin Oncol. 2016; 28: 193-194.

      5. Santos-Silva AR, Ribeiro AC, Soubhia AM, Miyahara GI, Carlos R, Speight PM, Hunter KD, Torres-Rendon A, Vargas PA, Lopes MA. High incidences of DNA ploidy abnormalities in tongue squamous cell carcinoma of young patients: an international collaborative study. Histopathology. 2011; 58: 1127-1135.

      Authors: Alan Roger Santos-Silva [1,2]; Ana Carolina Prado Ribeiro [1,2]; Thais Bianca Brandão [1,2]; Marcio Ajudarte Lopes [1]

      [1] Oral Diagnosis Department, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil.

      [2] Dental Oncology Service, Instituto do Câncer do Estado de São Paulo (ICESP), Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil.

      Correspondence to: Alan Roger Santos-Silva, Department of Oral Diagnosis, Piracicaba Dental School, UNICAMP, Av. Limeira, 901, Areão, Piracicaba, São Paulo, Brazil, CEP: 13414-903. Telephone: +55 19 2106 5320. Email: alanroger@fop.unicamp.br


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 30, Olga Krizanova commented:

      We are aware that there are some papers showing the ineffectiveness of Xestospongin C (Xest) on IP3 receptors. Nevertheless, Xest is a widely accepted inhibitor of IP3 receptors (IP3R), as documented by the majority of IP3R papers and also by the companies selling this product (e.g. Sigma-Aldrich, Cayman Chemical, Abcam, etc.). Since Xest also inhibits voltage-dependent Ca2+ and K+ currents at concentrations similar to those that inhibit the IP3R, it can be regarded as a selective blocker of the IP3R in permeabilized cells. The cell type used in experiments might be of special importance. In our paper we observed the effect of Xest on IP3R1 in four different cell lines: A2780, SKOV3, Bowes and MDA-MB-231. Moreover, we verified the results observed with Xest using another IP3R blocker, 2-APB, and also by IP3R1 silencing. All these results imply that Xest acts as an IP3R inhibitor. Recently, a paper with the more specific Xestospongin B was published, but unfortunately this compound is not yet commercially available.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 22, Darren Boehning commented:

      Xestospongin C (Xest) does not inhibit IP3R channels. See PMID: 24628114, PMCID: PMC4080982, DOI: 10.1111/bph.12685. There are other well-documented examples in the literature.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 27, Atanas G. Atanasov commented:

      This is indeed a very promising research area. Thanks to the authors for the good work!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 16, Ellen M Goudsmit commented:

      It should be noted that the PACE trial did not assess pacing as recommended by virtually all patient groups. This behavioural strategy is based on the observation that minimal exertion tends to exacerbate symptoms, plus the evidence that many with ME and CFS cannot gradually increase activity levels for more than a few days because of clinically significant adverse reactions [1]. It does not make any assumptions about aetiology.

      The authors state that “It should be remembered that the moderate success of behavioural approaches does not imply that CFS/ME is a psychological or psychiatric disorder.” I submit that this relates to CBT and GET and not to strategies such as pacing. It might be helpful here to remind readers that the GET protocol for CFS/ME (as tested in most RCTs) is partly based on an operant conditioning theory, which is generally regarded as psychological [2]. The rehabilitative approaches promoted in the UK, i.e. CBT and GET, tend to focus on fatigue and sleep disorders, both of which may be a result of stress and psychiatric disorders, e.g. depression. A review of the literature from the 'medical authorities' in the UK shows that, almost without exception, they tend to limit the role of non-psychiatric aetiological factors to the acute phase and that somatic symptoms are usually attributed to fear of activity and the physiological effects of stress.

      I informed the editor that, as it read, the paper suggests that (1) patients have no sound medical source to support their preference for pacing and that (2) the data from the PACE trial provide good evidence against this strategy. I clarified that the trial actually evaluated adaptive pacing therapy (a programme including advice on stress management and a version of pacing that permits patients to operate at 70% of their estimated capability). The editor chose not to investigate this issue in the manner one expects from an editor of a reputable journal. In light of the above issues, the information about pacing in this paper may mislead readers.

      Interested scientists may find an alternative analysis of the differing views highly illuminating [3].

      [1]. Goudsmit, EM., Jason, LA, Nijs, J and Wallman, KE. Pacing as a strategy to improve energy management in myalgic encephalomyelitis/chronic fatigue syndrome: A consensus document. Disability and Rehabilitation, 2012, 34, 13, 1140-1147. doi: 10.3109/09638288.2011.635746.]

      [2]. Goudsmit, E. The PACE trial. Are graded activity and cognitive-behavioural therapy really effective treatments for ME? Online 18th March 2016. http://www.axfordsabode.org.uk/me/ME-PDF/PACE trial the flaws.pdf

      [3]. Friedberg, F. Cognitive-behavior therapy: why is it so vilified in the chronic fatigue syndrome community? Fatigue: Biomedicine, Health & Behavior, 2016, 4, 3, 127-131. http://www.tandfonline.com/doi/full/10.1080/21641846.2016.1200884


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 15, Lily Chu commented:

      As a member of the Institute of Medicine Committee, I talked to multiple patients, caregivers, clinicians, and researchers. The problem they have with the name "CFS" goes beyond psychological stigma. For one, fatigue is only one symptom of the disease but not even the most disabling one for patients. Post-exertional malaise and cognitive issues are. Secondly, most patients and families are concerned about psychological implications not because of stigmatization but simply because CFS is NOT a psychological or psychiatric condition. Some patients experience co-morbid depression, acknowledge its presence, and receive treatment for it. In support groups, patients discuss depression and anxiety without fear of stigma. The problem comes when clinicians or researchers conflate patients' depression with their CFS and conclude that they can treat the latter condition with cognitive behavioral therapy or with SSRIs. An analogy would be if tomorrow, patients experiencing myocardial infarcts and major depression were told aspirin, beta-blockers, cholesterol medication, etc. would no longer be the treatments for myocardial infarcts but instead SSRIs would be. Could you imagine how patients would feel in that circumstance? That is why they are concerned.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 10, ROBERT COMBES commented:

      Robert Combes and Michael Balls

      In a recent exchange of views, in PubMed Commons, with Simon Chapman on the effectiveness and safety of vaping for achieving the cessation of tobacco smoking, provoked by a paper published by Martin McKee [and comments therein], Clive Bates has criticised one of our publications. The paper in question urges caution concerning any further official endorsement of electronic cigarettes (ECs), at least until more safety data (including results from long-term tests) have become available. Bates questions why we should write on such issues, given our long-standing focus on ‘animal rights’, as he puts it, and from this mistaken assumption he makes the remarkably illogical deduction that our paper is without merit. Bates also implies that our views should not be taken seriously, because we published in Alternatives to Laboratory Animals (ATLA), a journal owned by FRAME (Fund for the Replacement of Animals in Medical Experiments), an organisation with which we have been closely associated in the past.

      We have written a document to correct Bates' misconceptions about who we are, what our experience is, why we decided to write about this topic in the first place, what we actually said, and why we said it. In addition, we have elaborated on our views concerning the regulatory control of e-cigarettes, in which we explain in detail why we believe the current policy being implemented by PHE lacks a credible scientific basis.

      We make several suggestions to rectify the situation, based on our careers specialising in cellular toxicology: a) the safety of electronic cigarettes should be seen as a problem to be addressed, primarily by applying toxicological principles and methods, to derive relevant risk assessments, based on experimental observations and not opinions and guesswork; b) such assessments should not be confused with arguments in favour of vaping based on how harmful smoking is, and on the results of chemical analysis; c) it would be grossly negligent if the relevant national regulatory authorities were to continue to ignore the increasingly convincing evidence suggesting that exposure to nicotine can lead to serious long-term, as distinct from acute, effects, related to carcinogenicity, mutagenicity (manifested as DNA and chromosomal damage) and reproductive toxicity; and d) only once such information has been analysed, together with the results of other testing, should risks from vaping be weighed against risks from not vaping, to enable properly informed choice.

      Due to space limitations, the pre-publication version of the complete document has to be downloaded from: https://www.researchgate.net/publication/307958871_Draft_Response_regarding_comments_made_by_Clive_Bates_about_one_of_our_publications_on_the_safety_of_electronic_cigarettes_and_vaping and our original publication is available from: https://www.researchgate.net/publication/289674033_On_the_Safety_of_E-cigarettes_I_can_resist_anything_except_temptation1

      We hope that anyone wishing to respond will carefully read these two documents before doing so.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 24, Clive Bates commented:

      In response to Professor Daube, I am pleased to have the opportunity to explain a different and less authoritarian approach to the public health challenges of smoking.

      1. But let me start with a misunderstanding. Professor Daube accuses me of a personal attack on Professor McKee. In fact, I made five specific substantive comments on Professor McKee's short letter, to which Professor Stimson added a further two. These are corrections of fact and understanding, not a 'personal attack'. It is important that academics understand and recognise this distinction.

      2. Professor Daube draws the reader's attention to a link to an investor presentation by Imperial Tobacco. I am unsure what point he is trying to make. Nevertheless, the presentation paints a rosy picture of life in Australia for this tobacco company: it is "on track" (p6); it has "continued strong performance in Australia" (p15); in Australia it is "continuing to perform strongly - JPS equity driving share, revenue and profit growth" (p31). It may be a hard pill to swallow, but tobacco companies in Australia are very profitable indeed, in part because the tax regime allows them to raise underlying pre-tax prices easily.

      3. It's a common error of activists to believe that harm to tobacco companies is a proxy for success in tobacco control (an idea sometimes known as 'the scream test'). If that were the case, the burgeoning profitability of tobacco companies would be a sign of utter failure in tobacco control [1]. We should instead focus on what it takes to eliminate smoking-related disease. If that means companies selling products that don't kill the user instead of products that do, then so be it - I consider that is progress. If your alternative is to use coercive policies to stop people using nicotine at all, then you may make progress... but it will be slow and laborious, smoking will persist for longer and many more people will be harmed as a result. These are the unintended consequences of taking more dogmatic positions that seem tougher, but are less effective.

      4. In any event, my concerns are not about the welfare of the tobacco industry in Australia or anywhere else. My concern, as I hope I made clear in my response to Professor Chapman, is the welfare of the 2.8 million Australians (16% of adults) who continue to smoke despite Australia's tobacco control efforts. For them, the serious health risks of smoking are compounded by some Australian tobacco control policies that are punitive (Australia is not alone in this), while they are denied low-risk alternatives. All the harms caused by both smoking and anti-smoking policies can be mitigated and the benefits realised by making very low-risk alternatives to combustible cigarettes (for example, e-cigarettes or smokeless tobacco) available to smokers to purchase with their own money and of their own volition. Professor Daube apparently opposes this simple liberal idea - that the state should not intervene to prevent people improving their own health in a way that works for them and harms no-one else.

      5. Professor Daube finishes his contribution with what I can only assume is an attempted smear in pointing out that I sometimes speak at conferences where the tobacco industry is present, as if this is somehow, a priori, an immoral act. I speak at these events because I have an ambitious advocacy agenda about how these firms should evolve from being 'merchants of death' into supplying a competitive low-risk recreational nicotine market, based on products that do not involve combustion of tobacco leaf, which is the source of the disease burden. So I, and many others, have a public health agenda - the formation of a market for nicotine that will not kill one billion users in the 21st Century, and that will perhaps avoid hundreds of millions of premature deaths [2]. There is a dispute about how to do this, and no doubt Professor Daube has ideas. However, the policy proposals for the so-called 'tobacco endgame' advanced by tobacco control activists do not withstand even cursory scrutiny [3]. The preferred approach of advocates of 'tobacco harm reduction', among which I include myself, involves a fundamental technology transformation, a disruptive process that has started and is synergistic with well-founded tobacco control policies [4]. If, like me, you wish to see a market change fundamentally, then it makes sense to talk to and understand every significant actor in the market, rather than only those whose convictions you already share.

      References & further reading

      [1] Bates C. Who or what is the World Health Organisation at war with? The Counterfactual, May 2016 [link].

      [2] Bates C. A billion lives? The Counterfactual, November 2015 [link] and Bates C. Are we in the endgame for smoking? The Counterfactual, February 2015 [link]

      [3] Bates C. The tobacco endgame: a critique of the policy ideas. The Counterfactual, March 2015 [link]

      [4] Bates C. A more credible endgame - creative destruction. The Counterfactual, March 2015 [link].


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Aug 25, Clive Bates commented:

      As I think Professor Daube's comment contains inappropriate innuendo about my motives, let me repeat the disclosure statement from my initial posting:

      Competing interests: I am a longstanding advocate for 'harm reduction' approaches to public health. I was director of Action on Smoking and Health UK from 1997-2003. I have no competing interests with respect to any of the relevant industries.

      My hope is that prominent academics and veterans of the struggles of the past will adopt an open mind towards the right strategy for reducing the burden of death and disease caused by smoking as we go forward. While he may not like the idea, Professor Daube can surely see that 'tobacco harm reduction' is a concept supported by many of the top scientists and policy thinkers in the field, including the Tobacco Advisory Group of the Royal College of Physicians. It is not the work of the tobacco industry and cannot be dismissed just by claiming it is in their interests.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Aug 24, Mike Daube commented:

      As part of his lengthy and personalised attacks on Martin McKee, Clive Bates argues that “we certainly should not” look to Australia for policy inspiration.

      This view, and some of his other comments, would have strong support from the global tobacco industry, which has ferociously opposed the evidence-based action to reduce smoking taken by successive Australian governments, and reports that we are “the darkest market in the world”. (1)

      No doubt Mr Bates will be able to discuss these issues further with tobacco industry leaders at the Global Tobacco & Nicotine Forum (“the annual industry summit”) in Brussels later this year, where as in previous years he is listed as a speaker.(2)

      References 1. Brisby D, Pramanik A, Matthews P, Kutz O, Kamaras A. Imperial Brands PLC Investor Day: Jun 8 2016. Transcript – Quality Growth: Returns and Growth – Markets that Matter [p.6] & Presentation Slides – Quality Growth: Returns and Growth – Markets that Matter [slide 16]. http://www.imperialbrandsplc.com/Investors/Results-centre.

      1. http://gtnf-2016.com/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Aug 22, Clive Bates commented:

      Some responses to Professor Simon Chapman:

      1. Professor Chapman criticises the Public Health England and Royal College of Physicians consensus on the relative risk of smoking and e-cigarette use by referring to a comment piece Combes RD, 2015 in the journal Alternatives to Laboratory Animals. The piece is written by a commentator whose affiliation is an animal welfare rights campaign (FRAME), for which ATLA is the house journal, and an independent consultant. How these two came to be writing about e-cigarettes at all is not stated, but this is less important than the fact that their commentary provides little of substance to challenge the robust expert-based PHE and RCP analysis, and it provides even less to justify the colourful dismissive pull-out quotes chosen by Professor Chapman. Even though the work can be dismissed on its merits, surely the authors should have disclosed that FRAME has pharmaceutical funders [Our supporters], including companies who make and sell medical smoking cessation products.

      2. Professor Chapman confirms my view that the appropriate statistic to use for comparing Australian prevalence of current smoking is 16.0 percent based on the Australian Bureau of Statistics, National Health Survey: First Results, 2014-15 (see table 9.3). This is the latest data on the prevalence of current adult smoking.

      3. Unless it's to make the numbers look as low as possible, I am unsure why Professors Chapman and McKee choose to refer to a survey from 2013 or why Professor Chapman didn't disclose in his response that he is citing a survey of drug use, including illicit drug use: [see AIHW, National Drug Strategy Household Survey detailed report 2013]. Surely a neutral investigator would be concerned that a state-run survey asking about illicit drug use might have a low response rate? And further, that non-responders would be more likely to be drug users, and hence also more likely to be smokers - so distorting the prevalence systematically downwards? In fact, the response rate in this survey is just 49.1% [Explanatory notes]. While this might be the best that can be done to understand illicit drug use, it is an unnecessarily unreliable way to gauge legal activity like smoking, especially as a more recent and more reliable survey is available.

      4. The figure of 11% given for smoking in Sweden is not 'daily smoking' as asserted by Professor Chapman. With just a little more research before rushing out his reply, Professor Chapman could have checked the source and link I provided. The question used is: "Regarding smoking cigarettes, cigars, cigarillos or a pipe, which of the following applies to you?" 11% of Swedes answer affirmatively to the response: "You currently smoke".

      5. If we are comparing national statistics, it is true that measured smoking prevalence in Britain is a little higher than in Australia - the latest Office for National Statistics data suggests 17.5 percent of adults age 16 and over were current smokers in 2015 (derived from its special survey of e-cigarette use: E-cigarette use in Great Britain 2015). So what? The two countries are very different both today and in where they have come from, and many factors explain smoking prevalence - not just tobacco control policy. But if one is to insist on such comparisons, official data from the (until now) vape-friendly United States suggests that American current adult smoking prevalence, at 15.1 percent, is now below that of Australia [source: National Center for Health Statistics, National Health Interview Survey, 1997–2015, Sample Adult Core component. Figure 8.1. Prevalence of current cigarette smoking among adults aged 18 and over: United States, 1997–2015]

      6. Regressive taxes are harmful and so is stigmatisation - I shouldn't need to reference that for anyone working in public health. Any thoughtful policy maker will not only try to design policies that achieve a primary objective (reduce the disease attributable to smoking) but also be mindful that the policies themselves can be a source of harm or damaging in some other way. Ignoring the consequences of tobacco policies on wider measures of wellbeing is something best left to fanatics. In public health terms, these consequences may be considered 'a price worth paying' to reduce smoking, but they create real harms for those who continue to smoke, and in my view, those promoting them have an ethical obligation to mitigate these wider harms to the extent possible.

      7. The approach, favoured by me and many others, of supporting (or in Australia's case of not actively obstructing) ways in which smokers can more easily move from the most dangerous products to those likely to cause minimal risk has twin advantages:

      • (1) it helps to achieve the ultimate goal of reducing cancer, cardiovascular disease, and respiratory illnesses by improving the responsiveness of smokers to conventional tobacco control policy. It does this by removing the significant barrier of having to quit nicotine completely, something many cannot do easily or choose not to do.

      • (2) It does this in a way that goes with the grain of consumer preferences and meets people where they are. This is something for public health to rediscover - public health should be about 'enabling', not bullying or nannying, and go about its business with humility and empathy towards those it is trying to help.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2016 Aug 22, Clive Bates commented:

      As an aside, it's disappointing to see Professor Chapman spreading doubt about e-cigarettes with reference to the filters and 'light and mild' cigarette fiasco (see the 1999 report by Martin Jarvis and me on this fiasco). This 'science-by-analogy' fails because it misunderstands the nicotine-seeking behaviour that underpins both smoking and vaping.

      With light and mild cigarettes, health activists were fooled into believing that these cigarettes would be much less risky, even though they are no less risky. It would be wrong to compound this error by implying that e-cigarettes are not much less risky, even though they are sure to be.

      The underlying reason for both errors is the same - nicotine users seek a roughly fixed dose of nicotine (a well-understood process, known as titration). If a vaper can obtain their desired nicotine dose without exposure to cigarette smoke toxins, then they will not suffer the smoking-related harms. With light and mild cigarettes, both nicotine and toxins were diluted equally with air to fool smoking machines. However, human smokers adjusted their behaviour to get the desired dose of nicotine and so got almost the same exposures to toxins. This is another well-understood process known as 'compensation'. I am sure a global authority of Professor Chapman's stature would be aware of these mechanisms, so it is all the more perplexing that he should draw on this analogy in his campaign against e-cigarettes.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2016 Aug 22, Simon Chapman commented:

      Clive Bates' efforts to correct points made in Martin McKee’s letter in turn require correction and comment. Bates disputes the point that there was a single source for the claim that e-cigarettes are “95% safer” than smoking (in fact Public Health England stated “95% less harmful” [1], a critical difference). Bates cites two references in support of his claim, but both of these are nothing but secondary references, with both citing the same Nutt et al [2] 95% less harmful estimate as their primary source.

      Two toxicologists have written an excoriating critique of the provenance of the “95% less harmful” statement, describing its endorsement as “reckless”[3] and nothing but the consensus of the opinions of a carefully hand-picked group. The 95% estimate remains little more than a factoid – a piece of questionable information that is reported and repeated so often that it becomes accepted as fact.

      We will not have an evidence-based comparison of harm until we have cohort data in the decades to come comparing mortality and morbidity outcomes from exclusive smokers versus exclusive vapers and dual users. This was how our knowledge eventually emerged of the failure of other mass efforts at tobacco harm reduction: cigarette filters and the misleading lights and milds fiasco.

      Bates challenges McKee’s statement that Australian smoking prevalence is “below 13%” and cites Australian Bureau of Statistics (ABS) data from 2014-15 derived from a household survey of 14,700 dwellings that shows 16% of those aged 18+ were “current” smokers (14.5% smoking daily). McKee was almost certainly referring to 2013 data from the Australian Institute of Health and Welfare’s (AIHW) ongoing national surveys based on interviews with some 28,000 respondents which showed 12.8% of age 14+ Australians smoked daily, with another 2.2% smoking less than daily[4]. The next AIHW survey will report in 2017 and with the impact of plain packaging, several 12.5% tobacco tax increases, on-going tobacco control campaigning and a downward historical trend away from smoking, there are strong expectations that the 2017 prevalence will be even lower.

      Bates cites a 2015 report saying that Sweden has 11% smoking prevalence. This figure is almost certainly daily smoking prevalence data, not total smoking prevalence that Bates insists is the relevant figure that should be cited for Australia. If so, the comparable figure for Sweden should also be used. In 2012 the Swedish Ministry of Health reported to the WHO that 22% of Swedish people aged 16-84 currently smoked (11% daily and 11% less than daily) [5]. It is not credible that Sweden could have halved its smoking prevalence in three years.

      Meanwhile, England, with a current smoking prevalence of 18.2% as of July 2016 [6 – slide 1], trails Australia, regardless of whether the ABS or AIHW data are used. Also, the proportion of English smokers who smoked in the last year and who tried to stop smoking is currently the lowest recorded in England since 2007 [6 – slide 4].

      Bates says that the UK and the USA, where e-cigarette use is widespread, have seen “recent sharp falls” in smoking prevalence. In fact smoking prevalence has been falling in both nations for many years prior to the advent of e-cigarettes, as it has in Australia, where e-cigarettes are seldom seen. Disturbingly, in the USA the decline in youth smoking has come to a halt after 2014 [7], following continuous falls for at least a decade – well before e-cigarette use became popular. The spectacular increase in e-cigarette use among youth, particularly between 2013 and 2015 (see Figure 1 in reference 7), was either coincident with, or possibly partly responsible for, that halting.

      Finally, Bates makes gratuitous, unreferenced remarks about “harms” arising from Australia’s tobacco tax policy and “campaigns to denormalise smoking”. There are no policies or campaigns to denormalise smoking in Australia that are not also in place in the UK or the USA, as well as many other nations. When Bates was director at ASH he vigorously campaigned for tobacco taxes to be high and to keep on increasing [8]. His current views make an interesting contrast with even the CEO of British American Tobacco Australia, who agrees that tax has had a major impact on reducing smoking, telling an Australian parliamentary committee in 2011: “We understand that the price going up when the excise goes up reduces consumption. We saw that last year very effectively with the increase in excise. There was a 25 per cent increase in the excise and we saw the volumes go down by about 10.2 per cent; there was about a 10.2 per cent reduction in the industry last year in Australia.” [9].

      References

      1 Public Health England. E-cigarettes: a new foundation for evidence-based policy and practice. Aug 2015. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/454517/Ecigarettes_a_firm_foundation_for_evidence_based_policy_and_practice.pdf

      2 Nutt DJ et al. Estimating the harms of nicotine-containing products using the MCDA approach. Eur Addict Res 2014;20:218-25.

      3 Combes RD, Balls M. On the safety of e-cigarettes: “I can resist anything except temptation.” ATLA 2015;42:417-25. https://www.researchgate.net/publication/289674033_On_the_Safety_of_E-cigarettes_I_can_resist_anything_except_temptation1

      4 Australian Institute of Health and Welfare. National Drug Household Survey. 2014 data and references. http://www.aihw.gov.au/WorkArea/DownloadAsset.aspx?id=60129548784

      5 Swedish Ministry for Health and Social Affairs. Reporting instrument of the WHO Framework Convention on Tobacco Control 2012 (13 April) http://www.who.int/fctc/reporting/party_reports/sweden_2012_report_final_rev.pdf

      6 Smoking in England. Top line findings STS140721 5 Aug 2016 http://www.smokinginengland.info/downloadfile/?type=latest-stats&src=13 (slide 1)

      7 Singh T et al. Tobacco use among middle and high school students — United States, 2011–2015. http://www.cdc.gov/mmwr/volumes/65/wr/mm6514a1.htm MMWR April 15, 2016 / 65(14);361–367

      8 Bates C Why tobacco taxes should be high and continue to increase. 1999 (February) http://www.ash.org.uk/files/documents/ASH_218.pdf

      9 The Treasury. Post-implementation review: 25 per cent tobacco excise increase. Commonwealth of Australia 2013; Feb. http://ris.dpmc.gov.au/files/2013/05/02-25-per-cent-Excise-for-Tobacco.doc p15


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2016 Aug 21, Gerry Stimson commented:

      Clive Bates (below) identifies five assertions by Martin McKee that need correction: there are two more, making seven in McKee's eleven-line letter.

      First, McKee states that ‘It is misleading to suggest that there is a consensus on e-cigarettes in England, given that many members of the health community have continuing reservations’ and quotes one short BMA statement that calls for medical regulation of e-cigarettes.

      He ignores the ‘public health consensus statement’ from English public health, medical, cancer and tobacco control organisations that supports e-cigarettes for quitting smoking. The consensus statement says that ‘We all agree that e-cigarettes are significantly less harmful than smoking.’ [1, 2]. The first edition of this statement [1] explicitly challenges McKee’s position on the evidence. The consensus statement is endorsed by Public Health England, Action on Smoking and Health, the Association of Directors of Public Health, the British Lung Foundation, Cancer Research UK, the Faculty of Public Health, Fresh North East, Healthier Futures, Public Health Action, the Royal College of Physicians, the Royal Society for Public Health, the UK Centre for Tobacco and Alcohol Studies and the UK Health Forum. McKee and the BMA are minority outliers in England and the UK.

      The PHE report on e-cigarettes faced a backlash, but this came from a few public health leaders, including McKee, who organised a behind-the-scenes campaign against the report, including a critical editorial and comment in the Lancet and an editorial in the BMJ, backed up by a media campaign hostile to PHE. Emails revealed as a result of a Freedom of Information request show that this backlash was orchestrated by McKee and a handful of public health experts [3, 4].

      Second, McKee misrepresents and misunderstands drugs harm reduction. He cites Australia, and it was indeed in Australia (as in the UK) that the public health successes in preventing the spread of HIV infection and other adverse aspects of drug use were driven by harm reduction – including engaging with drug users, outreach to drug users, destigmatisation, provision of sterile needles and syringes, and methadone [5, 6, 7]. Drugs harm reduction was a public health success [4, 6]. The UK and other countries that implemented harm reduction avoided a major epidemic of drug-related HIV infection of the sort that has been experienced in many countries. Drugs harm reduction was implemented despite drug demand and supply reduction measures, not, as McKee asserts, because it was part of a combined strategy including demand and supply reduction. McKee’s position is out of step with the Open Society Institute, of which he chairs the Global Health Advisory Committee; OSI has resourced drugs harm reduction and campaigns against the criminalisation of drugs, i.e. those demand and supply reduction measures that maximise harm.

      1 Public health England (2015) E-cigarettes: a developing public health consensus. https://www.gov.uk/government/news/e-cigarettes-an-emerging-public-health-consensus

      2 Public health England (2016) E-cigarettes: a developing public health consensus. https://www.gov.uk/government/publications/e-cigarettes-a-developing-public-health-consensus

      3 Puddlecote D (2016) Correspondence between McKee and Davies Aug 15 to Oct 15. https://www.scribd.com/doc/296112057/Correspondence-Between-McKee-and-Davies-Aug-15-to-Oct-15. Accessed 07 03 2016

      4 Stimson G V (2016) A tale of two epidemics: drugs harm reduction and tobacco harm reduction. Drugs and Alcohol Today, 16, 3, 1-9.

      5 Berridge V (1996) AIDS in the UK: The Making of Policy, 1981-1994. Oxford University Press.

      6 Stimson G V (1995) AIDS and injecting drug use in the United Kingdom, 1988-1993: the policy response and the prevention of the epidemic. Social Science and Medicine, 41,5, 699-716

      7 Wodak A, (2016) Hysteria about drugs and harm minimisation. It's always the same old story. https://www.theguardian.com/commentisfree/2016/aug/11/hysteria-about-drugs-and-harm-minimisation-its-always-the-same-old-story


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    9. On 2016 Aug 20, Clive Bates commented:

      The author, Martin McKee, makes no less than five assertions in this short letter that demand correction:

      First, that there was only one source for the claim that e-cigarettes are "95% safer" than smoking. In fact, this claim does not rely on a single source but is the consensus view of Public Health England's expert reviewers [1] and a close variation on this claim is the consensus view of the Tobacco Advisory Group of the Royal College of Physicians and is endorsed by the College [2]:

      Although it is not possible to precisely quantify the long-term health risks associated with e-cigarettes, the available data suggest that they are unlikely to exceed 5% of those associated with smoked tobacco products, and may well be substantially lower than this figure. (Section 5.5 page 87)

      Second, that PHE's work was in some way compromised by McKee's "concerns about conflicts of interest". To support this largely self-referential claim, he cites a piece of very poor journalism in which every accusation was denied or refuted by all involved. Please see Gornall J, 2015 including my PubMed Commons critique of this article and a more detailed critique on my blog [3].

      Third, that "other evidence, some not quoted in the review, raised serious questions about the safety of these products". The citation for this assertion is Pisinger C, 2014. This review does not, in fact, raise any credible questions about the safety of these products, and suffered numerous basic methodological failings. For this reason, it was reviewed but then ignored in the Royal College of Physicians' assessment of e-cigarette risk [2 - page 79]. Please see the PubMed Commons critiques of this paper [4].

      Fourth, that adult smoking prevalence in Australia is "below 13%, without e-cigarettes". Both parts of this claim are wrong. The latest official data shows an adult smoking prevalence of 16.0% in Australia [5]. No citation was provided by the author for his claim. E-cigarettes are widely used in Australia, despite a ban on sales of nicotine liquids. Australians purchase nicotine-based liquids internationally over the internet or buy on a thriving black market that has been created by Australia's wholly unjustified de facto prohibition.

      Fifth, that we "should look to Australia" for tobacco policy inspiration. We certainly should not. Australia has a disturbingly unethical policy of allowing cigarettes to be widely available for sale but tries to deny its 2.8 million smokers access to much safer products by banning nicotine-based e-cigarettes. These options have proved extremely popular and beneficial for millions of smokers in Europe and the United States trying to manage their own risks and health outcomes. Further, the author should consider the harms that arise from Australia's anti-smoking policies in their own right, such as high and regressive taxation and stigma that arises from its campaigns to denormalise smoking.

If the author wishes to find a model country, he need not travel as far as Australia. Sweden had a smoking prevalence of 11% in 2015 - an extreme outlier in the European Union, which averages 26% prevalence on the measure used in the only consistent pan-European survey [6]. The primary reason for Sweden's very low smoking prevalence is the use of alternative forms of nicotine (primarily snus, a smokeless tobacco), which pose minimal risks to health and have over time substituted for smoking. This is exactly what we might expect from e-cigarettes and, given the recent sharp falls in adult and youth smoking in both the UK and the US, it does seem likely. Going with the grain of consumers' preferences represents a more humane way to address the risks of smoking than the battery of punitive and coercive policies favoured in Australia.

      Though not specialised in nicotine policy or science, the author is a prolific commentator on the e-cigarette controversy. If he wishes to contribute more effectively, he could start by reading an extensive critique of his own article in the BMJ (McKee M, 2015), which is at once devastating, educational, and entertaining [7].

      References

      [1] McNeill A. Hajek P. Underpinning evidence for the estimate that e-cigarette use is around 95% safer than smoking: authors’ note, 27 August 2015 [link]

      [2] Royal College of Physicians (London) Nicotine without smoke: tobacco harm reduction 28 April 2016 [link]

      [3] Bates C. Smears or science? The BMJ attack on Public Health England and its e-cigarettes evidence review, November 2015 [link]

      [4] Pisinger C, 2014 Bates C. comment [here] and Zvi Herzig [here]

      [5] Australian Bureau of Statistics, National Health Survey: First Results, 2014-15. Table 9.3, 8 December 2015 [link to data]

      [6] European Commission, Special Eurobarometer 429, Attitudes of Europeans towards tobacco, May 2015 [link] - see page 11.

      [7] Herzig Z. Response to McKee and Capewell, 9 February 2016 [link]

      Competing interests: I am a longstanding advocate for 'harm reduction' approaches to public health. I was director of Action on Smoking and Health UK from 1997-2003. I have no competing interests with respect to any of the relevant industries.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.


    2. On 2016 Sep 21, Daniel Himmelstein commented:

Thanks, Dr. Seulbe Lee, for your response. My apologies for the unit mistake. For the record, I had incorrectly used milliliters rather than liters in the denominator of stream concentrations.

I updated my notebook to fix the error. To avoid confusion, I changed the notebook link in my first comment to be version specific. I also performed another analysis, which speculated on potential sewage concentrations of AMPH under the following assumptions:

      • 1 in 4 people orally consume 30 mg of AMPH daily
      • 40% of the consumed AMPH is excreted into the sewage
      • Each person creates 80 gallons of sewage per day

      Under these assumptions, fresh sewage was estimated to contain 9.91 ug/L of AMPH, which is ~10 times higher than the artificial streams. Granted there is likely additional dilution and degradation I'm not accounting for, but nonetheless this calculation shows it's possible that sewage streams from avid amphetamine communities could result in the doses reported by this study.
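The three assumptions above can be turned into a quick back-of-the-envelope script. This is only a sketch of the arithmetic; the dose, usage rate, excretion fraction, and per-person sewage volume are the comment's stated assumptions, not measurements:

```python
# Back-of-the-envelope estimate of AMPH concentration in fresh sewage
# under the stated assumptions (not measured values).
LITERS_PER_GALLON = 3.78541  # US gallon

dose_mg = 30.0            # daily oral AMPH dose per user (assumption)
user_fraction = 0.25      # 1 in 4 people are users (assumption)
excreted_fraction = 0.40  # fraction of the dose excreted to sewage (assumption)
sewage_gallons = 80.0     # sewage produced per person per day (assumption)

# Average AMPH excreted per person per day, in mg.
amph_mg = dose_mg * user_fraction * excreted_fraction
sewage_liters = sewage_gallons * LITERS_PER_GALLON

# Concentration in fresh sewage, in micrograms per liter.
conc_ug_per_liter = amph_mg * 1000 / sewage_liters
print(round(conc_ug_per_liter, 2))  # 9.91 ug/L, ~10x the 1 ug/L artificial streams
```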

      Our research group is continuing work on the ecological effects of multiple contaminants found in these streams.

Glad to hear it. As someone who's swum in both Cresheim Creek and the Mississippi River just this summer, I can appreciate the need to study and reduce the contamination of America's waterways.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Sep 17, Sylvia Seulbe Lee commented:

      Daniel,

Thank you for your comments on the paper. We appreciate your skepticism and critical observations. Although the CNN story mentions that the source of amphetamine in Baltimore streams could be linked to the excrement of illicit drug users, we clarify that our study made no claims about the major source of the amphetamine in the streams we sampled. Illicit or recreational drug use is one potential source of amphetamine. We are unable to distinguish between recreational and prescription drug use in Baltimore, but prescription use of amphetamine (e.g., for the treatment of ADHD, or illicit use by college students prior to exams) may be the primary cause of increased loading, especially given the increasing number of diagnoses and prescriptions of medication for the treatment of ADHD and similar conditions. Another source of amphetamine is improper disposal of prescription medication (flushing it down the toilet).

      We have to point out that your reading of the amphetamine concentrations is incorrect. We measured 0.630 ug/L amphetamine in Gwynns Falls, which is equivalent to 630 ng/L or 0.630 ng/mL. Additionally, we added 1 ng/mL (equivalent to 1 ug/L reported in the paper) amphetamine into the artificial streams, not 1000 ng/mL. Thus, the actual concentrations of amphetamine measured in the field and used in the experiment were 1000 times less than the concentrations you reported.
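The factor-of-1000 discrepancy described in this correction is a plain metric unit conversion; as an illustrative check of the figures quoted above:

```python
# Unit-conversion check for the field measurement of 0.630 ug/L amphetamine.
field_ug_per_l = 0.630
field_ng_per_l = field_ug_per_l * 1000   # 630 ng/L (1 ug = 1000 ng)
field_ng_per_ml = field_ng_per_l / 1000  # 0.630 ng/mL (1 L = 1000 mL)

# The earlier comment read the value as 630 ng/mL, i.e. 1000x too high.
misread_ng_per_ml = 630.0
print(round(misread_ng_per_ml / field_ng_per_ml))  # 1000
```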

      With respect to dilution of pharmaceutical products from sewage to the watershed, we would like to note that the stream we sampled is small (http://www.beslter.org/virtual_tour/Watershed.html) and the wastewater entering these streams is mostly raw, untreated sewage leaking from failing infrastructure. Baltimore has a population of more than 600,000 people and the large number of people feeding waste into that river could create quite a load. In addition, we note that amphetamine degraded by over 80% in the artificial streams. Thus, we noted in the discussion section that the high concentrations found in the field may indicate that the loading of amphetamine into the Baltimore streams is actually higher than the concentrations we measured, or that there is pseudo-persistence of amphetamine because of continuous input into the streams. Our finding that there were ecological effects even with 80% degradation of the parent amphetamine compound in the artificial streams is noteworthy.

Furthermore, we acknowledge that the concentrations of drugs in streams are spatially and temporally variable. As shown in our paper, the concentrations of drugs differed quite a bit between our sampling in 2013 and in 2014. The differences were likely due to high flow events prior to our sampling date in 2013. However, the environmental relevance of the 1 ug/L amphetamine concentration was clearly supported in the paper by higher concentrations found in streams and rivers in other locations (e.g., Spain, India, etc.).

      Finally, we agree completely that there are many pressing and detrimental contaminants in urban streams in Baltimore and elsewhere. Our research group is continuing work on the ecological effects of multiple contaminants found in these streams.

      Regards, Sylvia - on behalf of my co-authors.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Aug 26, Daniel Himmelstein commented:

Preamble: I'm far from an expert on environmental science, just a critical observer skeptical of claims that excrement from recreational drug users harms aquatic environments. Given the ongoing war on drugs, these topics are bound to be political. For example, CNN covered this study with the title "Your drain on drugs: Amphetamines seep into Baltimore's streams." The CNN story concludes that the excrement of illicit meth users is of environmental concern.

      Premise: By the time pharmaceutical products in excrement reach the watershed, they will be extremely diluted. Humans safely tolerate the undiluted dosage, so in general I don't envision the extremely diluted dose harming aquatic life. In cases where the watershed contains high concentrations of pharmaceuticals, I suspect the contamination vector was not the excrement of users, but rather runoff from manufacturing or distribution processes.

      Specifics:

      This study observed the following six concentrations of amphetamine in Baltimore's streams: 3, 8, 13, 28, 101, 630 ng/ml (Table 1). They constructed four artificial streams where they introduced 1000 ng/ml of AMPH (D-amphetamine). Note that the controlled experiment evaluated an AMPH concentration 49 times that of the median concentration in Baltimore streams.

      Furthermore, the Cmax (max concentration in plasma) of D-amphetamine resulting from prescription Adderall is 33.8 ng/ml (McGough JJ, 2003). Accordingly, the artificial streams used an AMPH concentration 30 times that of the blood of an active user. Note that AMPH has a high bioavailability: 75% of the consumed dose enters the blood according to DrugBank. It's unreasonable that runoff from excrement of users could result in a higher concentration than in the blood of the active user.
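As a sketch, the two multipliers quoted in the paragraphs above (49x the median stream concentration, 30x the Adderall Cmax) can be reproduced from the figures as stated in this comment:

```python
import statistics

# Figures as quoted in this comment (Table 1 readings and artificial-stream dose).
stream_conc = [3, 8, 13, 28, 101, 630]  # ng/ml, Baltimore stream measurements
artificial_conc = 1000.0                # ng/ml introduced into artificial streams
adderall_cmax = 33.8                    # ng/ml, D-amphetamine plasma Cmax

print(round(artificial_conc / statistics.median(stream_conc)))  # 49
print(round(artificial_conc / adderall_cmax))                   # 30
```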

      However, the study frames the contamination as a result of excrement. The introduction states:

      Unfortunately, many of the same chemicals are also used illicitly as narcotics. After ingestion of AMPH approximately 30−40% of the parent compound plus its metabolites are excreted in human urine and feces, and these can be transported into surface waters directly or through wastewater treatment facilities. On the basis of increases in both medical and illicit usage, there is cause to speculate that the release of stimulants to various aquatic environments across the globe may be on the rise.

      And the discussion states:

      Our study demonstrates that illicit drugs may have the potential to alter stream structure and function.

      Conclusion:

      Evidence is lacking that excrement from recreational drug users has anything to do with environmentally harmful levels of AMPH in Baltimore streams. There seems to be a bigger issue with pollution in the Baltimore streams, with the study stating:

      As much as 65% of the average flow in the Gwynns Falls can be attributed to untreated sewage from leaking infrastructure

      In such a polluted aquatic environment, I suspect there are several more pressing and detrimental contaminants than recreational drugs. Finally, there are related studies, such as Jiang JJ, 2015, that I haven't had time to investigate.

      Update 2016-09-01:

      Here is more evidence that the 630 ng/ml of amphetamine observed in Gwynns Run at Carroll Park is extremely high. At that concentration, only 7.94 liters of stream water contain an effective dose of AMPH (5 mg). At 1000 ng/ml, 5.0 liters of water contain an effective dose of AMPH.
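The volumes in this update follow directly from dividing the dose by the concentration. A minimal sketch, using the 5 mg effective dose and the concentrations as stated in this comment:

```python
# Volume of water containing one effective AMPH dose, at a given concentration.
def liters_per_dose(dose_mg, conc_ng_per_ml):
    dose_ng = dose_mg * 1e6        # mg -> ng
    ml = dose_ng / conc_ng_per_ml  # volume of water in mL
    return ml / 1000               # mL -> L

print(round(liters_per_dose(5, 630), 2))   # 7.94 L at the Gwynns Run reading
print(round(liters_per_dose(5, 1000), 1))  # 5.0 L at the artificial-stream dose
```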


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 31, Daniela Drandi commented:

Dr. Kumar S. and Colleagues gave a comprehensive description of the role of the ancestors (MFC and ASOqPCR) and the new high-throughput (NGF and NGS) MRD techniques in MM. However, in the “molecular methods for MRD detection” section, the Authors briefly refer to our original work (Drandi D et al. J Mol Diagn. 2015;17(6):652-60) in a way that misinterprets our findings. In fact, in their review the Authors concluded that ddPCR is a “less applicable and more labor intensive” method compared to qPCR. This statement is in contrast to what was observed in our original work, where the comparison between qPCR and ddPCR showed that: 1) ddPCR has sensitivity, accuracy, and reproducibility comparable to qPCR; 2) ddPCR makes it possible to bypass the standard-curve issue, ensuring the quantification of samples with low tumor invasion at baseline or lacking MFC data; 3) ddPCR has a substantial benefit in terms of reduced costs, labor intensiveness, and waste of precious tissue (see Drandi D et al., supplemental table S3). Notably, based on these findings, a standardization process is currently ongoing in both the European (ESLHO-EuroMRD group) and the Italian (Italian Lymphoma Foundation (FIL)-MRD Network) contexts. We agree that ddPCR does not overcome all the limitations of qPCR, including the need, in IGH-based MRD, for patient-specific ASO primers. However, as we showed, ddPCR is a feasible and attractive alternative method for MRD detection, especially in terms of applicability and labor intensiveness.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 24, Jordan Anaya commented:

      I think readers of this article will be interested in a comment I posted at F1000Research, which reads:

      I would like to clarify and/or raise some issues with this article and accompanying comments.

      One: Reviewers Prachee Avasthi and Cynthia Wolberger both emphasized the importance of being able to sort by date, and in response the article was edited to say: "Currently, the search.bioPreprint default search results are ordered by relevance without any option to re-sort by date. The authors are aware of the pressing need for this added feature and if possible will incorporate it into the next version of the search tool."

      However, it has been nearly a year and this feature has not been added.

      Two: The article states: "Until the creation of search.bioPreprint there has been no simple and efficient way to identify biomedical research published in a preprint format..."

      This is simply not true as Google Scholar indexes preprints. This was pointed out by Prachee Avasthi and in response the authors edited the text to include an incorrect method for finding preprints with Google Scholar. In a previous comment I pointed out how to correctly search for preprints with Google Scholar, and it appears the authors read the comment given they utilize the method at this page on their site: http://www.hsls.pitt.edu/gspreprints

      Three: In his comment the author states: "We want to stress that the 'Sort by date' feature offered by Google Scholar (GS) is abysmal. It drastically drops the number of retrieved articles compared to the default search results."

      This feature of Google Scholar is indeed limited, as it restricts the results to articles which were published in the past year. However, if the goal is to find recent preprints then this limitation shouldn't be a problem and I don't know that I would classify the feature as "abysmal".

      Four: The article states: "As new preprint servers are introduced, search.bioPreprint will incorporate them and continue to provide a simple solution for finding preprint articles."

      New preprint servers have been introduced, such as preprints.org and Wellcome Open Research, but search.biopreprint has not incorporated them.

      Five: Prachee Avasthi pointed out that the search.biopreprint search engine cannot find this F1000Research article about search.biopreprint. It only finds the bioRxiv version. In response the author stated: "The Health Sciences Library System’s quality check team has investigated this issue and is working on a solution. We anticipate a quick fix of this problem."

      This problem has not been fixed.

      Competing Interests: I made and operate http://www.prepubmed.org/, which is another tool for searching for preprints.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 25, Cicely Saunders Institute Journal Club commented:

      This paper was discussed on 12 May 2017 by the MSc students in Palliative Care at the KCL Cicely Saunders Institute.

The study, which we read with great interest, is a retrospective cohort study examining the association between palliative homecare services and the number of emergency department visits (with regard to both high and low acuity). Previous studies have shown that palliative homecare services help reduce patients’ subsequent visits to the emergency department. Therefore, in this study the authors tested the hypothesis that life-threatening visits could be reduced by the introduction of palliative homecare services and education in treating high-acuity symptoms at home.

The study used data from the Ontario Cancer Registry and included a large number of patients (54,743). It showed that palliative homecare services could reduce the emergency department visit rate in both the high- and low-acuity groups, which could be considered a benefit of palliative homecare services. However, more information on the definition and delivery of palliative homecare services would allow better understanding of the generalizability of this finding. The authors used the Canadian Triage and Acuity Scale national guidelines for classification, but we would have liked more information on the triage system and the allocation of patients according to their symptoms. For example, sore throat, malaise, and fatigue are subjective symptoms which are less commonly classified as emergency or resuscitation-required, but in the study these were allocated to both acuity levels (high and low). We considered that this classification might affect the results significantly, and therefore would have appreciated further explanation.

      Ka Meng Ao, Ming Yuang Huang, Pamela Turrillas, Myongjin Agnes Cho


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 13, Richard Jahan-Tigh commented:

      Might just be a case of Grover's disease? Good place for it clinically and in the right age group.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 10, Gerry Stimson commented:

In addition to the comments made by Clive Bates about the limitations of the study, a further fault is that the research measured e-cigarette use but did not establish whether the e-cigarettes actually contained nicotine. As the paper reports, 'Students were selected as ever e-cigarette users if they responded “yes” to the question “have you ever tried an e-cigarette”'. But the majority of youth vapers in the US do NOT use nicotine-containing e-cigarettes: the Monitoring the Future study reported that about 60% of youth vapers use e-cigarettes without nicotine. Lax scrutiny by the editor and reviewers means that this crucial issue is overlooked - indeed, the article's authors do not appear to have identified this as a limitation. This further undermines the rather facile policy recommendations to limit e-cigarette availability to young people.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 10, Clive Bates commented:

      The authors appear to have discovered that people want to use e-cigarettes instead of smoking. For anyone taking a public health perspective, that is a positive development given they are likely to be at least 95% lower risk than smoking and nearly all e-cigarette users are currently (or otherwise would be) smokers.

      The authors' policy proposals as stated in their conclusion do not follow from the observations they have made. The paper is insufficiently broad to draw any policy conclusions as it does not consider the interactions between vaping behaviour and smoking behaviour or wider effects on adult or adolescent welfare from increasing costs or reducing access. The paper does not give any insights into the effectiveness, costs, and risks of the proposed policies, so the authors have no foundation on which to make such recommendations.

      The authors appear to be unaware of the potential for unintended consequences arising from their ideas. For example raising the cost of e-cigarettes may cause existing users to relapse to smoking or reduce the incentive to switch from smoking to vaping. They believe their policies will "be important for preventing continued use in youth", but the reaction may not be the one they want - complete abstinence. It may be a continuation of, or return to, smoking.

      Finally, editors and peer reviews should be much firmer in disallowing policy recommendations based on completely inadequate reasoning and, in this case, on a misinterpretation of their own data in which they mischaracterize a benefit as a detriment and an opportunity as a threat.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 14, Haider Al-Darraji commented:

Figure 1 doesn't seem to match its legend: it shows the Andersen framework rather than the research sites.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 05, Peter H. Asdahl commented:

      I read with interest the study by Villani et al., who reported findings of a cancer surveillance program among TP53 mutation carriers. The authors analysed data from 89 TP53 mutation carriers diagnosed and followed at three tertiary cancer care centres in North America. Villani et al. concluded that their surveillance program is feasible, detects early tumour stages, and confers a sustained survival benefit. I emphasize a more conservative interpretation of the results because of biases common to observational studies of screening effects.

      The primary outcome of the study was incident cancers. If the surveillance and non-surveillance groups were exchangeable at baseline (i.e. had similar distribution of known and unknown factors that affect cancer incidence), we would expect either higher frequency or earlier detection of cancer in the surveillance group because of differential detection attributable to systematic cancer surveillance. The results reported by Villani et al. are counterintuitive: 49% and 88% of individuals in the surveillance and non-surveillance groups, respectively, were diagnosed with at least one incident cancer (crude risk ratio for the effect of surveillance=0.56, 95% confidence limits: 0.41, 0.76. – n.b. risk time distribution is not included in the manuscript). This inverse result suggests that the groups were not exchangeable, and thus confounding is a concern. The potential for confounding is further supported by baseline imbalances in age, sex, and previous cancer diagnosis, which favour higher cancer incidence in the non-surveillance group (the P-values in Table 1 are misleading because of low power to detect differences).

      The baseline imbalance between groups is exacerbated by lead time and length time biases when comparing survival. Detection of asymptomatic tumours by surveillance invariably adds survival time to the surveillance group and gives a spurious indication of improved survival. (Croswell JM, 2010) The effects of this bias are well-documented, and depending on the time from detection to symptom onset, the bias may be substantial. The survival analysis is further invalidated by the inclusion of non-cancerous lesions (e.g. fibroadenoma and osteochondroma, which are unlikely to affect survival) and pre-cancerous lesions (e.g. colonic adenoma and dysplastic naevus). Such lesions accounted for 40% and 16% of the incident neoplasms in the surveillance group and non-surveillance group, respectively.

      Annual MRI was included in the surveillance protocol. Many of the malignancies among TP53 mutation carriers are rapidly growing, and thus more often detected by symptoms rather than annual follow-up. For example, most medulloblastoma recurrences are detected by symptoms a median of four months after the last imaging. (Torres CF, 1994)

      In summary, the study by Villani et al. is not immune to biases common to observational studies of screening effects (Croswell JM, 2010) and the results should not be interpreted as a benefit of surveillance of TP53 mutation carriers as it uncritically has been done by many, which is well described in the accompanying editorial. The benefit, if any, of the proposed surveillance program cannot be assessed without a more rigorous study design to reduce known biases. For example, a study based on random allocation of individuals to surveillance and no surveillance would reduce the potential for confounding assuming that randomization is successful. In addition, adjustment for lead and length time biases is recommended regardless of randomized or non-randomized study design.

      I would like to acknowledge Rohit P. Ojha, Gilles Vassal, and Henrik Hasle for their contributions to this comment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 01, Serina Stretton commented:

      We read with interest Vera-Badillo and colleagues’ recent publication entitled Honorary and Ghost Authorship in Reports of Randomised Clinical Trials in Oncology [1], which appears to show that these unethical practices are highly prevalent in oncology trial publications. While we applaud the authors for conducting original research in this field, we are concerned that the nonstandard definitions used for ghost authorship may have skewed the results.

      Vera-Badillo and colleagues assessed oncology trial publications where the trial protocol was available and contained a list of investigators. They defined ghost authorship as being present if an individual met one of the following criteria: “(i) investigators listed in the protocol were neither included as authors nor acknowledged in the article; (2) the individual who performed the statistical analyses was neither listed as an author nor acknowledged; (3) assistance of a medical writer was acknowledged in the publication.” No rationale or references were provided in support of this definition. However, while similar definitions have been used in some surveys of unethical authorship practices [2, 3], the definition provided by Vera-Badillo and colleagues is not uniformly accepted [4-7] and is not consistent with the International Committee of Medical Journal Editors (ICMJE) authorship criteria [8] or with the Council of Science Editors (CSE) definition of ghost authorship [9].

      There may be many valid reasons why participating investigators or statisticians may not be eligible for authorship of publications arising from a trial [8]. Here, we would like to respond to Vera-Badillo and colleagues’ assertion that medical writers who ARE acknowledged for their contributions are ghost authors. As specified by the ICMJE, medical writing is an example of a contribution that alone does not merit authorship and, therefore, should be disclosed as an acknowledgement. Also according to ICMJE, appropriate disclosure of medical writing assistance in the acknowledgements is not ghost authoring unless the medical writer was also involved in the generation of the research or its analysis, was responsible for the integrity of the research, or was accountable for the clinical interpretation of the findings. In their publication, Vera-Badillo and colleagues reported evidence of ghost authorship in 66% of evaluated studies. Of these, 34% had acknowledged medical writer assistance. Clearly, inclusion of declared medical writing assistance as ghost authorship has inflated the prevalence of ghost authoring reported in this study. Failure to apply standardised definitions of ghost authorship, guest (or honorary) authorship, and ghostwriting, limits the comparability of findings across studies and can mislead readers as to the true prevalence of these distinct practices [10-12].

      As recognised by the ICMJE [8], the CSE [9], and the World Association of Medical Editors [13], professional medical writers have a legitimate and valued role in assisting authors disclose findings from clinical trials in the peer-reviewed literature. Vera-Badillo and colleagues state in the discussion that medical writers either employed or funded by the pharmaceutical industry are “likely to write in a manner that meets sponsor approval”. No evidence is cited to support this claim. If sponsor approval requires accurate and robust reporting of trial results in accordance with international guidelines on reporting findings from human research [8, 14, 15], then yes, we agree. Professional medical writers employed or funded by the pharmaceutical industry routinely work within ethical guidelines and receive mandatory training on ethical publication practices [16-19]. Although medical writers may receive requests from authors or sponsors that they believe to be unethical, findings from the Global Publication Survey, conducted from November 2012 to February 2013, showed that most requests (93%) were withdrawn after the need for compliance with guidelines was made clear to the requestor [19].

      By expanding the definition of ghost authorship to include disclosed medical writing assistance, Vera-Badillo and colleagues have inflated the prevalence of ghost authorship in oncology trial publications. Such an unbalanced approach has the potential to detract from the true prevalence of ghost authorship where an individual who is deserving of authorship is hidden from the reader.

      The Global Alliance of Publication Professionals (www.gappteam.org)

      Serina Stretton, ProScribe – Envision Pharma Group, Sydney, NSW, Australia; Jackie Marchington, Caudex – McCann Complete Medical Ltd, Oxford, UK; Cindy W. Hamilton Virginia Commonwealth University School of Pharmacy, Richmond; Hamilton House Medical and Scientific Communications, Virginia Beach, VA, USA; Art Gertel, MedSciCom, LLC, Lebanon, NJ, USA

      GAPP is a group of independent individuals who volunteer their time and receive no funding (other than website hosting fees from the International Society for Medical Publication Professionals). All GAPP members have held, or do hold, leadership roles at associations representing professional medical writers (eg, AMWA, EMWA, DIA, ISMPP, ARCS), but do not speak on behalf of those organisations. GAPP members have or do provide professional medical writing services to not-for-profit and for-profit clients.

References

[1] Vera-Badillo FE et al. Eur J Cancer 2016;66:1-8.

[2] Healy D, Cattell D. Br J Psychiatry 2003;183:22-7.

[3] Gøtzsche PC et al. PLoS Med 2007;4:0047-52.

[4] Flanagin A et al. JAMA 1998;280:222-4.

[5] Jacobs A, Hamilton C. Write Stuff 2009;18:118-23.

[6] Wislar JS et al. BMJ 2011;343:d6128.

[7] Hamilton CW, Jacobs A. AMWA J 2012;27:115.

[8] http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html; 2015 [accessed 12.09.16].

[9] http://www.councilscienceeditors.org/resource-library/editorial-policies/white-paper-on-publication-ethics/ [accessed 13.09.16].

[10] Stretton S. BMJ Open 2014;4(7):e004777.

[11] Marušić A et al. PLoS One 2011;6(9):e23477.

[12] Marchington J et al. J Gen Intern Med 2016;31:11.

[13] http://www.wame.org/about/policy-statements#Ghost Writing; 2005 [accessed 12.09.16].

[14] WMA. JAMA 2013;310(20):2191-4.

[15] Moher D et al. J Clin Epidemiol 2010;63:e1-37.

[16] http://www.ismpp.org/ismpp-code-of-ethics [accessed 12.09.16].

[17] http://www.amwa.org/amwa_ethics [accessed 12.09.16].

[18] Jacobs A, Wager E. Curr Med Res Opin 2005;21(2):317-21.

[19] Wager E et al. BMJ Open 2014;4(4):e004780.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 16, Amanda Capes-Davis commented:

      This paper has problems with its authentication testing, resulting in at least three misidentified cell lines (HeLa) being used as models for liver cancer.

      The Materials and Methods of the paper state that "All cell lines were regularly authenticated by morphologic observation under microscopy". This is consistent with the policy of the journal, Cancer Research, which strongly encourages authentication testing of cell lines used in its publications.

      However, morphologic observation is not a suitable method for authentication testing. Changes in morphology can be subtle and difficult to interpret; cultures can be misidentified before observation begins. To investigate the latter possibility, I examined publicly available datasets of STR genotypes to see if the cell lines listed in the paper are known to be misidentified.

      Three of the cell lines used in this paper (Bel-7402, L-02, SMMC-7721) had STR genotypes published by Bian X, 2017 and Huang Y, 2017. All three "liver" cell lines correspond to HeLa and are therefore misidentified.

      HeLa and its three misidentified derivatives were used in the majority of figures (Figures 2, 3, 5, and 6). Although the phosphorylation data appear to be unaffected, the conclusions regarding liver cancer metastasis must be re-examined.

      What can we learn to improve the validity of our research publications?

      For authors and reviewers:

      For journal editors and funding bodies:

      • Encouragement of authentication testing is a step forward, but is insufficient to stop use of misidentified cell lines.
      • Mandatory testing using an accepted method is effective (Fusenig NE, 2017) and would have detected and avoided this problem prior to publication.
      • Policy on authentication testing requires oversight and ongoing review in light of such examples. This is important for NIH and other funding bodies who have recently implemented authentication of key resources as part of grant applications.

      I am grateful to Rebecca Schweppe, Christopher Korch, Douglas Kniss, and Roland Nardone for their input to this comment and much helpful discussion.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 30, Paul Brookes commented:

      I submitted a response to this opinion piece to the journal (Circ. Res.), but unfortunately was informed that they do not accept or publish correspondence related to this type of article. So, here's my un-published letter, which raises a number of issues with the article...

      A recent Circ. Res. viewpointLoscalzo J, 2016 discussed the complex relationships between redox biology and metabolism in the setting of hypoxia, with an emphasis on the use of biochemically correct terminology. While there is broad agreement that the field of redox biology is often confounded by use of inappropriate methods and language Kalyanaraman B, 2012,Forman HJ, 2015, concern is raised regarding some ideas on reductive stress in the latter part of the article.

      In discussing the fate of glycolytically-derived NADH in hypoxia, the reader is urged to “Remember that while redirecting glucose metabolism to glycolysis decreases NADH production by the TCA cycle and decreases leaky electron transport chain flux, glycolysis continues to produce NADH". First, glucose undergoes glycolysis regardless of cellular oxygenation status; this simply happens at a faster rate in hypoxia. As such, glucose is not redirected but rather its product pyruvate is. Second, regardless of a proposed lower rate of NADH generation by the TCA cycle (which may not actually be the case Chouchani ET, 2014,Hochachka PW, 1975), NADH still accumulates in hypoxic mitochondria because its major consumer, the O2-dependent respiratory chain, is inhibited. It is clear that both NADH consumers and producers can determine the NADH/NAD+ ratio, and in hypoxia the consumption side of the equation cannot be forgotten.

      While the field is in broad agreement that NADH accumulates in hypoxia, the piece goes on to claim that “How the cell handles this mounting pool of reducing equivalents remained enigmatic until recently.” This is misleading. The defining characteristic of hypoxia, one that has dominated the literature in the nearly 90 years since Warburg's seminal work Warburg O, 1927, is the generation of lactate by lactate dehydrogenase (LDH), a key NADH consuming reaction that permits glycolysis to continue. Lactate is “How cells handle the mounting pool of reducing equivalents.”

      Without mentioning lactate, an alternate fate for hypoxic NADH is proposed, based on the recent discovery that both LDH and malate dehydrogenase (MDH) can use NADH to drive the reduction of 2-oxoglutarate (α-ketoglutarate, α-KG) to the L(S)-enantiomer of 2-hydroxyglutarate (L-2-HG) under hypoxic conditions Oldham WM, 2015,Intlekofer AM, 2015. We also found elevated 2-HG in the ischemic preconditioned heart Nadtochiy SM, 2015, and recently reported that acidic pH – a common feature of hypoxia – can promote 2-HG generation by LDH and MDH Nadtochiy SM, 2016.

      While there can be little doubt that the discovery of hypoxic L-2-HG accumulation is an important milestone in understanding hypoxic metabolism and signaling, the claim that L-2-HG is “a reservoir for reducing equivalents and buffers NADH/NAD+” is troublesome on several counts. From a quantitative standpoint, we reported the canonical activities of LDH (pyruvate + NADH --> lactate + NAD+) and of MDH (oxaloacetate + NADH --> malate + NAD+) are at least 3-orders of magnitude greater than the rates at which these enzymes can reduce α-KG to L-2-HG Nadtochiy SM, 2016. This is in agreement with an earlier study reporting a catalytic efficiency ratio of 10<sup>7</sup> for the canonical vs. L-2-HG generating activities of MDH Rzem R, 2007. Given these constraints, we consider it unlikely that the generation of L-2-HG by these enzymes is a quantitatively important NADH sink, compared to their native reactions. It is also misleading to refer to the α-KG --> L-2-HG reaction as a "reservoir for reducing equivalents", because even though this reaction consumes NADH, it is not clear whether the reverse reaction regenerates NADH. Specifically, the metabolite rescue enzyme L-2-HG-dehydrogenase uses an FAD electron acceptor and is not known to consume NAD+ Nadtochiy SM, 2016,Rzem R, 2007,Weil-Malherbe H, 1937.

      Another potentially important sink for reducing equivalents in hypoxia that was not mentioned, is succinate. During hypoxia, NADH oxidation by mitochondrial complex I can drive the reversal of complex II (succinate dehydrogenase) to reduce fumarate to succinate Chouchani ET, 2014. This redox circuit, in which fumarate replaces oxygen as an electron acceptor for respiration, was first hinted at over 50 years ago SANADI DR, 1963. Importantly (and in contrast to L-2-HG as mentioned above), the metabolites recovered upon withdrawal from a fumarate --> succinate "electron bank" are the same as those deposited.

      Although recent attention has focused on the pathologic effects of accumulated succinate in driving ROS generation at tissue reperfusion Chouchani ET, 2014,Pell VR, 2016, the physiologic importance of hypoxic complex II reversal as a redox reservoir and as an evolutionarily-conserved survival mechanism Hochachka PW, 1975 should not be overlooked. Quantitatively, the levels of lactate and succinate accumulated during hypoxia are comparable Hochachka PW, 1975, and both are several orders of magnitude greater than reported hypoxic 2-HG levels.

      While overall the article makes a number of important points regarding reductive stress and the correct use of terminology in this field, we feel that the currently available data do not support a quantitatively significant role for L-2-HG as a hypoxic reservoir for reducing equivalents. These quantitative limitations do not diminish the potential importance of L-2-HG as a hypoxic signaling molecule Nadtochiy SM, 2016,Su X, 2016,Xu W, 2011.

      Paul S. Brookes, PhD.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 04, Cicely Saunders Institute Journal Club commented:

      This paper was discussed on 2.5.17, by students on the KCL Cicely Saunders Institute MSc in Palliative Care

      We read with interest the systematic review article by Cahill et al on the evidence for conducting palliative care family meetings.

      We congratulate the authors on their effort to include as many papers as possible by using a wide search strategy. Ultimately, only a small number of papers were relevant to this review and were included. The authors found significant heterogeneity within the various studies, in terms of the patient settings, interviewer background, and country of origin and culture. Study methods included both qualitative and quantitative designs, and a range of outcome measures, but there was a notable lack of RCT studies.

      Two studies found a benefit of family meetings using validated outcome measures. A further four found a positive outcome of family meetings, but with non-validated outcome measures. We felt that the lack of validated outcome measures does not necessarily exclude their value.

      We agree with the conclusions of the authors that there is limited evidence for family meetings in the literature and that further research would be of value. The small and diverse sample size leads to the potential for a beta error (not finding a difference where one exists). We were surprised by the final statement of the abstract that family meetings should not be routinely adopted into clinical practice, and we do not feel that the data in the paper support this: the absence of a finding is not synonymous with a finding of absence. Further, our experience in three health care settings (UK, Canada, Switzerland) is that family meetings are already widely and routinely used.

      Aina Zehnder, Emma Hernandez-Wyatt, James W Tam


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 03, Alfonso Leiva commented:

      I would like to remark that this study is the first prospective cohort to analyse the association between time to diagnosis and stage. Fiona Walter et al studied the factors related to longer time to diagnosis and tried to explain the lack of association between longer time to diagnosis and stage. We have recently published an article to explain this paradox and suggest confounding by an unknown factor as a possible explanation. We have suggested that the stage when symptoms appear is the main confounder in the association between time to diagnosis and stage at diagnosis, and propose a graphic representation of the progression of CRC from a preclinical asymptomatic stage to a clinical symptomatic stage.

      Leiva A, Esteva M, Llobera J, Macià F, Pita-Fernández S, González-Luján L, Sánchez-Calavera MA, Ramos M. Time to diagnosis and stage of symptomatic colorectal cancer determined by three different sources of information: A population based retrospective study. Cancer Epidemiol. 2017 Jan 23;47:48-55.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 17, BSH Cancer Screening, Help-Seeking and Prevention Journal Club commented:

      The HBRC discussed this paper during the journal club held on November 15th 2016. This paper fits well with research conducted within our group on early diagnosis and symptomatic presentation. We considered this paper to be a useful addition to the literature and the paper raises some interesting findings which could be investigated further.

      The study examined the factors associated with a colorectal cancer (CRC) diagnosis and time to diagnosis (TDI). These factors included symptoms, clinical factors and sociodemographic factors. Due to the important role early diagnosis plays in survival from CRC, it is pertinent to investigate at which point diagnosis may be delayed. Early diagnosis of CRC can be problematic due to many of the symptoms being able to be associated with other health problems or being benign. The authors acknowledge that most cases of CRC present symptomatically.

      The group was interested in the finding that less specific symptoms such as indigestion or abdominal pain were associated with shorter patient intervals and that specific classic symptoms, such as rectal bleeding were associated with shorter health system intervals (HSI). So what patients might perceive to be alarm symptoms differs from perceptions of healthcare professionals. It was also highlighted that there was a discrepancy in the patient interval found in this study with a previous study, with this study showing 35 days as the median patient interval, compared to a primary care audit conducted by Lyratzopoulos and colleagues (2015) which showed a patient interval of 19 days. It was also interesting that family history of cancer was associated with a longer HSI, given that family history is a risk factor for cancer.

      The main advantage of this study is the prospective design, with the recruitment of patients prior to their diagnosis. Patients reported their symptoms and so provided insight into what they experienced, but the group did acknowledge that this was retrospective as symptoms were those experienced before they presented at the GP, with these being up to 2 years before diagnosis. The group felt the authors’ use of multiple regression models was a benefit to the study, allowing an investigation into time-constant and duration-varying effects, as in line with previous research, it was shown that rectal bleeding becomes normalised over time.

      We discussed limitations of the study and recognised that the authors did not acknowledge the Be Clear on Cancer Awareness Campaigns which took place during the data collection (Jan-March 2011, Jan-March 2012, Aug-Sept 2012) and could have had an impact by shortening patient interval and increasing referral rates. We also discussed that there could be an inherent bias in GPs and that HSI could be due to this bias of GPs wanting to reassure patients that their symptom is likely to be the sign of something other than cancer. This could also help explain the longer time to diagnosis and HSI in those with depression and anxiety, as GPs may feel the need to over-reassure these patients, recognising that they are already anxious. However, when symptoms have been shown to be a ‘false alarm’, over-reassurance and undersupport from healthcare professionals have been shown to lead patients to interpret subsequent symptoms as benign and express concern about appearing hypochondriacal (Renzi, Whitaker and Wardle, 2015). It may also be due to healthcare professionals attributing symptoms to some of the side effects related to medication for depression and anxiety such as diarrhoea, vomiting, and constipation. The authors also suggest that healthcare professionals might not take these patients' physical symptoms seriously. There was also a small number of CRC patients given the number of patients approached, with the authors recognising the study is underpowered. There may also have been an overestimate of the number of bowel symptoms in non-cancer patients, which was recognised by the authors. It was also unclear that the authors had conducted univariate analyses, included in the supplementary material, until these were mentioned at the end of the results.

      There may also be differences in TDI depending on the type of referral (e.g. two-week wait, safety netting), and the group would have liked some more information about this. The group would also have liked to see some discussion about the median HSI being longer (58 days) than the 31 days currently recommended for diagnosis from the day of referral and the new target for 2020 of 28 days from referral to diagnosis. It would have also been useful to have some information about how many consultations patients had before being referred, as the authors state in the introduction that 1/3 of CRC patients have three or more consultations with the GP before a referral is made. It would also have been informative to have data on how long participants took to return their questionnaire, with the authors stating that most were completed within 2 weeks, but that some were within 3 months.

      It would be interesting to look further into the factors affecting patients presenting to their GP straight away with symptoms and those which delay. We discussed possible explanations being personality, extreme individual differences in how symptoms are perceived as serious or not and external factors such as being too busy. It would also be interesting to consider whether these symptoms were mentioned by patients as an afterthought at the end of a consultation about something else, or whether this was the symptom that patients primarily presented to the doctor with.

      In conclusion, the HBRC group read the article with great interest and would encourage further studies in this area.

      Conflicts of interest: We report no conflict of interests and note that the comments produced by the group are collective and not the opinion of any one individual.

      References

      1) Lyratzopoulos G, Saunders CL, Abel GA, McPhail S, Neal RD, Wardle J, Rubin GP (2015) The relative length of the patient and the primary care interval in patients with 28 common and rarer cancers. Br J Cancer 112(Suppl 1): S35–S40.

      2) Renzi C, Whitaker KL, Wardle J. (2015) Over-reassurance and undersupport after a 'false alarm': a systematic review of the impact on subsequent cancer symptom attribution and help seeking. BMJ Open. 5(2):e007002. doi: 10.1136/bmjopen-2014-007002.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 29, Michael Goard commented:

      We thank the Janelia Neural Circuit Computation Journal Club for taking the time to review our paper. However, we wish to clarify a few of the points brought up in the review.

      1) Off-target effects of inactivation. The authors of the review correctly point out that off-target effects can spread laterally from an inactivated region, potentially complicating the interpretation of the V1/PPC inactivation experiments. We have since carried out (not yet published) electrophysiology experiments in V1 during PPC photoinactivation and find there is some suppression (though not silencing) of V1 excitatory neurons through polysynaptic effects. The suppression is moderate and the V1 neurons maintain stimulus selectivity, so it is unlikely off-target suppression in V1 is responsible for the PPC inactivation effects, but the results do need to be interpreted with some caution.

      Notably, the suppression effect is not distance-dependent; it instead appears heterogeneous and is likely dependent on connectivity, as has been recently demonstrated in other preparations (Otchy et al, Nature, 2015). Given these findings, describing off-target effects as a simple function of power and distance is likely misleading. Indeed, even focal cortical silencing is likely to have complex effects on subcortical structures in addition to the targeted region. Instead, we suggest that while photoinactivation experiments are still useful for investigating the role of a region in behavior, the results need to be interpreted carefully (e.g., as demonstrating an area as permissive rather than instructive; per Otchy et al., 2015).

      2) Silencing of ALM in addition to M2. The photoinactivation experiments were designed to discriminate between sensory, parietal, and motor contributions to the task, rather than specific regions within motor cortex. We did not intend to suggest that ALM was unaffected in our photoinactivation experiments (this is the principal reason we used the agnostic term “fMC” rather than referring to a specific region). Although the center of our window was located posterior and medial to ALM, we used a relatively large window (2 x 2.5 mm), so ALM was likely affected.

      3) Rebound activity contributing to fMC photoinactivation effects. Rebound effects are not likely to be responsible for the role of fMC during the stimulus epoch. First, our photostimulus did not cause consistent rebound excitation (e.g., Figure 8B). This is likely due to the use of continuous rather than pulsed photoinactivation (see Figure 1G in Zhao et al., Nat Methods, 2011). Second, we did run several inactivation experiments with a 100-200 ms offset ramp (as in Guo et al., 2014), and found identical results (we did not include these experiments in the publication since we did not observe rebound activity). We suspect the discrepancy with Guo et al. is due to the unilateral vs. bilateral photoinactivation (Li, Daie, et al., 2016), as the reviewers suggest.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 07, Janelia Neural Circuit Computation Journal Club commented:

      Highlight/Summary

      This is one of several recent papers investigating cortical dynamics during head-restrained behaviors in mice using mostly imaging methods. The questions posed were:

      Which brain regions are responsible for sensorimotor transformation? Which region(s) are responsible for maintaining task-relevant information in the delay period between the stimulus and response?

      These questions were definitely not answered. However, the study contains some nice cellular calcium imaging in multiple brain regions in a new type of mouse behavior.

      The behavior is a Go / No Go behavioral paradigm. The S+ and S- stimuli were drifting horizontal and vertical gratings, respectively. The mouse had to withhold licking during a delay epoch. During a subsequent response epoch the mouse responded by licking for a reward on Go trials.

      Strengths

      Perhaps the greatest strength of the paper is that activity was probed in multiple regions in the same behavior (all L2/3 neurons, using two-photon calcium imaging). Activity was measured in primary visual cortex (V1), ‘posterior parietal cortex’ (PPC; 2 mm posterior, 1.7 mm lateral), and fMC. ‘fMC' overlaps sMO in the Allen Reference Atlas, posterior and medial to ALM (distance approximately 1 mm) (Li/Daie et al 2016). This location is analogous to rat 'frontal orienting field’ (Erlich et al 2011) or M2 (Murakami et al 2014). Folks who work on whiskers refer to this area as vibrissal M1, because it corresponds to the part of motor cortex with the lowest threshold for whisker movements.

      In V1, a large fraction (> 50 %) of neurons were active and selective during the sample epoch. One of the more interesting findings is that a substantial fraction of V1 neurons were suppressed during the delay epoch. This could be a mechanism to reduce ‘sensory gain’ and ’distractions' during movement preparation. Interestingly, PPC neurons were task-selective during the sample or response epochs; consistent with previous work in primates (many studies in parietal areas) and rats (Raposo et al 2014), individual neurons multiplexed sensory and movement selectivity. However, there was little activity / selectivity during the delay epoch. This suggests that their sequence-like dynamics in maze tasks (e.g. Harvey et al 2012) might reflect ongoing sensory input and movement, rather than more cognitive variables. fMC neurons were active and selective during the delay and response epoch, consistent with a role in movement planning and motor control, again consistent with many prior studies in primates, rats (Erlich et al 2011), and mice (Guo/Li et al 2014).

      Weaknesses

      Delayed response or movement tasks have been used for more than forty years to study memory-guided movements and motor preparation. Typically different stimuli predict different movement directions (e.g. saccades, arm movements or lick directions). Previous experiments have shown that activity during the delay epoch predicts specific movements, long before the movement. In this study, Go and No Go trials are fundamentally asymmetric and it is unclear how this behavioral paradigm relates to the literature on movement preparation. What does selectivity during the delay epoch mean? On No Go trials a smart mouse would simply ignore the events post stimulus presentation, making delay activity difficult to interpret.

      The behavioral design also makes the interpretation of the inactivation experiments suspect. The paper includes an analysis of behavior with bilateral photoinhibition (Figure 9). The authors argue for several take-home messages (‘we were able to determine the necessity of sensory, association, and frontal motor cortical regions during each epoch (stimulus, delay, response) of a memory-guided task.'); all of these conclusions come with major caveats.

      1.) Inactivation of both V1 and PPC during the sample epoch abolishes behavior, caused by an increase in false alarm rate and decrease in hit rate (Fig. 9d). The problem is that the optogenetic protocol silenced a large fraction of the brain. The methods are unlikely to have the spatial resolution to specifically inactivate V1 vs PPC. The authors evenly illuminated a 2 mm diameter window with 6.5mW/mm<sup>2</sup> light in VGat-ChR2 mice. This amounts to 20 mW laser power. According to the calibrations performed by Guo / Li et al (2014) in the same type of transgenic mice, this predicts substantial silencing over a radius (!) of 2-3 mm (Guo / Li et al 2014; Figure 2). Photoinhibiting V1 will therefore silence PPC and vice versa. It is therefore expected that silencing V1 and PPC have similar behavioral effects.
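      The 20 mW total-power figure quoted above follows from the stated window geometry (a back-of-envelope sketch: total power = intensity × illuminated area, using the 2 mm diameter and 6.5 mW/mm<sup>2</sup> values from the text):

```python
import math

# Values quoted in the comment above (photoinhibition protocol):
diameter_mm = 2.0  # illuminated window diameter, mm
intensity_mw_per_mm2 = 6.5  # illumination intensity, mW/mm^2

# Area of a 2 mm diameter circle is pi * (1 mm)^2 ~= 3.14 mm^2
area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
total_power_mw = intensity_mw_per_mm2 * area_mm2

print(f"{total_power_mw:.1f} mW")  # ~20.4 mW, i.e. roughly the stated 20 mW
```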

      2.) Silencing during the response window abolished the behavioral response (licking). Other labs have also observed total suppression of voluntary licking with frontal bilateral inactivation (e.g. Komiyama et al 2010; and unpublished). However, the proximal cause of the behavioral effect is likely silencing of ALM, which is more anterior and lateral to ‘fMC’. ALM projects to premotor structures related to licking. Low intensity activation of ALM, but not of more medial and posterior structures such as fMC, triggers rhythmic licking (Li et al 2015). The large photostimulus used here would have silenced ALM as well as fMC.

      3.) Somewhat surprisingly, behavior is perturbed after silencing fMC during the sample (stimulus) and delay epochs. In Guo / Li et al 2014, unilateral silencing of frontal cortex during the sample epoch (in this case ALM during a tactile decision task, 2AFC type) did not cause a behavioral effect (although bilateral silencing is likely different; see Li / Daie et al 2016). The behavioral effect in Goard et al 2016 may not be caused by the silencing itself, but by the subsequent rebound activity (an overshoot after silencing; see for example Guo JZ et al eLife 2016; Figure 4—figure supplement 2). Rebound activity is difficult to avoid, but can be minimized by gradually ramping down the photostimulus, a strategy that was not used here. The key indication that rebound was a problem is that behavior degrades almost exclusively via an increase in false alarm rate -- in other words, mice now always lick independent of trial type. Increased activity in ‘fMC’, as expected with rebound, is expected to promote these false alarms. More experiments are needed to make the inactivation experiments solid.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 18, Jen Herman commented:

      I would like to offer a possible alternative interpretation to explain the gain of interaction variants we identified for both SirA and DnaA that we did not conceive of at the time of publication.

      In the gain of interaction screen (bacterial two-hybrid - B2H) we obtained the surprising result that none of the variants we identified (either in SirA or DnaA) occurred near the known SirA-DnaA interaction interface. The DnaA gain of interaction substitutions occurred primarily in the region of DnaA important for DnaA oligomerization (Domain III). If these variants are defective for DnaA self interaction, then they might also be more available to interact with SirA in the B2H.

      If SirA, like DnaA, is also capable of forming higher order oligomers (at least at the higher copy numbers likely present in the B2H), then it is also conceivable that the gain of interaction variants we identified within SirA are also defective in this form of self-interaction. One piece of data to suggest this hypothesis might be correct is that truncating several amino acids from SirA's C-terminus (including the critical P141T residue) increases SirA solubility following overexpression. Previously, we and others were unable to identify conditions to solubilize any overexpressed wild-type SirA. Of course, this could simply be due to a propensity of SirA to form aggregates/inclusion bodies; however, another possibility is that SirA has an intrinsic tendency to oligomerize/polymerize at high concentrations, and that SirA's C-terminal region facilitates this particular form of self-interaction.

      If any of this is true, one should be able to design B2H gain of interaction screens to identify residues that likely disrupt the suspected oligomerization of any candidate protein suspected to mutltimerize (as we may have inadvertently done). This could be potentially useful for identifying monomer forms that are more amenable to, for example, protein overexpression or crystallization.

      In the bigger picture, one wonders how many proteins that are "insoluble" are actually forming ordered homomers of some sort due to their chiral nature. Relatedly, would this tendency be of any biological significance or simply a consequence of not being selected against in vivo (especially for proteins present at low copy number in the cell)? (see PMID 10940245 for a very nice review related to this subject).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 01, Tanya Halliday commented:

      A letter to the editor (and response) have been published indicating failure to account for regression to the mean in the article. Thus, the conclusion regarding effectiveness of the SHE program is not supported by the data.

      See: http://www.tandfonline.com/doi/full/10.1080/08952841.2017.1407575


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 11, Sudheendra Rao commented:

      Would be glad to get more info on the PKA. What exactly was detected? Total protein, regulatory/catalytic subunit, phosphorylation status, etc.? Or just providing info on the antibodies used will also do. Thanks.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 18, Julia Romanowska commented:

      Sounds interesting, but I couldn't find an option in the R package to run on several cores - and this is an important feature when using GWAS or EWAS.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 19, Mayer Brezis commented:

      The study shows a correlation between low albumin and mortality - which makes sense and confirms previous literature. Is the relationship CAUSAL? The authors suggest causality: "Maintaining a normal serum albumin level may not only prolong patient survival but also may prevent medical complications...". Couldn't low albumin simply be A MARKER of more severe morbidity? If a higher baseline albumin BEFORE initiation of the tube feeding predicts lower mortality, how could this feeding mediate the improved survival? Are the authors suggesting that low albumin should be a consideration AGAINST tube feeding because of predicted poorer prognosis? Similarly, stable or increased albumin predicts long-term survival not necessarily because of tube feeding, but simply as a marker of healthier people who survived the gastrostomy procedure. Causality cannot be implied in the absence of a control group and with follow-up missed in a third of the patients in the study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 19, Harri Hemila commented:

      Two unpublished trials bias the conclusions on vitamin C and atrial fibrillation

      In their meta-analysis on vitamin C and postoperative atrial fibrillation (POAF), Polymeropoulos E, 2016 state that “no significant heterogeneity was observed among [nine] included studies” (p. 244). However, their meta-analysis did not include the data of 2 large US trials that found no effect of vitamin C against POAF and have thus remained unpublished. If those 2 trials are included, there is significant heterogeneity in the effects of vitamin C. Vitamin C had no effects against POAF in 5 US trials, but significantly prevented POAF in a set of 10 trials conducted outside of the USA, mainly in Iran and Greece, see Hemilä H, 2017 and Hemilä H, 2017. Although the conclusion by Polymeropoulos E, 2016 that vitamin C does have effects against POAF seems appropriate, the effect has been observed only in studies carried out in less wealthy countries.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 16, Jon Simons commented:

      Thank you for alerting us to this problem with the GingerALE software. We will look into it and, if necessary, consult the journal about whether a corrective communication might be appropriate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 09, Christopher Tench commented:

      The version of GingerALE used (2.3.2) has a bug in the FDR algorithm that resulted in false positive results. This bug was fixed at version 2.3.3.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 29, Inge Seim commented:

      Please note that GHRL (derived from genome sequencing data) in 31 bird species, including Columba livia, was reported in late 2014 (http://www.ncbi.nlm.nih.gov/pubmed/25500363). Unfortunately, Xie and colleagues did not cite this work.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 02, MA Rodríguez-Cabello commented:

      Dear sirs, this is a mistranslation of the original text. Where this text says "prostate-specific antigen level," the original abstract says ASA classification. I apologize for the error in the translation of the original abstract provided by Elsevier.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 28, Lydia Maniatis commented:

      Followers of the school of thought to which the authors of this article belong believe, among other odd things, in the notion that visual perception can be studied without reference to form. Thus, the reference made in the title of this paper to "regular sparse micro-patterns." There are (micro)-patterns and there are (micro)-patterns; do the present conclusions apply to any and all "regular, sparse micro-patterns?" Or only selected ones?

      Among the other beliefs of this school is the notion that different retinal projections trigger processing at different levels of the visual system, such that, for example, the activities of V1 neurons may be directly discerned in a “simple” percept. These supposed V1 (etc) signatures, of course, only apply to restricted features of a restricted set of stimuli (e.g. "grid-textures") under restricted contexts. The supposed neural behaviors and their links to perception are simple, involving largely local summation and inhibition.

      The idea that different percepts/features selectively tap different layers of visual processing is not defensible, and no serious attempt has ever been made to defend it. The problem was flagged by Teller (1984), who labeled it the “nothing mucks it up proviso” highlighting the failure to explain the role of the levels of the visual system (whose processes involved unimaginably complex feedback effects) not invoked by a particular “low-level” explanation. With stunning lack of seriousness Graham (e.g 1992, see comments in PubPeer) proposed that under certain conditions the brain becomes transparent through to the lower levels, and contemporary researchers have implicitly embraced this view. The fact is, however, that even the stimuli that are supposed to selectively tap into low-level processes (sine wave gratings/Gabor patches) produce 3D percepts with the impression of light and shadow; these facts are never addressed by devotees of the transparent brain, whose models are not interested in and certainly couldn’t handle them.

      The use of “Gabor patches” is a symptom of the other untenable assumption that “low-levels” of the visual system perform a Fourier analysis of the luminance structure of the retinal projection at each moment. There is no conceivable reason why the visual system should do this, or how, as it would not contribute to use of luminance patterns to construct a representation of the environment. There is also no evidence that it does this.

      In addition, it is also, with no evidence, asserted that the neural “signal” is “noisy.” This assumption is quite convenient, as the degree of supposed “noise” can be varied ad lib for the purposes of model-fitting. It is not clear how proponents of a “signal detecting mechanism with noise” conceive of the distinction between neural activity denoting “signal” and neural activity denoting “noise.” In order to describe the percept as the product of “signal” and “noise,” investigators have to define the “signal,” i.e. what should be contained in the percept in the absence of (purely hypothetical) “noise.” But that means that rather than observing how the visual process handles stimulation, they preordain what the percept should be, and describe (and "model") deviations as being due to “noise.”

      Furthermore, observers employed by this school are typically required to make forced, usually binary, choices, such that the form of the data will comply with model assumptions, as opposed to being complicated by what observers actually perceive, (and by the need to describe this with precision).

      Taken together, the procedures and assumptions employed by Baker and Meese (2016) and many others in the field are very convenient, insofar as “theory” at no point has to come into contact with fact or logic. It is completely bootstrapped, as follows: A model of neural function is constructed, and stimuli are selected/discovered which are amenable to an ad hoc description in terms of this model; aspects of the percepts produced by the stimulus figure (as well as percepts produced by other known figures) that are not consistent with the model are ignored, as are all the logical problems with the assumptions (many of which Teller (1984), a star of the field, tried to call attention to with no effect); the stimulus is then recursively treated as evidence for the model. Variants of the restricted set of stimulus types may produce minor inconsistencies with the models, which are then adjusted accordingly, refitted, and so on. (Here, Baker and Meese freely conclude that perception of their stimulus indicates a "mid-level" (as they conceive it) contribution). It is a perfectly self-contained system - but it isn’t science. In fact, I think it is exactly the kind of activity that Popper was trying to propose criteria for excluding from empirical science.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 08, Harald HHW Schmidt commented:

      This paper is poorly validated. The detection of NOX4 relies on a poorly validated antibody; in fact, no one in the field believes that it is specific. Others have shown that siRNAs can be highly unspecific. We and others cannot detect NOX4 in macrophages. Thus the title and conclusions appear to be invalid.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 18, Daniel Weeks commented:

      Relative effect sizes of FTO variants and rs373863828 on body mass index in Samoans

      We would like to thank Dr. Janssens for making these helpful comments about the presentation and interpretation of our findings. And we welcome this opportunity to present our results more precisely and clearly.

      Regarding the suggestion that we should have compared standardized effects, there exists some literature that argues that comparison of standardized effects can be misleading (Cummings P, 2004, Cummings P, 2011). Indeed, Rothman and Greenland (1998, p. 672) recommend that "effects should be expressed in a substantively meaningful unit that is uniform across studies, not in standard-deviation units." While the argument for comparing standardized effects may be more compelling when different studies used different measurement scales, in this case, body mass index (BMI) has been measured in prior studies and our current one using a common scale.

      As recommended, we have now assessed the effect of variants on BMI in the FTO region to allow for direct comparison in our Samoan population. As Table 1 indicates, while the effects of these FTO variants are not statistically significant in our discovery sample, the estimates of the effect size of the FTO variants are similar in magnitude to previous estimates in other populations, and the non-standardized effect of the missense variant rs373863828 in CREBRF is approximately 3.75 to 4.66 times greater than the effects of the FTO variants in our discovery sample.

      We concur with the important reminder that the odds ratio overestimates the relative risk when the outcome prevalence is high.

      Thank you,

      Daniel E. Weeks and Ryan Minster on behalf of all of the co-authors.

      Department of Human Genetics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA, USA.

      References:

      Cummings P. (2004) Meta-analysis based on standardized effects is unreliable. Arch Pediatr Adolesc Med. 158(6):595-7. PubMed PMID: 15184227.

      Cummings P. (2011) Arguments for and against standardized mean differences (effect sizes). Arch Pediatr Adolesc Med. 165(7):592-6. doi: 10.1001/archpediatrics.2011.97. PubMed PMID: 21727271.

      Rothman, K.J. and Greenland S. (1998) Modern epidemiology, second edition. Lippincott Williams & Wilkins, Philadelphia.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 03, Cecile Janssens commented:

      This study showed that a variant in CREBRF is strongly associated with body mass index (BMI) in Samoans. The authors write that this gene variant is associated with BMI with “a much larger effect size than other known common obesity risk variants, including variation in FTO.” The risk variant was also “positively associated with obesity risk” and with other obesity-related traits. For a correct interpretation of these findings, two methodological issues need to be considered.

      Much larger effect size

      The effect size of the CREBRF variant (1.36-1.45 kg/m2 per copy of the risk allele) is indeed larger than that of FTO (0.39 kg/m2 per copy), but this comparison is not valid to claim that the gene variant has a stronger effect.

      The effect size for the FTO gene comes from a pooled analysis of studies in which the average BMI of the population was below 28kg/m2 with standard deviations lower than 4kg/m2. In this study, the mean BMI was 33.5 and 32.7 kg/m2 in the discovery and replication samples and the standard deviations were higher (6.7 and 7.2 kg/m2). To claim that the CREBRF has a stronger effect than FTO, the researchers should have compared standardized effects that take into account the differences in BMI between the study populations, or they should have assessed the effect of FTO to allow for a direct comparison in the Samoan population.

      It is surprising that the authors have not considered this direct comparison between the genes, given that an earlier publication had reported on the relationship between FTO and BMI in the replication datasets of this study (Karns R, 2012). That study showed no association between FTO and BMI in the smaller of the two replication samples, but a higher effect size in the larger one (0.55-0.70 kg/m2). The effect of the CREBRF gene may still be stronger than that of the FTO gene, but the difference may not be as large as the comparison of unstandardized effect sizes between the populations suggests.

      Impact on obesity risk

      The authors also investigate the “impact of the gene variant on the risk of obesity” and found that the odds ratio for the gene variant was 1.44 in the replication sample. This value is an odds ratio and indicates the impact on the odds of obesity, not on the risk of obesity. The difference between the two is essential here.

      The value of the odds ratio is similar to the relative risk when the outcome of interest is rare. In this study, the majority of the people were obese: 55.5% and 48.8% in the discovery and replication samples had BMI higher than 32kg/m2. When the prevalence of the outcome is this high, the odds ratio overestimates the relative risk. When the odds ratio is 1.44, the relative risk is 1.43 when the prevalence of obesity in noncarriers is 1%, 1.32 when it is 20%, 1.22 when it is 40%, 1.18 when it is 50%, and 1.16 when 55% of the noncarriers are obese. Regarding the impact on obesity risk, the gene variant might be more ordinary than suggested.
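      The figures above match the standard conversion of an odds ratio to a relative risk, RR = OR / (1 − p0 + p0·OR), where p0 is the outcome prevalence among the unexposed (Zhang & Yu, JAMA 1998). A minimal sketch reproducing them (function name is mine):

      ```python
      # Convert an odds ratio to a relative risk, given the outcome prevalence
      # among the unexposed (noncarriers): RR = OR / (1 - p0 + p0 * OR).
      # OR = 1.44 and the prevalences are taken from the comment above.

      def odds_ratio_to_relative_risk(odds_ratio: float, p0: float) -> float:
          """p0 is the outcome prevalence among the unexposed group."""
          return odds_ratio / (1 - p0 + p0 * odds_ratio)

      if __name__ == "__main__":
          for p0 in (0.01, 0.20, 0.40, 0.50, 0.55):
              rr = odds_ratio_to_relative_risk(1.44, p0)
              print(f"prevalence {p0:.0%}: RR = {rr:.2f}")
      ```

      As p0 grows, the denominator grows past 1, so the relative risk shrinks toward 1 even though the odds ratio stays fixed at 1.44.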


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 21, David Keller commented:

      Goulden's data actually confirms that minimum mortality occurs with light-to-moderate alcohol intake

      Goulden's study concludes: "Moderate alcohol consumption is not associated with reduced all-cause mortality in older adults", a finding he admits is contrary to that of many prior studies. He bases his analysis on a new category he designates as "occasional drinkers", but he gives two differently worded definitions of an "occasional drinker" in different parts of his paper. Goulden states he intends "occasional drinkers" to consume less alcohol than "light drinkers", a standard category of drinkers who consume 1 to 6 standard drinks per week, each containing about 14 grams of ethanol. Unfortunately, both of his definitions of "occasional drinker" can include heavy binge alcohol abusers, clearly not what he intends. By ignoring this new and superfluous group of drinkers, we see that his remaining data confirms that the minimum risk for mortality is associated with light to moderate alcohol intake (the familiar J-shaped curve).

      In his reply to my letter, Goulden wrote: "Keller raises the possibility that the 'occasional drinkers' group is, in fact, a group of light drinkers who have under-reported their level of consumption."

      Or worse, as we shall see. In addition, both of the definitions he gives for "occasional drinker" do not make physiological sense, are superfluous, and confusing. This new category does not contribute to understanding the data, and it increases the possibility of erroneous classification of drinkers.

      In the abstract, Goulden defines an "occasional drinker" as one who reports drinking "at least once, but never more than less than once a week [sic]". In the body of the paper, he defines an "occasional drinker" as one who reports drinking on at least 1 occasion, but always less than once per week. By failing to specify the amount of alcohol consumed on each occasion, Goulden's definitions classify both of the following as "occasional drinkers": a subject who drinks a glass of champagne once a year on New Year's eve; and another who drinks an entire bottle of whiskey in one sitting every 8 days. These very different kinds of drinkers are both included in Goulden's definitions of "occasional drinker", by which I think he means those who have tried alcohol at least once, but drink less than one drink per week. This definition rules out the heavy binge drinker, and it is easier to understand, which might reduce errors when classifying drinkers.

      Now, for my main point: Look at these hazard ratios (from Table 2) for all-cause mortality with their confidence limits removed for improved visibility, and the "occasional drinker" column removed because of reasons cited above. We are left with 5 columns of data, in 3 rows, which all exhibit a minimum hazard ratio for mortality at <7 drinks per week, which increases when you shift even 1 column to the left or right:

      Drinks/week..................zero.....<7.....7-13....14-20...>20

      Fully adjusted...............1.19....1.02....1.14....1.13....1.45

      Fully adjusted, men..........1.21....1.04....1.16....1.17....1.53

      Fully adjusted, women........1.16....1.00....1.13....1.11....1.59

      Note that the data in every row approximates a J-shaped curve with the minimum hazard ratio in the column labeled <7 [drinks per week], which is light drinking. The next-lowest point is in the 7-13 drinks per week column, or about 2 drinks per day, which is moderate drinking. Although in some instances the confidence intervals overlap, we still have a set of trends which are consistent with past studies, demonstrating the typical J-shaped association between daily ethanol dose and mortality hazard ratios. Such trends would likely become statistically significant if the study power were increased enough. The bottom line is that the data tend to support, rather than contradict, the often-observed phenomenon that all-cause mortality is minimized in persons who consume mild-to-moderate amounts of alcohol.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 21, David Keller commented:

      Goulden's results confirm the J-shaped relationship of all-cause mortality with alcohol intake

      In my recent letter about Goulden's study, I pointed out that his data actually confirm the benefit of mild alcohol ingestion for reducing all-cause mortality. I supply the text of my letter below, exactly as it was published, for convenient reference. Goulden should have titled his paper, "Yet more data confirming that a "J-shaped" relationship exists between the amount of alcohol consumed daily and the risk of all-cause mortality." My detailed rebuttal of Goulden's reply is at the following URL:

      http://www.ncbi.nlm.nih.gov/pubmed/27453387#cm27453387_26107

      Here is the text of my letter:

      "Goulden's conclusion that moderate alcohol consumption is not associated with reduced all-cause mortality in older adults conflicts with the findings of other studies, which he attributes mainly to residual confounding and bias. However, Goulden's own Table 2 indicates that regular drinkers who consume less than 7 drinks per week (whom I shall call “light drinkers”) actually do exhibit the lowest average mortality hazard ratio (HR), compared with nondrinkers or heavy drinkers (>21 drinks per week), even when fully adjusted by Goulden, for all 11 categories of subjects, based on age, sex, health, socioeconomic, and functional status.

      "Likewise, for those who consume 7 to 14 drinks per week (“moderate drinkers”), Table 2 reveals that their average mortality HR is less than that of nondrinkers or heavy drinkers, with only 1 outlying category (of 11 categories). This outlier data point is for subjects aged less than 60 years, which may be explained by the fact that the ratio of noncardiovascular mortality (particularly automobile accidents) to cardiovascular mortality is highest in this youngest age category. Thus, the trends exhibited by Goulden's average data are consistent with the previously reported J-shaped beneficial relationship between light-to-moderate ethanol ingestion and mortality, with the single exception explained above.

      "Goulden defines a new category, “occasional drinkers” as those who “report drinking at least once, but never more than ‘less than once per week,’” and assigns them the mortality HR of 1.00. Because occasional drinkers consume alcohol in amounts greater than nondrinkers, but less than light drinkers, their mortality should be between that of nondrinkers and that of light drinkers.

      "However, for 5 of the 11 categories of subjects analyzed, the mortality HR for occasional drinkers is less than or equal to that of light drinkers. This may be due to subjects who miscategorize their light alcohol intake as occasional. The effects of this error are magnified because of the small amounts of alcohol involved, and thereby obscure the J-shaped curve relating alcohol intake and benefit.

      "The only way to determine with certainty what effect ethanol ingestion has on cardiovascular and total mortality is to conduct a randomized, controlled trial, which is long overdue."

      Reference

      1: Goulden, R. Moderate alcohol consumption is not associated with reduced all-cause mortality. Am J Med. 2016; 129: 180–186.e4


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 11, Clive Bates commented:

      I should add that the authors are writing about the FDA ("FDA" appears 70 times in the paper), yet this work is funded in part by the FDA Center for Tobacco Products. It's no surprise then that it takes a wholly uncritical approach to FDA's system for consumer risk information. Somehow they managed to state:

      The authors of this manuscript have no conflicts of interest to declare.

      While the funding is made clear, the failure to acknowledge the COI is telling - perhaps a 'white hat bias'?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 11, Clive Bates commented:

      It may not be what the authors intended, but the paper offers a troubling insight into the ingrained pedantry of a regulatory bureaucracy and how this can obscure the truth and cause harm. The authors have approached their work without considering the wider implications for health of FDA's system for assessing risk communication.

      The core weakness in the paper is the assumption that nothing can be true unless the FDA says it is, and that the FDA has a way of efficiently establishing what is true. No evidence supports either contention.

      Some observations.

      1. The paper was published before FDA had deemed e-cigs to be within its jurisdiction, so the retailers involved were free to make true and non-misleading claims - if they did that, they have not broken any laws.

      2. Some of the vendors' claims documented in the paper are reasonable and true and some would benefit from more nuanced language - all are broadly communicating substantially lower risk. This is, beyond any reasonable doubt, factually correct. The makers of these products are trying to persuade people to take on much lower risk products than cigarettes, but the authors appear to believe this should be prevented. This is indistinguishable from a regulatory protection for the cigarette trade, with all that implies.

      3. If the public bodies like CDC and FDA in the U.S. had been candid about these products from the outset instead of creating fear and confusion, vendors would not need to make claims or could quote them as reliable authorities. However, they have not done this in the way that their English equivalents have: see, for example, the Royal College of Physicians [1] and Public Health England [2]. These bodies have assessed the evidence and made estimates aiming to help consumers gain a realistic appreciation of relative risk of smoking and vaping. They estimate that e-cigarette use, while not guaranteed entirely safe, is likely to be at least 95% lower risk than smoking.

      4. This contrasts with the FDA route to providing consumers with appropriate risk information - the Modified Risk Tobacco Product (MRTP) application. This approach already appears dysfunctional. It is now two years since Swedish Match filed a 130,000-page application to make a claim for snus (a form of smokeless tobacco) that is so obviously true it does not even justify the wear and tear on a rubber stamp: WARNING: No tobacco product is safe, but this product presents substantially lower risks to health than cigarettes. If a snus vendor cannot say that, then no claim is possible under this system.

      5. In contrast with its reluctance to allow manufacturers to state the obvious, FDA does not subject its own claims or risk communications to the public health test that it requires of manufacturers or vendors. FDA intends to require the packaging of e-cigarettes to carry the following: WARNING: This product contains nicotine. Nicotine is an addictive chemical. But how does it know this will not deter smokers from switching and therefore continuing to smoke? How does it know that it is not misleading consumers by the absence of realistic information on relative risk?

      6. FDA (and the authors) take no responsibility for, and show no interest in, the huge misalignment between consumers' risk perceptions and expert judgement on the relative risks of smoking and vaping. Only 5.3% of Americans correctly say vaping is much less harmful than smoking, while 37.5% say it is just as harmful or more harmful [3] - a view no experts anywhere would support. By allowing these misperceptions to flourish, they are in effect indifferent to the likely harms arising from maintaining that smoking and vaping are of equivalent risk unless FDA says otherwise.

      7. It is a perverse system that requires tobacco or e-cigarette companies to go through a heavily burdensome and expensive MRTP process before the consumer can be provided with truthful information about risks. Why should the commercial judgements of nicotine or tobacco companies on the value of going through this process be what determines what the consumer is told? For most companies, the cost and burden of the process will simply be too great to guarantee a return through additional sales, so no applications will be made and consumers will be left in the dark.

      8. The FDA's restriction on communicating true and non-misleading information to consumers is part of the Nicopure Labs v FDA case - the challenge is made under the Constitutional First Amendment protection of free speech. The authors should not assume that the FDA is acting lawfully, and FDA (and the authors) should bear the burden of proof to show a vendor's claim is false or misleading.

      To conclude, the authors should return to the basic purpose of regulation, which is to protect health. They should then look carefully at how the legislation and its institutional implementation serve or defeat that purpose. If they did that, they would worry more about the barrier the FDA creates to consumer understanding and informed choice and less about the e-cigarette vendors' efforts, albeit imperfect, to inform consumers about the fundamental and evidence-based advantages of their products.

      [1] Royal College of Physicians, Nicotine with smoke: tobacco harm reduction. April 2016 [link]

      [2] Public Health England, E-cigarettes: an evidence update. August 2015 [Link]

      [3] Risk perception data from the National Cancer Institute HINTS survey, 2015.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 09, Christopher Tench commented:

      The version of GingerALE used (2.0) had a bug that resulted in false positive results. This bug was fixed at version 2.3.3


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 22, Holger Schunemann commented:

      Error in author listing; the correct citation for this article is http://www.bmj.com/content/354/bmj.i3507: BMJ. 2016 Jul 20;354:i3507. doi: 10.1136/bmj.i3507. When and how to update systematic reviews: consensus and checklist. Garner P, Hopewell S, Chandler J, MacLehose H, Akl EA, Beyene J, Chang S, Churchill R, Dearness K, Guyatt G, Lefebvre C, Liles B, Marshall R, Martínez García L, Mavergames C, Nasser M, Qaseem A, Sampson M, Soares-Weiser K, Takwoingi Y, Thabane L, Trivella M, Tugwell P, Welsh E, Wilson EC, Schünemann HJ


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 22, Stephen Tucker commented:

      This version corrects labels missing from Fig1 and Fig 5 of the original article


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 24, Maruti Singh commented:

      Thanks for your interest. Actually, caustic soda is a common household powder meant for washing clothes, especially whites. It is very cheap and usually easily available in villages of India. It is not used for birth control. The reason it was used here was to control the PPH following delivery - which may have been due to a tear in the vagina or cervix. Packs soaked in caustic soda are often used by untrained birth attendants to control PPH and to cauterise any vaginal or cervical tear. However, the patient was not very clear on what happened except that she was bleeding post delivery.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 24, Judy Slome Cohain commented:

      Would the authors please comment on the reasons for packing caustic soda in the patient's vagina? Was it meant as future birth control?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 06, University of Kansas School of Nursing Journal Club commented:

      Flinn, Sidney., Moore, Nicholas., Spiegel, Jennifer., Schemmel, Alisa, Caedo, Cassandra., Fox, Leana., Hill, RaeAnn., & Hinman, Jill. [Team 1: KUSON Class of 2017]

      Introduction

      We chose this article because this study focuses on how a clinical learning environment can impact a nursing student’s educational experience. In our Development of a Microsystem Leader course, we have been discussing clinical microsystems, the elements present in microsystem environments, and their impact on nurses' satisfaction. As students, our “work environment” can be considered the environment in which we learn and are exposed to clinical practice. Within the context of nursing school, we have several learning microsystem environments, including our traditional classroom as well as our clinical rotations. These separate microsystems each provide us with unique learning experiences and opportunities, where we interact with each other throughout our time as nursing students. In our classroom microsystem, we learn concepts that are applicable to clinical practice, and are encouraged to use what we have learned and acquire competency through our clinical experience. Both environments play an important role in our nursing preparation and affect our ability to provide effective care as future nurses. An unsatisfactory clinical experience has the ability to negatively impact our learning and could ultimately determine the outcome of our nursing preparation and influence our practice in the microsystem care setting.

      Methods

      This article was found using the PubMed database. The purpose of this study was to examine whether nursing students were satisfied with their clinical settings as learning environments. The study used a quantitative descriptive correlational method with a sample of 463 undergraduate nursing students from three universities in Cyprus. Data were collected from the three universities’ nursing programs using the Clinical Learning Environment, Supervision and Nurse Teacher (CLES+T) questionnaire. The CLES+T was used to measure students’ satisfaction with their clinical learning environment. It consists of 34 items classified into 5 dimensions: pedagogical atmosphere on the ward; supervisory relationship; leadership style of the ward manager; premises of nursing on the ward; and role of the NT in clinical practice. Out of the total 664 students from the three universities, 463 (70.3%) completed the self-report questionnaire. Along with the questionnaire, each student was asked to complete a demographic data sheet that included information such as age, gender, education level, and the hospital and unit to which they were assigned for clinical rotation. The data were collected in the last laboratory lesson of the 2012-2013 school year. Quantitative data were derived from the questionnaires through the use of descriptive statistics (Papastavrou, Dimitriadou, Tsangari & Andreou, 2016).

      Findings

      Results showed that, overall, nursing students rated their clinical learning environment as “very good” and were highly satisfied (Papastavrou et al., 2016, p. 9). This was well correlated with all five dimensions of the CLES+T questionnaire, including overall satisfaction. The biggest difference in scores was found among the students who met with their educators or managers frequently, which is considered successful supervision. This is considered the “most influential factor in the students’ satisfaction with the learning environment” (p. 2). Students who attended private institutions were less satisfied, as were those placed in a pediatric unit or ward. Important factors associated with high satisfaction included coming from a state university, having a mentor, and having high motivation. Limitations of the study included the limited amount of time some students had spent in their clinical environment at the time of the study, and the failure to use a “mix methodology” to compare the findings of this study with other similar studies (Papastavrou et al., 2016).

      Implications to Nursing Education

      This study is important to nursing because our educational and clinical preparation is the starting point of how we are shaped into successful nurses. The clinical learning environment is especially important because this is where we, as students, get the opportunity to take the knowledge and skills we learned in the classroom and apply them in the patient care setting. We get to actively practice assessments, apply hands-on skills, and interact with other medical professionals while becoming empowered to practice autonomously. As mentioned in the article, a well-established mentorship with the nurses on the floor and with our clinical instructors sets the groundwork for a positive experience during clinical immersion (Papastavrou et al., 2016). These positive relationships and experiences can lead to a healthy workplace that allows nursing students to feel empowered enough to practice their own skills, build trust with their instructors, and ask appropriate questions when necessary (Papastavrou et al., 2016). Many of us have had unsatisfactory clinical experiences in which we had a disengaged clinical instructor or a designated nurse mentor who clearly lacked the mentoring skills to guide students in the learning environment. These situations left us quite dissatisfied with our clinical experience and hindered our learning. It is important for nurses and clinical faculty to be aware of how important these clinical experiences and supervisory relationships are to our preparation. Without them we would not be able to fully grasp the complexities associated with nursing practice, and we would be inadequately prepared to work as nurses. In relation to clinical microsystems, this study can be considered to focus on a positive clinical microsystem learning experience in nursing school.
      These experiences become the foundation of how we are formed into future frontline leaders, and they bring out our confidence through guidance and mentorship. It is crucial that nursing schools establish a positive learning environment, both in the classroom and in the clinical setting, that will help nursing students build competence and development in the clinical setting, which they can carry after graduation and apply in practice (Papastavrou et al., 2016). Creating a positive learning environment in nursing school will help bring students’ positive attitudes to the workplace, where they can be part of an empowered microsystem. This article provided us with well-defined guidelines on how to empower nursing students and create a healthy learning and work environment.

      Papastavrou, E., Dimitriadou, M., Tsangari, H., & Andreou, C. (2016). Nursing students’ satisfaction of the clinical learning environment: a research study. BMC Nursing, 15(1), 44. DOI: 10.1186/s12912-016-0164-4


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 24, Ellen M Goudsmit commented:

      I am not persuaded that ME, as described by clinicians and researchers prior to 1988, has much to do with neurasthenia. Indeed, fatigue was not a criterion for the diagnosis of ME [1]. It presents as a more neurological disorder, e.g. muscle weakness after minimal exertion. References to CFS/ME are misleading where research used criteria for chronic fatigue or CFS, rather than ME. The assumption of equivalence has been tested and the differences are of clinical significance.

      A useful strategy to avoid post-exertion related exacerbations is pacing [2]. I missed a reference.

      1 Goudsmit, EM, Shepherd, C., Dancey, CP and Howes, S. ME: Chronic fatigue syndrome or a distinct clinical entity? Health Psychology Update, 2009, 18, 1, 26-33. http://www.bpsshop.org.uk/Health-Psychology-Update-Vol-18-No-1-2009-P797.aspx

      2 Goudsmit, EM., Jason, LA, Nijs, J and Wallman, KE. Pacing as a strategy to improve energy management in myalgic encephalomyelitis/chronic fatigue syndrome: A consensus document. Disability and Rehabilitation, 2012, 34, 13, 1140-1147. doi: 10.3109/09638288.2011.635746.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 22, Tom Kindlon commented:

      Some information on an unpublished study on pupil responses:

      Dr Bansal mentions he has observed unusual responses by the pupils to light. I thought I would highlight a study that was done in the late 1990s looking at this. Unfortunately the researcher passed away before it could be published. Perhaps there are better sources than these lay articles but I thought they might be of some use in the hope that the finding might be followed up again.


      Eye test hope for ME sufferers

      Jenny Hope

      A new eye test can 'see' changes in the brain triggered by the crippling disease ME. The advance comes from a number of research projects that could lead to better treatments for the illness once ridiculed as 'yuppie flu'.

      It gives fresh hope to an estimated 150,000 victims of chronic fatigue syndrome, which can leave those worst affected bedridden with pain, suffering short-term memory loss and unable to walk even short distances.

      Scientists at the Royal Free Hospital and the City University in London have found a way to measure changes in the eyes of ME patients which may show they lack an important brain chemical.

      A study by Dr Ian James and Professor John Barbur checked the pupils of 16 ME patients and 24 healthy individuals, using a computer to measure changes identified between the two groups.

      They found patients with chronic fatigue had larger pupils and also had a stronger reaction to light and other stimuli. The changes could be linked to a deficiency of the brain chemical serotonin, which is known to occur in ME and is also linked to depression.

      Professor John Hughes, chairman of the Chronic Fatigue Syndrome Research Foundation, said the research should make it possible to understand changes occurring in the brain of a sufferer.

      This could help those studying the effect of different drugs and possibly help doctors diagnose CFS, he added.

      At present there are no reliable tests, although a checklist of symptoms developed five years ago is being used by doctors worldwide.


      BREAKTHROUGH FOR ME by Geraint Jones

      For years, ME has been treated with suspicion by doctors. Many believe that for every genuine sufferer there is another who simply believes himself to be ill. Experts cannot agree on whether the condition is a physical illness or a psychological disorder which exists only in the victim's mind. One reason for this scepticism is that, as yet, no one has been able to provide an accurate diagnosis for ME, or myalgic encephalomyelitis, which is known to affect 150,000 people in Britain. There is no known cure and treatment is often based on antidepressant drugs like Prozac, with limited success.

      All this may be about to change. Dr Ian James, consultant and reader in clinical pharmacology at London's Royal Free Hospital School of Medicine, believes that he has found a way of diagnosing the chronic fatigue syndrome and hopes to use it to develop a treatment programme. The breakthrough came after months of research spearheaded by Dr James and Professor John Barbur of London's City University. It centres round the discovery that the eyes of ME sufferers respond to light and motion stimuli in an unusual way.

      "Several doctors treating ME patients noticed that they showed an abnormal pupil response", says Dr James. "When the pupil is subjected to changes in light, or is required to alter focus from a close object to one further away, it does so by constricting and dilating. ME patients' eyes do this as well but there is an initial period of instability when the pupil fluctuates in size".

      Using a computerised "pupilometer", which precisely measures eye responses, Dr James embarked on a detailed study of this phenomenon on ME patients, using non-sufferers as a control. A variety of shapes were flashed on to a screen and moved across it, while a computer precisely measured pupil reflex to each of the 40 tests. Results confirmed that the pupil fluctuation was peculiar to those participants who suffered from ME.

      Dr James concluded that the abnormal pupil response is a result of some kind of interference in the transfer of impulses from the brain to the eye. He believes that ME is the result of a deficiency of a neuro-transmitter called 5HT, whose job it is to pass impulses through nerves to cells. The eyes of ME sufferers treated with 5HT behave normally. "I do not yet know how the ME virus causes abnormalities in 5HT transmission but it does inhibit its function", says Dr James.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 26, Darren L Dahly commented:

      Minor comment: Reference 17 is in error. It should instead point to this abstract, which was presented at the same conference. The full analysis was later published here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 23, Sin Hang Lee commented:

      Thanks for the clarification.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 21, Steven M Callister commented:

      In our report, we correctly state that the 142 bp segments of our amplified products had 100% homology with B. miyamotoi. However, the reader is also correct that our analyses did not include the entire amplified 145 bp segment, since we did not include the complete primer sequences. As the reader stated, there is indeed one mismatch when the primer sequences are included. However, there is still >99% homology with the B. miyamotoi CP006647.2 sequence, so the oversight does not change the legitimacy of our conclusion. As an additional point, we have also since sequenced the entire glpQ from a human patient from our region positive by PCR for B. miyamotoi, and found 100% homology with the CP006647.2 glpQ sequence.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 15, Sin Hang Lee commented:

      To the Editors: Jobe and colleagues [1] used polymerase chain reaction (PCR) to amplify a 142-bp fragment of a borrelial glycerophosphodiester phosphodiesterase (glpQ) gene in the blood samples of 7 patients. The sequences of the PCR primers were 5′-GATAATATTCCTGTTATAATGC-3′ (forward) and 5′-CACTGAGATTTAGTGATTTAAGTTC-3′ (reverse), respectively. The DNA sequence of the PCR amplicon was reported to be 100% homologous with that of the glpQ gene of Borrelia miyamotoi LB-2001 (GenBank accession no. CP006647.2) in each case. However, the database entry retrieved from GenBank accession no. CP006647.2 shows a 907293-base complete genome of B. miyamotoi which contains a 145-nucleotide segment in the glpQ gene starting with sequence GACAATATTCCTGTTATAATGC and ending with sequence GAACTTAAATCACTAAATCTCAGTG (position 248633 to 248777) matching the binding sites of the PCR primers referenced above with one-base mismatch (C) at the forward primer site. Because there is at least one base mismatch and a 3-base difference between the size of the PCR amplicon and the length of the defined DNA sequence entered in the GenBank database, the amplicon reported by the authors cannot be “100% homologous with that of B. miyamotoi LB-2001”. The authors should publish the base-calling electropherogram of the 142-bp PCR amplicon for an open review. Perhaps they have uncovered a novel borrelial species in these 7 patients.

      References

      1. Jobe DA, Lovrich SD, Oldenburg DG, Kowalski TJ, Callister SM. Borrelia miyamotoi infection in patients from upper midwestern United States, 2014–2015. Emerg Infect Dis. 2016 Aug. http://dx.doi.org/10.3201/eid2208.151878

      Sin Hang Lee, MD
      Milford Molecular Diagnostics Laboratory
      Milford, CT
      Shlee01@snet.net
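      The sequence comparison at issue can be checked mechanically. Below is a minimal sketch (Python, using only the primer and genome-segment sequences quoted above) that counts mismatches at the forward-primer binding site and verifies that the reverse complement of the reverse primer matches the end of the 145-nucleotide segment:

```python
# Sequences quoted in the comment above.
FWD_PRIMER = "GATAATATTCCTGTTATAATGC"      # forward primer, 5'->3'
REV_PRIMER = "CACTGAGATTTAGTGATTTAAGTTC"   # reverse primer, 5'->3'
SEG_START  = "GACAATATTCCTGTTATAATGC"      # start of the CP006647.2 segment
SEG_END    = "GAACTTAAATCACTAAATCTCAGTG"   # end of the 145-nt segment

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def mismatches(a: str, b: str) -> int:
    """Count position-wise mismatches between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# One mismatch at the forward-primer binding site (T vs C at position 3),
# as the comment states.
print(mismatches(FWD_PRIMER, SEG_START))   # -> 1

# The reverse primer binds the opposite strand: its reverse complement
# matches the end of the genome segment exactly.
print(revcomp(REV_PRIMER) == SEG_END)      # -> True

# Both primer footprints lie within the 145-nt segment, which is 3 nt
# longer than the reported 142-bp amplicon.
print(len(FWD_PRIMER), len(REV_PRIMER))    # -> 22 25
```

      This is only a sanity check on the quoted sequences, not a substitute for the base-calling electropherogram requested above.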


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 14, Thales Batista commented:

      Previously, we had explored the N-Ratio as a potential tool to select patients for adjuvant chemoradiotherapy after a D2-gastrectomy using four consecutive statistical steps (Arq Gastroenterol. 2013 Oct-Dec;50(4):257-63.). First, we applied the c-statistic to establish the overall prognostic accuracy of the N-Ratio for predicting survival as a continuous variable. Second, we evaluated the prognostic value of the N-Ratio in predicting survival when categorized according to clinically relevant cutoffs previously published. Third, we confirmed the categorized N-Ratio as an independent predictor of survival using multivariate analyses to control for the effect of other clinical/pathologic prognostic factors. Finally, we performed stratified survival analysis comparing survival outcomes of the treatment groups among the N-Ratio categories. Thus, we confirmed the N-Ratio as a method to improve lymph node metastasis staging in gastric cancer and suggested the cutoffs provided by Marchet et al. (Eur J Surg Oncol. 2008;34:159-65.) [i.e.: 0%, 1%~9%, 10%~25%, and >25%] as the best way to categorize it after a D2-gastrectomy. In this setting, the N-Ratio appears to be a useful tool to select patients for adjuvant chemoradiotherapy, and the benefit of adding this type of adjuvant treatment to D2-gastrectomy is suggested to be limited to patients with milder degrees of lymphatic spread (i.e., NR2, 10%–25%).
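      The Marchet et al. cutoffs cited above lend themselves to a simple categorisation function. A minimal sketch follows (Python; the NR0, NR1 and NR3 labels are assumed for illustration, as the comment above names only NR2 explicitly, and the treatment of boundary values is one reasonable reading of the published ranges):

```python
def n_ratio_category(positive_nodes: int, examined_nodes: int) -> str:
    """Classify the lymph node ratio per the Marchet et al. cutoffs
    (0%, 1%-9%, 10%-25%, >25%). Labels NR0/NR1/NR3 are illustrative;
    the comment above names only NR2 (10%-25%)."""
    if examined_nodes <= 0:
        raise ValueError("examined_nodes must be positive")
    ratio = positive_nodes / examined_nodes
    if ratio == 0:
        return "NR0"
    elif ratio < 0.10:
        return "NR1"
    elif ratio <= 0.25:
        return "NR2"
    else:
        return "NR3"

# Examples: 0/20 -> NR0, 1/20 (5%) -> NR1, 4/20 (20%) -> NR2, 8/20 (40%) -> NR3
for pos in (0, 1, 4, 8):
    print(pos, n_ratio_category(pos, 20))
```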

      Recently, Fan M et al. (Br J Radiol. 2016;89(1059):20150758.) also explored the role of adjuvant chemoradiation vs chemotherapy, and found results similar to ours: patients with N1-2 stage, rather than those with N3 stage, benefit most from additional radiation after D2 dissection. However, using data from the important RCT, the ARTIST trial, Kim Y et al. present different results favoring the use of chemoradiotherapy after D2 gastrectomy in patients with N ratios >25%. This contrary finding warrants further investigation in future prospective studies, but highlights the N-Ratio as a useful tool for more tailored radiation-based therapy for gastric cancer patients. Since targeted therapies are currently focused on sophisticated molecular classifications, this approach might serve to improve patient selection for adjuvant radiotherapy based on simple and easily available clinico-pathological findings.

      In this setting, we would like to congratulate the authors on their interesting paper re-exploring the data from the ARTIST trial, and also to invite other authors to re-explore their data using a similar approach.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 24, Judy Slome Cohain commented:

      It is logical that birth centre outcomes are, and will always be, the same as hospital birth outcomes. Women who leave home are consciously leaving its safety because they are under the misconception that the dangers of birth justify doing so. Leaving home releases higher levels of fear hormones, such as norepinephrine and ATP, and of course exposes the mother, fetus and newborn to the unfamiliar and potentially hostile bacteria of a strange environment. When we are home and the doors are locked, we are more relaxed and our unconscious brains function better, which promotes a faster and easier birth. Being home, with lower levels of stress hormones released, serves to reassure the fetus, which prevents the fetal distress detected at about 20% of hospital and birth centre births. The practitioner physically holds the door key to the birth centre, and the birth centre is for her convenience, not the birthing woman's. If the woman had the door key and was encouraged to lock out whomever she wanted, as she does at home, that might influence outcomes at birth centres.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 25, Judy Slome Cohain commented:

      I agree. Holding one's newborn baby works far better than ice on the perineum and rectal area. But in midwifery, it is important to NEVER SAY NEVER, because every once in a while ice is very helpful 5 minutes after birth.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 25, Maarten Zwart commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 25, Maarten Zwart commented:

      Thanks for the comments, John!

      I agree the paired recordings you're suggesting would be interesting to do, but it's very difficult (I've not had any success) to separate excitation from inhibition in voltage clamp because of space clamp issues in these neurons. It seems interesting enough to try to get it to work, though. A more general description of the rhythm generator(s) will hopefully also come from further EM-based reconstructions.

      Most larval muscles receive input from a MN that only innervates that particular muscle ("unique innervator"), as well as a MN that innervates multiple muscles ("common innervator"). LT1-4 and LO1 are different; they only receive input from the "unique" MN, so that simplifies things. There are no known inhibitory MNs in the larva, which is an interesting quirk if this indeed holds up.

      It's not been exhaustively explored how different larval MNs compare in their intrinsic properties, but there is an interesting difference between the "unique" innervators and the "common" ones, with the latter showing a delay-to-first-spike caused by an Ia-type current (Choi, Park, and Griffith, 2004). I looked into intrinsic properties to test whether a similar delay-to-first-spike mediated the sequence. There will certainly be differences in input resistances between some MNs as they are not all the same size, but fast and slow ones have yet to be described.

      Thanks for the heads up on the PTX effect. We've seen different effects at different concentrations, with higher concentrations affecting intersegmental coordination in addition to intrasegmental coordination, and we'd put these down to simply a more effective receptor block, but that's very interesting!

      Thanks again John!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Jul 25, John Tuthill commented:

      Do transverse and longitudinal MNs receive any synchronous or correlated excitatory input? Figure S4 shows paired recordings between aCC and LO1 and they look relatively correlated. Would be interesting to look at LO1 and LT2 pairs to see whether the inputs they share drive synchronous activation at particular phases of the fictive rhythm cycle, which might be suppressed by inhibition (from iINs) at other phases. This would provide some indication of whether there is a single “CPG” that serves as a common clock/oscillator for all the MNs within a segment. It would also have some bearing on your model that intra-segmental timing is generated by selective inhibition, rather than specificity of excitation.

      Each larval muscle is controlled by multiple MNs. These different MNs receive many, but not all presynaptic inputs in common (figure 2). How does this affect the phase relationship of MNs that innervate a common muscle? A broader question might be, in an oscillating population of MNs, how well can you predict phase relationships by quantifying the proportion of overlapping presynaptic inputs to those MNs?

      Are larval MNs divided into fast/slow neurons, as in the adult? On a related note, do all larval MNs exhibit the vanilla intrinsic properties shown in Fig 1? Do fly larvae have inhibitory MNs like many adult insects? (Interested in these questions since we are working on them in the adult).

      Technical note: @ 1 micromolar, picrotoxin only blocks GABAa receptors, not GluCl (at least in the adult CNS, see Wilson and Laurent, 2005 and Liu and Wilson, 2013).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 24, Anthony Jorm commented:

      Thank you to the authors for providing the requested data. I would like to provide a further comment on the effect size for the primary outcome of their intervention, the Social Acceptance Scale. Using the pre-test and post-test means and standard deviations and the correlation between pre-test and post-test, they calculate a Cohen’s d of 0.186, which is close to Cohen’s definition of a ‘small’ effect size (d = 0.2). However, I believe this is not the appropriate method for calculating the effect size. Morris & DeShon Morris SB, 2002 have reviewed methods of calculating effect sizes from repeated measures designs. They distinguish between a ‘repeated measures effect size’ and an ‘independent groups effect size’. Koller & Stuart appear to have used the repeated measures effect size (equation 8 of Morris & DeShon). This is not wrong, but it is a different metric from that used in most meta-analyses. To allow comparison with published meta-analyses, it is necessary to use the independent groups effect size, which I calculate to give a d = 0.14 (using equation 13 of Morris & DeShon). This effect size can be compared to the results of the meta-analysis of Corrigan et al. Corrigan PW, 2012 which reported pooled results from studies of stigma reduction programs with adolescents. The mean Cohen’s d scores for ‘behavioral intentions’ (which the Social Acceptance Scale aims to measure) were 0.302 for education programs, 0.457 for in-person contact programs and 0.172 for video contact programs. I would therefore conclude that the contact-based education program reported by Koller & Stuart has a ‘less than small’ effect and that it is less than those seen in other contact-based and education programs for stigma reduction in adolescents.
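      For concreteness, the two metrics can be recomputed from the summary statistics the authors report later in this thread (pre-test mean 24.56, SD 6.71; post-test mean 23.62, SD 6.93; pre-post correlation 0.73). A minimal sketch, assuming the usual forms of the two effect sizes (the change standardized by the SD of the change scores, vs by a pooled raw-score SD), rather than reproducing Morris & DeShon's exact equations:

```python
import math

m_pre, sd_pre = 24.56, 6.71    # pre-test mean and SD reported by the authors
m_post, sd_post = 23.62, 6.93  # post-test mean and SD
r = 0.73                       # pre-post correlation

mean_diff = m_pre - m_post

# Repeated-measures effect size: difference standardized by the SD of the
# change scores (apparently what the authors computed).
sd_diff = math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
d_rm = mean_diff / sd_diff
print(round(d_rm, 3))   # ~0.187, close to the authors' 0.186

# Independent-groups effect size: difference standardized by a raw-score SD
# (pooled here), the metric used in most meta-analyses.
sd_pooled = math.sqrt((sd_pre**2 + sd_post**2) / 2)
d_ig = mean_diff / sd_pooled
print(round(d_ig, 2))   # ~0.14, as stated above
```

      The same mean difference of 0.94 points thus yields a noticeably smaller d once it is standardized on the raw-score scale, which is the point at issue.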


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On date unavailable, commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 10, Heather Stuart commented:

      We would like to thank Professor Jorm for his careful consideration of our results and his comment. As requested, we have provided the following additional data analysis.

      1. Report means, standard deviations and Cohen’s d with 95% CI for the primary outcome. This will allow comparison with the results of the meta-analyses by Corrigan et al. Corrigan PW, 2012 and Griffiths et al. Griffiths KM, 2014.

      Professor Jorm’s questions raise the important issues of what constitutes a meaningful outcome when conducting anti-stigma research and how much of an effect is noteworthy (statistical significance aside). We discussed these issues at length when designing the evaluation protocol and, based on the book Analysis of Pretest-Posttest Designs (Bonate, 2000), we took the approach that scale scores are not helpful for guiding program improvements. Aggregated scale scores do not identify which specific areas require improvement, whereas individual survey items do. We also considered what would be a meaningful difference to program partners (who participated actively in this discussion) and settled on the 80% (A grade) threshold as a meaningful heuristic describing the outcome of an educational intervention. Thus, we deliberately did not use the entire scale score to calculate a difference of means. Our primary outcome was the adjusted odds ratio. When we convert the odds ratio to an effect size (Chinn, 2000) we get an effect size of 0.52, reflecting a moderate effect. The mean pretest Social Acceptance score was 24.56 (SD 6.71, CI 24.34-24.75) and for the post-test it was 23.62 (SD 6.93, CI 23.40-23.83). Using these values and the correlation between the 2 scores (0.73), the resulting Cohen’s d is 0.186, reflecting a small and statistically significant effect size. It is important to point out that the mean differences reported here do not take into consideration the heterogeneity across programs, so they most likely underestimate the effect. This might explain why the effect size based on the OR (which was corrected for heterogeneity) was higher than the unadjusted mean standardized effect. Whether using a mean standardized effect size or the adjusted odds ratio, the results suggest that contact-based education is a promising practice for reducing stigma in high school students.

      2. Data on the percentage of ‘positive outliers’ to compare with the ‘negative outliers’.

      Because we had some regression to the mean in our data, we used the negative outliers to rule out the hypothesis that the negative changes noted could be entirely explained by this statistical artefact. We defined negative outliers as the 25th percentile minus 1.5 times the interquartile range. Outliers were 3.8% for the Stereotype Scale difference score and 2.8% for the Social Acceptance difference score, suggesting that some students actually got worse. We noted that males were more likely to be among the outliers. Our subsequent analysis of student characteristics showed that males who did not self-disclose a mental illness were less likely to achieve a passing score. This supported the idea that a small group of students may be reacting negatively to the intervention and becoming more stigmatized. While the OR alone (or the mean standardized difference) could, as Professor Jorm indicates, mask some deterioration in a subset of students, our full analysis was designed to uncover this exact phenomenon.

      Professor Jorm has asked that we show the positive outliers. If we define a positive outlier as the 75th percentile plus 1.5 times the interquartile range, then 1.9% were outliers on the Stereotype Scale difference score and 2.3% were outliers on the Social Acceptance difference score, suggesting that the intervention also resonated particularly well with a small group of students. Thus, while contact-based interventions appear to be generally effective (i.e. when using omnibus measures such as a standardized effect size or the adjusted odds ratio), our findings support the idea that effects are not uniform across all subsets of students (or, indeed, programs). Consequently, more nuanced approaches to anti-stigma interventions are needed, such as those that are sensitive to gender and personal disclosure, along with fidelity criteria to maximize program effects.

      3. Data on changes in ‘fail grades’, i.e. whether there was any increase in those with less than 50% non-stigmatizing responses.

      In response to Professor Jorm’s request for a reanalysis of students who failed, we defined a fail grade as giving a stigmatising response to at least 6 of the 11 statements (54% of the questions). At pretest, 32.8% of students ‘failed’ on the Stereotype scale, dropping to 23.7% at post-test (a decrease of 9.1%). For the Social Acceptance scale, 28.5% ‘failed’ at pretest, dropping to 24.8% at post-test (a decrease of 3.7%). Using McNemar’s test, both the Stereotype scale (X2 (1) = 148.7, p < .001) and the Social Acceptance scale (X2 (1) = 28.4, p < .001) were statistically significant, lending further support to our conclusion that the interventions were generally effective.

      Bonate, P. L. (2000). Analysis of Pretest-Posttest Designs. CRC Press.
      Chinn, S. (2000). A simple method for converting an odds ratio to effect size for use in meta-analysis. Statistics in Medicine, 19, 3127-3131.
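      The odds-ratio-to-d conversion cited above (Chinn, 2000) divides the log odds ratio by about 1.81, i.e. multiplies it by √3/π. A quick check against the values reported in this reply (OR = 2.57, effect size 0.52):

```python
import math

def or_to_d(odds_ratio: float) -> float:
    """Chinn (2000): ln(OR) * sqrt(3)/pi approximates a
    standardized mean difference (Cohen's d)."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

print(round(or_to_d(2.57), 2))  # -> 0.52, the moderate effect reported above
```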


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Jul 20, Anthony Jorm commented:

      The authors of this study conclude that “contact-based education appears to be effective in improving students’ behavioural intentions towards people who have a mental illness”. However, it is not clear that the data on the primary outcome measure (the Social Acceptance Scale) support this conclusion. The authors measured change on this primary outcome in two ways. The first is a difference score calculated by subtracting post-test scores from pre-test scores. The second is a dichotomous grade score, with 80% non-stigmatizing responses defined as an ‘A grade’. With the difference scores, the authors do not report the means, standard deviations and an effect size measure (e.g. Cohen’s d) at pre-test and post-test, as is usually done. This makes it impossible to compare the effects to those reported in meta-analyses of the effects of stigma reduction interventions. Instead, they report the percentage of participants whose scores got worse, stayed the same or got better. It is notable that a greater percentage got worse (28.3%) than got better (19.8%), indicating that the overall effect may have been negative. The authors also report on the percentage of participants who got worse by 5 or more points (the ‘negative outliers’: 2.8%), but they do not report for comparison the percentage who got better by this amount. The dichotomous A grade scores do appear to show improvement overall, with an odds ratio of 2.57. However, this measure could mask simultaneous deterioration in the primary outcome in a subset of participants. This could be assessed by also reporting the equivalent of a ‘fail grade’. I request that the authors report the following to allow a full assessment of the effects of this intervention: 1. Means, standard deviations and Cohen’s d with 95% CI for the primary outcome. This will allow comparison with the results of the meta-analyses by Corrigan et al. Corrigan PW, 2012 and Griffiths et al. Griffiths KM, 2014. 2. Data on the percentage of ‘positive outliers’ to compare with the ‘negative outliers’. 3. Data on changes in ‘fail grades’, i.e. whether there was any increase in those with less than 50% non-stigmatizing responses.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 19, Jan Tunér commented:

      The authors have used 780 nm, 20 mW, 0.04 cm2, 10 seconds, 0.2 J per point, 1.8 J per session. This is a very low energy. Energy (J) and dose (J/cm2) both have to be within the therapeutic window. By using a thin probe, a high dose can easily be reached, but the energy here is, in my opinion, much too low. The authors quote Kymplova (2003) as having had success with these parameters, but this is not correct. Kymplova's multimodal approach used the following light sources: a laser of wavelength 670 nm and power 20 mW, with continuous alternation of frequencies 10 Hz, 25 Hz, and 50 Hz; a polarized light source of 400-2,000 nm wavelength, power 20 mW and frequency 100 Hz; and a monochromatic light source of 660 nm wavelength and power 40 mW, with simultaneous application of a magnetic field at an induction of 8 mT.
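      The distinction between energy and dose can be made concrete: with a thin probe the dose (J/cm2) looks respectable even though the delivered energy (J) is tiny. A quick check of the parameters quoted above (Python; the 9-points-per-session count is inferred from 1.8 J ÷ 0.2 J, not stated explicitly):

```python
# Dose arithmetic for the parameters quoted above.
power_w = 0.020   # 20 mW
time_s = 10       # seconds per point
spot_cm2 = 0.04   # probe aperture area
points = 9        # inferred: 1.8 J per session / 0.2 J per point

energy_per_point_j = power_w * time_s           # 0.2 J per point, as stated
dose_j_per_cm2 = energy_per_point_j / spot_cm2  # 5 J/cm2 per point
session_j = energy_per_point_j * points         # 1.8 J per session

print(energy_per_point_j, dose_j_per_cm2, session_j)
```

      The dose of 5 J/cm2 per point sits within commonly cited therapeutic windows, which is exactly why dose alone can be misleading when the total energy per point is only 0.2 J.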


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 08, Melissa Rethlefsen commented:

      I thank the authors of this Cochrane review for providing their search strategies in the document Appendix. Upon trying to reproduce the Ovid MEDLINE search strategy, we came across several errors. It is unclear whether these are transcription errors or represent actual errors in the performed search strategy, though likely the former.

      For instance, in line 39, the search is "tumour bed boost.sh.kw.ti.ab" [quotes not in original]. The correct syntax would be "tumour bed boost.sh,kw,ti,ab" [no quotes]. The same is true for line 41, where the commas are replaced with periods.

      In line 42, the search is "Breast Neoplasms /rt.sh" [quotes not in original]. It is not entirely clear what the authors meant here, but likely they meant to search the MeSH heading Breast Neoplasms with the subheading radiotherapy. If that is the case, the search should have been "Breast Neoplasms/rt" [no quotes].

      In lines 43 and 44, it appears as though the authors were trying to search for the MeSH term "Radiotherapy, Conformal" with two different subheadings, which they spell out and end with a subject heading field search (i.e., Radiotherapy, Conformal/adverse events.sh). In Ovid syntax, however, the correct search syntax would be "Radiotherapy, Conformal/ae" [no quotes] without the subheading spelled out and without the extraneous .sh.

      In line 47, there is another minor error, again with .sh being extraneously added to the search term "Radiotherapy/" [quotes not in original].

      Though these errors are minor and are highly likely to be transcription errors, when attempting to replicate this search, each of these lines produces an error in Ovid. If a searcher is unaware of how to fix these problems, the search becomes unreplicable. Because the search could not have been completed as published, it is unlikely this was actually how the search was performed; however, it is a good case study to examine how even small details matter greatly for reproducibility in search strategies.
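      To illustrate the period-vs-comma field-tag error described above, here is a hypothetical helper (not part of the review's methods or of Ovid itself) that normalizes a trailing run of two-letter field tags so that only the first separator is a period, as in the corrected "tumour bed boost.sh,kw,ti,ab":

```python
import re

def fix_field_tags(line):
    """Rewrite a trailing run of two-letter field tags
    (e.g. '.sh.kw.ti.ab') so the tags are comma-separated
    after a single period ('.sh,kw,ti,ab')."""
    m = re.search(r'\.([a-z]{2}(?:[.,][a-z]{2})+)$', line)
    if not m:
        return line  # no multi-tag run found; leave unchanged
    tags = re.split(r'[.,]', m.group(1))
    return line[:m.start()] + '.' + ','.join(tags)

print(fix_field_tags("tumour bed boost.sh.kw.ti.ab"))
# tumour bed boost.sh,kw,ti,ab
```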


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 21, Jacob H. Hanna commented:

      In 2014, Theunissen et al. Cell Stem Cell 2014 Theunissen TW, 2014 reported absolute failure to detect human naïve PSC derived cell integration in chimeric mouse embryos obtained following micro-injection into mouse blastocysts, as was reported for the first time by our group (Gafni et al. Nature 2013). However, the authors failed to discuss that the imaging and cell detection methods applied by Theunissen et al. Cell Stem Cell 2014 were (and still are) not on par with those applied by Gafni et al. Nature 2013.

      Regardless, we find it important to alert the readers that Theunissen and Jaenisch have now revised (de facto, retracted) their previous negative results, and are able to detect naïve human PSC derived cells in more than 0.5-2% of the mouse embryos obtained (Theunissen et al. Cell Stem Cell 2016 - Figure 7) Theunissen TW, 2016 < http://www.cell.com/cell-stem-cell/fulltext/S1934-5909(16)30161-8 >. They now apply GFP and RFP fluorescence detection and PCR-based assays for mitochondrial DNA, which were applied by the same group to elegantly claim contribution of human neural crest cells to mouse embryos (albeit also at low efficiency; Cohen et al. PNAS 2016 Cohen MA, 2016).

      While the authors of the latter recent paper avoided conducting advanced imaging and/or histology sectioning on the embryos obtained, we also note that the 0.5-2% reported efficiency is remarkable considering that the 5i/LA (or 4i/LA) naïve human cells used lack epigenetic imprinting (due to an aberrant near-complete loss of DNMT1 protein that is not seen in mouse naïve ESCs; see http://imgur.com/M6FeaTs ) and are chromosomally abnormal. The latter features are well-known inhibitors of chimera formation even when attempting a same-species chimera assay with mouse naïve PSCs.

      Jacob (Yaqub) Hanna M.D. Ph.D.

      Department of Molecular Genetics (Mayer Bldg. Rm.005)

      Weizmann Institute of Science | 234 Herzl St, Rehovot 7610001, Israel

      Email: jacob.hanna at weizmann.ac.il

      Lab website: http://hannalabweb.weizmann.ac.il/

      Twitter: @Jacob_Hanna


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 24, Judy Slome Cohain commented:

      What are the implications of this study? Has mercury intake been associated with detrimental effects? A recent review found that the benefits of diets providing moderate amounts of fish during pregnancy outweigh the potential detrimental effects of mercury on offspring neurodevelopment.(1) Wouldn't the benefits of rice in rural China, the staple of the diet, also outweigh the detrimental effects?

      1. Fish intake during pregnancy and foetal neurodevelopment--a systematic review of the evidence. Nutrients. 2015 Mar 18;7(3):2001-14. doi: 10.3390/nu7032001. Review.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 25, Francisco Felix commented:

      The overall survival estimate is based on a follow-up that is not yet long enough (7.8-year median follow-up versus a 10-year OS estimate), so its mature value will probably be slightly lower, perhaps (and this is a blind guess) near 70-75%. Nevertheless, this is a homage to all the efforts and good will of so many people devoted to bringing about better results for the treatment of these kids. Likewise, the bleak prognosis of relapsed patients reminds us all that there is so much more to do... I believe that transnational cooperative group projects have done a formidable job so far, but it is now time to move on to the next step: global open science initiatives.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 17, Stuart RAY commented:

      And now a more recent one, with overlap in topic and authorship, is reportedly being retracted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.