    1. On 2017 Apr 01, Misha Koksharov commented:

      It would be interesting to know whether LREs have a gluconolactonase function, or whether gluconolactonases have LRE activity.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Mar 26, Misha Koksharov commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 04, Randi Pechacek commented:

      Matt Perisin, first author of this paper, wrote a "Behind the Paper" blog post on microBEnet.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 16, Robert Courtney commented:

      A study that uses questionable assumptions rather than empirical evidence leads to conclusions that stretch credibility.

      Chalder et al. [1] used the “single mediation model” for their methodology, which is explained in detail in a book by MacKinnon [2]. Explaining the methodology, MacKinnon states that a temporal separation between variables must be observed (i.e. changes in the mediating variable must occur before changes in the outcome variable) for a mediation effect to be empirically and robustly established.
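
      For readers unfamiliar with the single mediation model, the sketch below illustrates its basic structure on simulated data: the a-path (treatment to mediator) and b-path (mediator to outcome), whose product estimates the indirect effect. The variable names are illustrative assumptions, not taken from Chalder et al., and fitting these regressions alone does not establish the temporal precedence MacKinnon requires.

      ```python
      # A minimal sketch of the single-mediator model on simulated data.
      # Variable names (treatment, mediator, outcome) are illustrative only;
      # the regressions estimate the a- and b-paths but cannot, by themselves,
      # establish the temporal precedence MacKinnon requires.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 200
      treatment = rng.integers(0, 2, n).astype(float)                  # 0 = control, 1 = therapy
      mediator = 0.5 * treatment + rng.normal(size=n)                  # a-path plus noise
      outcome = 0.4 * mediator + 0.1 * treatment + rng.normal(size=n)  # b- and c'-paths

      # a-path: effect of treatment on the mediator
      a_fit = sm.OLS(mediator, sm.add_constant(treatment)).fit()
      # b-path (and direct c'-path): effect of mediator on outcome, adjusting for treatment
      X = sm.add_constant(np.column_stack([treatment, mediator]))
      b_fit = sm.OLS(outcome, X).fit()

      indirect_effect = a_fit.params[1] * b_fit.params[2]
      print(f"estimated indirect (mediated) effect a*b = {indirect_effect:.3f}")
      ```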

      Chalder et al. were working to this model and acknowledged that they failed to establish a clear temporal separation between variables, and therefore did not empirically establish a causal mediation effect: “Given the pattern of change in the mediators was similar to the pattern of change in the outcomes it is possible that the variables were affecting each other reciprocally”.

      However, despite the lack of robust empirical evidence to support a mediation effect, the investigators concluded that they had established mediation effects, e.g: “Our main finding was that fear avoidance beliefs were the strongest mediator for both CBT and GET.”

      The study’s conclusion relied upon an assumption that the investigators’ favoured hypothetical model of illness for ME/CFS has a robust empirical evidence base and is applicable to this study. The hypothesis is based upon the idea that symptoms and disability in ME/CFS are perpetuated by unhelpful or maladaptive illness beliefs, fear, and an avoidance of activity.

      However, the prestigious National Academy of Medicine (formerly known as the Institute of Medicine) recently released a comprehensive report [3] into ME/CFS that rejected such a hypothetical model of illness, and unambiguously concluded that ME/CFS does not have a psychological or cognitive-behavioural basis, but is an organic illness that requires biomedical research.

      Chalder et al. discussed the possibility that more frequent measurements might have demonstrated a temporal separation between the variables, and therefore a mediation effect. However, this raises the question of whether changes in the primary outcome variables (self-reported physical function and fatigue) may, in fact, have occurred before changes in the presumed mediator variables. Such an outcome would entirely contradict the investigators’ premature conclusions. According to MacKinnon [2] and Wiedermann et al. [4], unexpected outcomes should not be ruled out.

      Chalder et al. concluded that symptoms and physical impairment, in ME/CFS patients, are mediated by activity avoidance and other factors. (e.g. This would mean that a decrease in activity would cause an increase in symptoms.) However, from a common sense point of view, this seems like rather a convoluted conclusion, and it seems more likely that increased symptoms would be the direct cause of activity avoidance in any illness, rather than vice versa. To conclude that activity avoidance causes fatigue (rather than fatigue being a direct cause of activity avoidance), is similar to concluding that a person has flu because they’ve taken a day off work, rather than the obvious conclusion that they’ve taken a day off work because they have flu.

      In the case of fatigue, flu-like malaise and other symptoms of ME/CFS, it seems reasonable to consider the possibility that, as the symptoms fluctuate, patients may intuitively or rationally adapt their activity levels according to what is comfortable and safe; i.e. patients reduce activity levels because they are fatigued. The investigators have concluded the reverse: that patients are fatigued because they have reduced activity levels.

      Perhaps patients’ perspectives and insights would help clarify the issues but, unfortunately, patients were not consulted for this study.

      References:

      1. Chalder T, Goldsmith KA, White PD, Sharpe M, Pickles AR. Rehabilitative therapies for chronic fatigue syndrome: a secondary mediation analysis of the PACE trial. Lancet Psychiatry 2015; 2: 141–52.

      2. MacKinnon DP. Introduction to Statistical Mediation Analysis. Taylor and Francis: New York 2008.

      3. IOM (Institute of Medicine). 2015. Beyond myalgic encephalomyelitis/chronic fatigue syndrome: Redefining an illness. Washington, DC: The National Academies Press. http://iom.nationalacademies.org/Reports/2015/ME-CFS.aspx

      4. Wiedermann W, von Eye A. Direction of Effects in Mediation Analysis. Psychol Methods 2015; 20: 221-44.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 15, Tom Kindlon commented:

      (Contd.)

      References:

      1 Torjesen I. Tackling fears about exercise is important for ME treatment, analysis indicates. BMJ 2015;350:h227 http://www.bmj.com/content/350/bmj.h227

      2 Chalder T, Goldsmith KA, White PD, Sharpe M, Pickles AR. Rehabilitative therapies for chronic fatigue syndrome: a secondary mediation analysis of the PACE trial. Lancet Psychiatry 14 Jan 2015, doi:10.1016/S2215-0366(14)00069-8.

      3 Burgess M, Chalder T. Manual for Participants. Cognitive behaviour therapy for CFS/ME. http://www.pacetrial.org/docs/cbt-participant-manual.pdf (accessed: January 17, 2015)

      4 Bavinton J, Darbishire L, White PD -on behalf of the PACE trial management group. Graded Exercise Therapy for CFS/ME. Information for Participants http://www.pacetrial.org/docs/get-participant-manual.pdf (accessed: January 17, 2015)

      5 Wechsler ME, Kelley JM, Boyd IO, Dutile S, Marigowda G, Kirsch I, Israel E, Kaptchuk TJ. Active albuterol or placebo, sham acupuncture, or no intervention in asthma. N Engl J Med. 2011;365(2):119-26.

      6 Whiting P, Bagnall AM, Sowden AJ, Cornell JE, Mulrow CD, Ramírez G. Interventions for the treatment and management of chronic fatigue syndrome: a systematic review. JAMA. 2001 Sep 19;286(11):1360-8.

      7 Burgess M, Chalder T. PACE manual for therapists. Cognitive behaviour therapy for CFS/ME. http://www.pacetrial.org/docs/cbt-therapist-manual.pdf (accessed: January 17, 2015)

      8 White PD, Goldsmith KA, Johnson AL, Potts L, Walwyn R, DeCesare JC, et al, for the PACE trial management group. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 2011;377:823-36.

      9 McCrone P, Sharpe M, Chalder T, Knapp M, Johnson AL, Goldsmith KA, White PD. Adaptive pacing, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome: a cost-effectiveness analysis. PLoS One. 2012;7(8):e40808. doi: 10.1371/journal.pone.0040808

      10 Wiborg JF, Knoop H, Stulemeijer M, Prins JB, Bleijenberg G. How does cognitive behaviour therapy reduce fatigue in patients with chronic fatigue syndrome? The role of physical activity. Psychol Med. 2010 Aug;40(8):1281-7. doi: 10.1017/S0033291709992212. Epub 2010 Jan 5.

      11 Heins MJ, Knoop H, Burk WJ, Bleijenberg G. The process of cognitive behaviour therapy for chronic fatigue syndrome: which changes in perpetuating cognitions and behaviour are related to a reduction in fatigue? J Psychosom Res. 2013 Sep;75(3):235-41. doi: 10.1016/j.jpsychores.2013.06.034. Epub 2013 Jul 19.

      12 Friedberg F, Sohl S. Cognitive-behavior therapy in chronic fatigue syndrome: is improvement related to increased physical activity? J Clin Psychol. 2009 Apr;65(4):423-42. doi: 10.1002/jclp.20551.

      13 Knoop H, Prins JB, Stulemeijer M, van der Meer JW, Bleijenberg G. The effect of cognitive behaviour therapy for chronic fatigue syndrome on self-reported cognitive impairments and neuropsychological test performance. Journal of Neurology and Neurosurgery Psychiatry. 2007 Apr;78(4):434-6.

      14 Bavinton J, Darbishire L, White PD -on behalf of the PACE trial management group. Graded Exercise Therapy for CFS/ME (Therapist manual): http://www.pacetrial.org/docs/get-therapist-manual.pdf (accessed: January 17, 2015)

      15 O'Dowd H, Gladwell P, Rogers CA, Hollinghurst S, Gregory A. Cognitive behavioural therapy in chronic fatigue syndrome: a randomised controlled trial of an outpatient group programme. Health Technology Assessment, 2006, 10, 37, 1-140.

      16 Knoop H, Wiborg JF. What makes a difference in chronic fatigue syndrome? Lancet Psychiatry 13 Jan 2015 DOI: http://dx.doi.org/10.1016/S2215-0366(14)00145-X

      17 Kindlon T. Reporting of Harms Associated with Graded Exercise Therapy and Cognitive Behavioural Therapy in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. Bulletin of the IACFS/ME. 2011;19(2):59-111. http://iacfsme.org/BULLETINFALL2011/Fall2011KindlonHarmsPaperABSTRACT/ta...

      18 Lipkin DP, Scriven AJ, Crake T, Poole-Wilson PA. Six minute walking test for assessing exercise capacity in chronic heart failure. Br Med J (Clin Res Ed) 1986. 292:653–655. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1339640/pdf/bmjcred00224-001...

      19 Marin JM, Carrizo SJ, Gascon M, Sanchez A, Gallego B, Celli BR. Inspiratory Capacity, Dynamic Hyperinflation, Breathlessness, and Exercise Performance during the 6-Minute-Walk Test in Chronic Obstructive Pulmonary Disease. Am. J. Respir. Crit. Care Med. 2001;163(6):1395-1399. http://171.66.122.149/content/163/6/1395.full

      20 Goldman MD, Marrie RA, Cohen JA. Evaluation of the six-minute walk in multiple sclerosis subjects and healthy controls. Multiple Sclerosis 2008. 14(3):383-390. http://pocketknowledge.tc.columbia.edu/home.php/viewfile/download/65399/The six-minute walk test.pdf

      21 Ross RM, Murthy JN, Wollak ID, Jackson AS. The six minute walk test accurately estimates mean peak oxygen uptake. BMC Pulm Med. 2010 May 26;10:31. PMID 20504351. http://www.biomedcentral.com/1471-2466/10/31

      22 Camarri B, Eastwood PR, Cecins NM, Thompson PJ, Jenkins S. Six minute walk distance in healthy subjects aged 55–75 years. Respir Med. 2006. 100:658-65 http://www.resmedjournal.com/article/S0954-6111(05)00326-4/abstract

      23 Troosters T, Gosselink R, Decramer M. Six minute walking distance in healthy elderly subjects. Eur Respir J. 1999. 14:270-4. http://www.ersj.org.uk/content/14/2/270.full.pdf

      24 Rapport d'évaluation (2002-2004) portant sur l'exécution des conventions de rééducation entre le Comité de l'assurance soins de santé (institué auprès de l'Institut national d'assurance maladie invalidité) et les Centres de référence pour le Syndrome de fatigue chronique (SFC), Bruxelles, juillet 2006. (French language edition)

      25 Evaluatierapport (2002-2004) met betrekking tot de uitvoering van de revalidatieovereenkomsten tussen het Comité van de verzekering voor geneeskundige verzorging (ingesteld bij het Rijksinstituut voor Ziekte- en invaliditeitsverzekering) en de Referentiecentra voor het Chronisch vermoeidheidssyndroom (CVS). 2006. Available online: https://drive.google.com/file/d/0BxnVj9ZqRgk0QTVsU2NNLWJSblU/edit (accessed: January 17, 2015) (Dutch language version)

      26 Stordeur S, Thiry N, Eyssen M. Chronisch Vermoeidheidssyndroom: diagnose, behandeling en zorgorganisatie. Health Services Research (HSR). Brussel: Federaal Kenniscentrum voor de Gezondheidszorg (KCE); 2008. KCE reports 88A (D/2008/10.273/58). https://kce.fgov.be/sites/default/files/page_documents/d20081027358.pdf (accessed: January 17, 2015)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Sep 15, Tom Kindlon commented:

      Objective measures found a lack of improvement for CBT & GET in the PACE Trial: subjective improvements may simply represent response biases or placebo effects in this non-blinded trial

      [Originally posted here: http://www.bmj.com/content/350/bmj.h227/rr-10]

      This BMJ article and a flurry of articles in the lay media this week followed the publication in Lancet Psychiatry of an analysis of the mediators of change in the important PACE Trial, a chronic fatigue syndrome (CFS) trial which cost UK taxpayers £5 million[1,2]. What seems to have been lost in the coverage is that, although there were some modest improvements in the self-report measures, there was an almost complete absence of improvements in objectively measured outcomes for cognitive behavioural therapy (CBT) and graded exercise therapy (GET) compared to the control group (specialist medical care only (SMC)).

      This is a non-blinded trial, where participants were told CBT and GET had previously been found to be effective in CFS and other conditions[3,4]: one way to look at the mediation results for subjective measures when there was a lack of objective improvements is that they may merely tell us how response biases and/or placebo effects are mediated[5].

      The focus on subjective measures in some CFS studies was previously criticised in a systematic review published back in 2001 (long before the PACE Trial started)[6]. Its authors suggested instead that "a more objective measure of the effect of any intervention would be whether participants have increased their working hours, returned to work or school, or increased their physical activities."

      The model presented for cognitive behaviour therapy (CBT) in the PACE Trial manuals posits that the impairments and symptoms are reversible with the therapy[3,7]. However, the latest paper shows that fitness, as measured by a step test, didn't improve following CBT[2]. An earlier PACE Trial publication reported that the addition of CBT to SMC did not result in an improvement in 6-minute walking test scores compared to SMC alone[8].

      The PACE Trial was part-funded by the UK Department for Work and Pensions, a rare move for them, presumably because of an expectation that the therapies would improve measures of employment and levels of benefit receipt. However, again CBT brought about no improvement on objective measures, such as days of employment lost, levels of disability benefits received and levels of receipt of insurance payments[9].

      These results are in line with earlier studies of CBT. For example, an analysis of three randomized controlled trials of CBT interventions for CFS found no improvement in objectively measured activity, despite participants reporting a reduction in (self-reported) fatigue and (sometimes) functional impairments[10]. Similar results were found in an uncontrolled trial, where changes in objectively measured activity did not predict fatigue levels, and objectively measured activity on completion remained low compared to population norms[11]. Another uncontrolled study reported improvements in self-reported physical functioning and fatigue despite a numerical decrease in (objectively measured) activity[12]. In another study, the level of self-reported cognitive impairment in CFS patients decreased significantly after CBT; however, cognition had not improved when it was measured objectively using neuropsychological test performance[13].

      It is unsurprising that 15 sessions of CBT (and the associated homework exercises and management program) might alter how participants respond to self-report questionnaires. A PACE Trial manual itself says "the essence of CBT is helping the participant to change their interpretation of symptoms": this could lead to altered or biased fatigue scores, one of the two primary outcome measures[14]. Also, one of the aims of CBT (for CFS) has been said to be "increased confidence in exercise and physical activity"[15]. The possible responses for the other primary outcome measure, the SF-36 physical functioning subscale, are "yes, limited a lot", "yes, limited a little" and "no, not limited at all" to questions on a range of physical activities. Such responses could easily be artificially altered following a therapy like CBT for CFS.

      The results were not that different with the GET cohort in the PACE Trial. Again, the manuals predicted that the impairments and symptoms are reversible using the intervention[4,15]. The model said there was no reason participants should not be able to get back to full functioning. Deconditioning was posited to be an important maintaining factor. However, GET did not result in an improvement in fitness, as measured by the step test. GET did result in a small improvement on the six-minute walking test to a final distance of 379 metres, or 35 metres more than the SMC-only group[7]. However, as Knoop and Wiborg commented in an accompanying editorial in Lancet Psychiatry: "an increase in distance walked during a test situation without an increased fitness suggests that patients walk more because of a change in cognitive processes (eg, daring to do more or an increased self-efficacy with respect to activity), not because of a change in physiological capacity”[16]. The result remained very poor given that normative data would suggest a group of similar age and gender should walk an average of 644 or so metres[17]. The distance walked remained comparable to that of people with many serious conditions[18-21], and considerably worse than the distance walked by healthy elderly adults[22,23], despite the PACE trial cohort having a mean age of only 40[8]. Also, to be allowed entry into CFS research studies such as the PACE Trial, one cannot have a range of chronic illnesses, so with genuine recovery one would expect results comparable to healthy people[8].

      As with CBT, measures relating to employment showed no improvement following GET: days of work missed remained very high, and there was no reduction in levels of benefits (financial support from the state) or payments from insurance companies[9].

      These results are in line with an audit of Belgian rehabilitation centres for CFS offering CBT and GET[24-26]. Some improvements in subjective measures were found, but there was no improvement in the results of the exercise test and hours in employment actually decreased.

      Probably the main contribution of the PACE Trial has been to add to a growing body of evidence that, while CBT and GET for CFS have resulted in some changes on subjective measures, they haven't led to improvements on objective measures.

      Competing interests: I am a committee member of the Irish ME/CFS Association and perform various types of voluntary work for the Association.

      (continues)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 15, Radboudumc Psycho-Oncology Journal Club commented:

      This interesting commentary, which has important implications for the field of psycho-oncology, was discussed by our Journal Club members on 14th of October, 2015 and generated a lively discussion and the following comments and questions:

      • 1) After reading your commentary, we concluded that it would be worthwhile to incorporate lifestyle variables in psychosocial intervention programmes on tertiary prevention, to further explore the relationship between psychological outcomes and health behaviours. Our group is based within an academic medical centre and there was debate within our group about the role psycho-oncologists should take. Some members of our journal club believed that psycho-oncology as a discipline should focus more on collaboration with public health, health psychology and other disciplines working in the area of primary and secondary cancer prevention. However, other members believed health behaviour change is the area of expertise of health psychologists, not psycho-oncologists. Furthermore, in light of resource constraints, psycho-oncology professionals should focus on the core business of providing care to those affected by disease and ill-health in medical settings.
      • 2) We welcomed the proposal of The Expanded Model of Research in Psycho-Oncology and congratulate the authors for stimulating debate over the role of psycho-oncology. As a suggestion, we would have liked the new conceptual model to have an additional box with factors relevant to secondary prevention after cancer diagnosis. For example, psycho-oncologists are well placed to conduct activities that focus on the secondary prevention of psychological problems (e.g. depression, anxiety) or which might prevent a new disease recurrence following a diagnosis of cancer (e.g. via lifestyle change programmes for cancer survivors). We also believed that psychological screening within oncology settings can be conceptualised as a secondary prevention activity.
      • 3) We believe that psycho-oncology as a discipline, if it is to be involved with primary and secondary prevention at all, might focus on the three non-tumor-specific health behavior changes with the biggest impact: smoking, diet and exercise.
      • 4) When it comes to medical psychologists working in psycho-oncology, essentially it comes down to the question of whether a hospital is a “health valley” or a place in which we treat those affected by disease. Primary and secondary prevention implies the “health valley” model, which requires a much larger paradigm shift in medicine in general rather than in psycho-oncology alone. It will also require increased funding for hospitals to enable this to occur.
      • 5) We identified systemic barriers such as the organisational separation of hospitals and public health programmes in many jurisdictions. We concur with the authors that further investment is needed in the training of psycho-oncologists to ensure they learn how to apply their skills to preventive health and have more opportunities to gain work experience in preventive health care settings.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 02, Robert Groom commented:

      Thank you to the authors for this tribute to an outstanding perfusionist who invested herself in improving her field. The commentary from her friends and contemporaries exemplifies the importance of our relationships with colleagues in professional societies and how, through those relationships, we are able to make the world a better place.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 06, Gwinyai Masukume commented:

      Unacknowledged important limitation - the effective sample size of 229 was insufficient to detect a statistically significant association.

      Dean and colleagues found that there was a deficit of 229 pregnancies among women who were in the first trimester when the mass shooting at Port Arthur occurred, and that by sex there was no statistically significant differential loss. They estimated that for every 100 females lost, 107 males were lost.

      Although Dean and colleagues did not find a statistically significant difference by sex, their sample size of 229 was insufficient to detect the difference given the small effect size Austad SN, 2015.

      A rigorous study Orzack SH, 2015 demonstrated that normally, in the absence of exogenous stressors such as the one caused by the Port Arthur calamity, more female embryos/fetuses are lost during the first trimester: approximately 100 females for every 97 males.

      In conclusion, the results presented by Dean and colleagues show excess male loss during a period when more female loss is expected. Thus, the small sample size of 229 should have been mentioned as an important limitation of the paper, and a formal power calculation would have been appropriate.
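
      To make the power point concrete, here is a rough normal-approximation calculation, assuming only the figures quoted above (about 97 males lost per 100 females under normal conditions versus 107 per 100 observed, with n = 229 lost pregnancies). It is a back-of-envelope sketch, not a substitute for the formal calculation the authors should have reported.

      ```python
      # Back-of-envelope power sketch (normal approximation) for detecting a
      # shift in the sex ratio among lost pregnancies, using the figures
      # quoted in the comment above.
      from math import sqrt
      from scipy.stats import norm

      n = 229
      p0 = 97 / (97 + 100)    # expected proportion male among losses (~0.49)
      p1 = 107 / (107 + 100)  # observed proportion male among losses (~0.52)

      alpha = 0.05
      z_crit = norm.ppf(1 - alpha / 2)
      se0 = sqrt(p0 * (1 - p0) / n)
      se1 = sqrt(p1 * (1 - p1) / n)

      # Approximate power of a two-sided one-sample z-test for a proportion
      power = norm.cdf((abs(p1 - p0) - z_crit * se0) / se1)
      print(f"approximate power ≈ {power:.2f}")   # roughly 0.1, well below the usual 0.80 target
      ```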


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 17, Daniel Himmelstein commented:

      Thanks Dr. Levine for your thoughtful response. As you mention, the practices I criticize in your discovery phase are not unique to your study. Your study caught my attention because of its remarkable finding that such a small sample yielded a highly predictive and robust classifier for such a complex phenotype. Hopefully, others will benefit from our discussion here.

      Additionally, I agree that replication grants researchers the freedom to discover as they wish. Suboptimal model training should not cause "type I" replication error, if the replication dataset is independent.

      However, a replication p-value alone is insufficient to identify the probability of a model being true. This probability depends on the plausibility of the model. Since I think that the odds are low that your study design could produce a true model, I require extraordinary evidence before accepting the proposed PRS model. I think your replication provides good evidence but not extraordinary.
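
      As a toy illustration of this point, consider a simple Bayes calculation; the numbers below are purely hypothetical assumptions, not estimates for this study. Even evidence favouring a model 20 to 1 leaves it more likely false than true if its prior probability was only 1%.

      ```python
      # Toy Bayes calculation: replication evidence is weighed against prior
      # plausibility. All numbers are illustrative assumptions.
      prior_odds = 1 / 99          # assume a 1% prior probability that the model is true
      bayes_factor = 20            # assume the replication favours the model 20:1
      posterior_odds = prior_odds * bayes_factor
      posterior_prob = posterior_odds / (1 + posterior_odds)
      print(f"posterior probability ≈ {posterior_prob:.2f}")  # ≈ 0.17, still below 0.5
      ```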

      Follow-up studies on different populations will be important for establishing extraordinary evidence. I think it would be helpful to specify which allele is minor for each PRS SNP: my impression is that minor alleles are sample-specific, not population-specific.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 16, Morgan E Levine commented:

      Daniel Himmelstein, thank you for your comments. I will try to address them to the best of my ability.

      We acknowledge that the sample size is very small, which we mention in our limitations section of the paper. Because we are studying such a rare phenotype, there is not much that can be done about this. “Long-lived smokers” is a phenotype that has been the subject of a number of our papers, and that we think has strong genetic underpinnings. Despite the small sample size, we decided to go ahead and see if we could detect a signal, since there is evidence to suggest that the genetic influence may be larger for this phenotype compared to many others—something we discuss at length in our introduction section.

      To the best of our knowledge, the finding that highly connected genes contain more SNPs has not been published in a peer-reviewed journal. Therefore, we had no way of knowing or evaluating the importance of this for our study. Similarly, we used commonly used networks and do acknowledge the limitations of these networks in our discussion section. The network you link to was not available at the time this manuscript was accepted.

      We acknowledge the likelihood of over-fitting in our PRS, which is probably due to our sample size. This score did validate in two independent samples. Therefore, while it is likely not perfect, we feel that it may still capture some of the true underlying signal. We followed standard protocol for calculating our score (which we reference). In the literature there are many examples of scores that have been generated by linearly combining information from SNPs that are below a given p-value threshold in a GWAS. While not all of these replicate, many do. Our study used very similar methods, but just introduced one additional SNP selection criterion—SNPs had to also be in genes that were part of an FI network. I don't think this last criterion would introduce additional bias that would cause a type I error in the replication analysis. However, we still recognize and mention some of the limitations of our PRS. We make no claim that the score is free from error/noise or that it should be used in a clinical setting. In fact, in the paper we suggest future methods that can be used to generate better scores.

      We feel we have provided sufficient information for replication of our study. The minor alleles we used are consistent with those reported for CEU populations, which is information that is readily available. Thus, the only information we provide in Table S2 pertains to things specific to our study that can't be found elsewhere. Lastly, the binning of ages is not 'bizarre' from a biogerontology and longevity research perspective. A number of leaders in the field have hypothesized that the association between genes and lifespan is not linear (variants that influence survival to age 100+ are not the same as variants that influence survival to age 80+). Thus, using a linear model would not be appropriate in this case, and instead we chose to look at survival by age group.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Sep 15, Daniel Himmelstein commented:

      I have several concerns with the discovery phase of this study. Specifically:

      Underpowered: The sample size (90 cases) is insufficient for a complex phenotype where effect sizes are small. My own research, Himmelstein DS, 2015, does not consider GWAS with under 1000 samples because any findings are likely to be false (Sawcer S, 2008). The pathway-based prioritization attempts to alleviate the underpowered study design but suffers from the following two criticisms.

      SNP abundance confounding: Genes were selected for the network analysis if they contained any SNPs with p < 5×10⁻³. Therefore, genes containing more SNPs were more likely to get selected by chance. Since genes with more SNPs appear more frequently in curated pathway databases, the enrichment analysis is confounded. A permutation test that shuffles case-control status and recomputes SNP p-values would prevent SNP abundance confounding. However, this does not appear to be the permutation test that was performed.
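
      A sketch of the kind of label-shuffling permutation described above is given below. Here `genotypes`, `status`, and `select_genes` are hypothetical placeholders (a sample-by-SNP dosage matrix, a binary phenotype vector, and whatever gene-selection rule a study applies), and the per-SNP association test is deliberately crude; this is not the published analysis pipeline.

      ```python
      # Sketch: build a null distribution for the gene-selection step by
      # shuffling case-control labels, which preserves each gene's SNP count
      # and therefore controls for SNP-abundance confounding.
      import numpy as np
      from scipy.stats import ttest_ind

      def snp_pvalues(genotypes, status):
          """Crude per-SNP association p-values (allele-dosage t-test)."""
          cases, controls = genotypes[status == 1], genotypes[status == 0]
          return ttest_ind(cases, controls, axis=0).pvalue

      def permutation_null(genotypes, status, select_genes, n_perm=1000, seed=0):
          """Null distribution of the number of selected genes under shuffled labels.

          `select_genes` maps a vector of SNP p-values to a set of genes
          (e.g. genes containing any SNP with p < 5e-3); it is a placeholder
          for whichever selection rule is being tested.
          """
          rng = np.random.default_rng(seed)
          null_counts = []
          for _ in range(n_perm):
              shuffled = rng.permutation(status)        # break the genotype-phenotype link
              pvals = snp_pvalues(genotypes, shuffled)  # recompute per-SNP p-values
              null_counts.append(len(select_genes(pvals)))
          return np.array(null_counts)
      ```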

      Limited network relevance: The network analysis uses only curated pathway databases. These databases are heavily biased towards well-studied genes as well as being incomplete. In our recent network that includes more curated pathways than Reactome FI, only 9,511 genes are in any curated pathway. In other words, the majority of genes aren't curated to a single pathway and hence cannot contribute to this study's prioritization approach.

      Overfitting: The polygenic risk score (PRS) was ruthlessly overfit. The PRS perfectly discriminated the 90 long-lived smokers from the younger smokers. The authors don't appear to appreciate that the performance is due to overfitting and write:

      Results showed that the score completely accounted for group membership, with no overlap between the two groups.

      Not only were scores significantly higher for the long-lived group, but scores also appeared to be more homogeneous.

      Egregious overfitting is guaranteed by their PRS approach since 215 logistic regressions are fit, each with only 90 positives and without regularization or cross-validation. When a model is overfit on training data, its performance on novel data diminishes.
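
      The general phenomenon is easy to demonstrate on pure noise. The sketch below does not reproduce the study's actual PRS pipeline; it simply shows that with roughly 200 candidate predictors and 180 samples, apparent discrimination in the training data can look near-perfect while cross-validated discrimination is near chance.

      ```python
      # Generic overfitting demonstration on pure noise (not the study's
      # actual PRS pipeline): in-sample AUC looks excellent, cross-validated
      # AUC collapses toward 0.5.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(180, 215))   # 215 noise predictors, no real signal
      y = np.repeat([0, 1], 90)         # 90 "cases", 90 "controls"

      model = LogisticRegression(max_iter=5000).fit(X, y)   # sklearn default (mild L2 penalty)
      in_sample_auc = roc_auc_score(y, model.decision_function(X))
      cv_auc = cross_val_score(LogisticRegression(max_iter=5000), X, y,
                               cv=5, scoring="roc_auc").mean()
      print(f"in-sample AUC ≈ {in_sample_auc:.2f}, cross-validated AUC ≈ {cv_auc:.2f}")
      ```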

      Unreplicable: Table S2 of the supplement does not specify which allele is minor for each SNP. Therefore, the PRS computation cannot be replicated by others.

      Given these issues, I find it unlikely that the study found a reliable genotype of longevity. Rather, I suspect the successful validation resulted from confounding, p-value selection bias, or an implementation error.

      Finally, the binning of continuous outcomes, primarily age, is bizarre. The binning serves only to reduce the study's power, while providing much room for unintended p-value selection bias.

      Update 2015-09-15: I am not suggesting any misconduct, negligence, or intentional bad practices. Rather the methods are clearly described and the validation quite impressive and seemingly honest. I believe the study makes a valuable contribution by proposing a genotype of longevity, which future studies can confirm or deny.

      Update 2015-09-19: I replaced "p-hacking" with "p-value selection bias". My intended meaning is the greater investigation and preferential publication granted to more significant findings.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 18, Alejandro Diaz commented:

      Prepubertal gynecomastia is a very rare condition, and there have been an unexpected number of cases reported in the Miami area. This case report highlighted three cases. However, we have seen many other children in our practices with prepubertal gynecomastia or thelarche who were similarly exposed to the lavender-containing cologne described in our report. Unfortunately, the evaluation and follow-up for these children were not as in-depth as in the cases that we presented in the article, which included chemical analysis of the cologne product.

      In the Politano VT, 2013 article, a uterotrophic assay performed in immature female rats was used to evaluate estrogenic activity in vivo. However, these rats were only treated with lavender oil for a period of 3 days, whereas the children we reported on were exposed to the lavender-containing cologne for several years. As described by Henley DV, 2007, the estrogenic activity of lavender is weak, which may explain why there were no uterotrophic effects in these rats.

      You mentioned that the reason Henley found that lavender and tea tree oil activated the estrogen receptor was the use of polystyrene in their test system as opposed to glass. Neither Ohno K, 2003 nor Fail PA, 1998 could demonstrate an estrogenic response from polystyrene. Therefore, this explanation is not substantiated. Moreover, there was an estrogenic response to the lavender and tea tree oils, but not to the control substance, despite having used the same test system containers in both conditions.

      While it is true that there are additional components in many lavender preparations, we have not found prepubertal gynecomastia to be associated with other non-lavender-containing colognes or topical products. Thus, lavender itself is a logical and well-founded explanation for the physical findings in our patients.

      Regarding tea tree oil, I previously evaluated a patient who was exposed over a period of several years to numerous products containing tea tree oil, including shampoos, toothpaste, detergents, home cleaning products, and melaleuca essential oil for minor cuts, burns, and other skin issues. He developed severe prepubertal gynecomastia that improved upon discontinuation of the exposure to these products. As I did in the cases of lavender exposure, I conducted a complete endocrine evaluation and his hormone levels were all within normal ranges. I did not publish this case at the request of his parents, who were in the tea tree oil industry. As clinicians, if we find patients with prepubertal gynecomastia who have been exposed for prolonged periods of time to substances known to activate the estrogen receptor, it provides substantial evidence of causality.

      The possibility exists that there are contaminants that have estrogenic activity in these preparations. However, it is the responsibility of the industry that the products as sold are safe for human exposure.

      It is, however, of utmost importance to publish these case reports to offer the balance you refer to, and to deepen the scientific knowledge base and protect consumers from unnecessary harm. Clinicians must be informed and use their clinical judgment to determine what is best for their individual patients.

      Alejandro Diaz, M.D. and Marco Danon, M.D. Pediatric Endocrinologists


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 14, Tony Larkman commented:

      It is disappointing that the authors didn’t include or at least refer to the following two articles in this paper: 1. Politano VT et al (2013), Uterotrophic Assay of Percutaneous Lavender Oil in Immature Female Rats; International Journal of Toxicology; 32(2): 123-9 (http://www.ncbi.nlm.nih.gov/pubmed/23358464) 2. Carson CF et al. (2014), Lack of evidence that essential oils affect puberty; Reproductive Toxicology; 44:50-1 (http://www.ncbi.nlm.nih.gov/pubmed/24556344)

      The first paper largely exonerates lavender oil through the ’gold standard’ uterotrophic assay, while the second hypothesizes a mechanism that may be causal in incidences of both gynecomastia and premature thelarche, as well as in the ‘in vitro’ findings of Henley et al (2007), who in fact used polystyrene in their test system and not glass. Based on these, it is not impossible to conceive that other endocrine disruptors may have been inadvertently present in the material tested.

      A further, as yet untested, hypothesis for these manifestations is the high incidence of adulteration in essential oils. The incidence of adulteration in TTO is remarkably high, as reported in Wong YF et al. (2014) Enantiomeric distribution of selected terpenes for authenticity assessment of Australian Melaleuca alternifolia oil; Industrial Crops & Products; 67: 475-83 (http://ijt.sagepub.com/content/early/2013/01/24/1091581812472209), where more than 50% of 43 commercial samples tested failed to comply with the proposed chiral ratios. Of 15 commercially sourced samples in the European Union, 73% showed significant differences in chiral abundances, while TTO from both North America and Asia displayed similar results, with ≥50% of the tested samples not matching the expected values. In material of Chinese origin the incidence rises to 100%. An extraordinary range of compounds never found in pure TTO has also been detected, so it has been further hypothesized that this, along with the likelihood that the material used is heavily oxidized, may be a significant source of problems such as allergic contact dermatitis attributed to the use of TTO, as well as conditions related to endocrine disruption.

      It is disappointing for the tea tree oil industry as a whole that TTO continues to be mentioned as a potential endocrine disruptor without a more balanced view being presented when there are clear indications available in the literature that the original proposal by Henley et al in 2007 may be flawed. The fact that TTO was not present at all in any of the material tested is also disappointing as TTO can in no way be implicated yet its mention continues to promulgate the link and implicate TTO unfairly.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 07, Mark Milton commented:

      In defense of the authors, their article was first published online on September 8, 2015 and appeared in the issue published on April 1, 2016. The BIAL incident occurred in January 2016, i.e. after this article was published, and hence there couldn't have been any discussion of the BIAL incident in this article.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jun 08, Christopher Southan commented:

      Some discussion of the BIA-10-2474 fatality and AEs would have been pertinent, even though it was more recent than the cut-off date: https://cdsouthan.blogspot.se/2016/01/the-unfortunate-case-of-bia-10-2474.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 07, Sanjay Kumar commented:

      We appreciate the interest in our work. While we are unable to comment on the internalization mechanisms of the SmartFlare reagents based on our data, our independent RT-PCR measurements (Fig. 3) are consistent with the SmartFlare data and confirm that matrix stiffness and ligand density regulate miR18a levels in this system.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 18, Raphael Levy commented:

      This article uses the SmartFlare technology to detect miR18a. I would be most interested in the authors' opinion on how these probes escape endosomes to report on miR18a levels. In our experience, they do not, and the fluorescence increase is most likely correlated with degradation in endosomes. It might be quite plausible that endocytosis "is non-linearly regulated by matrix stiffness and fibronectin density in glioma cells."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 12, Christopher Sampson commented:

      It seems that personalised screening for breast cancer is a sheep in wolves' clothing (rather than the author's suggestion of the reverse).

      The purpose of risk-based screening is to redirect resources to those at the greatest risk of disease; those in the greatest need. I have argued elsewhere that it is possible to prioritise screening in this way, in order to maximise the benefits of screening within a given budget.

      Feig's commentary suggests that the terms "risk-based screening" and "personalized screening" are being misappropriated. That may well be true. However, the criticisms therein relate to very specific uses of these terms and to particular guidelines (I do not see these claims holding true more broadly). It does not then follow that 'risk-based screening' and 'personalisation' are wolves, and indeed the way they have been clothed does not appear desirable. They are sheep in wolves' clothing!

      We should not -- as I hope Feig would agree -- allow the unsavoury dressing-up of these ambitious developments to waylay further research into risk-based screening.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 18, Clive Bates commented:

      Erroneous interpretations have been placed on these results and overly confident conclusions drawn from very small numbers of imperfectly characterised teenagers. The headline recommendations were based on the behaviour of six out of 16 baseline e-cigarette users in a sample of 694 adolescents deemed not to be susceptible to smoking. Large conclusions drawn from small numbers should always be a cause for caution, as discussed in the following FiveThirtyEight article about this study:

      Ignore The Headlines: We Don’t Know If E-Cigs Lead Kids To Real Cigs by Christie Aschwanden, 11 September 2015

      One should expect the inclination to use e-cigarettes to be caused by the same things that cause an inclination to smoke - they are similar habits (the former much less risky), and it is quite likely that those who used e-cigarettes first would have become smokers first in the absence of e-cigarettes - a concept known as shared liability. A range of independent factors that create a common propensity to smoke or vape, such as parental smoking, a rebellious nature, delinquency etc., can explain the association between vaping and smoking incidence without the relationship being causal.

      The authors try to address this by characterising teenagers as non-susceptible to smoking if they answer “definitely no” when asked the following: “If one of your friends offered you a cigarette, would you try it?” and “Do you think you will smoke a cigarette sometime in the next year?”. The study concentrates on this group.

      This is not a foolproof way of characterising susceptibility to smoking, which in any case is not a binary construct but a probability distribution. Nor is susceptibility a permanent condition for any young person - for example, if a teenage girl starts seeing a new boyfriend who smokes, that will materially change her susceptibility to smoking. The fact that some were deemed unsusceptible to smoking but were already e-cigarette users is grounds for further unease - these would be more likely to be the teens where the crude characterisation failed.

      It is a near-universal feature of tobacco control research that the study presented is a wholly inadequate basis for any policy recommendation drawn in the conclusion, and this study is no exception:

      These findings support regulations to limit sales and decrease the appeal of e-cigarettes to adolescents and young adults.

      The findings do not support this recommendation, not least because the paper is concerned exclusively with the behaviour of young people deemed not susceptible to smoking and, within that group, a tiny fraction who progressed from vaping to smoking. Even for this group (6 of 16) the authors cannot be sure this isn't a result of mischaracterisation and that they would not have smoked in the absence of e-cigarettes. The approach to characterising non-susceptibility is far too crude and the numbers involved far too small to draw any policy-relevant conclusions.

      But this isn't the main limitation. Much more troubling is that the authors made this policy recommendation without considering the transitions among young people who are susceptible to smoking - i.e. those more likely to smoke, and also those much more likely to use e-cigarettes as well as or instead of smoking. This group is much more likely to benefit from using e-cigarettes as an alternative to smoking initiation, to quit smoking or cut down or as a later transition as they approach adult life.

      There are already findings (Friedman AS, 2015, Pesko MF, 2016, Pesko MF, Currie JM, 2016) that regulation of the type proposed by the authors, designed to reduce access to e-cigarettes by young people, has had unintended consequences in the form of increased smoking - something that should not be a surprise given these products are substitutes. While one may debate these findings, the current study makes no assessment of such effects and does not even cover the population that would be harmed by them. With these limitations, it cannot underpin its own over-confident and sweeping policy recommendation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 15, Aadil Inamdar commented:

      The very aim of a systematic review is to provide a critical summary of the available evidence. The authors of a systematic review may draw conclusions based on the collected studies on a particular topic. It is equally important to present to their readers an unbiased quality assessment of the included studies, which is an integral part of the systematic review process. This process is what differentiates a systematic review from other types of review, and a missing quality assessment of included studies brings it on par with a common narrative or literature review. The risk of bias and quality assessment of studies included in this review is missing. The onus is on the researchers to provide clear, simple conclusions from studies that have been meticulously checked and rechecked for their validity. Otherwise, it only helps to pollute the current evidence.

      It is important to note that:

      • Systematic reviews are one of the highest levels of information in the hierarchy of evidence LINK
      • Steps involved in a systematic review have been laid down by the Cochrane collaboration STEPS
      • A major section involves assessment of quality and risk of bias, of the included studies
      • Assessment not only helps in ascertaining validity of a particular study, but also gives a true estimate (over/under) of the intervention (or exposure) Assessment of quality and risk of bias


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 14, Arnaud Chiolero MD PhD commented:

      An enlightening review revealing the issues of universal lipid screening in children. Similar issues arise with the screening of other CVD risk factors in children (see www.publichealthreviews.net/content/36/1/9).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 10, David Keller commented:

      Did droxidopa increase dyskinesias, blood pressure, heart rate & adrenergic side-effects?

      In general, medications which improve rigidity and bradykinesia and other "off symptoms" in Parkinson disease (PD) patients also tend to worsen dyskinesias. If droxidopa was able to improve PD "off" symptoms without worsening dyskinesias, then this drug is a breakthrough. The effect of droxidopa on dyskinesias is a crucial outcome and should be reported in the abstract. Without this information, the study cannot be evaluated properly.

      Droxidopa is a metabolic precursor of norepinephrine, so it is expected to result in adrenergic effects, such as increased blood pressure and heart rate, insomnia, heart arrhythmias, constipation, etc. The presence or absence of such side-effects is of interest to clinicians and patients, and should also be reported in the abstract.

      Addendum

      On September 16, 2015, I received a very informative email from Shifu Zhao, MD, in which he stated that "adrenergic side-effects were not significantly increased comparing with placebo group or baseline data at droxidopa dose 600mg/day" including "blood pressure, insomnia, palpitation, ECG" [1].

      In addition, he assured me that add-on droxidopa therapy improved tremor and alternating motion of the hands in patients with moderate-to-severe Parkinson's disease, without worsening dyskinesias. [1]

      I thank Dr. Zhao for supplying this information to me, and I hereby relay it to other Parkinson disease patients who read scientific abstracts without access to the underlying journal articles.

      Reference:

      1: Zhao S, Personal Communication by email, Received 9/16/2015 in reply to an earlier email inquiry.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 26, Gustav van Niekerk commented:

      It would be hard to overemphasise the intense selective pressure that pathogens place on host populations (as predicted by the Red Queen hypothesis). Our genomes are littered with pseudogenized genes formerly associated with immune-related functions. These ‘genomic fossils’ represent discarded immunological innovations that became redundant as our pathogens evolved strategies to defeat them, testifying to an ancient conflict that has been raging between hosts and pathogens. Genome sequencing has repeatedly demonstrated that genes with immunological functions tend to be under selective pressure and tend to be highly polymorphic (Mother Nature is ‘diversifying’ the ‘portfolio’ of immunological strategies: a pathogen can overcome many immunological strategies, but not all of them simultaneously. This observed immunological heterogeneity in a population is how a species ‘hedges its bets’ against pathogens). Hosts and pathogens/parasites occasionally demonstrate "mirror-image phylogenies", demonstrating close coevolution as pathogens keep in step with host development. Collectively, such observations suggest that pathogens exert immense evolutionary pressure and are one of the biggest drivers of evolutionary novelty.

      Consequently, we suggest a less exciting narrative in which an AIS evolved initially in response to pathogen stress. Similar to your work Corcos D, 2015, we also noted that the AIS evolved in an aquatic environment. Sediment and marine environments typically have a higher pathogen burden (see references in < PMID: 25698354 >), suggesting a higher level of pathogen stress. However, as you point out, there is scepticism about whether the AIS represents a true immunological innovation, as invertebrates do not seem any worse off than vertebrates in suffering from infections. In this regard, we would like to point out that the AIS could have been an initial innovation that was subsequently overcome by fast-evolving pathogens. That is, the AIS could have evolved in response to pathogen stress, providing an initial benefit to vertebrates, but was subsequently overcome by pathogens. And now, regardless of whether the AIS is currently an ‘immunological innovation’, we are stuck with it (for better or worse). Finally, it is not obvious that the AIS does not represent a lasting innovation. After all, “[v]ertebrates are the dominant group of animals on the Earth, given their abundance, large body sizes and presence at the top of both aquatic and terrestrial foodwebs” Wiens JJ, 2015. How did vertebrates manage to survive (and indeed flourish) in a world initially dominated by invertebrates? An AIS may have some additional benefits that have helped vertebrates invade niches previously occupied by invertebrates. How the AIS could provide a benefit that is (a) only applicable to vertebrates and (b) does not involve an enhancement of immune potency is currently an open question.

      (PS: we unfortunately do not have institutional access to the article by Burnet FM. We will shortly be sending a letter to the editor commenting on your interesting Corcos D, 2015 article.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Oct 20, Daniel Corcos commented:

      This paper is interesting for examining the relationship between the adaptive immune system and the vascular system. However, it does not explain such complexity with no obvious advantage. For instance, immunodeficiency in zebrafish does not result in a dramatic change in viability. This can be interpreted, in the words of Hedrick, as an indication that the AIS is "not so superior"(1) or, in the words of Burnet, that it is "related to something other than defence against pathogenic microorganisms."(2) I have proposed (3) that its origin was related to intraspecific predation (cannibalism).

      1)Hedrick SM. Immune system: not so superior. Science 2009;325:1623–4.

      2)Burnet FM. “Self-recognition” in colonial marine forms and flowering plants in relation to the evolution of immunity. Nature 1971;232:230–5.

      3)Corcos D. Food-Nonfood Discrimination in Ancestral Vertebrates: Gamete Cannibalism and the Origin of the Adaptive Immune System. Scand J Immunol. 2015;82(5):409-17.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 04, Randi Pechacek commented:

      Alex Alexiev wrote a blog post on microBEnet, mentioning this paper, arguing the importance of equipment cleanliness for food quality.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 20, thomas samaras commented:

      A very interesting study. However, the conclusion that overweight may not be harmful at older ages is not consistent with what we know about the dangers of increasing BMI for various health parameters. Lamon-Fava showed many years ago that virtually all biological parameters get worse with increasing BMI in a linear trend; e.g., BP, fibrinogen, Apo B, Apo A, and HDL. In addition, the New England Centenarian Study found that almost all men who reached 100 years of age were lean. The Okinawan centenarians are also very lean. We also know that pre-western people with low chronic disease throughout their lives also tend to be quite lean in old age; e.g., the people of Kitava (near Papua New Guinea).

      Maier, Van Heemst and Westendorp found that shorter 85- and 90-year-olds were more likely to reach advanced ages. These shorter, longer-living individuals also had longer telomeres than taller people, as found in the Guzzardi et al. study. Perhaps confounding occurred due to a number of lean people who were developing health problems without current symptoms, smokers, or poorly nourished individuals. This confounding may explain the results in regard to the positive correlation of increased % of fat mass and cholesterol between the start and end of the study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 10, Anders von Heijne commented:

      In a recent case with a very large testicular epidermoid cyst we found an ADC value of 0.544. It seems that testicular epidermoid cysts have distinctly lower ADC values than their intracranial counterparts.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 08, S. Celeste Morley commented:

      Thank you very much for your interest in and comment upon our work. The safety and efficacy of PCV in preventing and reducing the incidence of invasive pneumococcal disease are unquestioned and unequivocally supported by all literature. The mechanisms by which PCV protects are multifactorial. PCV generates an immune response that protects against invasive disease, if not colonization. We did not address the incidence of IPD in this study. PCV also results in decreased carriage prevalence of the disease-causing serotypes covered by the vaccine, and thus shifts serotype prevalence without necessarily altering overall carriage prevalence. Our study simply reported overall carriage prevalence and the antibiotic susceptibility profiles of carriage isolates; we did not report serotypes. Our finding that the overall likelihood of colonization with any pneumococcal serotype is not affected by PCV is in line with other, larger studies (e.g. Zuccotti et al. Vaccine. 2014 Jan 23;32(5):527-34. Zuccotti G, 2014). Thus, our study is not in conflict with any of the studies referenced above, which clearly show a decrease in colonization with vaccine-covered serotypes.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Dec 07, Manoochehr Karami commented:

      Manoochehr Karami, PhD, Research Center for Health Sciences and Department of Epidemiology, School of Public Health, Hamadan University of Medical Sciences, Hamadan, Iran.

      In an interesting study published recently, Julie Y. Zhou et al. (1) reported the prevalence of nasopharyngeal pneumococcal colonization among children at a hospital serving the greater St. Louis area. The authors state that "pneumococcal conjugate vaccine (PCV) did not alter prevalence" of nasopharyngeal carriage. The World Health Organization has indicated that after PCV introduction, both the targeted and the non-targeted population are affected by the direct and indirect effects of PCV immunization (2). Moreover, published studies (3-7) support changes in the epidemiological profile of Streptococcus pneumoniae-related disease transmission and nasopharyngeal carriage, even among individuals who were not immunized against Streptococcus pneumoniae. Accordingly, the interpretation Zhou JY et al. base on their findings is open to question and may be affected by selection bias in the enrollment of participants. The rationale for suspecting such selection bias is the catchment area of the St. Louis hospital and potential differences between study participants and non-participants. Although the authors excluded some patients, this strategy does not guarantee the representativeness of their sample. Generally speaking, a plausible alternative explanation for Zhou et al.'s findings is selection bias. In conclusion, the limited generalizability of this study's findings should be considered by policy makers and interested readers.

      References:

      1. Zhou JY, Isaacson-Schmid M, Utterson EC, et al. Prevalence of nasopharyngeal pneumococcal colonization in children and antimicrobial susceptibility profiles of carriage isolates. International Journal of Infectious Diseases;39:50-52.
      2. World Health Organization. Measuring impact of Streptococcus pneumoniae and Haemophilus influenzae type b conjugate vaccination. WHO Press, Geneva, Switzerland, 2012.
      3. World Health Organization. Pneumococcal vaccines: WHO position paper – 2012. Wkly Epidemiol Rec. 2012;87:129-244.
      4. Lehmann D, Willis J. The changing epidemiology of invasive pneumococcal disease in aboriginal and non-aboriginal western Australians from 1997 through 2007 and emergence of nonvaccine serotypes. Clinical Infectious Diseases. 2010;50(11):1477–1486.
      5. Pilishvili T, Lexau C. Sustained reductions in invasive pneumococcal disease in the era of conjugate vaccine. The Journal of Infectious Diseases. 2010;201(1):32–41.
      6. Davis S, Deloria-Knoll M, Kassa H, O'Brien K. Impact of pneumococcal conjugate vaccines on nasopharyngeal carriage and invasive disease among unvaccinated people: Review of evidence on indirect effects. Vaccine. 2014;32:133-145.
      7. Karami M, Alikhani MY. Serotype Replacement and Nasopharyngeal Carriage Due to the Introduction of New Pneumococcal Conjugate Vaccine to National Routine Immunization. Jundishapur Journal of Microbiology 2015;8.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 10, George McNamara commented:

      Over at PubPeer (where this will be mirrored) there were questions as to why I consider the image quality of figures 1 and 2 to be so bad. See the Ma et al. 2013 paper - also published in PNAS. http://www.pnas.org/content/110/52/21048.figures-only

      It was not that hard two years ago to publish high-quality images. Nor was it that hard for Deng et al. to publish a good-quality figure 3 or supplemental files.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 09, George McNamara commented:

      The good news of this paper: open access. The bad news: figures 1 and 2 show some of the worst light microscopy image quality I have seen in a long time. I don't know if PNAS completely dropped the ball on image quality (fig 3 does look ok - also the authors do get to approve or fix the page proofs), or whether it is something to do with the authors. When I was a graduate student, I thought the difference between a cell biologist and a biochemist was that a cell biologist (or their lab) had microscopes and knew how to use them, and biochemists did not and did not care. Alternative hypothesis: PNAS figures are managed by biochemists. Considering the authors are down the hall from Eric Betzig, who won a prize for high-resolution microscopy (and has since published two Science papers on more high-res microscopy), they could have acquired and published better images. Their centromere, telomere repeat, MUC1 and MUC4 tandem repeat targets were all previously published (and cited by them) with somewhat better image quality.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 01, Badrul Arefin commented:

      Dear readers,

      We found two mistakes, a missing vertical axis title and a typo, in the figures of this published article. In figure 7A, the vertical axis title went missing when the figure was rearranged for the revised version; it should read "Eclosure (%)". In figure 8I, "N-NAME" in the chart area should be replaced with "L-NAME". Please excuse us for these mistakes. Both items are written correctly in the respective figure legends and elsewhere in the article.

      Sincerely,

      Badrul Arefin, On behalf of all authors.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 01, Friedrich Thinnes commented:

      From my point of view the dust-raising effect of antibody 6E10 fuels the idea that plasmalemma-integrated VDAC-1 works as a receptor of amyloid Aß.

      It has been shown that docking of amyloid Aß via GxxxG motif interaction to cell membrane-standing VDAC-1 opens the channel, and in this way induces neuronal cell deaths that accumulate over time.

      Whenever critical brain regions and all redundant structures are affected, Alzheimer's dementia appears.

      For details see Friedrich P. Thinnes, Biochimica et Biophysica Acta 1848 (2015) 1410–1416 and www.futhin.de


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 11, Chanin Nantasenamat commented:

      Thanks Christopher for the reminder. Actually, we also wanted to make the dataset publicly available. Thus, please find the dataset used in this study (in Excel file format) from the following link:

      http://dx.doi.org/10.6084/m9.figshare.1539580


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 05, Christopher Southan commented:

      This is a usefully detailed study, but direct availability of the 2,937 structures should have been a condition of publication. I note they were selected from BindingDB but also filtered post-download (and will change via updates). The authors should thus please surface at least the SMILES on figshare or another open access option (Sep 14th - good 2 c the file :)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 23, Miguel Lopez-Lazaro commented:

      This article provides a model to explain why virtually every cancer cell within a tumor often contains the same core set of genetic alterations, with heterogeneity confined to mutations that emerge late during tumor growth.

      In my opinion, there is a simpler explanation. Recent evidence strongly suggests that cancer arises from normal stem cells. If cancer arises from normal stem cells, all the mutations occurring in these cells before becoming malignant (cancer stem cells, CSCs) will be found in all their progeny, that is, in all the tumor cancer cells. Some tumor cells may lack some of these mutations if they lose during cell division the chromosomes or pieces of chromosomes that bear these DNA alterations. The mutations arising during the self-renewal of CSCs will be found only in the tumor populations derived from these malignant stem cells. In addition to self-renewing, CSCs generate progenitor cancer cells, which divide and produce the bulk of cancer cells within a tumor. The mutations found in few tumor cancer cells probably occur during the division of these progenitor cells. In some cases, the tumor cancer cells may arise from more than one normal stem cell. In these cases, not all the cancer cells within a tumor will share the same core set of genetic alterations (1). Normal and malignant stem cells are defined by their self-renewal capacity and differentiation potential, and have a natural ability to migrate.

      (1). Lopez-Lazaro M. Selective amino acid restriction therapy (SAART): a non-pharmacological strategy against all types of cancer cells. DOI: 10.18632/oncoscience.258


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 28, Tom Kindlon commented:

      Minimal change on the only objective outcome measure:

      On the only objective measure, the activity monitor, at 52 weeks (compared to baseline) the MRT group had increased by 5.8% and the CBT group by 6.6%.

      There was no control group. One would expect that a no-therapy CFS group would also, on average, increase their activity a little over 52 weeks, especially in the early years of the illness and/or in the first year or two after diagnosis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 03, Prashant Sharma, MD, DM commented:

      The affiliation for Drs. Varma N, Sachdeva and Sharma should read as "Department of Hematology" and not "Hepatopathology".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 08, Alexandra Alexiev commented:

      This article was written about in a blog post at: http://microbe.net/2015/09/08/antimicrobial-resistance-countermeasures-in-veal-farming/

      MicroBEnet is a microbiology of the built environment blog funded by the Sloan foundation to communicate about scientific research in the field.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 26, Lydia Maniatis commented:

      We can get a sense of the intellectual poverty of this study simply by reading the "Conclusions" section:

      "Our experiments show that luminance edges play a central role in White's illusion. The illusion seems to be predominantly caused by the luminance edge between the test patch and its background bar, while the edge contrast to neighboring bars is largely ignored. The effect of contour adaptation on White's illusion could not be replicated by spatial filtering models, which adds further evidence against the adequacy of such models as a mechanistic explanation of White's illusion in particular, and lightness perception in general. Our results highlight the importance of further investigating the question of how surface lightness is computed from edge contrast. "

      There is no content here because:

      1) There is no alternative to the idea that "luminance edges play a central role in White's illusion." Both sides of the display are the same, other than the (perceived) location of the targets as lying on white or on black. Note that there is NO REFERENCE to the authors' orientation claims.

      2) The inadequacy of a priori inadequate models that weren't even properly tested (assuming they could be) is not an argument for anything.

      3) The statement that "Our results highlight the importance of further investigating the question of how surface lightness is computed from edge contrast" is meaningless. If there are questions to be answered, this study did not address them nor make them seem any more interesting.

      The ad hocness of the authors' orientation claims is not only refuted by other displays, such as the Benary cross, but can also be refuted by versions of White's illusion in which the edges of the targets are curved so as to collectively be consistent with, e.g., an interrupted circle. Try it (a minimal sketch of the standard display, as a starting point for such variations, follows below). Such a display, for me, makes more evident the fundamentally bistable character of the lighter-looking group of targets. They can either appear to be part of an opaque surface that lies behind the white stripes, on an amodally-completed black background, or they can appear to form part of a transparency that passes over the black and white stripes. The transparency aspect of White's illusion, which has been noted before, and is noticed by naive observers, is, of course, never touched on in this paper.
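      For anyone who would like to try such variations, here is a minimal sketch of the standard White's display (rectangular patches only; bar width, patch size, and gray level are arbitrary choices, and the curved-edge version described above would require replacing the rectangular patches with arc-shaped ones):

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      # Standard White's display: a vertical square-wave grating with two
      # physically identical gray test patches, one lying on a white bar and
      # one on a black bar. All sizes and the gray level are arbitrary choices.
      bar_w, height, width, gray = 20, 120, 160, 0.5
      img = np.zeros((height, width))
      for x in range(0, width, 2 * bar_w):
          img[:, x:x + bar_w] = 1.0      # white bars; the remaining bars stay black

      img[40:80, 40:60] = gray           # test patch on a white bar (columns 40-59)
      img[40:80, 100:120] = gray         # identical patch on a black bar (columns 100-119)

      plt.imshow(img, cmap="gray", vmin=0.0, vmax=1.0)
      plt.axis("off")
      plt.show()
      ```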


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 08, Lydia Maniatis commented:

      This article should probably not be read from the beginning, but from the end – from the last section of the discussion, titled “Difficulties with matching tasks.” The major theoretical problems discussed earlier seem moot once you appreciate that problems with method render the data virtually worthless.

      The stimulus effects were not robust but, rather, highly ambiguous and liable to produce theoretically relevant effects that the authors did not bother to consider. This led to highly variable responses that they could not interpret. The authors try to put the blame for their data problems on the lightness-matching technique itself, but the fault lies with them: if a task (or a stimulus) is not fit for purpose, it is the fault of those who chose it when it fails to deliver.

      The problem is, in fact, the stimuli, which, again, were highly unstable and often did not produce the percepts that the authors predicted and which needed to arise reliably in order to allow them to validate their predictions. Four of ten observers, we are told, did not even perceive White's illusion! These observers were “following some strategy, but [our data] does not allow us to understand what exactly they are doing.”

      In general, there was “large variability across and within observers....With simple stimuli of the kind used here, the perceived lightness of different image parts can change over time....In that case, the mean across trials may not be a good indicator of subjective experience [i.e. perception].” So the authors do not really know what their subjects are perceiving.

      It seems, further, that the “simple stimuli” produced perceptual phenomena that the authors were or should have been aware are possible – either through the literature or simply on the basis of looking at their stimuli (the least a perception scientist should be doing when planning an experiment is to notice obvious effects of their stimuli) - but did not take into consideration: “Ekroll and Faul (2013) observed an additional transparent layer for similar stimuli...” This might “explain the inconsistent results...” So the “simple stimuli” could produce complex percepts - percepts that, for these authors, simply amount to noise.

      In addition: “...observers sometimes selected match lightnesses that were outside the luminance range spanned by the grating. This is surprising....” They really have no idea what their data mean.

      The authors' stunning conclusion: “This is just one further example that lightness matching is not such an easy and straightforward task as it might appear on the surface.” They don't seem to realize that no method is secure if your stimuli are ambiguous and unstable and you have not inspected them and factored the potential effects into your theory and method.

      Unfortunately, the lesson seems not to have been learned, as revealed by the final discussion section of Betz, Shapley, Wichmann and Maertens, 2015, which similarly describes devastating methodological problems as an afterthought. Why are reviewers setting such an obviously low bar for publication?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Aug 30, Lydia Maniatis commented:

      The authors of this study construct an untenable, ad hoc account of a myopic observation.

      The observation is that the contrast effect in White's illusion “is largely determined by edge contrast across the edge orthogonal to the grating, whereas the parallel edge has little or no influence.”

      This is a correct literal description of White's illusion. If we had no knowledge of any other contrast illusions, we might overgeneralize from this and conclude that, in general, orthogonal edges produce contrast effects while parallel edges do not. But we do know more. We know that contrast effects are not tied to edge orientation in this way. Consequently we would not reasonably attempt to construct a neural model of contrast based on a principle of “Orthogonal edges produce contrast, parallel edges don't,” since such an ad hoc model, shrink-wrapped to fit White's illusion, would instantly be challenged and falsified by any number of other contrast effects. However, this is precisely what Betz et al propose. (Not to mention that, being “low-level,” the authors' proposed mechanism would also have to be compatible with every other “low-level” as well as every “high-level” effect, e.g. transparency effects, since information from the early parts of the visual system is the basis for the high-level effects.)

      In addition, the authors tailor their account to a very weak version of White's illusion (as the authors show, its effect is comparable to the classic slc demo), composed of single target bars rather than columns of aligned bars. The latter produce much larger lightness differences as well as transparency effects (in the case of the lighter-seeming bars). Does the proposed model work for this classic version as well? Does it work for the classic simultaneous contrast illusion? Does it work for round targets on square backgrounds? Does it explain the Benary cross? If not, then how does such an account contribute to theoretical or practical understanding of the phenomenon (lightness contrast) under consideration? What are the chances that the visual system possesses a mechanism especially for producing a version of White's illusion?

      As far as their experimental observations, the authors take a little too much credit for themselves when they say that their experiments have shown that “not all luminance borders that enclose a surface are treated equally in White's illusion.” This is an unnecessarily narrow framing of a many-decades-known, fundamental fact of lightness perception.

      If there is one safe statement we can make about simultaneous contrast, it is that it is closely correlated with the appearance of a figure-ground relationship. Specifically, the surface that appears as figure lightens against a darker (apparent) background, and darkens against a lighter one. Mere (apparent) adjacency does not produce contrast. So when the authors claim to have shown that “the luminance step between the test patch and the grating bar on which it is [apparently!] placed is the critical condition for perceiving White's illusion” they (a) have not shown anything new and (b) seem naive to the theoretical implications and generality of their own casual description (“on which it is placed”).

      The implications are that, given the robustness of the figure-ground/simultaneous contrast link, the challenge of predicting simultaneous contrast reduces to the challenge of predicting figure-ground relations. The latter, in contrast, is not reducible to structure-blind theories involving inhibition/filtering based on luminance ratios. And it is certainly not reducible to an orthogonal/parallel edge dichotomy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 07, Bart Verkuil commented:

      An Associate Editor of The Journal of The Norwegian Medical Association made us aware of an error in the Introduction of this paper: we mention that a meta-analysis of the longitudinal relation between workplace bullying and mental health, published in The Journal of The Norwegian Medical Association (Tidsskrift for Den norske legeforening) in 2014, is only available in the Norwegian language. This is incorrect, as it is available in English: http://www.tidsskriftet.no/article/3213422


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 23, Ben Goldacre commented:

      This trial has the wrong trial registry ID associated with it on PubMed: both in the XML on PubMed, and in the originating journal article. The ID given is NCT013655104. We believe the correct ID, which we have found by hand searching, is NCT01365104.

      This comment is being posted as part of the OpenTrials.net project<sup>[1]</sup> , an open database threading together all publicly accessible documents and data on each trial, globally. In the course of creating the database, and matching documents and data sources about trials from different locations, we have identified various anomalies in datasets such as PubMed, and in published papers. Alongside documenting the prevalence of problems, we are also attempting to correct these errors and anomalies wherever possible, by feeding back to the originators. We have corrected this data in the OpenTrials.net database; we hope that this trial’s text and metadata can also be corrected at source, in PubMed and in the accompanying paper.
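      As a minimal illustration of the kind of automated format check that can surface such anomalies (this is only a sketch, not code from the OpenTrials.net pipeline): ClinicalTrials.gov identifiers consist of "NCT" followed by exactly eight digits, so the nine-digit ID in the PubMed record fails a simple pattern test.

      ```python
      import re

      # ClinicalTrials.gov IDs are "NCT" followed by exactly eight digits.
      # Illustrative check only; not code from the OpenTrials.net project.
      NCT_PATTERN = re.compile(r"NCT\d{8}")

      def looks_like_valid_nct(nct_id: str) -> bool:
          """Return True if the string is a well-formed NCT identifier."""
          return NCT_PATTERN.fullmatch(nct_id) is not None

      print(looks_like_valid_nct("NCT013655104"))  # False: nine digits (ID in the PubMed record)
      print(looks_like_valid_nct("NCT01365104"))   # True: eight digits (ID found by hand searching)
      ```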

      Many thanks,

      Jessica Fleminger, Ben Goldacre*

      [1] Goldacre, B., Gray, J., 2016. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials 17. doi:10.1186/s13063-016-1290-8 PMID: 27056367

      * Dr Ben Goldacre BA MA MSc MBBS MRCPsych<br> Senior Clinical Research Fellow<br> ben.goldacre@phc.ox.ac.uk<br> www.ebmDataLab.net<br> Centre for Evidence Based Medicine<br> Department of Primary Care Health Sciences<br> University of Oxford<br> Radcliffe Observatory Quarter<br> Woodstock Road<br> Oxford OX2 6GG


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 22, Falk Leichsenring commented:

      For a response to Hofmann, see http://ebmh.bmj.com/cgi/content/full/eb-2016-102372?ijkey=giCBgiE7JTlTg&keytype=ref&siteid=bmjjournals


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 25, Stefan Hofmann commented:

      For a more detailed critique and response, see: http://www.ncbi.nlm.nih.gov/pubmed/27009055


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Feb 27, Falk Leichsenring commented:

      We absolutely agree: strong conclusions require strong evidence

      Falk Leichsenring, Patrick Luyten, Mark J. Hilsenroth, Allan Abbass, Jacques P. Barber, John R. Keefe, Frank Leweke, Sven Rabung, Christiane Steinert

      Hofmann [1] asked us to provide evidence - here it is.

      In our first response [2] to Hofmann et al. [3], we showed that their commentary on the quality of studies of PDT is not consistent with empirical evidence from quality research performed in adversarial collaboration between PDT and CBT researchers, which found no significant differences in quality between studies of PDT and CBT [4, 5]. Furthermore, we noted that Gerber et al. did not find significant correlations between methodological quality and outcome in studies of PDT [5]. These results (i.e., evidence) are inconsistent with Hofmann et al.’s conclusions [3], regardless of whether Hofmann et al.’s ratings [3] are methodologically sound.

      In addition, we emphasized that Hofmann et al. failed to demonstrate any evidence that the quality of the 64 RCTs leads to results in favor of PDT. In a similar way Bhar and Beck suggested that the lack of difference in outcome between CBT and PDT found by Leichsenring, Rabung, and Leibing [6] was due to poor treatment integrity [7]. However, using Bhar and Beck’s own integrity ratings, their assertion was not corroborated by empirical data [8]. It is of note that these meta-analyses [6, 8] included researchers from both CBT (E.L.) and PDT (e.g. F.L.).

      In our first response, we also observed that the authors failed to provide basic data on interrater reliability, raters’ training, the rating procedures, attempts to address allegiance effects, or blinding of raters [2]. The authors also did not include researchers of both approaches among the raters, as done by Gerber et al. and Thoma et al. [4, 5]. In addition, we noted that Hofmann et al. based their conclusions of poor methodological quality on "unclear" designations of quality [2]. Most authors would have attempted to contact the original authors of a study before asserting that procedural information was unclear and making strong conclusions about study quality.

      Hofmann et al. [3] drew strong conclusions about the quality of our review using extreme terms such as "invalidating the authors´ results" and "making the findings meaningless" using nonstandard procedures of questionable quality. For strong conclusions, strong evidence is required. Yet, Hofmann et al. failed to provide it. For a commentary aiming to address study quality, it is puzzling to apply procedures of such poor quality.

      We are raising these issues again since Dr. Hofmann did not address them in his response [1]. Instead of doing so, Dr. Hofmann stated [1]: ”As their only defense, the authors argue that CBT is also poorly supported." In this way, he is simply ignoring the evidence we provided and the methodological shortcomings of his commentary we had pointed out [2]. Further, we did not question that CBT is an efficacious treatment. We just pointed out that the available evidence shows that the quality of CBT studies is no better than that of PDT studies [4, 5].

      We also did not intend to attack Dr. Hofmann on a personal level, but rather intended to provide evidence that he repeatedly applied double standards when judging studies of CBT as compared to those of PDT [2, 9, p. 49-51]. We respectfully asked that if he chooses to write about PDT (e.g., comment on a meta-analysis, conduct a meta-analysis, or conduct a study involving PDT), that he considers involving a psychodynamic researcher in the process. This invitation still stands.

      Dr. Hofmann emphasized that CBT is widely disseminated in the UK. This is true, but PDT is recommended by treatment guidelines and implemented in the National Health Service in the UK as well. This is also true in other countries. In Germany, for instance, PDT is as frequently used as CBT [10]. The Scientific Board for Psychotherapy (Wissenschaftlicher Beirat Psychotherapie; WBP) is the paramount body in Germany for assessing the scientific status of psychotherapeutic interventions. For this purpose, standardized and transparent criteria are used. Based on a careful evaluation by the WBP, both CBT and PDT were acknowledged as scientific and efficacious forms of psychotherapy (www.wbpsychotherapie.de). It is noteworthy that the WBP is composed of researchers from diverse psychotherapeutic orientations (e.g. CBT, PDT, and systemic therapy). The studies of PDT were evaluated by CBT researchers, and vice versa. The conclusions by a balanced expert institution such as the WBP are incompatible with those by Hofmann et al. [3].

      We all should be happy that a variety of psychotherapeutic treatments exist that are beneficial to patients. Future research should address the question of which patients benefit most from which treatments, and why.

      Declaring the evidence of a whole treatment approach as "meaningless" is not supported by the preponderance of evidence, and is counter-productive to this goal.

      References

      1 Hofmann SG: Show us the data! PubMed Commons, Feb 17, 2016
      2 Leichsenring F, Luyten P, Hilsenroth MJ, Abbass A, Barber JP, Keefe JR, Leweke F, Rabung S, Steinert C: Once again: Double standards in psychotherapy research - response to Hofmann et al. PubMed Commons 2016
      3 Hofmann SG, Eser N, Andreoli G: Comment from PubMed Commons. January 23rd, 12:28am UTC 2016
      4 Thoma NC, McKay D, Gerber AJ, Milrod BL, Edwards AR, Kocsis JH: A quality-based review of randomized controlled trials of cognitive-behavioral therapy for depression: An assessment and metaregression. Am J Psychiatry 2012;169
      5 Gerber AJ, Kocsis JH, Milrod BL, Roose SP, Barber JP, Thase ME, Perkins P, Leon AC: A quality-based review of randomized controlled trials of psychodynamic psychotherapy. Am J Psychiatry 2011;168:19-28.
      6 Leichsenring F, Rabung S, Leibing E: The efficacy of short-term psychodynamic psychotherapy in specific psychiatric disorders: A meta-analysis. Arch Gen Psychiatry 2004;61:1208-1216.
      7 Bhar SS, Beck AT: Treatment integrity of studies that compare short-term psychodynamic psychotherapy with cognitive-behavior therapy. Clin Psychol-Sci Pr 2009;16:370-378.
      8 Leichsenring F, Salzer S, Hilsenroth M, Leibing E, Leweke F, Rabung S: Treatment integrity: An unresolved issue in psychotherapy research. Curr Psych Rev 2011;7:313-321.
      9 Leichsenring F, Rabung S: Double standards in psychotherapy research. Psychother Psychosom 2011;80:48-51.
      10 Albani C, Blaser G, Geyer M, Schmutzer G, Brähler E: Outpatient psychotherapy in Germany from the patient perspective: Part 1: Health care situation. Psychotherapeut 2010;55:503-514.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Feb 17, Stefan Hofmann commented:

      Show us the data! The authors wrote in the abstract that psychodynamic therapy is as efficacious as treatments established in efficacy. Strong conclusions require strong evidence. Dr. Leichsenring and colleagues were unable to provide the reader with such evidence. Our earlier commentary on the article by Leichsenring and colleagues reported data suggesting that the majority of studies included in their review were of low quality. It is the responsibility of Leichsenring and colleagues to provide the reader with evidence that their conclusions are justified. Instead of providing the reader with such evidence, the authors chose to attack me on a personal level. Whether or not Dr. Leichsenring and colleagues believe that I am adequate to serve as a reviewer or editor of scientific journals or grants and collaborate with others is completely unrelated to the weaknesses of their study. As their only defense, the authors argue that CBT is also poorly supported. The authors are incorrect. The supporting evidence for CBT is overwhelmingly large. Our own review identified 269 meta-analytic studies of CBT. We observed that the quality of the studies that entered some of these meta-analyses was not uniformly high. However, some of them were of high quality (e.g., Hofmann & Smits, 2008). Because CBT has such a solid empirical basis, many countries, including the UK, disseminate CBT on a large-scale basis, e.g., http://www.iapt.nhs.uk/. It should be noted that this dissemination is not limited to CBT but also includes other empirically supported treatments. Polemics and personal attacks on my scientific integrity are not the real problem. The biggest concern in my view is that these disputes distract from the real issue. They confuse our patients and policy makers, inhibit scientific progress, and inflict harm by withholding effective treatments.

      References

      1. Hofmann, S. G., Asnaani, A., Vonk, J. J., Sawyer, A. T., & Fang, A. (2012). The efficacy of cognitive behavioral therapy: A review of meta-analyses. Cognitive Therapy and Research, 36, 427-440. doi: 10.1007/s10608-012-9476-1
      2. Hofmann, S. G. & Smits, J. A. J. (2008). Cognitive-behavioral therapy for adult anxiety disorders: A meta-analysis of randomized placebo-controlled trials. Journal of Clinical Psychiatry, 69, 621-632. doi: 10.4088/JCP.v69n0415


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Jan 29, Falk Leichsenring commented:

      Once again: Double standards in psychotherapy research - response to Hofmann et al.

      Falk Leichsenring, Patrick Luyten, Mark J. Hilsenroth, Allan Abbass, Jacques P. Barber, John R. Keefe, Frank Leweke, Sven Rabung, Christiane Steinert

      Referring to a recent review on psychodynamic therapy (PDT) [1], Hofmann et al. [2] criticize the quality of studies included in this review. The authors conclude that the "poor quality" of studies of PDT "invalidates" the results of this review making them "meaningless" [2]. The comment by Hofmann et al. [2] deserves some response.

      • The conclusions drawn by Hofmann et al. [2] are inconsistent with present research. As shown by an independent research group including proponents of both CBT and PDT working respectfully together, the quality of studies of PDT does not differ significantly from that of studies of CBT which fell into the lower range of adequate quality [3, p. 22, 4]. Most of the studies included by Leichsenring et al. [1] were also included in this comparison [3, 4]. However, Hofmann has not described the respective CBT studies as “meaningless”.

      Furthermore, the comment by Hofmann et al. [2] suffers from several shortcomings.

      • The authors are incorrect when referring to our publication as a meta-analysis. In fact it was a systematic review [1]. This is of note since possible shortcomings in individual studies would not invalidate the review as a whole.

      • The authors draw a highly generalizing conclusion without any differentiation, for example, by disorders or degree of risk.

      • For their ratings, Hofmann et al. [2] did not report basic data on the number and training of raters, on blinding, or on interrater reliability. To best minimize bias, raters from both approaches should have been included, as was done by Gerber et al. and Thoma et al. [3, 4]. Thus, the quality of the procedures applied by Hofmann et al. [2] themselves is questionable.

      • The conclusions by Hofmann et al. [2] are based mostly on “unclear” designations, not clear flaws. In fact, an “unclear” risk of bias indicates that the design feature could be either worse or better than described in the article. What is most concerning is that the authors did not make any effort to resolve the “unclear” assignments by carefully reading the papers or contacting their authors. Many assignments are obviously clear from the studies [e.g. 5, 6].

      • In addition, even if there are flaws, Hofmann has not shown that these particular flaws lead to results in favor of PDT (rather than, e.g., greater error in effect estimates overall). Several meta-analyses found no significant correlations between ratings of methodological quality and outcome [4, 7], nor between treatment integrity (assessed by prominent CBT researchers, e.g. Aaron T. Beck) and differences in outcome between CBT and PDT [8].

      • If the "poor quality" of PDT studies "invalidates" [2] the results reported by Leichsenring et al. [1] making them "meaningless", this would equally apply to meta-analyses carried out by CBT researchers who included several of these same studies. Tolin [9], for example, included 10 studies also included by Leichsenring et al. [1]. Hofmann, however, has never critically commented on these meta-analyses - which are interpreted as supporting the efficacy of CBT - thus, again applying a double standard.

      • Hofmann was repeatedly shown to apply double standards when judging studies of CBT vs. PDT [10, p. 49-51].

      (a) In a previous meta-analysis, for example, Hofmann [11, p. 180] claimed that the quality of studies included by him was "considerably better" than that of studies of PDT, e.g. in the meta-analysis by Leichsenring and Rabung [12]. Hofmann et al. [11] reported a mean Jadad score of 1.23, whereas the mean Jadad score in the meta-analysis by Leichsenring et al.[12] was 1.96 (0 = poor; 5 = excellent).

      (b) Hofmann [11] criticized the meta-analysis by Leichsenring et al. [12] for including heterogeneous studies. However, between-effect heterogeneity was in the low to medium range [10, 12]. In his own meta-analysis, Hofmann [11] did not even test for heterogeneity before combining data of randomized controlled and observational studies [10, 11] (a minimal numerical sketch of such a heterogeneity check is given below, before the references).

      • Thus, from a scientific perspective, it is questionable whether a strong proponent of CBT who has publicly demonstrated that he is an opponent of PDT [e.g. 13] is able to provide unbiased conclusions about PDT.

      Given the author’s very negative publicly expressed opinions about PDT, and the way he conducted this critique of the Leichsenring et al. review [1], it appears that biases can lead to a lack of even-handedness in regard to the evaluation of psychodynamic studies. Thus, we respectfully ask that if he chooses to write about PDT (e.g., comment on a meta-analysis, conduct a meta-analysis, or conduct a study involving PDT), he involve a psychodynamic researcher in the process (i.e., implement a version of adversarial collaboration [14]), and also that he recuse himself from being involved as an editor or reviewer in regard to research involving PDT.

      We would welcome the collaboration of CBT researchers in researching psychotherapy and synthesizing the results from trials. We have done so several times [e.g. 15, 16].

      Given the present crisis of replicability in research [17], biased and tendentious statements such as those by Hofmann risk damaging all psychotherapy research equally in the eyes of the public.
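      As a minimal numerical sketch of the kind of heterogeneity check referred to in point (b) above: Cochran's Q and I² can be computed from study-level effect sizes and their variances before pooling. The numbers below are hypothetical and are not taken from any of the meta-analyses cited in this exchange.

      ```python
      import numpy as np

      # Hypothetical study-level effect sizes and variances; NOT data from any
      # of the meta-analyses discussed in this exchange.
      effects = np.array([0.45, 0.30, 0.62, 0.15, 0.50])
      variances = np.array([0.04, 0.06, 0.05, 0.03, 0.08])

      weights = 1.0 / variances                             # inverse-variance weights
      pooled = np.sum(weights * effects) / np.sum(weights)  # fixed-effect pooled estimate
      Q = np.sum(weights * (effects - pooled) ** 2)         # Cochran's Q statistic
      df = len(effects) - 1
      I2 = max(0.0, (Q - df) / Q) * 100.0                   # I^2: percent of variation due to heterogeneity

      print(f"pooled = {pooled:.3f}, Q = {Q:.2f} (df = {df}), I^2 = {I2:.1f}%")
      ```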

      References

      1 Leichsenring F et al.: Psychodynamic therapy meets evidence-based medicine: a systematic review using updated criteria. Lancet Psychiatry 2015;2:648-660.
      2 Hofmann SG et al.: Comment from PubMed Commons. January 23rd, 12:28am UTC 2016
      3 Thoma NC et al.: A quality-based review of randomized controlled trials of cognitive-behavioral therapy for depression: an assessment and metaregression. Am J Psychiatry 2012;169
      4 Gerber AJ et al.: A quality-based review of randomized controlled trials of psychodynamic psychotherapy. Am J Psychiatry 2011;168:19-28.
      5 Barber J et al.: Short-term dynamic psychotherapy versus pharmacotherapy for major depressive disorder: a randomized, placebo-controlled trial. J Clin Psychiatry 2012;73:66-73.
      6 Crits-Christoph P et al.: Psychosocial treatments for cocaine dependence: National Institute on Drug Abuse Collaborative Cocaine Treatment Study. Arch Gen Psychiatry 1999;56:493-502.
      7 Keefe JR et al.: A meta-analytic review of psychodynamic therapies for anxiety disorders. Clin Psychol Rev 2014;34:309-323.
      8 Leichsenring F et al.: Treatment integrity: An unresolved issue in psychotherapy research. Curr Psych Rev 2011;7:313-321.
      9 Tolin DF: Is cognitive-behavioral therapy more effective than other therapies? A meta-analytic review. Clin Psychol Rev 2010;30:710-720.
      10 Leichsenring F et al.: Double standards in psychotherapy research. Psychother Psychosom 2011;80:48-51.
      11 Hofmann SG et al.: The effect of mindfulness-based therapy on anxiety and depression: a meta-analytic review. J Consult Clin Psychol 2010;78:169-183.
      12 Leichsenring F et al.: Effectiveness of long-term psychodynamic psychotherapy: a meta-analysis. JAMA 2008;300:1551-1565.
      13 Rief W et al.: [Saving psychoanalysis. At any cost?]. Nervenarzt 2009;80:593-597.
      14 Mellers B et al.: Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration. Psychol Sci 2001;12:269-275.
      15 Leichsenring F et al.: The efficacy of short-term psychodynamic psychotherapy in specific psychiatric disorders: a meta-analysis. Arch Gen Psychiatry 2004;61:1208-1216.
      16 Leichsenring F et al.: Psychodynamic therapy and CBT in social anxiety disorder: a multicenter randomized controlled trial. Am J Psychiatry 2013;170:759-767.
      17 Open Science Collaboration: Psychology. Estimating the reproducibility of psychological science. Science 2015;349:aac4716.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 05, Lydia Maniatis commented:

      It's been a month since I posted my comments on this article and on the original Blakeslee and McCourt article. A sharp increase in visits to the target articles post-comment seems to indicate that the comments are being read, but no author or reader has put in their two cents, either here or on PubPeer. I think that my argument against the "brightness-is-perceived-luminance" idea is sound and straightforward. Am I wrong?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 06, Lydia Maniatis commented:

      I fully agree with the latter part of this commentary but disagree with the author's concession that, in lightness experiments, "when illumination appeared homogeneous, lightness and brightness judgments were identical. There is nothing new here. It is well known."

      Brightness is currently being described, including by Gilchrist, as the perceptual correlate of luminance. But there is no perceptual correlate of luminance, even under (apparently) homogeneous illumination, and this can be proved as follows:

      We ask an observer to report on the lightness of a set of surfaces which don't produce the impression of shadows or transparency. Then, in a second session, we present the same set of surfaces under a different level of illumination. The lightness reports for the surfaces will stay essentially the same, even though their luminances may have changed substantially. So to say that people are making "brightness" judgments in either the first case or the second, or in any case, doesn't seem reasonable.
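      A minimal numerical illustration of this argument (the reflectance and illumination values are arbitrary): luminance is proportional to reflectance times illuminance, so changing the illumination rescales every luminance while leaving the reflectances, and hence the lightness reports, unchanged.

      ```python
      # Arbitrary illustrative values: luminance ~ reflectance x illuminance.
      surfaces = {"white paper": 0.90, "mid gray": 0.40, "black card": 0.05}  # reflectances

      for illuminance in (100.0, 400.0):  # two illumination levels, arbitrary units
          print(f"illuminance = {illuminance:g}")
          for name, reflectance in surfaces.items():
              luminance = reflectance * illuminance  # quadruples when illumination quadruples
              print(f"  {name:11s} reflectance = {reflectance:.2f} -> luminance = {luminance:6.1f}")
      ```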


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 24, Doug Berger commented:

      Excellent article from a brave young researcher willing to put her career in the line of fire of criticism from CBT and other psychotherapy adherents, not to mention employment opportunities where psychotherapy research has been a mainstay for years and is a cash cow. The only caveat I would add is that it is seriously doubtful that basic science research can translate into an ability of CBT to overcome the inability to single-blind (patient), double-blind (patient and therapist), or include a blind placebo in a psychotherapy clinical trial. Many psychiatric study drugs over the years have had considerable pre-clinical basic science data, even phase I or phase II clinical data, only to fail miserably in phase III trials when adequate numbers of subjects were studied in tightly blinded conditions with blind placebo control. Basic science research only helps decide whether it is worthwhile to test a drug in large populations clinically; it does not by itself say whether a drug will work, especially in psychiatric conditions where endpoints are subjective and random error is high.

      The whole premise of CBT, that negative or distorted cognitions are the cause of a psychiatric condition, is the only instance in all of medicine and psychiatry where a symptom of an illness is also construed to be the cause. Taking a symptom and making it into a cause is a way to spin the need for a therapy aimed at a cause that is actually the result, and in psychiatric conditions with subjective endpoints, hope and expectation effects are an easy way to garner some symptom reduction that can easily be called a "response" in a clinical trial. Reading the paper, I wonder if the journal asked the authors to propose ways to work around the blinding problem. The work-around is heuristic but not of practical value once a clinical trial gets going - and as the authors rightly note (and as I have noted in other papers below), there is really no way to do a blinded, placebo-controlled psychotherapy clinical trial.

      Other papers on this topic: https://www.ncbi.nlm.nih.gov/pubmed/26870318 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4863672/ https://www.japanpsychiatrist.com/Abstracts/CBT_Escape.html

      D. Berger, U.S. Board Certified Psychiatrist


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 23, Kelly Drew commented:

      Hello to all! Interesting paper, but don't dismiss A1AR. Have you seen these papers?

      1: Muzzi M, Coppi E, Pugliese AM, Chiarugi A. Anticonvulsant effect of AMP by direct activation of adenosine A1 receptor. Exp Neurol. 2013 Dec;250:189-93. doi: 10.1016/j.expneurol.2013.09.010. Epub 2013 Sep 19. PubMed PMID: 24056265.

      2: Muzzi M, Blasi F, Masi A, Coppi E, Traini C, Felici R, Pittelli M, Cavone L, Pugliese AM, Moroni F, Chiarugi A. Neurological basis of AMP-dependent thermoregulation and its relevance to central and peripheral hyperthermia. J Cereb Blood Flow Metab. 2013 Feb;33(2):183-90. doi: 10.1038/jcbfm.2012.157. Epub 2012 Oct 24. PubMed PMID: 23093068; PubMed Central PMCID: PMC3564191.

      3: Muzzi M, Blasi F, Chiarugi A. AMP-dependent hypothermia affords protection from ischemic brain injury. J Cereb Blood Flow Metab. 2013 Feb;33(2):171-4. doi: 10.1038/jcbfm.2012.181. Epub 2012 Dec 5. PubMed PMID: 23211965; PubMed Central PMCID: PMC3564206.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 26, Jim Woodgett commented:

      I sound like a broken record, but it is quite remarkable that yet another paper presumes an inhibitor is specific for GSK-3beta. The inhibitor used in this case (AR-A014418) is equally effective at inhibiting GSK-3alpha, as is every other small-molecule inhibitor of GSK-3, including lithium. This is a problem because the authors fail to assess the contribution of the second isoform, which is being equivalently blocked in the experiments. There are numerous papers that describe the common and distinctive functions of these protein kinases.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 08, Salzman Lab Journal Club commented:

      This paper presents a striking view of the intron lariat spliceosome at unprecedented resolution and uses pombe as a strategy for enriching relatively homogenous structural complexes. As our lab is interested in circular RNA, we were curious about how this structure may be used to model minimal circularized exon lengths. We look forward to more structures in the future that give a more detailed look at the exon-exon junction!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 03, David Juurlink commented:

      Post-publication note: Additional analysis of absolute risk of opioid-related death among patients receiving more than 200 mg morphine (or equivalent) per day reported here http://bit.ly/2fVaQYL


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 22, Arnaud Chiolero MD PhD commented:

      An enlightening commentary on the necessity of revisiting DCIS treatment. It also raises questions about screening - the major driver of DCIS identification.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 24, Andrew R Kniss commented:

      This “Perspective” piece is basically a plea from Dr. Landrigan and Dr. Benbrook for “all aspects of the safety of biotechnology” to be “thoroughly reconsider[ed]”. However, in the two page opinion, they provide no evidence that crop biotechnology is harmful. In fact, Landrigan and Benbrook acknowledge that the National Academy of Sciences (NAS) “has twice reviewed the safety of GM crops” and they do not dispute the scientific consensus expressed by NAS that “GM crops pose no unique hazards to human health.” The way I read it, the entire Perspective piece seems to be a muddled conflation of two separate (albeit related) issues; the use of GMO crops and the use of herbicides. A full critique here: http://goo.gl/IcZt2S

      Dr. Landrigan and Dr. Benbrook cite glyphosate-resistant weeds as a primary reason why “fields must now be treated with multiple herbicides,” but this point also deserves some scrutiny. In corn, for example, multiple herbicides have been a common practice since long before GMO crops were introduced. In the year 2000, before Roundup Ready GMO corn had gained widespread adoption in the US (and also before glyphosate-resistant weeds were growing in corn fields), corn growers were applying nearly three herbicide active ingredients per acre. The latest USDA data from 2014 show fewer than 3.5 active ingredients applied per acre. This suggests that while glyphosate-resistant weeds may certainly have increased the number of herbicides used per acre compared to 5 years ago, the change has been relatively modest when compared to herbicide use before adoption of GMO crops.

      Another misleading statement made by Dr. Landrigan and Dr. Benbrook is that the “risk assessment gave little consideration to potential health effects in infants and children, thus contravening federal pesticide law.” This claim was also addressed explicitly by EPA in their FAQ document (http://www2.epa.gov/ingredients-used-pesticide-products/registration-enlist-duo). EPA concluded that after incorporating a 10X safety factor for children, and based on a “complete and very robust” data set, that the “risks were still acceptable for all age groups for all components of the assessment: dietary food and drinking water exposure, volatility, spray drift, residential, and aggregate assessment.” This claim is addressed in even more detail (600 words) in the EPA’s response to public comment (http://www.regulations.gov/contentStreamer?documentId=EPA-HQ-OPP-2014-0195-2414&disposition=attachment&contentType=msw8).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 21, M Mangan commented:

      It should further be noted that although disclosure documents for Dr. Benbrook were updated in August, new issues of potential conflicts of interest were unearthed following the publication of an article in the NYTimes in September. Documents obtained through a FOIA request by Eric Lipton showed that both Benbrook and Landrigan were actively working with the "Just Label It" organization and Gary Hirshberg of the organic food industry during the preparation of this work. http://www.nytimes.com/interactive/2015/09/06/us/document-benbrook.html

      My request to the NEJM to investigate this and update the disclosure documents, due to the new evidence, was declined by the NEJM.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Aug 20, M Mangan commented:

      This piece is astonishingly flawed on several levels, but most importantly has basic facts that are incorrect or blatantly misrepresented. Please see some capable assessments of the claims here, from a researcher who studies herbicides: http://weedcontrolfreaks.com/2015/08/gmos-herbicides-and-the-new-england-journal-of-medicine/

      I would also encourage readers to seek out the expert reactions from the Science Media Centre: http://www.sciencemediacentre.org/expert-reaction-to-gmos-herbicides-and-public-health/

      But I would also like to note that the calls to provide information via labels do not indicate what label they think would apply to their herbicide concerns. None of the proposed labels in the US, or any other country I've seen, address herbicides in any manner. Since non-GMOs also use herbicides, and some GMOs do not, calling for a misleading label would be irresponsible for health professionals.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 16, F Morceau commented:

      Interesting article, but did the authors assess the effect of BML-210 on NB4 cell differentiation? It would have been interesting and relevant to provide this information. ATRA is mentioned in the materials and methods section although it was not in fact used in the article! How could this escape the reviewers?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 30, Rashmi Das commented:

      We thank Harri for his PERSONAL (NON PEER REVIEWED) OPINION, which is available at the above HANDLE (http://hdl.handle.net/10138/153180) and CONTAINS A DIRECT COPY AND PASTE OF THREE FIGURES/IMAGES FROM OUR PREVIOUS PUBLICATIONS (JAMA 2014 and Cochrane 2013). We are happy to reply to the above comments made by Harri. First, regarding the Cochrane review which was withdrawn in 2015, the detailed report is already available at the following link (http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001364.pub5/abstract). This report is the collaborative observation and conclusion of the Cochrane editors (UNLIKE THE HANDLE, WHICH CONTAINS MORE OF A PERSONAL OPINION THAT HAS ALREADY BEEN EXAMINED BY THE COCHRANE EDITORS BEFORE REACHING THE CONCLUSION). THE SAME HANDLE WAS SENT TO THE JAMA EDITORS REGARDING THE JAMA CLINICAL SYNOPSIS (PUBLISHED IN 2014), AND HARRI REQUESTED THE EDITORS TO CARRY OUT AN INVESTIGATION AND VERIFY. THE EDITORS ASKED US FOR A REPLY, WHICH WE PROVIDED IN A POINT-BY-POINT MANNER (BOTH THE COMMENT BY HARRI AND OUR REPLY WERE PUBLISHED, SEE BELOW). HAD THE COMMENT/REPORT BY HARRI BEEN ENTIRELY CORRECT, THE JAMA EDITORS COULD HAVE STRAIGHTAWAY RETRACTED/WITHDRAWN THE SYNOPSIS WITHOUT PUBLISHING THE COMMENT AND REPLY (both are available at the following: https://www.ncbi.nlm.nih.gov/pubmed/26284729; https://www.ncbi.nlm.nih.gov/pubmed/26284728). IT HAS TO BE MADE CLEAR THAT THE JAMA SYNOPSIS (DAS 2014) WAS WITHDRAWN BECAUSE THE SOURCE DOCUMENT ON WHICH IT WAS BASED (THE COCHRANE 2013 REVIEW) WAS WITHDRAWN (NOT BASED ON THE REPORT IN THE HANDLE, WHICH IS A PERSONAL, NON PEER REVIEWED OPINION). The irony is that although HARRI'S COMMENT got published as a LETTER TO THE EDITOR in JAMA after OUR REPLY, the NON PEER REVIEWED HANDLE THAT CONTAINS A DIRECT COPY OF THREE FIGURES/IMAGES FROM OUR PUBLICATIONS IS STILL BEING PROPAGATED.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Aug 30, Harri Hemila commented:

      This is a short comment on Das RR, 2014. A much more detailed description of problems in Das RR, 2014 is available at HDL.

      The paper by Das RR, 2014 was based on their Cochrane review Singh M, 2013, which had a number of problems that are described in HDL.

      The Cochrane review was withdrawn in April 2015, see DOI.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 04, Hamid Salehiniya commented:

      I read the article; it is an attractive and useful article. Based on previous research, we can conclude that obesity is an important risk factor for cancer; accordingly, one strategy for cancer prevention is an active lifestyle.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 28, Lydia Maniatis commented:

      There seems to be a chicken-and-egg problem going on here as well. Here: https://www.youtube.com/watch?v=QORWM3Pl760, one of the authors seems to be defining "stimulus" as something that is constructed by the visual system, as opposed to a light-reflecting object in the world, and at the same time he is saying that the nature of this stimulus is determined in some way by its frequency of occurrence. But the latter has no existence prior to the former. There has to be some reference at some point to an objective situation, a stimulus in that sense, for the conversation to work.

      Do the authors consider there to be a link between the topography of the light energy on the retina and the topography of the light energy in the world one moment before it strikes the retina?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Aug 28, Lydia Maniatis commented:

      The claims in this article (and related past publications) are not credible, and barely defended.

      First, there is a little bit of sleight-of-hand in the authors' definition of “wholly empirical,” with the result that it includes both wholly acceptable and wholly unacceptable assertions. After describing their view that the visual system assigns perceptual values “without ever recovering or statistically estimating the properties of objects and conditions in the visual environment,” they label this view “wholly empirical” on the basis that it “depends entirely on feedback from trial and error experience.” But unlike the former claim, which is highly disputable, the latter seems simply to be a description of evolution by natural selection, a trial and error process in which adaptive features are preserved and less adaptive ones rejected. I cannot imagine that any scientist today would propose a mechanism for a biological function that they do not believe could have arisen via natural selection. Thus, it does not seem fair on the part of the authors to monopolize the concept for a specific set of proposals.

      They reinforce this apparent monopoly by implying that a lack of correlation between the percept and the physical world is a necessary consequence of the link between perceptual processes and behavior: “...the biological feedback loop...would progressively order the basic visual qualities we perceive (apparent length, lightness, etc) according to their impact on biological (reproductive) success, rather than with the generative properties in the world (actual physical lengths, the reflective and illumination values underlying luminance, etc) [On what basis do the authors refer to reflectance and illumination as objective properties of the world (see below)?]” The “rather than” term is a verbal trick, implying that a perceptual representation that is correlated with (selected) physical properties would be less adaptive than one that is not. But such an assumption is not warranted.

      The “evidence” for the position is also contingent on looseness in the supporting arguments. We are told, for example, that “since luminance measures the number of photons falling on the retina, common sense suggests that measurements of light intensity and perceived lightness should be proportional. Thus, if two surfaces return the same amount of light to the eye, they “should” be perceived as equally light.” This is a straw man. In the most extreme cases of same-luminance surfaces producing different lightness, the different surfaces, in addition to producing lightness percepts, also produce illumination percepts. Higher perceived lightness correlates with lower perceived illumination. The visual system is attempting to provide relative estimates for reflectance and illumination. This could not be achieved by directly representing luminance, whatever “common sense” might say. It is not correct to say that the visual system never correctly (or to a good approximation) represents relative reflectance/illumination values across surfaces. When it does, it is not by accident; it is what the system is designed (so to speak) to achieve. The idea that all images that elicit reflectance/illumination percepts do so on the basis of the frequency of evolutionary experience - behavioral responses to each particular stimulus, and their consequences for the reproductive success of the individual - is not credible. Even if we take apparent illumination out of the picture, the problem is just as big. Are we supposed to explain e.g. every Kanizsa figure on the basis of how frequently it occurred?

      And, for that matter, the notion that “sensory stimuli [are] associated with biologically useful responses” does not justify the claim that stimulus frequency determines perceptual values, since a stimulus pattern may be very frequent but have little biological consequence, or rare but have a larger biological impact. So, in another sleight-of-hand, the authors have slipped an unwarranted frequency-biologically-relevant link into their argument.

      In addition to the lightness example, the other piece of “evidence” offered is equally problematic on a number of counts. First, we are told that psychophysical experiments indicate that lines oriented at about thirty degrees from the vertical appear longer than lines at any other orientation. But the perceived length of a line is entirely contingent on the figure in which it is incorporated. So even if the authors claim that, over evolutionary time, line segments with the thirty degree orientation occurred (and were reacted to in an evolutionary dispositive way), more often than any other orientation, this would not explain why an oblique with a greater deviation from the vertical will reliably yield a longer percept than a vertical when incorporated in, e.g. a Shepard box. Even if we take the authors' natural scene statistics at face value, they are not relevant, since edges always occur in objects, and the shapes of these objects mediate the perception of length of an edge. Assumptions regarding the shapes of objects even determine whether or not a physically present or absent edge will or will not be seen.

      The natural scene statistics offered are also not credible. Human ancestors differed greatly in size from present-day humans, individual humans differ in height based on age, the eyes and body are constantly in motion, the distance to the object being viewed is constantly changing, and all of these things affect the orientation of the projection of a physical edge. Most of our time is arguably spent looking at close range, and at people. A statistical distribution developed on the basis of measurements of groomed gardens, taken from a fixed height at a relatively great distance, is arguably not a good approximation of human experience. Finally, are the authors really claiming that an isolated line segment of thirty degrees looks longer than others because it (supposedly) projected more frequently on the retinas of humans and their ancestors? If all percepts are drawn from frequency distributions, then how do we account for their qualitative differences? Are colors also labelled on the basis of the frequency that each collection of wavelengths occurs?

      There is, finally, a fundamental contradiction in the argument that perceptual values don't ever recover or estimate properties of objects and conditions in the environment (I discuss this contradiction in Maniatis, in press). The problem is that if we choose to adopt this view, then we are not entitled to refer to the physical world as though we did, in fact, have knowledge about it. Thus, statements such as “consider the objective length of a line, e.g. the edge of a ruler...” are paradoxical because they imply that the authors have access to the very objective facts that they claim are inaccessible. It does no good to argue that we detect objective facts using objects called instruments, since the properties of these objects, too, are only accessible through perception. Given their position, the authors are not entitled to make any reference to the objective world and its properties, since such references directly contradict it.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Aug 26, Lydia Maniatis commented:

      In a search of the book "Perceiving Geometry," the term 'shape' comes up three times, twice in reference to the shape of distributions and once in a plain assertion that some neurons respond selectively to "higher-order stimulus characteristics...such as shape...". The assertion is not accompanied by arguments. The book has many references; are there any in particular that address the issue of shape?

      The term or concept of 'figure-ground' also does not arise in the book. Do you consider the problems of perceptual organisation to have been solved on the basis of frequency distributions?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2015 Aug 24, Dale Purves commented:

      Shape is a higher order construct of "size, distance, orientation" and geometrical factors such as the length of intervals (lines) and angles. See Howe and Purves "Perceiving Geometry" (Springer, 2005) for a full account and references to original papers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2015 Aug 24, Lydia Maniatis commented:

      In their summary, the authors state that:

      "Visual perception is characterized by the basic qualities of lightness, brightness, color, size, distance, orientation, speed and direction of motion ordered over some range."

      They have left out the most basic quality of all, and the one that largely mediates all the others: Shape. Was this an oversight, or a purposeful or principled omission?

      They also say that: "These perceived qualities and their order within these ranges, however, do not align with reality."

      They should clarify whether they mean for this second statement to apply also to shape.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 14, Arturo Casadevall commented:

      The central criticism is that we have compared variables for which there is no causal relationship. We recognize the difficulties involved in assuming causality and dangers of spurious correlations when plotting unrelated variables. Furthermore, we are fully aware that correlation is not causation. However, the criticism made by Levine and Weinstein does not take into account a large body of published scholarly work showing that spending of public funds translates into medical goods such as new therapeutics. To make this point, we note the findings of several studies. In 2000, a United States Senate Report found that of the 21 most important drugs introduced between 1965 and 1992, 15 (71%) ‘were developed using knowledge and techniques from federally funded research’ (http://www.faseb.org/portals/2/pdfs/opa/2008/nih_research_benefits.pdf). A recent study of 26 transformative drugs or drug classes found that for many, their discovery was made with governmental support [1]. Numerous other studies have reinforced this point [2-4]. Blume-Kohout estimated that a 10% increase in targeted funding for specific diseases produced a 4.5% increase in the number of drugs reaching clinical trials after an average lag of 12 years [5]. In our own investigations, we have traced the ancestry of most of the drugs licensed in the past four decades to publicly funded research (unpublished data). The literature in this field overwhelmingly supports the notion that public spending in biomedical research translates into public goods. The debate is not about whether this happens but rather about the magnitude of the effect. The notion that public funding in biomedical research generates basic knowledge that is subsequently used in drug development is a concept accepted by most authorities. Hence, the use of the NIH budget as a proxy for public spending in biomedical research is appropriate.

      We are aware that establishing causal relationships among non-experimental variables can be a daunting task. However, we note that the relationship between public spending and medical goods does meet some of the essential criteria needed to establish causality. First, the relationship between these variables meets the requirement of temporal causality since, for many drugs, publicly funded basic research precedes drug development. Second, we also have mechanistic causality, since knowledge from basic research is used in designing drugs. There are numerous examples of mechanistic causality including the finding of receptors with public funds that are then exploited in drug development when industry generates an agonist or inhibitor. We acknowledge that we do not know if the relationship between public spending and drug development is linear, and the precise mathematical formulation for how public spending translates into medical goods is unknown. In the absence of evidence for a more complex relationship, a linear relationship is a reasonable first approximation, and we note that other authorities have also assumed linear relationships in analyzing inputs and outcomes in the pharmaceutical industry. For example, Scannell et al. [6] used a similar analysis to make the point that ‘The number of new drugs approved by the US Food and Drug Administration (FDA) per billion US dollars (inflation-adjusted) spent on research and development (R&D) has halved roughly every 9 years’.
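
      As a minimal restatement of the halving time quoted above (not a formula from either paper): NMEs approved per inflation-adjusted billion US dollars in year t is approximately the value in a reference year multiplied by 0.5^(t/9), so the measure falls to roughly one quarter of its starting value over 18 years.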

      The authors claim to have done a causality analysis of the data generated in our paper, concluding that ‘We do not find evidence that NIH budget ⇒ NME (p=0.475), and thus it may not be a good indicator of biomedical research efficiency.’ However, this oversimplifies a very complex process of how public spending affects NME development; we do not agree that this simple analysis can be used to deny causality. Although the limited information provided in their comment does not permit a detailed rebuttal, we note that a failure to reject the Granger causality null hypothesis does not necessarily indicate the absence of causality. Furthermore, Granger causality refers to the ability of one variable to improve predictions of the future values of a second variable, which is distinct from the philosophical definition of causality. Whether or not NIH budget history adds predictive ability in determining the number of NMEs approved at some point in the future cannot negate the fact that basic biomedical research funding unequivocally influences the creation of future drugs, as well as many other outcomes. Therefore, we stand by our use of NIH funding and NMEs as indicators of biomedical research inputs and outcomes.

      The authors suggest that another study by Rzhetsky et al. [7] contradicts the findings of our paper and provides a better method of studying biomedical research efficiency. The work by Rzhetsky et al., while very interesting, addresses a fundamentally different question relating to how scientists can most efficiently choose research topics to explore a knowledge network [7]. The allocation of scientists to research topics is undoubtedly a possible contributor to overall research efficiency, but the approach used in this analysis is very different from our broader analysis of the biomedical research enterprise as a whole. The work in [7] has a narrow scope and does not attempt to study the impact of research investments in producing societal outcomes. The central conclusion of our paper is that biomedical research inputs and outputs are increasing much faster than outcomes, as measured by NMEs and LE.

      We do not ‘conjecture that a lack of relevance or rigor in biomedical research’ is solely responsible for this phenomenon, as Levine and Weinstein assert. Instead, our paper discusses a number of possible explanations—many of which have been previously identified in the literature [6-12], including several that agree with the conclusions of Rzhetsky et al. [7]. However, the recent epidemic of retracted papers and the growing concerns about the reproducibility of biomedical studies, expressed in part by pharmaceutical companies dedicated to the discovery of NMEs [13, 14], are indisputable facts. If a substantial portion of basic science findings are unreliable, this is likely to contribute to reduced productivity of the research enterprise. We agree with the suggestion that research difficulty increases as a field matures, which has been made by others [6]; this does not contradict our analysis and is mentioned in our paper’s discussion. Biomedical research efficiency is complex, and it is likely that the decline in scientific outputs has numerous causes. It is appropriate for scientists to consider any factors that may be contributing to this trend, and the comments from Dr. Schuck-Paim in this regard (see the other posted comments) are therefore welcome.

      In summary, we do not find the arguments of Levine and Weinstein to be compelling. We note that other investigators have come to conclusions similar to ours [6, 15]. The productivity crisis in new drug development has been intensively discussed for at least a decade [6, 15-16]. We believe that addressing inefficiencies in biomedical research is essential to maintain public confidence in science and, by extension, public funding for basic research.

      Arturo Casadevall and Anthony Bowen

      [1] Health Aff (Millwood) 2015, 34:286-293.

      [2] PNAS 1996, 93:12725-12730.

      [3] Am J Ther 2002, 9:543-555.

      [4] Drug Discov Today 2015, 20:1182-1187.

      [5] J Policy Anal Manage 2012, 31:641-660.

      [6] Nat Rev Drug Discov 2012, 11:191-200.

      [7] PNAS 2015, 112:14569-14574.

      [8] Res Policy 2014, 43(1):21–31.

      [9] Nature 2015, 521(7552):270–271.

      [10] Br J Cancer 2014, 111(6):1021–1046.

      [11] Nature 2015, 521(7552):274–276.

      [12] J Psychosom Res 2015, 78(1):7–11.

      [13] Nature 2012, 483(7391):531-533.

      [14] Nat Rev Drug Discov 2011, 10(9):712.

      [15] Nat Rev Drug Discov 2009, 8:959-968.

      [16] Innovation policy and the economy 2006, 7:1-32.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 08, Michael LeVine commented:

      In this recent article, Bowen and Casadevall attempt to classify biomedical research efficiency in terms of the ratio of outcomes to input. The outcomes were chosen to be approved new molecular entities (NMEs) and US life expectancy (LE); the chosen input was the NIH budget. While the resulting analysis claims that efficiency has decreased in the last decade, we argue that (i) the analysis performed is insufficient to make that claim, and (ii) the findings do not support the conjecture that a lack of relevance or rigor in biomedical research is causing stagnation in medicine and public health.

      Bowen and Casadevall suggest that because research projects take time to complete, it is possible that “the exponential growth in research investment and scientific knowledge over the previous five decades has simply not yet grown fruit and that a deluge of medical cures are right around the corner”. They investigate time-lagged efficiency for NMEs, but this analysis is only sufficient if there is a linear causal relationship between the two variables that is unaffected by any external variables that have not been included in the analysis. Without any evidence of such a relationship, it is unwise to interpret a trend in a ratio between two unassociated measurements. Just as two unrelated measurements can display a spurious correlation, the ratio between those measurements may display a spurious trend.
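
      As a minimal illustration of this point (a sketch using made-up data, not an analysis from either paper), two independent drifting series standing in for an “input” and an “outcome” with no causal link can still yield a ratio with a clear trend over time:

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(50)
        # Two independent drifting series with no causal relationship between them
        inputs = 10 + np.cumsum(rng.normal(1.0, 0.3, t.size))
        outcomes = 10 + np.cumsum(rng.normal(0.2, 0.3, t.size))
        efficiency = outcomes / inputs              # "outcome per unit input"
        slope = np.polyfit(t, efficiency, 1)[0]     # least-squares trend of the ratio
        print(f"trend in the ratio per time step: {slope:.4f}")

      The printed slope is reliably non-zero even though the two series were generated independently, which is the sense in which a trend in a ratio of unrelated measurements can be spurious.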

      We reanalyzed the data used in this paper to find evidence of causal relationships between the inputs and outcomes. To do this, we tested for Granger causality (1), which identifies potentially causal relationships by determining whether the time series of one variable is able to improve the forecasting of another. We analyzed the non-stationary time series from 1965-2012 using the Toda and Yamamoto method (2), which utilizes vector autoregression. We will refer to a variable X improving the forecasting of a variable Y as X ⇒ Y.

      We do not find evidence that NIH budget ⇒ NME (p=0.475), and thus it may not be a good indicator of biomedical research efficiency. However, we do find evidence that NIH budget ⇒ LE (p<10^-8) and NIH budget ⇒ publications (p<machine precision). Notably, however, both VAR models utilize the maximum possible time lags (15, as selected using the Akaike information criterion (3), coincidentally the same number as used in this paper), and do not pass the Breusch-Godfrey test (4) for serially correlated residuals. As this suggests that more time lags are required to build appropriately rigorous models, it seems unwise to over-interpret the potential Granger causal relationships or make any comparisons between the Granger causality during different periods of time, until significantly more time points are available. Even with additional data, the serial correlation in the residuals might not be alleviated without the use of more complex models including non-linear terms or external variables. All three of these possible limitations also affect the analysis in this paper, but no statistical tests were performed there to assess the robustness of results.
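
      For readers unfamiliar with the procedure described above, the following is a minimal sketch in Python with statsmodels, using random placeholder series in place of the real NIH-budget and NME data. It is not the commenters' actual code, and the single extra lag only approximates the Toda and Yamamoto approach, which formally restricts just the first p lags in the Wald test.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)
        years = np.arange(1965, 2013)
        # Placeholder random walks standing in for the NIH budget and NME counts
        nih_budget = np.cumsum(rng.normal(1.0, 0.5, years.size))
        nme = np.cumsum(rng.normal(0.5, 1.0, years.size))
        data = pd.DataFrame({"nih_budget": nih_budget, "nme": nme}, index=years)

        p = VAR(data).select_order(8).aic        # lag order chosen by AIC
        res = VAR(data).fit(p + 1)               # fit VAR(p + d_max) with d_max = 1 (assumed)
        # Wald test: do lagged values of nih_budget improve forecasts of nme?
        print(res.test_causality("nme", ["nih_budget"], kind="wald").summary())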

      We conclude that this published study of biomedical research efficiency is insufficient methodologically because models of greater complexity are required. From our reanalysis, we are not able to support the hypothesis that biomedical research efficiency is decreasing. Instead, we can only conclude that from 1965-2012, the NIH budget may have a causal effect on LE and publication, but that more time points are required to improve the models.

      Another recent work (5), which aimed to study the scientific process and its efficiency, also suggested that the efficiency of biomedicinal chemistry research (defined in terms of the number of experiments that would need to be performed to discover a given fraction of all knowledge) has decreased over time. However, the analysis in (5) also suggested that even the optimal research strategy would eventually display a decrease in efficiency, due to the intrinsic increase in the difficulty of discovery as a field matures. While there are additional limitations and assumptions involved in this analysis, (5) provides an example of the level of complexity and quantitative rigor required to study research efficiency, and implies an alternative explanation for the potential reduction in biomedical research efficiency. Considering the findings in (5), we deem the hypotheses proposed in this paper, which suggests that a lack of relevance or rigor in biomedical research is causing stagnation in medicine and public health, to be unfounded.

      We write this comment in part because unfounded defamatory claims directed at the scientific community are dangerous in that they may negatively affect the future of scientific funding. Such claims should not be made lightly, and the principle of parsimony should be invoked when less defamatory alternative hypotheses are available.

      Michael V. LeVine and Harel Weinstein

      (1) Granger CWJ (1969) Investigating Causal Relations by Econometric Models and Cross-spectral Methods. Econometrica 37(3):424–438.

      (2) Toda HY, Yamamotob T (1995) Statistical inference in vector autoregressions with possibly integrated processes. J Econom 66:225–250.

      (3) Akaike H (1974) A new look at the statistical model identification. IEEE Trans Autom Control 19(6):716–723.

      (4) Breusch TS (1978) Testing for autocorrelation in dynamic linear models. Aust Econ Pap 17(31):334–355.

      (5) Rzhetsky A, Foster JG, Foster IT, Evans JA (2015) Choosing experiments to accelerate collective discovery. Proc Natl Acad Sci:201509757.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Jan 14, Arturo Casadevall commented:

      We appreciate the comment by Dr. Schuck-Paim and we fully agree that increased transparency and reporting of data during the process of therapeutic development can only benefit the scientific enterprise and enable a more efficient use of limited resources. Some of the other mentioned issues, including relevance of animal models, underpowered study design, and errors during data analysis and reporting, have all been implicated in the literature as referenced by Dr. Schuck-Paim and cited in our paper. We agree that each of these issues merits attention and expect that new tools will need to be developed to address some problems. One example would be the development of drug screening chips containing human cells as an alternative to some animal models, which may have poor predictability of a drug’s toxicity in humans (http://www.ncats.nih.gov/tissuechip).

      Anthony Bowen

      Arturo Casadevall


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2015 Nov 05, Cynthia Schuck-Paim commented:

      In exploring progress in biomedical research, the authors show that human life expectancy and the number of new molecular entities (NME) approved by the FDA have remained relatively constant over the last decades, despite increasing financial input, research efforts and publication numbers.

      To explain the slowing of therapeutic innovation they consider several negative pressures acting on the field, including prior resolution of simpler research problems, increasing regulation, overreliance on reductionist approaches (including use of animal models), and the poor quality of published research. The high prevalence of irreproducible results, obscure methods, poorly designed research and publication biases are also mentioned.

      Many of these issues would greatly benefit from initiatives that promote transparency at the various stages of the therapeutic development pipeline. It has been widely acknowledged that poor reporting prevents the accurate assessment of drug and intervention efficacy. Indeed, pre-clinical research and in vivo methods have been shown to be particularly prone to biases and selective reporting of outcomes, leading to bad decision-making, wasted resources, unnecessary replication of efforts and missed opportunities for the development of effective drugs (1). One proposal to address this issue is the extension of good disclosure practice to the pre-clinical phase by conditioning publication on registration of the pre-clinical trial prior to the commencement of the study (2). Indeed, the exact same reasons that compelled prospective registration and deposition of clinical trial results in public databases apply to preclinical studies.

      Still, no matter how transparent, well-designed, -analyzed and -reported the research, results generated from inappropriate models will not be successfully translated into valid disease contexts. Currently, most pre-clinical studies are based on the use of animal models, despite the increasing number of articles showing that they are an expensive and ineffective option to explore pathophysiological mechanisms, evaluate therapeutics, and decide on whether drug candidates should be carried forward into the clinical phase (3-5).

      Failure rates in the clinical phase are around 95% (6), mainly due to the limited power of animal studies to predict NME efficacy, safety and toxicity in humans. These predictive odds vary depending on the understanding and complexity of disease biology: while for therapeutics targeting infectious diseases success rates are higher, for diseases involving complex mechanisms, such as cancer, they can be as low as 2.3% (7). Such low predictability drains the entire system by funneling limited resources into outputs that often fail.

      In addition, false negatives at the pre-clinical stage eliminate a large part of NMEs that may have succeeded otherwise. Let us not forget Aspirin, a blockbuster drug that would not make it past the preclinical trial phase if tested today, given its unacceptably high toxicity in animal tests. Animal-based pre-clinical phases are certainly pivotal in explaining the small number of NMEs identified in the last decades. Implementation of methods that are more faithful to human biology is crucial for the much needed progress to ameliorate human disease and suffering.

      References

      (1) Macleod MR, 2015 Risk of bias in reports of in vivo research: a focus for improvement. PLoS Biol 13: e1002273

      (2) Kimmelman J, Anderson JA (2012). Should preclinical studies be registered? Nat Biotechnol 30: 488–489

      (3) Hartung T (2013). Food for thought: look back in anger – what clinical studies tell us about preclinical work. Altex 30: 275–291.

      (4) Sutherland BA, 2012 Neuroprotection for ischaemic stroke: translation from the bench to the bedside. Int J Stroke 7: 407–18.

      (5) Seok J, 2013 Genomic responses in mouse models poorly mimic human inflammatory diseases. PNAS 110: 3507–12.

      (6) Arrowsmith J, 2012 A decade of change. Nat Rev Drug Discov 11:17–18

      (7) Hay M, 2014 Clinical development success rates for investigational drugs. Nat Biotechnol 32 (1): 40–51.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 31, Wichor Bramer commented:

      In my opinion the conclusions of this article are rather overdrawn. The authors have determined that the coverage of PubMed is high enough for it to be used in reviews. However, they have not performed searches for a systematic review; they have performed multiple searches for known items. If recall were ideal, PubMed would retrieve enough relevant articles to be used as a single database. However, recall is never ideal in any database. My recent observations (not yet published) show that Medline retrieved 937 of the 1192 included references for 38 published reviews, so only 79%. Embase and Medline together retrieved 1110 references (94%). So overall recall in Embase is much better than in Medline alone. In my opinion, overall recall is not the best measure of database usefulness. When deciding on a search strategy for a systematic review, one decides for a single review, not for a set of 38 reviews. A better parameter is the minimum recall observed. In Medline the minimum was 53%, compared to 76% for Embase/Medline combined. Neither is acceptable in my opinion. The authors of this article also found that for some reviews the coverage in PubMed was much lower than the total coverage. One database cannot replace all the other databases in a search for a systematic review.
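
      For reference, the recall figures above follow the usual definition, recall = (included references retrieved by the database) / (all included references); for example, 937/1192 ≈ 0.79, the 79% quoted for Medline alone.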


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 09, Kevin Hall commented:

      The theoretical basis of the carbohydrate-insulin model (CIM) relies on generally accepted physiology about the endocrine regulation of adipose tissue based on short-term experiments lasting days and weeks. While there are indeed metabolic adaptations that take place on longer time scales, many of these changes actually support the conclusion that the purported metabolic advantages for body fat loss predicted by the CIM are inconsistent with the data.

      For example, as evidence for a prolonged period of fat adaptation, Ludwig notes modest additional increases in blood and urine ketones observed after 1 week of either starvation Owen OE, 1983 or consuming a hypocaloric ketogenic diet Yang MU, 1976. The implication is that daily fat and ketone oxidation presumably increase along with their blood concentrations over extended time periods to eventually result in an acceleration of body fat loss with low carbohydrate high fat diets as predicted by the CIM. But since acceleration of fat loss during prolonged starvation would be counterproductive to survival, might there be data supporting a more physiological interpretation of the prolonged increase in blood and urine ketones?

      Both adipose lipolysis Bortz WM, 1972 and hepatic ketone production Balasse EO, 1989 reach a maximum within 1 week as demonstrated by isotopic tracer data. Therefore, rising blood ketone concentrations after 1 week must be explained by a reduced rate of removal from the blood. Indeed, muscle ketone oxidation decreases after 1 week of starvation and, along with decreased overall energy expenditure, the reduction in ketone oxidation results in rising blood concentrations and increased urinary excretion (pages 144-152 of Bursztein S, et al. ‘Energy Metabolism, Indirect Calorimetry, and Nutrition.’ Williams & Wilkins 1989). Therefore, rather than being indicative of progressive mobilization of body fat to increase oxidation and accelerate fat loss, rising concentrations of blood ketones and fatty acids occurring after 1 week arise from reductions in ketone and fat oxidation concomitant with decreased energy expenditure.

      The deleterious effects of a 600 kcal/d low carbohydrate ketogenic diet on body protein and lean mass were demonstrated in Vazquez JA, 1992 and were found to last about 1 month. Since weight loss was not significantly different compared to an isocaloric higher carbohydrate diet, body fat loss was likely attenuated during the ketogenic diet and therefore in direct opposition to the CIM predictions. Subsequent normalization of nitrogen balance would tend to result in an equivalent rate of body fat loss between the isocaloric diets over longer time periods. In Hall KD, 2016, urinary nitrogen excretion increased for 11 days after introducing a 2700 kcal/d ketogenic diet and coincided with attenuated body fat loss measured during the first 2 weeks of the diet. The rate of body fat loss appeared to normalize in the final 2 weeks, but did not exceed the fat loss observed during the isocaloric high carbohydrate run-in diet. Mere normalization of body fat and lean tissue loss over long time periods cannot compensate for early deficiencies. Therefore, these data run against CIM predictions of augmented fat loss with lower carbohydrate diets.

      Ludwig uses linear extrapolation to claim that our data “would imply a 13 kg greater body fat loss versus the higher-fat diet over a year”. However, the same computational model that correctly predicted the difference in short-term body fat loss projected only small differences in long-term body fat between the diets. Based on these model simulations we concluded that “the body acts to minimize body fat differences with prolonged isocaloric diets varying in carbohydrate and fat.”

      While I believe that outpatient weight loss trials demonstrate that low carbohydrate diets often outperform low fat diets over the short-term, there is little difference in body weight over the long-term Freedhoff Y, 2016. However, outpatient studies cannot ensure or adequately measure diet adherence and therefore it is unclear whether greater short-term weight losses with low carbohydrate diets were due to reduced diet calories or the purported “metabolic advantages” of increased energy expenditure and augmented fat loss predicted by the CIM. The inpatient controlled feeding studies demonstrate that the observed short-term energy expenditure and body fat changes often violate CIM predictions.

      Ludwig conveniently suggests that all existing inpatient controlled feeding studies have been too short and that longer duration studies might produce results more favorable to the CIM. But even if this were true, the current data demonstrate repeated violations of CIM model predictions and constitute experimental falsifications of the CIM. This possibility was accurately described in my review Hall KD, 2017 and requires an ad hoc modification of the CIM such that the metabolic advantages of isocaloric lower carbohydrate diets only begin after a time lag lasting many weeks – a possibility currently unsupported by data but obviously supported by sincere belief.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 07, DAVID LUDWIG commented:

      In his comment of 31 January 2017, Hall continues to insist that the results of his 6-day study and other very short feeding studies of substrate oxidation inform understanding of the long-term relationship between diet and body composition. This contention can be simply dismissed, with recognition that the 36 g/d advantage in fat oxidation on Hall’s low-fat diet would imply a 13 kg greater body fat loss versus the higher-fat diet over a year. There is simply no precedent for such an effect, and if anything the long-term clinical trials suggest the opposite Tobias DK, 2015 Mansoor N, 2016 Mancini JG, 2016 Sackner-Bernstein J, 2015 Bueno NB, 2013.
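
      For readers checking the extrapolation: 36 g/day × 365 days = 13,140 g, i.e. roughly 13 kg of fat over a year, which is presumably the arithmetic behind the figure quoted above.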

      The reason short term studies of high-fat diets are misleading is that the process of adapting to reduced carbohydrate intake can take several weeks. We can clearly observe this phenomenon in 4 published graphs involving very-low-carbohydrate diets.

      For convenience, these figures can be viewed at this link:

      Owen OE, 1983 Figure 1. Ketones are, of course, the hallmark of adaptation to a very-low-carbohydrate (ketogenic) diet. Generally speaking, the most potent stimulus of ketosis is fasting, since the consumption of all gluconeogenic precursors (carbohydrate and protein) is zero. As this figure shows, the blood levels of each of the three ketone species (BOHB, AcAc and acetone) continues to rise for ≥3 weeks. Indeed, the prolonged nature of adaptation to complete fasting has been known since the classic starvation studies of Cahill GF Jr, 1971. It stands to reason that this process might take even longer on standard low-carbohydrate diets, which inevitably provide ≥ 20 g carbohydrate/d and substantial protein.

      Yang MU, 1976 Figure 3A. Among men with obesity on an 800 kcal/d ketogenic diet (10 g/d carbohydrate, 50 g/d protein), urinary ketones continued to rise for 10 days through the end of the experiment, and by that point had achieved levels equivalent only to those on day 4 of complete fasting. Presumably, this process would be even slower with a non-calorie restricted ketogenic diet (because of inevitably higher carbohydrate and protein content).

      Vazquez JA, 1992 Figure 5B. On a conventional high-carbohydrate diet, the brain is critically dependent on glucose. With acute restriction of dietary carbohydrate (by fasting or a ketogenic diet), the body obtains gluconeogenic precursors by breaking down muscle. However, with rising ketone concentrations, the brain becomes adapted, sparing glucose. In this way, the body shifts away from protein to fat metabolism, sparing lean tissue. This process is clearly depicted among women with obesity given a calorie-restricted ketogenic diet (10 g carbohydrate/d) vs a nonketogenic diet (76 g carbohydrate/d), both with 50 g protein/d. For 3 weeks, nitrogen balance was strongly negative on the ketogenic diet compared to the non-ketogenic diet, but this difference was completely abolished by week 4. What would subsequently happen? We simply can’t know from the short-term studies.

      Hall KD, 2016 Figure 2B. Another study by Hall shows that the rate of fat loss, which transiently decreases upon initiation of the ketogenic diet, accelerates again after 2 weeks.

      The existence of this prolonged adaptive process explains why metabolic advantages for the low-fat diet are consistently seen in very short metabolic studies. But after 2 to 4 weeks, advantages for low-carbohydrate diets begin to emerge: Hall KD, 2016, Miyashita Y, 2004, Ebbeling CB, 2012. Any meaningful conclusions about the long-term effects of macronutrients must await longer studies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jan 31, Kevin Hall commented:

      My recent review of the carbohydrate-insulin model Hall KD, 2017 presented a synthesis of the evidence from 20 inpatient controlled feeding studies strongly suggesting that at least some important aspects of the model are in need of modification. In particular, our recent studies Hall KD, 2015, Hall KD, 2016 employing carefully controlled inpatient isocaloric diets with constant protein, but differing in carbohydrate and fat, resulted in statistically significant differences between the diets regarding body fat and energy expenditure that were in directions opposite to predictions of the carbohydrate-insulin model.

      Ludwig comments that the diets used in Hall KD, 2015 were either too low in fat or insufficiently low in carbohydrate. However, while these considerations may be clinically important for sustainability of the diets, they are irrelevant to whether the diets resulted in a valid test of the carbohydrate-insulin model predictions. We selectively reduced 30% of baseline calories solely by restricting either carbohydrate or fat. These diets achieved substantial differences in daily insulin secretion as measured by ~20% lower 24hr urinary C-peptide excretion with the reduced carbohydrate diet as compared with the reduced fat diet (p= 0.001) which was unchanged from baseline. Whereas the reduced fat diet resulted in no significant energy expenditure changes from baseline, carbohydrate restriction resulted in a ~100 kcal/d decrease in both daily energy expenditure and sleeping metabolic rate. These results were in direct opposition to the carbohydrate-insulin model predictions, but in accord with the previous studies described in the review as well as a subsequent study demonstrating that lower insulin secretion was associated with a greater reduction of metabolic rate during weight loss Muller MJ, 2015.

      While the DXA methodology was not sufficiently precise to detect significant differences in body fat loss between the diets, even this null result runs counter to the predicted greater body fat loss with the reduced carbohydrate diet. Importantly, the highly sensitive fat balance technique demonstrated small but statistically significant differences in cumulative body fat loss (p<0.0001) in the direction opposite to the carbohydrate-insulin model predictions. Ludwig claims that our results are invalid because “rates of fat oxidation, the primary endpoint, are exquisitely sensitive to energy balance. A miscalculation of available energy for each diet of 5% in opposite directions could explain the study’s findings.” However, it is highly implausible that small uncertainties in the metabolizable energy content of the diet amounting to <100 kcal/d could explain the >400 kcal/d (p<0.0001) measured difference in daily fat oxidation rate. Furthermore, our results were robust to the study errors and exclusions described in the report and our observations clearly falsified important aspects of the carbohydrate-insulin model.

      Ludwig argues that "it can take the body weeks to fully adapt to a high fat diet." However, daily fat oxidation has been observed to plateau within the first week when added dietary fat is accompanied by an isocaloric reduction in carbohydrate as indicated by the rapid and sustained drop in daily respiratory quotient in Hall KD, 2016 and Schrauwen P, 1997. Similarly, Hall KD, 2015 observed a decrease and plateau in daily respiratory quotient with the reduced carbohydrate diet, whereas the reduced fat diet resulted in no significant changes indicating that daily fat oxidation was unaffected. As further evidence that adaptations to carbohydrate restriction occur relatively quickly, adipose tissue lipolysis is known to reach a maximum within the first week of a prolonged fast Bortz WM, 1972 as does hepatic ketone production Balasse EO, 1989.

      While there is no evidence that carbohydrate restricted diets lead to an acceleration of daily fat oxidation on time scales longer than 1 week, and there is no known physiological mechanism for such an effect, this possibility cannot be ruled out. Such speculative long term effects constitute an ad hoc modification of the carbohydrate-insulin model whereby violations of model predictions on time scales of 1 month or less are somehow reversed.

      Ludwig is correct that it takes the body a long time to equilibrate to added dietary fat because, unlike carbohydrate and protein, dietary fat does not directly promote its own oxidation and does not significantly increase daily energy expenditure Schutz Y, 1989 and Horton TJ, 1995. Unfortunately, these observations run counter to carbohydrate-insulin model predictions because they imply that added dietary fat results in a particularly efficient means to accumulate body fat compared to added carbohydrate or protein Bray GA, 2012. If such an added fat diet is sustained, adipose tissue will continue to expand until lipolysis is increased to sufficiently elevate circulating fatty acids and thereby increase daily fat oxidation to reestablish balance with fat intake Flatt JP, 1988.

      Of course, differences in long term ad libitum food intake between diets varying in macronutrient composition could either obviate or amplify any predicted body fat differences based solely on fat oxidation or energy expenditure considerations. Such mechanisms warrant further investigation and will inform improved models of obesity. Nevertheless, it is clear that several important aspects of the carbohydrate-insulin model have been experimentally falsified by a variety of studies, including Hall KD, 2015.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jan 17, DAVID LUDWIG commented:

      In a recent review, the first author of this Cell Metabolism article (Kevin Hall) cites this 6-day study as a basis for having “falsified” the Carbohydrate-Insulin Model of obesity. That argument disregards some key limitations of this study, which warrant elucidation.

      In the discussion section of this study, Hall and colleagues write: “Our relatively short-term experimental study has obvious limitations in its ability to translate to fat mass changes over prolonged durations” (NB, it can take the body weeks to fully adapt to a high fat diet Hawley JA, 2011 Vazquez JA, 1992 Veum VL, 2017). Beyond short duration and confounding by transient biological adaptations, the study: 1) did not find a difference in actual fat mass by DXA (p=0.78); 2) used an exceptionally low fat content for the low-fat diet (< 8% of total energy), arguably without precedent in any population consuming natural diets; 3) used a relatively mild restriction of carbohydrate (30% of total energy), well short of typical very-low-carbohydrate diets; 4) had protocol errors and post-randomization data exclusions that could confound findings; and 5) failed to verify biologically available energy of the diet (e.g., by analysis of the diets and stools for energy content). Regarding this last point, rates of fat oxidation, the primary endpoint, are exquisitely sensitive to energy balance. A miscalculation of available energy for each diet of 5% in opposite directions could explain the study’s findings – and this possibility can’t be ruled out in studies of such short duration.

      Thus, this study should not be interpreted as providing a definitive test of the Carbohydrate-Insulin Model.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 23, Ben Goldacre commented:

      One of the trials in this article has the wrong trial registry ID associated with it on PubMed: both in the XML on PubMed, and in the originating journal article. The ID given is NCT0050446. We believe the correct ID, which we have found by hand searching, is NCT00550446.

      This comment is being posted as part of the OpenTrials.net project [1], an open database threading together all publicly accessible documents and data on each trial, globally. In the course of creating the database, and matching documents and data sources about trials from different locations, we have identified various anomalies in datasets such as PubMed, and in published papers. Alongside documenting the prevalence of problems, we are also attempting to correct these errors and anomalies wherever possible, by feeding back to the originators. We have corrected this data in the OpenTrials.net database; we hope that this trial’s text and metadata can also be corrected at source, in PubMed and in the accompanying paper.

      Many thanks,

      Jessica Fleminger, Ben Goldacre*

      [1] Goldacre, B., Gray, J., 2016. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials 17. doi:10.1186/s13063-016-1290-8 PMID: 27056367

      * Dr Ben Goldacre BA MA MSc MBBS MRCPsych
      Senior Clinical Research Fellow
      ben.goldacre@phc.ox.ac.uk
      www.ebmDataLab.net
      Centre for Evidence Based Medicine
      Department of Primary Care Health Sciences
      University of Oxford
      Radcliffe Observatory Quarter
      Woodstock Road
      Oxford OX2 6GG


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 13, Stuart Phillips commented:

      See a letter to the editor "Greater electromyographic responses do not imply greater motor unit recruitment and 'hypertrophic potential' cannot be inferred" regarding the data in this paper: http://www.ncbi.nlm.nih.gov/pubmed/26670996


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 07, Siddharudha Shivalli commented:

      Importance of study tool validation and adherence to reporting guidelines in community-based cross-sectional studies

      I read the article titled “Prevalence and risk factors associated with malaria infection among pregnant women in a semi-urban community of north-western Nigeria” by Fana SA et al. with interest. The authors’ efforts are admirable. This study reiterates that malaria is a major public health problem among pregnant women in Argungu and that lack of education and non-usage of ITNs augment the risk of malaria. However, the following issues need to be addressed. In the methods section, the authors mention that 266 pregnant women in their second trimester were randomly selected from the 850 enlisted households, but the following should also have been stated: How many pregnant women were assessed for eligibility? Why were only second-trimester pregnant women included? What was the operational definition used to categorize a pregnant woman as a user or non-user of an ITN? The prevalence of malaria, a key outcome variable, should have been reported with a 95% confidence interval. In the results section, the authors repeatedly report the p value as ‘0.000’. By its default setting, SPSS displays a p value as zero if it extends beyond three decimal places (i.e. p=0.0000007 would be displayed as p=0.000). In practice the value of p cannot be zero, and hence I would suggest reporting it as p<0.0001. The authors mention as a limitation that they did not assess key factors such as gravidity, trimester, whether IPT was given or not, and the frequency of antenatal care visits. When conducting a community-based study with a sampling frame of 850 households, one must be sure of the sampling and of the study tool(s); the questionnaire should have been validated by three or more epidemiologists. The use of the phrase ‘risk factors’ in the title is debatable, as this was a cross-sectional study and the observed associations may not imply causality. Nonetheless, I must congratulate the authors for investigating an important public health problem among pregnant women.
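
      As an illustration of the confidence-interval point above, here is a minimal sketch (with made-up counts, not the study's data) of reporting a prevalence with a Wald 95% CI:

        import math

        positives, n = 100, 266                   # hypothetical counts, not the study's data
        p_hat = positives / n                     # point estimate of prevalence
        se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of a proportion
        lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
        print(f"Prevalence {p_hat:.1%} (95% CI {lower:.1%} to {upper:.1%})")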

      Conflict of Interests: The author declares that there is no conflict of interest about this publication.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 08, Kausik Datta commented:

      The authors make it a point to state: "Our results highlight (i) the scholarship and rational methodology of premodern medical professionals and (ii) the untapped potential of premodern remedies for yielding novel therapeutics at a time when new antibiotics are desperately needed."

      Can someone kindly explain to me how this study, while immensely interesting to me, is any different from regular ethnobotany or pharmacognosy studies?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 24, Johannes Weinberger commented:

      Accession number for all sequencing data of this study: NCBI Sequence Read Archive (SRA): PRJNA283508


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 01, Jesus Castagnetto commented:

      One minor correction. This article mentions a program developed in Peru, but calls it "AMUATA" instead of the correct name "AMAUTA".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 27, Anthony Michael commented:

      The authors of this paper claim that this is the first report of heterologous production of 1,3-diaminopropane in E. coli. They appear to have overlooked our report of heterologous production of 1,3-diaminopropane in E. coli published six years ago in the Journal of Biological Chemistry: http://www.ncbi.nlm.nih.gov/pubmed/19196710 This is an understandable oversight considering the crowded field of heterologous production of 1,3-diaminopropane in E. coli research.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 12, RUBEN ABAGYAN commented:

      Additional disclosure. One of the authors, R.A., has an equity interest in Molsoft, LLC. The terms of this arrangement have been reviewed and approved by the University of California, San Diego in accordance with its conflict of interest policies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 21, Michael McCann commented:

      I would like to point out an earlier paper, Entezari A, 2012, which also develops theory for using splines to discretize tomography reconstruction. I'm wondering if the authors can briefly comment about the relationship between the two approaches (which I think boils down to the way projection is handled).

      (Disclosure: though I'm not an author of Entezari A, 2012, I am aware of the paper because I'm currently working in that group.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 31, Lydia Maniatis commented:

      Todd, Egan & Kallie (2015) provide more demonstrations of violations of the "darker-is-deeper" rule. They note that the rule is already falsified by the literature: "the evidence is overwhelming that observers' judgments of 3D shape from shading do not conform to the predictions of [the rule]" (Of course - the possible falsifying cases are infinite in number).

      They also note that the rule was falsified by Langer and Bulthoff (2000), who concluded from their observations that "the perception of shape from shading must be based on a process that is more sophisticated than a simple darker-is-deeper heuristic." (Todd et al, 2015).

      I wondered whether Chen and Tyler (2015) had cited Langer & Bulthoff (2000) in this paper. Indeed, they do, but the implication is the opposite of the description cited above:

      "Langer & Bülthoff (2000) showed that the observer can indeed discriminate between “hills” and “valleys” on a surface with this “dark-is-deep” rule" (Chen and Tyler, 2015). This description conveys a very different and misleading (I would prefer to describe it as dishonest) impression of the results of the cited reference, papering over complications and contradictions to smooth the way to a preferred but non-viable argument.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 27, Lydia Maniatis commented:

      I just want to expand a little on my previous comment, with regard to the direction of the light source. As I said, the implied direction of the light is front-parallel to the corrugations, but my suggestion that the shadowing is consistent with attenuation due to distance along the normal seems inadequate. I want to add that the putative peaks of the corrugations would tend to reflect light from all directions, not only light coming along the normal, and that the troughs would receive less light because, from some directions, it would be blocked by the protruding sections. (I realise now that this is the point the author was trying to make with the north-south wall example. He is correct but again, that does not mean that the light is not directional, only that it comes from a number of directions. I think that my assumption of fronto-parallel lighting may be more consistent with the patterns than his assumption that the directions form the surface of a half-sphere (the sky), but they might be equivalent.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Nov 27, Lydia Maniatis commented:

      In his comment above, CCC says that “The issue of diffuse illumination is a rather minor aspect of the paper...” However, based on the abstract, confirming the “diffuse illumination assumption” is presented as a central goal: “These results validate the idea that human observers can use the diffuse illumination assumption to perceived depth from luminance gradients alone without making an assumption of light direction.”

      In the authors' stimuli, light does, in fact, have an implied direction – the light source is fronto-parallel to the (apparent) corrugations, such that its intensity is attenuated with distance in the direction along the normal.

      As a general fact, the greater weighting of what we could call pictorial cues over binocular disparity in the perception of depth did not require more corroboration. It is evident in the fact that very convincing pictorial depth effects are easy to achieve, despite the zero disparity. Also, with regard to the presence of positive (rather than zero) disparity cues, a relevant reference that was not included is Pizlo, Zygmunt, Yunfeng Li, and Robert M. Steinman. "Binocular disparity only comes into play when everything else fails; a finding with broader implications than one might suppose." Spatial Vision 21.6 (2008): 495-508. (Update 12/2: I apologise, the authors do cite this paper).

      In their abstract Langer and Buelthoff (2000) conclude that “overall performance in the depth-discrimination task was superior to that predicted by a dark-means-deep model. This implies that humans use a more accurate model than dark-means-deep to perceive shape-from-shading under diffuse lighting.”


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2015 Nov 23, Chien-Chung Chen commented:

      First of all, it should be noted that this paper is about cue combination, not shape-from-shading per se. The issue of diffuse illumination is a rather minor aspect of the paper, to which most of the discussion in the thread is quite irrelevant.

      The commenter’s argument that no shadows are generated under diffuse illumination is a serious misunderstanding of the properties of diffuse illumination. Although the light comes from every direction, it does not reach everywhere. For instance, suppose one erects a wall running from north to south on an otherwise flat surface. Immediately to the west of the wall, all the light from the east would be blocked by the wall. Thus, this location would receive less light and in turn appear darker than it would without the wall, effectively forming a diffuse shadow. In short, the light received at a position on a surface depends on the extent of the sky it is exposed to. On an uneven surface, a position in a “valley” would “see” less of the sky than a position on a “hill” and thus would appear darker. This is why, under diffuse illumination, darker implies deeper. Notice that deeper means deeper than the surrounding surface, not farther from the source of light as the commenter erroneously states.
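
      As a rough illustration of this point (not taken from the paper), the following Python sketch treats a corrugated surface as a 1-D height profile under a uniform sky of light directions and estimates, for a hill point and a valley point, the fraction of sky directions that are not blocked by the surface itself. The cosine profile, sampling resolution, and ray-marching scheme are arbitrary choices made only for the illustration.

        import numpy as np

        def height(x, amplitude=1.0, wavelength=8.0):
            # Hypothetical corrugated surface profile (a simple cosine).
            return amplitude * np.cos(2 * np.pi * x / wavelength)

        def sky_visibility(x0, n_dirs=180, max_dist=40.0, step=0.05):
            # Fraction of upward directions from (x0, height(x0)) with an
            # unobstructed view of the sky, found by crude ray marching.
            z0 = height(x0)
            unblocked = 0
            for theta in np.linspace(0.01, np.pi - 0.01, n_dirs):
                dx, dz = np.cos(theta), np.sin(theta)
                blocked = False
                for t in np.arange(step, max_dist, step):
                    if z0 + t * dz < height(x0 + t * dx):
                        blocked = True
                        break
                if not blocked:
                    unblocked += 1
            return unblocked / n_dirs

        print("hill  :", round(sky_visibility(0.0), 2))  # crest of a corrugation
        print("valley:", round(sky_visibility(4.0), 2))  # trough, half a wavelength away

      The valley point sees a smaller fraction of the sky than the hill point and therefore receives less light, which is the diffuse-shadow effect described above.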

      The difference between the “light-from-above” and “darker-is-deeper” assumptions lies in the range of scenes to which they apply. It is indeed an empirical issue whether an interpretation that applies to many scenes is a true default or is cued by some aspect of the scene. Our claim is that the directional lighting assumption that is so common in computer graphics is typically cued by the discrepancy between symmetric contour information and asymmetric shading information. (In the case of Ramachandran’s disks, the contour information is circularly symmetric.) This particular discrepancy is a narrow subset of all possible arrangements of contours and shading in images. If this discrepancy is removed, either by making the shading (circularly) symmetric or by making the contours asymmetric, the prediction is that the visual system will not maintain the “light-from-above” assumption but will default to the “darker-is-deeper” assumption. The “darker-is-deeper” assumption is considered the more basic default because it can apply to images with any degree of symmetry or asymmetry. Of all possible visual scenes generated at random, only a small subset will contain symmetries and an even smaller subset will have contour/shading discrepancies consistent with any kind of oblique lighting direction. It is only in the presence of such discrepancies that a default for the lighting direction could come into play.

      Finally, it is true that it is difficult to separate “darker-is-deeper” from “darker-is-darker-colored” on the basis of a single instance. However, such heuristics do not depend on one instance but on the statistics of natural scenes. In a complex scene, one does not use just one visual cue. “Good shape” is not a necessary condition for the darker-is-deeper rule, as the complex random stimuli used by Langer & Buelthoff (2000) showed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2015 Sep 26, Lydia Maniatis commented:

      You say in the article that under diffuse illumination "light comes from every direction." In that case, there should be no shadows; a uniformly-colored surface should appear uniformly-colored in the image and unevenness (bumps, etc) will not be evident. The fact that you postulate a “darker-deeper” rule indicates that illumination is directional – “deeper” being equivalent to “farther from the source.”

      When you say that diffuse illumination is “the default assumption when there is not strong enough evidence to support other interpretations of the scene,” how is this different from my saying that “light from the top right” is the default assumption when there is not strong enough evidence to support other interpretations of the scene?

      The information available to the visual system is simply a luminance pattern, which could derive from an infinite number of reflectance/illumination combinations at each point. A “diffuse illumination” assumption – or any illumination assumption – cannot drive a solution to this problem, because it actually cannot decide between the possibilities: “darker is deeper” and “darker-is-darker-colored.” The drivers of the solution are good shape assumptions, combined with the assumption that good shapes have uniform reflectance. So if we have a good shape, (e.g. the Ramachandran bumps) then we assume that the luminance differences are illumination-based, and the illumination assumption follows.

      (I would like to add that my earlier comment was deleted by moderators because it was a duplication, not because it was inappropriate!)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2015 Sep 25, Chien-Chung Chen commented:

      The commenter seemed to confuse the concept of “default” with “predominant”. The default assumption is like a null hypothesis in statistics. The visual system uses the default assumption when there is not strong enough evidence to support other interpretations of the scene. The commenter seemed to take it that the default assumption meant that the observer would always make this assumption about the scene over other possible interpretations (thus, her statement that we “could not explain why we often perceive illumination to be directional”). With this clarification, it should be clear that in our paper, there is no logical contradiction in our claim that diffuse illumination is a visual system default.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2015 Aug 18, Lydia Maniatis commented:

      The authors claim that their results (a) "validate the idea that human observers can use the diffuse illumination assumption to perceive depth from luminance gradients alone without making an assumption of light direction" and (b) confirm that "observers can recover shape from shading under diffuse illumination on the basis of the “dark-is-deep” rule."

      There is a logical problem with the idea that the visual system has a "diffuse illumination assumption," on the basis of which it applies a dark is deep rule.

      In viewing a scene, we perceive both the quality of the illumination (including direction) and the albedo of surfaces. Both of these qualities are constructed by the visual system, which must resolve the reflectance/illumination ambiguity. There is no default assumption as to either the reflectance of a particular patch nor the quality of the illumination. Both are inferred on the basis of structural constraints.

      For example, in the case of the Ramachandran bumps demonstration, the fundamental choice is between seeing concave or convex circles with a homogeneous reflectance, or seeing part moon shapes of varying reflectances (some dark, some light, some in-between). In the Chen/Tyler stimuli, structural assumptions as to the "best" shape consistent with the stimulus produce a percept consistent with a homogeneously coloured surface under diffuse illumination. If the diffuse illumination assumption came first, we could not explain why we see the corrugations in the obviously flat picture, and we could not explain why we often perceive illumination to be directional. In addition, we can create cartoonish versions of corrugations, in which the deeper dark region is not signalled by a gradient, and still produce a similar impression of dark is deep. If structural assumptions don't favour it, however, dark will not be deep, it will just be dark.

      In order to defend an a priori diffuse illumination assumption, the authors would need to explain why the assumption is made only sometimes (i.e., only when structural constraints demand it).

      I would add that their stimuli are not really consistent with diffuse illumination conditions, in the sense that both the upper and lower edges are straight, and thus we must be seeing the corrugated surfaces through a rectangular aperture. The concave sections of the surface would be at some distance from the edge of this aperture and I assume the luminance profile would indicate this in some way, with shadowing, or black holes in the gaps, something.

      When it comes to 3D perception, shading follows shape, not vice versa.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 19, Seyed Moayed Alavian commented:

      Dear Author, I would like to emphasize the role of supervision of barbers and the need for greater efforts to increase public awareness. Yours, Prof. Alavian


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 09, Donald Forsdyke commented:

      PURINE LOADING AS A THERMAL ADAPTATION

      The proteins of thermophiles are generally more heat-stable than the corresponding proteins from mesophiles. This must be reflected in either, or both, of two major amino acid variables – composition and order. In the past the notion that amino acid composition might reflect the pressure in thermophiles to retain purine-rich codons (3) has been disparaged by Zeldovich et al. (6). In this elegant new paper (5), Venev and Zeldovich (2015) agree that the “multiple factors” not accounted for in their modelling “include the influence of the genetic code and guanine-cytosine (GC) content of the genomes on amino acid frequencies.” However, there is puzzlement that the “theory and simulations predict a strong increase of leucine content in the thermostable proteins, whereas it is only minimally increased in experimental data.” Perhaps it is of relevance that leucine is in the top left quadrant of the standard presentation of the genetic code, its codons being extremely poor in purines.

      My response to Zeldovich et al. (6) in 2007, and my follow-up references in 2012 (1, 4), are set out below. One of the coauthors of the 2007 paper has recently further contributed to this topic (2).

      2007 Response

      This paper draws conclusions tending to oppose those of myself and coworkers (cited). A "key question" is held to be: "Which factor - amino acid or nucleotide composition - is primary in thermal adaptation and which is derivative?" Previous evidence is considered "anecdotal." Now there is evidence for "an exact and conclusive" relationship, based on an "exhaustive study" that provides a "complete picture." A set of amino acids - IVYWREL - correlates well with growth temperature. It is noted:

      "Signatures of thermal adaptation in protein sequences can be due to the specific biases in nucleotide sequences and vice versa. ... One has to explore whether a specific composition of nucleotide (amino acid) sequences shapes the content of amino acid (nucleotide) ones, or thermal adaptation of proteins and DNA (at the level of sequence compositions) are independent processes."

      In other words, are primary adaptations at the nucleic acid level driving changes at the protein level, or vice- versa? To what extent are the two processes independent? Their conclusion:

      "Resolving the old-standing controversy, we determined that the variation in nucleotide composition (increase of purine-load, or A + G content with temperature) is largely a consequence of thermal adaptation of proteins."

      Thus, the superficial reader of the paper, while noting the purine-richness of some of the codons corresponding to the IVYWREL amino acids, will conclude that the "independent processes" alternative has been excluded. Reading the paper (e.g. Figure 7) one can question the validity of this conclusion. Many of the IVYWREL amino acids have purine-poor alternative codons (especially IYLV, which at best can only change one purine unit in their codons). One of the IVYWREL amino acids has relatively purine-rich alternative codons (R, which at best can change two purine units). Two (EW) are always purine-rich, and there are no alternatives.

      Displaying more EW's as the temperature got hotter would satisfy a need both for more purines and for more tryptophan and glutamate, so here there is no discrimination as to whether one "shapes" the organism’s content of the other. Displaying more IYLVs gives only minimal flexibility in accommodating a purine-need. Most flexibility is provided by R codons.

      The authors do not give statistics for the differences between the slopes of Figs. 7a (unshuffled codons) and 7b (shuffled codons), but they appear real, presumably reflecting the choice biologically of purine-rich codons, a choice the organisms might not have to make if there were no independent purine-loading pressure. Thus, the authors note, but only in parenthesis, that the slopes "are somewhat different suggesting that codon bias may be partly responsible for the overall purine composition of DNA."

      2012 Response

      As a follow up, it can be noted that Dehouck et al. (2008) report that the relationship between a protein's thermostability and the optimum growth temperature of the organism containing it is not as close as previously thought (1). Furthermore, Liu et al. (2012) now conclude from a study of xylanase purine-rich coding sequences that "The codons relating to enzyme thermal property are selected by thermophilic force at [the] nucleotide level," not at the protein level (4).

      1. Dehouck Y, Folch B, Rooman M (2008) Revisiting the correlation between proteins' thermoresistance and organisms' thermophilicity. Protein Engineering, Design and Selection 21:275–278.

      2. Goncearenco A, Berezofsky IN (2014) The fundamental tradeoff in genomes and proteomes of prokaryotes established by the genetic code, codon entropy, and the physics of nucleic acids and proteins. Biology Direct 9:29.

      3. Lambros RJ, Mortimer JR, Forsdyke DR (2003) Optimum growth temperature and the base composition of open reading frames in prokaryotes. Extremophiles 7:443–450.

      4. Liu L, Wang L, Zhang Z, Wang S, Chen H (2012) Effect of codon message on xylanase thermal activity. J. Biol. Chem. 287:27183–27188.

      5. Venev SV, Zeldovich KB (2015) Massive parallel sampling of lattice proteins reveals foundations of thermal adaptation. J. Chem. Phys. 143:055101.

      6. Zeldovich KB, Berezofsky IN, Shakhnovich EI (2007) Protein and DNA sequence determinants of thermophilic adaptation. PLOS Comput. Biol. 3(1):e5.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 23, Klaas Vandepoele commented:

      Datasets described in the paper are available as BED files via http://bioinformatics.intec.ugent.be/blsspeller/:

      BLS files (phylogenetically conserved genomic regions given certain cutoffs, in BED format): osa/zma_C90F20B15.bed

      Conserved known motif files (from CisBP database v1.02): osa/zmaconservedknownmotifsBLS15/95.bed

      DNase hypersensitive sites (regions of open chromatin, in BED format): DNase1hypersensitivesites_osa.bed


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 09, Andrzej Miskiewicz commented:

      The clinical study falls in line with the previously published genetic research of Zhang L. and Wong D.T. However, in the presented manuscript the authors focus their research on inflammation as a possible molecular mechanism common to the mentioned conditions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 12, Donald Forsdyke commented:

      WINGE PROPOSED HYBRID STERILITY CURED BY WHOLE-GENOME DUPLICATION

      That hybrid sterility would be 'cured' by whole genome duplication was suggested by 'the father of yeast genetics' [1], Øjvind Winge [2], and has been extensively discussed in modern texts on speciation [3] and evolutionary biology [4].

      He would doubtless have been delighted that his favorite organism (apart from dogs) had formed the basis of the elegant study by Marcet-Houben and Gabaldon that provides a welcome endorsement of his viewpoint.

      [1] Szybalski W (2001) My road to Øjvind Winge, the father of yeast genetics. Genetics 158:1–6.

      [2] Winge Ø (1917) The chromosomes. Their numbers and general importance. Comptes Rendus des Travaux du Laboratoire Carlsberg. 13:131–275. see Webpage.

      [3] Forsdyke DR (2001) The Origin of Species Revisited. Montreal: McGill-Queen’s University Press, pp. 72–79.

      [4] Forsdyke DR (2011) Evolutionary Bioinformatics. 2nd edition. New York: Springer, pp. 184–186.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 27, Trevor Marshall commented:

      I see that Figure 2 of this paper closely resembles the in-silico image of D-Binding Protein which I presented (inter alia) at the Autoimmunity Congress in 2008, in Porto. I wonder if it was attributed properly? The caption I see on the journal's website does not seem to give any citation at all. Additionally, the text seems to be describing my image as the VDR, whereas this image is actually of the D-Binding Protein, not the VDR, despite the caption: "Figura 2 di 2. Il recettore della vitamina D (VDR) [D Binding Protein Complexed with 1,25-D]"


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 10, Randi Pechacek commented:

      Elisabeth Bik discussed this paper in a microBEnet blog about microbial biofilms in water meters.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 24, Salzman Lab Journal Club commented:

      This thought-provoking paper describes a potential mechanism for cryptic exon splicing due to TDP-43 proteinopathy. It is interesting to consider the evolutionary history of the sequence surrounding these cryptic exons and the origins of TDP-43 binding. The authors show that GU-repeats, the consensus motif for TDP-43, often flank these cryptically spliced exons. Did these GU repeats originally promote the splicing of the cryptic exons, with TDP-43 evolving the ability to bind GU repeats to restrict this process, or did the GU-repeats arise later, recruiting TDP-43 and thereby repressing the splicing? Additionally, we would be interested in seeing the effect of other engineered TDP-43 mutants and truncations on cryptically spliced exons: specifically, a mutant lacking the C-terminus and other splicing repressor domains, and full-length TDP-43 rescue, as simple controls.
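
      As a minimal, purely illustrative sketch of the kind of motif scan implied here (the sequence is made up and the repeat-length threshold is arbitrary), one could flag (GU)n stretches, the TDP-43 consensus motif, in the flanks of candidate cryptic exons:

        import re

        def gu_repeats(rna, min_units=4):
            # Return (start, end, matched_text) for runs of at least min_units GU dinucleotides.
            pattern = re.compile(r"(?:GU){%d,}" % min_units)
            return [(m.start(), m.end(), m.group()) for m in pattern.finditer(rna.upper())]

        example = "CAGGUGUGUGUGUGUAAACUUAGGUAGGUGUGUGUGUGUGUCCA"  # hypothetical flanking sequence
        for start, end, seq in gu_repeats(example):
            print(f"GU repeat at {start}-{end}: {seq}")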


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 25, David Keller commented:

      More observational evidence suggesting cardiovascular benefits associated with testosterone replacement

      This observational study found that TRT (testosterone replacement therapy) dosing resulting in normalization of serum testosterone levels was associated with greater benefits than were lower doses. Association of an intervention with a benefit in an observational study does not prove causation and cannot be used to prove the safety or efficacy of the intervention. Observational studies are inherently affected by confounding variables which can only be eliminated by conducting a randomized controlled trial of the intervention.

      Indeed, the authors conclude that "adequately powered, prospective, well-designed trials with a long-term follow-up will be needed to reach a conclusive agreement regarding the effect of TRT on CV risk."

      We certainly have more than enough results from observational studies at this time to justify the expense of a large prospective randomized trial of testosterone replacement therapy. Given the possibility that the cardiovascular benefits associated with TRT may be caused by TRT, it is a major disservice to ageing men to delay the definitive randomized trial any longer.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 25, Noa Krawczyk commented:

      Versão em Português do artigo disponível [Portuguese version available at]: https://figshare.com/s/f82b748e5f4133e4af2c

      "A interação entre comportamentos de consumo de droga, contextos e acesso aos cuidados de saúde: Um estudo qualitativo explorando atitudes e experiências de usuários de crack no Rio de Janeiro e São Paulo, Brasil"

      Abstract:

      Background: Despite growing attention to crack cocaine use in Brazil, little is known about users' histories, their patterns of consumption, and the interplay of drug-use habits, settings, and access/barriers to health care. Qualitative studies rarely compare findings from people who use crack across different settings. This study aims to explore the insights of regular crack users in two large Brazilian cities and to examine how social and contextual factors, including stigma and marginalization, influence initial use and various health and social problems.

      Methods: In-depth interviews and focus groups were conducted with 38 adult crack users recruited in poor neighborhoods of Rio de Janeiro and São Paulo. The interviews and focus groups were audio-recorded and transcribed in full. Qualitative analysis was performed, and the content was organized and analyzed by recurring themes relevant to the study's interests.

      Results: For study participants in both cities, frequent crack use plays a central role in daily life and leads to a range of physical, psychological, and social consequences. Common concerns among users include excessive crack use, engagement in risk behaviors, infrequent use of health services, marginalization, and difficulty in reducing drug consumption.

      Conclusions: The unfavorable conditions in which many crack users grow up and live can perpetuate risk behaviors, and stigma further marginalizes users from the health and recovery services they need. Reducing stigma and moralizing discourse around drug use, especially among health professionals and police, may encourage users to seek needed care. New harm-reduction-based care alternatives for marginalized users are being developed in some localities in Brazil and other countries, and should be adapted and expanded for vulnerable populations.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 05, Andrew R Kniss commented:

      It is certainly plausible that herbicides (glyphosate or other) might have some direct effect on earthworms. However, due to the design flaws, these effects cannot be evaluated from this particular study. The problems with this paper boil down to two main points:

      1) One of the herbicides applied in the study was "Roundup Speed" which contains the herbicide pelargonic acid in addition to glyphosate, so it is impossible to conclude anything about the direct effects of glyphosate.

      2) More importantly, the researchers didn't include a control treatment where they killed the plants without herbicides. All of the effects on earthworms and nutrients observed in this study could simply be due to killing the plants. It is perfectly plausible the exact same effects would be observed if the plants were clipped or pulled out of the pots.

      In addition, the glyphosate rate used in this study is far greater than would be used in field applications of this product. I calculated the amount of glyphosate applied to the pots (adding up the three applications they made) and converted it into the amount of glyphosate per unit area. It turns out the amount of glyphosate applied to each pot is equivalent to a field rate of 12,680 grams per hectare. A typical application rate in a field of glyphosate-resistant crops would be somewhere between 800 to 1,300 grams per hectare. So the amount of glyphosate they applied is about an order of magnitude too high to be relevant to most field situations.
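
      For readers who want to check the arithmetic, the conversion is simply grams applied per pot divided by the pot's surface area, scaled to a hectare. In the sketch below the pot area and per-application doses are hypothetical placeholders chosen only to reproduce the 12,680 g/ha figure quoted above; the actual values are in the paper.

        pot_area_m2 = 0.05                      # hypothetical pot surface area (m^2)
        doses_g_per_pot = [0.02, 0.02, 0.0234]  # hypothetical glyphosate per application (g)

        total_g_per_pot = sum(doses_g_per_pot)
        field_rate_g_per_ha = total_g_per_pot / pot_area_m2 * 10_000  # 1 ha = 10,000 m^2

        print(f"equivalent field rate: {field_rate_g_per_ha:.0f} g/ha")
        # Compare with a typical 800-1,300 g/ha application in glyphosate-resistant crops:
        print(f"ratio to typical field rates: {field_rate_g_per_ha / 1300:.0f}x to {field_rate_g_per_ha / 800:.0f}x")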

      Blog post with more detail: http://weedcontrolfreaks.com/2015/09/dead-plants-are-probably-bad-for-earthworms/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 16, Andrea Messori commented:

      How to manage the price of the newest expensive agents approved for HCV therapy? Pharmaceutical firms do not adopt the same policies across different countries and different regions

      by Andrea Messori, PharmD, Sabrina Trippoli, PharmD, Claudio Marinai, Pharm D

      HTA Unit, Tuscany Region, ESTAR, Regional Health Service, 50100 Firenze, Italy


      In managing the price of the newest expensive agents approved for HCV therapy, Express Scripts (the largest pharmacy benefit manager in the US) was successful in fostering competition between Abbvie and Gilead. In particular, Abbvie accepted an exchange of more prescriptions for price rebates, and so Express Scripts decided to cover Viekira Pak for the majority of HCV patients and restricted the coverage of Gilead products to certain exceptions (1).

      While Wilensky (1) emphasizes that this strategy based on price competition can be an effective way of reducing the cost of these expensive treatments, other experiences in this field have not been successful and therefore deserve to be mentioned. In May 2015, the Tuscany region of Italy undertook a competitive tender scheme aimed at the prescriptions for the 18,000 HCV patients of our region without cirrhosis (2), in which both Abbvie and Gilead were expected to participate. In fact, at the national level the majority of reimbursed treatments for HCV (currently restricted essentially to patients with cirrhosis) are those based on the products of Gilead and Abbvie. Surprisingly enough, Gilead did not participate in this tender, while Abbvie offered their product with no substantial price rebate, thus leading to the failure of the Tuscan attempt to foster an exchange of more prescriptions for price rebates. This is probably because of the reluctance of these two pharmaceutical firms to accept local agreements in which the nominal price of their products is explicitly reduced.

      On the other hand, at national level both Gilead and Abbvie have accepted a price-volume reimbursement agreement with our national Medicines Agency (AIFA) according to which the drug prices are progressively subjected to very substantial price rebates (up to 80%) as the number of treated patients increases (3-5). One reason why the two companies have accepted this national agreement is that the agreement has been kept confidential, and so the nominal values of these discounted prices have remained unknown and have not become a reference price for other jurisdictions.
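
      As a purely hypothetical illustration of how such a price-volume agreement works (the list price, tier boundaries and rebates below are invented and are not the confidential AIFA terms), the average price paid per course falls well below the nominal price as the number of treated patients grows:

        list_price = 15_000  # EUR per treatment course (hypothetical)
        tiers = [            # (patients in tier, rebate on the list price) - invented values
            (10_000, 0.00),
            (20_000, 0.40),
            (float("inf"), 0.80),
        ]

        def average_price(n_patients):
            remaining, total_cost = n_patients, 0.0
            for tier_size, rebate in tiers:
                treated = min(remaining, tier_size)
                total_cost += treated * list_price * (1 - rebate)
                remaining -= treated
                if remaining <= 0:
                    break
            return total_cost / n_patients

        for n in (5_000, 30_000, 100_000):
            print(f"{n:>7} patients: average price {average_price(n):,.0f} EUR per course")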

      In conclusion, to manage sustainability in this field, payers of Western countries and national health systems have different tools of procurement and reimbursement at their disposal, but choosing the best strategy is a difficult task because pharmaceutical firms do not adopt the same policies across different countries and different regions.

      References

      1. Wilensky G. A New Focus on Prescription Drug Spending. JAMA 2015;314(5):440-441.

      2. Brunetto MR, De Luca A, Messori A, Zignego AL. Reducing the price of new hepatitis C drugs in the Tuscany region of Italy. BMJ 2015;350:h3363 doi: 10.1136/bmj.h3363 (Published 24 June 2015), available at http://bmj.com/cgi/content/full/bmj.h3363?ijkey=xYS3zhzXoox8A8t&keytype=ref

      3. Messori A. Newest treatments for hepatitis C: how can we manage sustainability? Clin Infect Dis. 2015 Aug pii: civ667. [Epub ahead of print], preprint available at http://www.osservatorioinnovazione.net/papers/cid2015pricing.pdf

      4. Messori A. Managing the high cost of innovative drugs: anti-cancer agents vs direct-acting antivirals for hepatitis C (Comment posted 2 April 2015), Ann Intern Med 2015, available at http://annals.org/article.aspx?articleid=2212249#tab

      5. Quotidiano Sanità Website. “Epatite C. Pani (Aifa) al Senato: Con accordi Aifa risparmi per 2,5 miliardi in due anni. Ma c’è troppo divario tra le Regioni”. http://www.quotidianosanita.it/governo-e-parlamento/articolo.php?articolo_id=30276 , accessed 30 July 2015


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 22, Lydia Maniatis commented:

      This article exhibits at least four common pathologies of the vision science literature.

      First, the authors have adopted what, after Graham, I refer to as the “transparent brain hypothesis.”

      Graham (1992) notes that, at a time when neuroscientists thought V1 was pretty much all there was to the visual cortex, many psychophysical experimental results perfectly matched the early descriptions of V1 neural characteristics.

      Unfortunately, later evidence showed not only that there were many other hierarchically later levels of processing (V2, etc) but that even the descriptions of V1 receptive fields were overly simplistic.

      How then, Graham asks, can we explain the mountains of lovely psychophysical results? Her reply is that, under certain conditions, the brain becomes “transparent” and experience directly reflects V1 activity. Teller (1984) had already described this attitude as the “nothing mucks it up proviso,” which she didn’t think was sound. Here, Kwon et al seem to believe that the use of short line segments in their stimuli causes them to directly tap into the V1 layer. (Discussions of both Teller (1984) and Graham (1992) can be found on PubPeer).

      Proponents of this frankly bizarre view need, at the least, to meet the burden of explaining what happens at all the other levels of visual processing, with respect to their phenomenon of interest - bearing in mind that the same V1 receptive field activity that mediates observers' experience of their stimuli underlies, and thus is consistent with, and thus must explain, every aspect of their visual experience.

      Second, and relatedly, the authors treat perception as a detection problem, rather than as an indirect, inferential process. They say: “Before providing details of our model, we will summarize some relevant characteristics of the topographical map of area V1, which make it uniquely suited for detecting closed curves on a retinal image.”

      The reference to closed curves on a retinal image is unfortunate, since the stimulation of the retina is point stimulation, and curves are not a property of the proximal stimulus, but of the percept. Figure-ground segmentation is not, as I'm sure the authors are well aware, achieved by directly reading off retinal activity, in a process of detection. Any description of “contour detection” that can be developed with respect to the known properties of V1 receptive fields will be too simplistic; as a result, it will be trivial to construct any number of falsifying cases requiring a much broader - both geometrically and theoretically - perspective.

      Actually, we don’t even have to try to find a falsifying case, because the authors do it for us: “But how about extracting overlapping and intersecting curves, for example, two elongated ellipses intersecting at 4 points? One of the anonymous Reviewers raised this question. The model, in its present form, does not guarantee that such [overlapping] individual ellipses can be extracted: the ambiguities at X intersections will usually not be resolved correctly…” The literature is filled with ad hoc models of this type, i.e. models that “in their present form” are inadequate to their stated purpose.

      In the article highlights, the authors call theirs “the first principled explanation of the center of gravity tendency;” but one could argue that it isn’t very principled to describe a failed hypothesis as having explained anything. Perhaps the model can be fixed so as to handle a couple of overlapping ellipses, but it will almost certainly fail again soon thereafter. Sequential ad hoc fixes probably won’t result in an adequately limber model.

      Finally, as is often the case with psychophysical experiments, the number of observers is very small (three), one of whom is an author. We’re told, for good measure, that one of the observers was naïve to the purpose of the experiment. Is this naivete important? If so, then what about the other two observers? If not, then why mention it?

      I think what the authors refer to as "contour fragments" can also be considered "contours," i.e. they're homogeneously colored figures with a (closed) contour. That is, the "contour fragment" / "contour" dichotomy is a false one.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 21, Toni Mueller commented:

      Please note regarding the subcellular fractionation protocol to obtain a synapse-enriched fraction (SYN) as indicated in the Methods and Figure 1: subsequent experiments in our lab have indicated that it is not necessary to combine P1 and P2 prior to Triton solubilization. While combining these pellets enhanced the protein yield of the SYN fraction and the Triton-insoluble other heavy/intermediate membrane fraction to facilitate western blot sample preparation, the SYN fraction generated by Triton solubilization of only P2 appears to have less contamination by nuclear proteins. The steps performed in calculating GABA(A) receptor subunit expression and ratios do account for this potential confound (subunit expression is normalized to an inhibitory synapse marker, gephyrin; a loading control, VCP; and subunit expression in total homogenate), but other researchers and labs measuring proteins with both nuclear and synaptic expression patterns should be aware of this limitation in the fractionation method.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 01, Graham Walker commented:

      There's an interactive version of the Revised Myeloma International Staging System at MDCalc (and one for the original Myeloma International Staging System as well). Both have supplemental content on how to use them clinically, next steps, and info about the creators.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 06, KEVIN BLACK commented:

      A point of clarification for Ballard et al's Table 1: Nichols et al 2013 do report a death in the placebo group (1 of 9 subjects allocated to placebo, vs 0 of 15 allocated to olanzapine; see Figure 1 and Results text). We would be glad to share additional subject-specific information about adverse events.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 23, Ben Goldacre commented:

      This trial has the wrong trial registry ID associated with it on PubMed: both in the XML on PubMed, and in the originating journal article. The ID given is NCT015553305. We believe the correct ID, which we have found by hand searching, is NCT01553305.
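
      As an illustration (not the OpenTrials matching code), ClinicalTrials.gov identifiers consist of "NCT" followed by eight digits, so a minimal format check of the kind sketched below flags the nine-digit ID as malformed:

        import re

        def looks_like_nct_id(s):
            # ClinicalTrials.gov IDs are "NCT" followed by exactly eight digits.
            return bool(re.fullmatch(r"NCT\d{8}", s))

        for trial_id in ("NCT015553305", "NCT01553305"):  # as published vs. hand-searched
            print(trial_id, "->", "valid format" if looks_like_nct_id(trial_id) else "malformed")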

      This comment is being posted as part of the OpenTrials.net project [1], an open database threading together all publicly accessible documents and data on each trial, globally. In the course of creating the database, and matching documents and data sources about trials from different locations, we have identified various anomalies in datasets such as PubMed, and in published papers. Alongside documenting the prevalence of problems, we are also attempting to correct these errors and anomalies wherever possible, by feeding back to the originators. We have corrected this data in the OpenTrials.net database; we hope that this trial’s text and metadata can also be corrected at source, in PubMed and in the accompanying paper.

      Many thanks,

      Jessica Fleminger, Ben Goldacre*

      [1] Goldacre, B., Gray, J., 2016. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials 17. doi:10.1186/s13063-016-1290-8 PMID: 27056367

      * Dr Ben Goldacre BA MA MSc MBBS MRCPsych, Senior Clinical Research Fellow, ben.goldacre@phc.ox.ac.uk, www.ebmDataLab.net, Centre for Evidence Based Medicine, Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 12, Friedrich Thinnes commented:

      Solanezumab: a therapeutic breakthrough including theoretical gain.

      The exciting paper of Siemers et al. [1] on slowing down the progress of Alzheimer's with Solanezumab antibodies, from my point of view, not only represents a therapeutic breakthrough. The effects observed also broaden the understanding of the pathogenesis of Alzheimer's disease, by pointing to induced neuronal cell death as basic to AD.

      Accordingly, I propose that plasmalemmal VDAC-1 (Swiss Prot P21796) works as a receptor of amyloid Aß mono- or oligomers.

      In line with this, it has been shown that docking of these Aß forms to cell surfaces results in an opening of cell membrane-standing VDAC-1, a process finally ending in neuronal cell death.

      In consequence, whenever critical brain regions and their redundant structures are affected in this way, neuronal loss must be expected. Conversely, capturing Aß with adequate mAbs should minimize Aß toxicity, in other words slow down AD progression.

      The voltage-dependent anion channel (VDAC) is an archaic channel and is thus suggested to be involved in housekeeping functions. The channel is well established in the outer mitochondrial membrane, where it plays its role in the intrinsic apoptotic pathway. It is thus of proven relevance for Alzheimer's dementia [2].

      First data on an extra-mitochondrial localization came up in 1989, with the demonstration that human lymphocytes carry a heavy load of the molecule in their plasmalemma. Those data have meanwhile found manifold support from several laboratories using different approaches; for review see [3] and www.futhin.de.

      After studies focussed on the regulatory volume decrease (RVD) of HeLa and murine respiratory epithelial cells, respectively, had shown that cell membrane-integrated type-1 VDAC is part of the cell volume regulatory system of mammalian cells [4,5], data came up indicating that plasmalemmal VDAC-1 also plays a role in apoptosis.

      In a first effort, it was shown that opening of VDAC-1 in the plasma membrane precedes the activation of caspases in staurosporine-induced neuronal apoptosis. In other words, the authors documented that keeping type-1 porin in the plasmalemma of neurons closed with different specific antibodies abolishes the apoptotic volume decrease (AVD) of the cells [6].

      Next, studies on the toxic effect of amyloid Aß peptides on septal (SN56) and hippocampal (HT22) neurons corroborated that blocking VDAC in cell membranes means preventing an apoptotic development of cells, and additionally demonstrated that VDAC-1 and the estrogen receptor α (mERα) co-localize and interact in cell membrane caveolae, mERα working towards neuroprotection. The topographic relationship of the molecules was further specified demonstrating that both are integrated in caveolar lipid rafts [7].

      Of note: plasmalemmal VDAC-1 carries a GxxxG motif on its extracellular face, and amyloid Aß40/42 includes several of them in series. Importantly, GxxxG motifs are established aggregation and membrane perturbation motifs.
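
      As an illustrative scan (the Aß 1-42 sequence below is as commonly reported and should be verified against UniProt), a simple overlapping-motif search shows three GxxxG motifs in series in the C-terminal part of the peptide:

        import re

        ABETA42 = "DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVVIA"  # Aß 1-42, as commonly reported

        # A lookahead is used so that overlapping GxxxG motifs are all reported.
        for m in re.finditer(r"(?=(G...G))", ABETA42):
            print(f"GxxxG motif starting at residue {m.start() + 1}: {m.group(1)}")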

      Given this background, recent data on enhanced BACE1 expression in hypometabolic neurons [8] made me ask whether amyloid Aß, cut from ubiquitous APP by ß-secretase BACE1 and γ-secretase, may occasionally induce neuronal cell death via opening of ubiquitous VDAC-1 in cell membranes of critical brain regions - a proposal that includes a general model for the induction of cell death [9].

      The authors, remembering cerebral hypometabolism and amyloid accumulation as prevailing neuropathological characteristics of Alzheimer's disease, had tried to define effects of neuronal hypoactivity on amyloid plaque pathogenesis in the Tg2576 transgenic mouse model of Alzheimer's disease. They found that unilateral naris-occlusion resulted in an elevation of the ß-secretase BACE1 in neuronal terminals of deprived bulb and piriform cortex in young adult mice [8].

      Conclusion: taking for granted that 1) amyloid Aß mono- and/or oligomers dock to cell membrane-standing type-1 VDAC via GxxxG motifs, 2) these docking reactions result in plasmalemmal VDAC-1 channel opening followed by cell death, and 3) Solanezumab antibodies neutralize Aß oligomers by scavenging, a revised version of the amyloid cascade hypothesis of Alzheimer's pathogenesis emerges.

      Accordingly, familial as well as sporadic Alzheimer's disease rests on a putative form of extrinsic cell death via opening cell membrane-standing VDAC-1 (= receptor), and is boosted by excessive amyloid Aß (= regulating agonist) production via processing of the amyloid precursor protein (APP) of weakening cells.

      The synopsis of a series of solid data from several laboratories thus helps to further understand the pathogenesis of either form of AD.

      Phenotypically mild at the beginning, increasing disturbances of brain function, evidenced by worsening stages of the disease, point to a progressive process at the somatic level. With at first only occasional or just a few cells affected, over time a burden of cell deaths accumulates, finally ending in Alzheimer dementia whenever critical brain regions and their redundant structures are affected. In line with this, blocking free amyloid with antibodies allows AD to be slowed down. Finally, the model presented makes it possible to formally explain the inverse relationship between AD and cancer [10].

      References

      [1] Siemers ER, Sundella KL, Carlson C, Michael Case, Sethuraman G, Liu-Seifert H, Dowsett SA, Pontecorvo MJ, Dean RA, Demattos R. Phase 3 solanezumab trials: Secondary outcomes in mild Alzheimer’s disease patients Alzheimer’s & Dementia 2015; epub ahead: 1-11.

      [2] Demetrius LA, Magistretti PJ, Pellerin L. Alzheimer's disease: the amyloid hypothesis and the Inverse Warburg effect. Front Physiol. 2015 Jan 14;5:522. doi: 10.3389/fphys.2014.00522. eCollection 2014.

      [3] Thinnes FP.Phosphorylation, nitrosation and plasminogen K3 modulation make VDAC-1 lucid as part of the extrinsic apoptotic pathway-Resulting thesis: Native VDAC-1 indispensible for finalisation of its 3D structure. Biochim Biophys Acta. 2015; 1848:1410-1416. doi: 10.1016/j.bbamem.2015.02.031. Epub 2015 Mar 11. Review. PMID: 25771449

      [4] Thinnes FP, Hellmann KP, Hellmann T, Merker R, Brockhaus-Pruchniewicz U, Schwarzer C, Walter G, Götz H, Hilschmann N. Studies on human porin XXII: cell membrane integrated human porin channels are involved in regulatory volume decrease (RVD) of HeLa cells. Mol Genet Metab. 2000; 69:331-337.

      [5] Okada SF, O'Neal WK, Huang P, Nicholas RA, Ostrowski LE, Craigen WJ, Lazarowski ER, Boucher RC. Voltage-dependent anion channel-1 (VDAC-1) contributes to ATP release and cell volume regulation in murine cells. J Gen Physiol. 2004; 124:513-526. Epub 2004 Oct 11.

      [6] Elinder F, Akanda N, Tofighi R, Shimizu S, Tsujimoto Y, Orrenius S, Ceccatelli S. Opening of plasma membrane voltage-dependent anion channels (VDAC) precedes caspase activation in neuronal apoptosis induced by toxic stimuli. Cell Death Differ. 2005; 12:1134-1140. PMID: 15861186 Free Article

      [7] Marin R, Ramírez C, Morales A, González M, Alonso R, Díaz M. Modulation of Abeta-induced neurotoxicity by estrogen receptor alpha and other associated proteins in lipid rafts, Steroids 2008; 73:992–996.

      [8] Zhang X-M, Xiong K, Cai Y, Cai H, Luo XG, Feng JC, Clough RW, Patrylo PR, Struble RG, Yan XX. Functional deprivation promotes amyloid plaque pathogenesis in Tg2576 mouse olfactory bulb and piriform cortex, Eur. J. Neurosci. 2010; 31: 710–721.

      [9] Thinnes FP. Amyloid Aß, cut from APP by ß-secretase BACE1 and γ-secretase, induces apoptosis via opening type-1 porin/VDAC in cell membranes of hypometabolic cells-A basic model for the induction of apoptosis!? Mol Genet Metab. 2010; 101:301-303. doi: 10.1016/j.ymgme.2010.07.007. Epub 2010 Jul 15. No abstract available.

      [10] Thinnes FP. Alzheimer disease controls cancer - concerning the apoptogenic interaction of cell membrane-standing type-1 VDAC and amyloid peptides via GxxxG motifs. Mol Genet Metab. 2012; 106:502-503. doi: 10.1016/j.ymgme.2012.06.004. Epub 2012 Jun 15. No abstract available.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 15, David Keller commented:

      Proposal for a clinical trial to test the safety of a widely-used radionuclide scan

      A recent letter to JAMA Internal Medicine [1] asked whether substantia nigra (SN) neurons weakened by Parkinson disease (PD) may be more sensitive to the adverse effects of ionizing radiation than are healthy mature neurons. Dosimetry safety studies assume that neurons are relatively resistant to damage from ionizing radiation. Radiation safety is, instead, calculated based on exposure of such tissues as the thyroid and the lining of the bladder. If SN neurons in PD are significantly more radiosensitive than healthy neurons, then PD patients might suffer progression of PD caused by the level of ionizing radiation exposure from certain diagnostic scans.

      In a widely-used clinical diagnostic brain imaging procedure known as the "DaT scan", a radiopharmaceutical tracer marketed as "DaTscan" (Ioflupane I-123) is injected intravenously, crosses into the brain and binds to dopamine transporters. The tracer emits gamma radiation, thereby allowing for imaging which can help distinguish Parkinsonism from other causes of similar symptoms. According to Table 1 of the DaTscan product information, the highest concentration of injected activity occurs in the striatum, close to the substantia nigra [2]. At the recommended adult dose of DaTscan, the striatum is exposed to 185 MBq x 230 microGray/MBq = 42,550 microGray = 42.55 mGy, equivalent to 42.55 mSv for gamma radiation, or about 4.25 rad (1 Sv = 1 Gy = 100 rad). The nearby SN receives approximately the same exposure, although the exact figure is not specified in the DaTscan product information.

      How damaging is a gamma exposure of about 42.5 mSv to SN neurons already weakened by PD? For comparison, a head CT exposes the entire brain to about 2 mSv uniformly, so the radiation exposure to the striatum caused by a dose of DaTscan is the same as it would receive from about 21 brain CT scans [3]. The SN and other nearby basal ganglia presumably receive about the same exposure, although the DaTscan product insert does not specify this important information.
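
      The arithmetic behind these figures is reproduced below as a simple sketch, assuming the 185 MBq adult dose and the 230 microGray/MBq striatal coefficient quoted from the product information, a radiation weighting factor of 1 for gamma rays, and a nominal 2 mSv brain dose per head CT.

        injected_activity_MBq = 185
        striatal_coefficient_uGy_per_MBq = 230
        head_ct_brain_dose_mSv = 2.0

        striatal_dose_mGy = injected_activity_MBq * striatal_coefficient_uGy_per_MBq / 1000
        striatal_dose_mSv = striatal_dose_mGy * 1.0   # weighting factor of 1 for gamma rays
        striatal_dose_rad = striatal_dose_mGy / 10    # 1 Gy = 100 rad

        print(f"striatal dose: {striatal_dose_mSv:.2f} mSv (~{striatal_dose_rad:.2f} rad)")
        print(f"equivalent number of head CTs: {striatal_dose_mSv / head_ct_brain_dose_mSv:.0f}")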

      The clinical effect of this radiation dose on PD patients could be assessed by conducting an observational study of patients whose neurologist has ordered a DaTscan. Each patient would be given a thorough UPDRS exam (a detailed PD-focused neurologic exam) prior to being scanned, and at appropriate intervals after scanning. The overall rate of UPDRS score deterioration in the study subjects would be compared with that of matched PD patients who have not undergone scanning. Any significant worsening of UPDRS scores in the scanned group, compared to the control group, would presumably be an adverse effect of the DaTscan radiotracer, and should be investigated further.

      With the increasing use of DaT scans, PD patients should be informed whether their clinical condition, as measured by the UPDRS, will be expected to worsen as a result of these scans, and if so, approximately how much.

      I emailed the above observations to the Commissioner of the FDA recently, and received a reply which failed to address my radiation safety concerns regarding the FDA-approved radiopharmaceutical tracer marketed by General Electric as DaTscan.[4]

      The Code of Federal Regulations Title 21, Section 601.35 (Evaluation of safety of diagnostic radiopharmaceuticals) mandates evaluation of "changes in the physiologic or biochemical function of the target and nontarget tissues". The effect of 42.5 mSv of gamma radiation concentrated on the already diseased neurons in the substantia nigra of patients with Parkinson's disease has not been determined, as is required under the above-cited Federal regulation.

      I urge neurologists and their patients with PD to consider the high concentration of gamma radiation caused by DaTscan and ask, before injecting this tracer, "is this scan really necessary, and how will it substantively alter clinical management?".

      References

      1: Keller DL. Non-neurologists and the Dopamine Transporter Scan. JAMA Intern Med. 2015 Aug 1;175(8):1418. doi: 10.1001/jamainternmed.2015.2497. PubMed PMID: 26236969.

      2: DaTscan drug prescribing information, visited on 9/16/2015: http://us.datscan.com/wp-content/uploads/2014/06/prescribing-information.pdf

      3: M.I.T. online guide to radiation exposure, accessed on 9/20/2015 at: http://news.mit.edu/2011/explained-radioactivity-0328

      4: Email received from FDA pharmacist identified only by the initials "H.P.", 10/15/2015.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 16, Stefano Biffo commented:

      Lipid metabolic kinase, ACC1? Is it not a carboxylase?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 11, Jonathan Heald commented:

      Posted on behalf of the American Academy of Sleep Medicine.

      Consensus Conference Panel., 2015


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 05, Jonathan Heald commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 17, Annika Hoyer commented:

      With great interest we noticed this paper by Nikoloulopoulos. The author proposes an approach for the meta-analysis of diagnostic accuracy studies modelling random effects while using copulas. In his work, he compares his model to the copula approach presented by Kuss et al. [1], referred to henceforth as the KHS model. We very much appreciate Nikoloulopoulos referring to our work, but we feel there are some open questions.

      The author shows in the appendix that the association parameter from the copula is estimated with large bias by the KHS model, and this is what we also saw in our simulation study. However, the association parameter is not the parameter of main interest; the parameters of main interest are the overall sensitivities and specificities. These were estimated well by the KHS model, and we considered the copula parameter more as a nuisance parameter. This was also pointed out by Nikoloulopoulos in his paper. As a consequence, we are surprised that the poor performance in terms of the association parameter led the author to the verdict that the KHS method is 'inefficient' and 'flawed' and should no longer be used. We do not agree here, because our simulation as well as his theoretical results clearly show that the KHS model estimates the parameters of actual interest very well. As an aside, we also saw compromised results for the association parameter with the GLMM model in our simulation.

      Nikoloulopoulos also wrote that the KHS approximation can only be used if the 'number of observations in the respective study group of healthy and diseased probands is the same for each study'. This claim is made at least three times in the article. But, unfortunately, there is no proof, reference, or at least an example which supports this statement. Without a mathematical proof, we think there could be a misunderstanding of the model. In our model, we assume beta-binomial distributions for the true positives and the true negatives of the i-th study. These are linked using a copula. This happens on the individual study level because we wanted to account for different study sizes. For estimating the meta-analytic parameters of interest we assume that the shape and scale parameters of the beta-binomial distributions as well as the copula parameter are the same across studies, so that the expectation values of the marginal distributions can be treated as the meta-analytic sensitivities and specificities. Of course, it is true that we used equal sample sizes in our simulation [1]; however, we see no theoretical reason why different sample sizes should not work. In a recently accepted follow-up paper on trivariate copulas [2] we used differing sample sizes in the simulation and we also saw a superior performance of the KHS model as compared to the GLMM. In a follow-up paper [3], Nikoloulopoulos repeats this claim about equal group sizes but, unfortunately, did not answer our question [4,5] with respect to that point.
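
      For readers unfamiliar with this model structure, the following minimal simulation sketch (illustrative only, with arbitrary parameter values and a Gaussian copula, although the published models also consider other copula families) draws per-study sensitivities and specificities from copula-linked beta margins and then generates binomial counts with deliberately unequal group sizes per study:

        import numpy as np
        from scipy.stats import beta, norm

        rng = np.random.default_rng(1)
        n_studies = 10
        rho = -0.4                    # copula association (often negative for se/sp)
        a_se, b_se = 8, 2             # beta margin for sensitivity (mean 0.8)
        a_sp, b_sp = 9, 1             # beta margin for specificity (mean 0.9)

        # Gaussian copula: correlated normals -> uniforms -> beta margins
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_studies)
        u = norm.cdf(z)
        sens = beta.ppf(u[:, 0], a_se, b_se)
        spec = beta.ppf(u[:, 1], a_sp, b_sp)

        # Deliberately unequal numbers of diseased and healthy subjects per study
        n_diseased = rng.integers(30, 200, n_studies)
        n_healthy = rng.integers(50, 400, n_studies)
        true_pos = rng.binomial(n_diseased, sens)
        true_neg = rng.binomial(n_healthy, spec)

        print("marginal (meta-analytic) sensitivity:", a_se / (a_se + b_se))
        print("marginal (meta-analytic) specificity:", a_sp / (a_sp + b_sp))
        print("observed per-study sensitivities:", np.round(true_pos / n_diseased, 2))
        print("observed per-study specificities:", np.round(true_neg / n_healthy, 2))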

      We see the main advantage of the KHS model over the GLMM in its robustness. Our SAS NLMIXED code for the copula models converged better than PQL estimation (SAS PROC GLIMMIX) and much better than Gauss-Hermite quadrature estimation for the GLMM model (SAS PROC NLMIXED). This was true for the original bivariate KHS model, but also for the recent trivariate update. This is certainly to be expected because fitting the KHS model reduces essentially to fitting a bivariate distribution, without the complicated computations or approximations for the random effects that are required for the GLMM and the model of Nikoloulopoulos given here. Numerical problems are also frequently observed if one uses the already existing methods for copula models with non-normal random effects from Liu and Yu [6]. It would thus be very interesting to learn how the author's model performs in terms of robustness.

      Annika Hoyer, Oliver Kuss

      References

      [1] Kuss O, Hoyer A, Solms A. Meta-analysis for diagnostic accuracy studies: A new statistical model using beta-binomial distributions and bivariate copulas. Statistics in Medicine 2014; 33(1):17-30. DOI: 10.1002/sim.5909

      [2] Hoyer A, Kuss O. Statistical methods for meta-analysis of diagnostic tests accounting for prevalence - A new model using trivariate copulas. Statistics in Medicine 2015; 34(11):1912-24. DOI: 10.1002/sim.6463

      [3] Nikoloulopoulos AK. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence. Statistical Methods in Medical Research 2015 11 Aug; Epub ahead of print

      [4] Hoyer A, Kuss O. Comment on 'A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence' by Aristidis K Nikoloulopoulos. Statistical Methods in Medical Research 2016; 25(2):985-7. DOI: 10.1177/0962280216640628

      [5] Nikoloulopoulos AK. Comment on 'A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence'. Statistical Methods in Medical Research 2016; 25(2):988-91. DOI: 10.1177/0962280216630190

      [6] Liu L, Yu Z. A likelihood reformulation method in non-normal random effects models. Statistics in Medicine 2008; 27(16):3105-3124.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 14, David Mage commented:

      The authors have done an excellent job in reviewing the effect of fetal sex on prenatal development. However, they seem to reach an incongruous conclusion above that "male fetuses exposed to prenatal adversities are more highly impaired than those of female fetuses." Given that the numbers of X and Y sperm in the race to conception must be identical if Dad is XY, the human primary gender distribution at the instant of conception must be 0.5 XY and 0.5 XX. However, given the nominal 5% excess male live birth rate, there must be an excess of female fetal loss during pregnancy, between the moment of conception and the moment of exit from the birth canal. Conceptus and fetal loss in the first trimester can occur even before Mom knows she is pregnant, and later without fetal recovery for gender identification. Even if there is a male excess of observed fetal loss in the third trimester, from spontaneous abortion or stillbirth, it cannot be greater than the prior female fetal loss. The authors also do not appear to consider as valid the concluding explanation of Naeye et al. (1971), page 905, for the male infant disadvantage: "The biologic difference must originate in the genetic difference between the sexes and those genetic differences are the consequences of the disparity in the number of the X chromosomes." Indeed, Mage and Donner, Scandinavian Journal of Forensic Science, 2015;21(1), doi:10.1515/sjfs-2015-0001, show that an X-linked gene in Hardy-Weinberg equilibrium, with a dominant allele protective against respiratory failure at frequency p = 1/3 and a non-protective recessive allele at frequency q = 1 - p = 2/3, can explain the 50% male excess rate of infant death from respiratory failures, such as SIDS, and the 25% male excess rate of ALL infant mortality up to their 5th birthday.
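
      The arithmetic behind the cited 50% figure can be written out in a few lines (an illustrative sketch assuming X-linkage, Hardy-Weinberg proportions, hemizygous males, and the allele frequencies stated above):

        from fractions import Fraction

        p = Fraction(1, 3)        # protective dominant allele frequency
        q = 1 - p                 # non-protective recessive allele frequency

        male_at_risk = q          # XY males: the single X carries the recessive allele
        female_at_risk = q ** 2   # XX females: both X's must carry the recessive allele

        ratio = male_at_risk / female_at_risk
        print(f"male:female risk ratio = {ratio} ({float(ratio - 1):.0%} male excess)")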

      David Mage, PhD (WHO retired)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 20, Lydia Maniatis commented:

      The author says that “If... [slant] judgments were based on scaling contrast or scaling gradients, then surfaces viewed under orthographic projection should all appear fronto-parallel. In order to evaluate these predictions, it is useful to consider the image of a planar surface under orthographic projection in Figure 3D.”

      This is not a fair test. Figure 3D was constructed on the basis of a set of upright rectangles. Under orthographic projection, we still have upright rectangles, and the visual system treats projections shaped like rectangles as fronto-parallel, regardless of their source. If the rectangles had been tilted (resulting in parallelogram-shaped projections), or if we were dealing with circles instead of rectangles (producing elliptical projections), then the orthographic projection would not appear fronto-parallel.

      The failure to take shape into account is typical of many studies on slant (e.g. Ivanov et al., 2014; Saunders and Chen, 2015). But shape, whether collected in a “texture” or presented individually, is dispositive in slant perception, and it needs to be explicitly considered, or else results will be inconsistent and uninterpretable.

      The idea that foreshortening could even be a potential cue to slant is logically untenable, as I explain in a comment on Ivanov et al (2014).

      I would also note that in the perspective projections in Figure 3, edges and objects are visually grouped to produce oblique lines, more so for the larger slants. It is known that obliques tend to be perceived as receding (e.g. Deregowski and Parker 1992). This presents a confound very difficult to disentangle from other suggestions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 22, Kenneth Witwer commented:

      Dr. Dweep's response is appreciated but does not engage our suggestions or explain why the miRWalk validated target module was:

      1) extensively erroneous (1.0, as we pointed out, I believe, in 2011),

      2) greatly expanded and again erroneous (2.0 at the start of 2015, apparently due to a database error, and as we shared with the authors earlier this year),

      3) and finally what appears to be a mirror of miRTarBase, at least according to our results.

      Instability and a lack of clear information on versions and methods cause confusion, especially when scientists take these data at face value. I have reviewed submitted manuscripts that used miRWalk "validated" results as input for further experiments or analysis, even when these results were, unbeknownst to the authors, completely erroneous.

      Dr. Dweep suggests that our analysis was deficient because of the terms we used for our 50 selected genes. This would indeed be important had we attempted to gauge the comprehensiveness of miRWalk or other databases (we expressly did not), or if we had used one set of terms for our miRWalk query, and another set for querying all the other databases (we did not, unless forced to do so by an interface). Which gene terms we used in our comparison of databases, then, is irrelevant.

      Dr. Dweep's second point is that our analysis focused only on a small portion of the miRWalk database, the validated target module. Should we have ignored perceived problems with such modules, simply because the file sizes for these data are smaller than the sum of all predictive or correlative information on miRNAs in any given database, or on the internet, or whatever other set of information we might consider?

      Finally, Dr. Dweep refers to supplemental methods that explain, albeit in vague terms that do not allow reproduction of the results, how validated targets are gathered using text searches and four databases. This does not explain why the results we downloaded from miRWalk2.0 in the period of April-August 2015 were exactly the same as those found in another database, miRTarBase (last updated two years ago), down to the sorting of the hits, nor does it explain the drastic fluctuations in the numbers of validated hits over the years, almost all of those we examined being erroneous. Thus, a miRWalk validated target module user in January 2015 would have received a completely different set of results compared with the same query a year earlier or six months later. As we have suggested to Dr. Dweep, one might simply link to miRTarBase in cases where miRWalk2.0 does not provide additional information, and provide more extensive versioning information or even an automated email update to users when mistakes are found or major changes are implemented.

      We agree that validated target databases have potential uses and hope that our findings are somehow helpful, as a cautionary note even if they are not used to improve databases.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 22, Harsh Dweep commented:

      We congratulate Dr. Witwer on the publication of his results in the journal Drug Development Research. This publication is based on analyses of only one of the many key features of miRWalk2.0, comparing it against 50 genes using GeneCards terminology. Here, we would like to mention that gene synonym information is not comprehensively maintained by GeneCards, MeSH, or other databases. For example, a total of 14, 22 and 23 synonyms are documented in NCBI Gene, MeSH and GeneCards, respectively, for the Clusterin (CLU) gene. Only 5 synonyms are common between GeneCards (on which Witwer et al. based their queries) and MeSH. However, the information in PubMed relies on the terms stored in MeSH. By considering only GeneCards for evaluation (text mining), a large amount of information on synonyms, as well as their related articles, can be missed. In addition, an alias of one gene can also be an alias of other genes; for example, the alias VH is shared by 36 different genes. These comments reflect only part of the problems related to text mining.

      Moreover, Witwer addresses only 0.008% of the information contained in miRWalk2.0.

      Additionally, it is clearly mentioned in the supplementary information of miRWalk2.0 article (page 11) that “information on miRNA interactions associated with pathways, diseases, OMIM disorders, HPOs, organs, cell lines and proteins involved in miRNA processing, is extracted by an automated text-mining search in the titles and abstracts of PubMed articles. In a next step, this information was combined with experimentally verified miRNA-target interactions downloaded from four databases”.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Sep 03, Kenneth Witwer commented:

      We recently published a small comparison of several validated miRNA-target databases (Lee YJ, 2015)--that is, catalogs of miRNA-target interactions with published experimental evidence. A "validated target" module is one part of miRWalk2.0, so this database and its predecessor (1.0) were included in our study. We queried 50 genes at different times: miRWalk1.0 returned 82 miRNA-target interactions, while miRWalk2.0 returned 5468 in January 2015 and 91 in May, June, and August 2015, with only 5 of the latter coming from the original 82. As of August 2015, the final set of 91 interactions was identical to that returned by miRTarBase (Hsu SD, 2014, Hsu SD, 2011), down to the sort order. Although miRTarBase is cited as one of numerous sources of information for miRWalk output, it was not clear from the methods that it would be the only source for the genes we queried. Experimental validation databases have the potential to provide useful information, but in light of the stability and accuracy issues we seem to have observed over time, users and reviewers are encouraged to 1) consult multiple sources of information (we found Diana TarBase to be among the most comprehensive and best-curated of those we tested, Vlachos IS, 2015); 2) at the same time be aware that different databases may rely entirely or to some extent on other databases; and 3) check the strength of interaction evidence in the primary literature.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 12, Sander Houten commented:

      This paper focuses on the role of KLF14 in insulin signaling via the PI3K/Akt pathway. The authors study mouse models of obesity and the Hepa 1-6 cell line, a derivative of a mouse hepatoma. The authors show that Klf14 mRNA and KLF14 protein expression are decreased in liver, adipose tissue and muscle of C57BL/6 mice on a high-fat diet and of db/db mice when compared with control animals. In subsequent experiments the authors use ectopic KLF14 expression in Hepa 1-6 cells and show that KLF14 stimulates insulin signaling via the classical PI3K/Akt pathway (Yang M, 2015). I would like to point out that there is little evidence to support the hypothesis that KLF14 plays an important role in adult mouse liver biology. We found no evidence for expression of KLF14 in adult mouse liver, as we were unable to amplify Klf14 cDNA, did not find Klf14 mapped reads in liver RNA sequencing data, and found no specific signal upon immunoblotting (Argmann CA, 2017). Our data on the absence of Klf14 expression in liver are consistent with previously published work by others (Parker-Katiraee L, 2007) and publicly available data sources. We also investigated the physiological functions of KLF14 by studying a whole-body KO mouse model and focused on the metabolic role of this transcription factor in mice on chow and high-fat diets. Our results indicate that KLF14 does not play a role in the development of diet-induced insulin resistance in male C57BL/6 mice (Argmann CA, 2017).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 04, Milos Filipovic commented:

      Dirty dancing of NO/RSNO and H2S

      In this report by Cortese-Krott et al, 2015, the existence of SSNO− as a product of the anaerobic and aerobic reaction of H2S with NO or RSNOs was claimed based on MS experiments. The authors failed to detect HSNO using a demo MS instrument, although HSNO has been prepared by the acidification of nitrite, pulse radiolysis and trans-nitrosation, and characterized by MS, IR (14N/15N labelling) and 15N-NMR (Filipovic et al, 2012), and was even reported by Cortese-Krott et al, 2014. HSNO has also recently been detected by 15N NMR as a product of the reaction of PNP+SSNO- with sulphide (Wedmann et al, 2015), again going against the claim of Cortese-Krott et al, 2015 that the “SSNO- mix” is stable in an excess of sulphide.

      The authors use an LTQ Orbitrap to concentrate reactive ions (as demonstrated by the absence of any MS signal in the first 1.5 min of continuous injection of the reaction mixture), all of which can intercombine in the ion trap (Figure 3C). They never show control spectra or the actual intensities of the signals, nor do they show a spectrum over a broad m/z range. It is puzzling that the signal of the reactant, SNAP, reaches its maximum at almost the same time as SSNO− and all other reaction products. While the SNAP peak slowly decays, in agreement with the UV/Vis experiments, the reaction products remain at maximum intensity (although most of them have a questionable S/N ratio, Figure 3C). In contrast, HS2O3− starts to disappear, suggesting that it was present even before the reaction started.

      The authors refer to the formation of the persulfide NONOate ([ONN(OH)S2]−). This species should have m/z 124.9479. While Figure S6A indeed has the title “High resolution mass spectrum of [SS(NO)NOH]−”, the actual figure shows the spectrum of a species with m/z 141.9579, which the authors assign to H4O2N3S2−. Strangely, in the same reaction mixture, an identical mass peak is assigned to another species, HO5(14)N(15)NS− (Figure 3B, right panel). Three peaks are present at m/z ~143 in Figure S6A. As this is the only MS spectrum shown at high resolution, one can calculate the masses of those unassigned peaks and observe that none of them show up in Figure 3B, which was recorded under the same conditions. The isotopic pattern of SULFI/NO (Figure 3B, left panel) is inconsistent with what should be expected for this species. In Figure 3A (right panel) there is a huge unassigned background peak at m/z ~94.9252, which does not appear in Figure 3A (left panel). It is unclear whether the reported peaks are larger or smaller than the actual background noise of the instrument.
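
      For reference, the quoted value of 124.9479 follows directly from the monoisotopic masses of the elements in the HN2O2S2 composition; a short illustrative check (not part of the original comment; the ~0.00055 u electron mass of the anion is neglected here):

      ```python
      # Back-of-the-envelope check of the expected monoisotopic m/z for [ONN(OH)S2]-,
      # i.e. elemental composition H N2 O2 S2 (electron mass of the anion ignored).
      H, N, O, S = 1.00782503, 14.00307401, 15.99491462, 31.97207069  # monoisotopic masses (u)
      mass = 1 * H + 2 * N + 2 * O + 2 * S
      print(round(mass, 4))   # 124.9479, the value quoted above
      ```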

      In the unexplained absence of 15N-NMR and IR characterisation of SSNO−, which by the authors’ claim is “stable” and “abundant”, and in the absence of correct isotopic patterns for the reported species, the authors should have performed experiments with pure 14N- and 15N-labelled SNAP to independently demonstrate the isotopic distribution of each species and the corresponding m/z shifts.

      The authors use a maXis Impact instrument to show that they cannot detect HSNO in the reaction of RSNO and H2S. The injection of buffer alone creates signal intensities of ~1.5x10^7 (the upper detection limit of this instrument), making the background noise stronger than the actual signal of a few millimolar GSNO (Figure S8C). The authors also send the message that, due to the presence of DMSO and acetonitrile in the tubing, no one should try to detect anything in the m/z range 50-70. The authors could instead have cleaned the instrument, used new tubing for every measurement, or used stainless steel tubing to solve this problem, as is done in laboratories with MS experience. Furthermore, the ionization conditions which “break” DMSO (BDE ~50 kcal/mol, Blank et al, 1997) into CH3SO/CH3SO+ are inappropriate for RSNO/HSNO detection (BDE ~30 kcal/mol, Wedmann et al, 2015). The results look as if they were produced in a limited amount of time on a very dirty demo instrument and should not have been used for publication.

      To prove that the 412 nm species is NO-dependent, the authors trap NO with cPTIO (Figure S4F), ignoring the fact that nitronyl nitroxides readily react with H2S, and therefore no conclusion can be drawn from this experiment (Wedmann et al, 2013).

      The authors also use a water-soluble triphenylphosphine (TXPTS) to trap nitroxyl from their “SSNO- mix”, ignoring the fact that triphenylphosphines are good trapping agents for sulfane sulphur (by mixing PNP+SSNO- with triphenylphosphine, Seel et al. formed SNO-) and that S-nitrosothiols react/decompose in the presence of triphenylphosphines in general and TXPTS in particular (Bechtold et al, 2010), so nothing can be concluded from those experiments either.

      In conclusion, the data presented in this study call for a more critical and in-depth re-evaluation.

      Cortese-Krott MM, et al. (2015) Key bioactive reaction products of the NO/H2S interaction are S/N-hybrid species, polysulfides, and nitroxyl. Proc Natl Acad Sci USA 112(34):E4651-60.

      Filipovic MR, et al. (2012) Chemical characterization of the smallest S-nitrosothiol, HSNO; cellular cross-talk of H2S and S-nitrosothiols. J Am Chem Soc 134(29): 12016-27.

      Cortese-Krott MM, et al. (2014) Nitrosopersulfide (SSNO(-)) accounts for sustained NO bioactivity of S-nitrosothiols following reaction with sulfide. Redox Biol 2:234-44.

      Blank DA, North SW, Stranges D, Suits AG, Lee YT (1997) Unraveling the dissociation of dimethyl sulfoxide following absorption at 193 nm. J Chem Phys 106(2):539-550.

      Wedmann R, et al. (2015) Does Perthionitrite (SSNO(-)) Account for Sustained Bioactivity of NO? A (Bio)chemical Characterization. Inorg Chem 54(19):9367-9380.

      Wedmann R, et al. (2013) Working with “H2S”: facts and apparent artefacts. Nitric Oxide 41:85-96.

      Seel F, et al. (1985) PNP-Perthionitrit und PNP-Monothionitrit. Z Naturforsch 40b:1607–1617.

      Bechtold E, et al. (2010) Water-soluble triarylphosphines as biomarkers for protein S-nitrosation. ACS Chemical Biology 5(4):405-414.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 04, Ivana Ivanovic-Burmazovic commented:

      No evidence for SSNO- in “SSNO- mix”

      The results and conclusions published by Cortese-Krott et al. 2015 (Proc Natl Acad Sci USA 2015, 112(34):E4651-60) in the manuscript entitled “Key bioactive reaction products of the NO/H2S interaction are S/N-hybrid species, polysulfides, and nitroxyl” urgently call for comments from the chemical point of view, because they include a number of chemical misconceptions.

      The main conclusion of this work, as stated in its title, is that SSNO- (according to the authors an “S/N-hybrid species”) is one of three key bioactive reaction products of the reaction of H2S with NO or S-nitrosothiols (RSNOs). Based on the authors’ claims, SSNO- can be obtained in high yields in aqueous solutions at pH 7.4 (even in the presence of oxygen) and is stable for hours. However, everything known about the chemical/spectroscopic properties of SSNO- contradicts the authors’ conclusions. We are afraid that the biological community will be confused by these results, and that researchers without an inorganic chemistry background will use such undefined reaction mixtures of NO and H2S, as well as of RSNO and H2S, as a source of SSNO- that in reality is not present at all under the given conditions. To help clarify the confusion in the field, we present the important facts about SSNO- chemistry below.

      In general, the history of the identification of S-S bond-containing compounds in solution has been rich in contradictory conclusions (Seel F, et al. 1977), and “some of those who dared to tackle this challenging task were victims of delusions because such species that were optically (or even by other methods) observed in non-aqueous solutions could not easily be established as defined substances” (Seel F and Wagner M, 1985). This is a translated quotation from Seel F, Wagner M 1985, who first synthesized SSNO- under the exclusion of water and dioxygen (Seel F, et al. 1985). However, although citing the work of Seel and Wagner, Cortese-Krott et al, 2015 do not mention that i) SSNO- solutions are sensitive to oxygen and water (Seel F and Wagner M 1985) and ii) even in very alkaline aqueous solutions only 10 % of SSNO- was obtained from NO and S2-, although even that was questionable, as SSNO- could not be confirmed by 15N-NMR in aqueous solutions (Seel F and Wagner M, 1988). This contradicts the claim that SSNO- is stable for hours at pH = 7.4, especially at high concentrations in “SSNO--enriched mixtures” (“SSNO- mix”) (Cortese-Krott et al, 2015). Regarding the claimed determination of the high SSNO- yield in the “SSNO- mix”, in the SI (page 9) Cortese-Krott et al, 2015, state: “The (theoretical) maximal yield of SSNO- under these conditions is 1 mM, corresponding to the concentration of added nitrosothiol (please refer to Fig. S9 for experimental determination of reaction yield).” However, Fig. S9 deals with the MS of dimethylsulfoxide, and nowhere in the SI can the claimed experimental results confirming a high SSNO- yield be found. Instead, there is a confusing statement that their putative SSNO- contains two sulfur atoms, based on an observation of “two times as much sulfide as sulfane sulfur” (Cortese-Krott MM, et al. 2015). (More generally, since the authors do not provide the stoichiometry of the considered reactions, they cannot provide any quantification.)

      In agreement with the original work of Seel and Wagner, we demonstrated the inherent instability of SSNO- by preparing pure crystals of PNP+SSNO- and characterizing its properties by 15N-NMR, IR, EPR, MS, X-ray analysis, electrochemical and computational methods (Wedmann R, et al. 2015). For example, when ca. 10% water was added to an acetone solution of a pure SSNO- salt (Wedmann R, et al. 2015), it decomposed within ca. 100 s. Cortese-Krott et al. report that SSNO- does not react with thiols, H2S and cyanide (Cortese-Krott MM, et al. 2015). However, solutions of a pure SSNO- salt, which Cortese-Krott et al. never used, quickly decompose in the presence of thiols, H2S (Wedmann R, et al. 2015) and cyanide. These authors state that SSNO- is resistant to biological reductants (Cortese-Krott MM, et al. 2015). However, SSNO- is reduced at a physiological potential of −0.2 V vs. NHE (Wedmann R, et al. 2015). Being unstable at pH = 7.4 and in the presence of thiols and biological reducing agents, SSNO- cannot exist under physiological conditions at any relevant concentration. They also report that HSSNO is more stable than HSNO, because HSSNO supposedly has increased electron density on the proximal sulfur (which is a statement for which they do not provide any experimental support) and therefore does not easily react with HS- and positive metal centers (which is contradictio in adjecto) (Cortese-Krott MM, et al. 2015). The facts are quite different: i) the proximal-S has a +0.24 charge (Wedmann R, et al. 2015), ii) the S-N bonds in HSSNO and SSNO- (calculated BDE 16.0 and 22.1 kcal/mol, respectively; B3LYP/aug-cc-pv5z, in the presence of solvent/water) are weaker than those in HSNO and SNO− (BDE 27.74 and 36.21 kcal/mol, respectively), which makes (H)SSNO more prone to homolysis than (H)SNO, and iii) SSNO- reacts with metal centers (as evidenced by the reaction with [Fe3+(TPP)]) (Wedmann R, et al. 2015). Cortese-Krott et al. claim that (H)SNO is (only) stable at 12 K (Cortese-Krott MM, et al. 2015), but the PNP+SNO- crystals have been isolated at room temperature (Seel F, et al. 1985). Furthermore, Cortese-Krott et al. have themselves previously observed that (H)SNO forms at room temperature from a 1:1 mixture of RSNO and sulfide in water (pH = 7.4) at an even higher yield than their “SSNO-” (Cortese-Krott MM, et al. 2014).

      Thus, it is highly problematic to draw further conclusions about the physiological effects and reactivity of product mixtures with an undefined chemical composition. To obtain valid (bio)chemical conclusions, the use of pure compounds instead of undefined reaction mixtures is recommended. We are willing to provide pure SSNO- and SS15NO- salts to interested researchers.

      References:

      Cortese-Krott MM, et al. (2015) Key bioactive reaction products of the NO/H2S interaction are S/N-hybrid species, polysulfides, and nitroxyl. Proc Natl Acad Sci USA 112(34):E4651-60.

      Seel F, Guttler, H-J, Simon G, Wieckowski A (1977) Colored sulfur species in EPD-solvents. Pure Appl. Chem. 49:45-54.

      Seel F, Wagner M (1985) The reaction of polysulfides with nitrogen monoxide in non-aqueous solvents - nitrosodisulfides. Z Naturforsch 40b:762–764, and references therein.

      Seel F, et al. (1985) PNP-Perthionitrit und PNP-Monothionitrit. Z Naturforsch 40b:1607–1617.

      Seel F, Wagner M (1988) Reaction of sulfides with nitrogen monoxide in aqueous solution. Z Anorg Allg Chem 558(3):189–192.

      Wedmann R, et al. (2015) Does Perthionitrite (SSNO(-)) Account for Sustained Bioactivity of NO? A (Bio)chemical Characterization. Inorg Chem 54(19):9367-9380.

      Cortese-Krott MM, et al. (2014) Nitrosopersulfide (SSNO(-)) accounts for sustained NO bioactivity of S-nitrosothiols following reaction with sulfide. Redox Biol 2:234-44.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 20, Preben Berthelsen commented:

      A grant application is not a clinical trial registration. The content of such an application is not a blueprint of the research but merely an indication of what the researchers contemplate. What counts scientifically is the pre-trial registration of the study with ClinicalTrials.gov. To finalise the discussion on the question of the aim of the study, I have copy-pasted below the Primary Outcome Measure from the ClinicalTrials.gov registration (NCT01680744).

      Primary Outcome Measures: • Renal Function [ Time Frame: 12 hours of mild hypothermia ] [ Designated as safety issue: No ] The primary outcome measures are renal function as determined by creatinine and cystatin c between declaration of neurological death and organ recovery in each of the two treatment groups. Delta creatinine and terminal creatinine are important predictors of graft quality and function, as demonstrated in the present data (HRSA study and Region 5 DMG/DGF study), and will be compared between the control and treatment group.

      Ethical Problem. If the authors planned - as the thorny lifeline thrown by Dr. Greenwald (HRSA) seems to suggest - to study recipient graft function all along, the kidney recipients should have been informed that they took part in a randomized clinical trial and they should have given their consent before being enrolled in the investigation. This did not happen.

      Preben G. Berthelsen, M.D. Charlottenlund, Denmark.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Apr 25, Preben Berthelsen commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Mar 09, Melissa Greenwald commented:

      The United States Health Resources and Services Administration (HRSA) is the federal agency that awarded the grant funding for this research proposal. Grant awards were based on ranking after applications were reviewed by an external Technical Review Committee. “Delayed graft function” (DGF) was clearly stated as one of the goals of this research study as noted in the original grant application submitted in March 2011: “The goals of the this intervention are to demonstrate that TH [Therapeutic Hypothermia]: 1) better preserves deceased donor renal function while awaiting organ recovery when compared to normothermia; 2) increases the number of suitable organs for transplantation; and 3) improves recipient renal function after transplantation as measured by a reduction in DGF [Delayed Graft Function] and SGF [Slow Graft Function].” The grant application listed “Initial graft function” as one of four variables to be measured for assessment of the first of two specific objectives of this research study. This is further specified in the Methods section of the grant application as: “The primary outcome measure will be number of patients in each group showing DGF/ SGF.” The parameters for the information about research grants that is included and displayed on the ClinicalTrials.gov website are under the oversight of the U.S. National Institutes of Health.

      Melissa Greenwald MD, Acting Director, Division of Transplantation, Health Resources and Services Administration, Rockville, Maryland, USA


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Jan 04, Preben Berthelsen commented:

      According to the ClinicalTrials registration (NCT01680744) for this study, the primary outcome measure was renal function as determined by changes in creatinine and cystatin c between declaration of neurological death and organ recovery in donors randomised to normothermia or hypothermia. No secondary outcome measures were stipulated.

      When Niemann et al report the results of their investigation, the primary outcome measure has been radically altered from changes in donor renal function to delayed graft function in the kidney recipients. This change in end-point - mirabile dictu – resulted in a positive outcome of the authors’ intervention.

      In my view, the paper does not present evidence for a benefit of induced hypothermia in brain death donors prior to kidney transplantation. As the paper falsely suggests such a benefit, the only safe option is a retraction of the paper - either by the authors or by the New Engl J Med, where the review process seems to have been substantially substandard.

      P.G.Berthelsen, M.D. Charlottenlund, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2015 Dec 04, Claus U Niemann commented:

      Thank you for reviewing the study. I agree that the ethics and logistics of this study were exceedingly complex. This is further complicated by the complete lack of regulatory oversight for this type of research. Some of our efforts to alleviate this lack of oversight are described in the supplement. Two specific comments: 1.) Recipient covariates were included. Table S4 in the supplementary appendix provides important recipient variables that are known to be associated with renal allograft function. In fact, these variables are validated in several studies and are provided to the transplant community by the Scientific Registry of Transplant Recipients (SRTR), http://www.srtr.org/. 2.) Creatinine levels and GFR (last determination prior to organ recovery) appeared to be lower in the hypothermia group. However, we are unsure of the effect of hypothermia on creatinine production itself and therefore cannot state with certainty that hypothermia actually resulted in an improvement. Nevertheless, the last creatinine prior to organ recovery has been demonstrated to be a significant predictor of delayed graft function in the recipient.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2015 Nov 15, NephJC - Nephrology Journal Club commented:

      This study was discussed on Sep 22nd and 23rd in the open online nephrology journal club, #NephJC, on Twitter. Introductory comments are available at the NephJC website.

      The discussion was quite detailed, with about 40 participants, including nephrologists, fellows and residents.

      A transcript and a curated (i.e. Storified) version of the tweetchat are available from the NephJC website.

      The highlights of the tweetchat were:

      • The authors were to be commended for designing and conducting this trial, using a relatively safe and low-risk intervention, which may have potentially important implications for the care of potential donors and outcomes in transplant recipients.

      • A considerable discussion occurred around the ethics of doing a clinical trial without needing to obtain consent from the recipients, which admittedly would have been logistically challenging.

      • Though the results on delayed graft function were dramatic, and also biologically plausible given the greater benefit observed in extended criteria donor organs, most discussants thought the results needed replication and would also like to see benefit in longer-term clinical outcomes. Some key issues also brought up included the lack of covariates (especially those related to recipient characteristics) and the pre-transplant improvement in kidney function seen in the intervention arm.

      Interested individuals can track and join in the conversation by following @NephJC or #NephJC, or visit the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 04, Randi Pechacek commented:

      Holly Ganz mentioned this article on microBEnet while discussing the spread of antibiotic resistance to wildlife populations.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 16, Martine Crasnier-Mednansky commented:

      Growth of Escherichia coli on excess glucose under aerobic conditions is NOT diauxic, and the acetate switch is NOT "classically described as a diauxie". It is therefore extraordinary that the authors are now "showing diauxic behavior does not occur under such conditions". A diauxie in the presence of BOTH glucose and acetate was reported by Kao KC, 2005.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 01, Zhiping Pang commented:

      We appreciate that Dr. Rinaman acknowledges that our conclusions are consistent with previous studies (Alhadeff et al., 2012; Dickson et al., 2012; Dossat et al., 2011; Skibicka, 2013), but strongly disagree with her surmise that our study is flawed based on the specificity of the mouse line used to manipulate GLP-1 neurons. We apologize that, primarily due to space limitations, we did not cite all the papers from Dr. Rinaman and colleagues. However, we argue that the specificity of the mouse line may not be as questionable as Dr. Rinaman states, nor do we believe that our conclusions depend on that specificity alone. Based on the totality of our experimental work, we believe that our conclusions, as stated in the paper, are sound. As with all published scientific work, we provide experimental evidence for a particular hypothesis that is logical and plausible, but do not claim that our model provides a definitive answer to the question — in this study, how central GLP-1 regulates feeding. Thus, we feel that the comment from Dr. Rinaman et al. is much more apodictic and definitive than the phrasing of our paper’s conclusions.

      We would like to respond to the specific concerns raised by Dr. Rinaman and her co-authors in the comment posted on PubMed Commons.

      Dr. Rinaman et al. suggested that we claimed that Phox2b is GLP-1 specific:

      We did not state, explicitly or implicitly, that the endogenous expression of Phox2b and GLP-1 is 100% overlapping. We acknowledged that it is possible that not all GLP-1 expressing neurons express the Phox2b-Cre transgene and that other types of neurons may express Phox2b-Cre as well. The purpose of utilizing the Phox2b-Cre transgenic line was to assess whether a defined group of central GLP-1 neurons was involved in regulating food intake. Our experimental results provide evidence that GLP-1 neurons likely participate in the regulation of food intake, although they do not exclude the involvement of non-GLP-1 Phox2b-Cre expressing neurons. Specifically, our data support a specific role of GLP-1 neurons in the regulation of food intake behavior in the following ways: a) the anorexic effects induced by the activation of Phox2b-Cre expressing neurons are blocked by the GLP-1R-specific blocker Exendin-9 (also discussed below); b) retrograde-labeled NTS-VTA projecting neurons are positive for GLP-1; c) Cre-activated expression of EYFP colocalizes with GLP-1 in brain sections detected by a commercially available antibody (Peninsula Laboratories T-4363) (Zheng and Rinaman, 2015; Zheng et al., 2015); d) injection of CNO at the VTA in DREADD-expressing animals leads to suppressed food intake after 5 hours. In ongoing, unpublished studies, we have used Phox2b-Cre transgenic mice to express Cre-dependent channelrhodopsin in Phox2b-Cre-positive NTS neurons, and found that neuronal activation by optical stimulation of the NTS nerve terminals is blocked by Exendin-9. This presents additional evidence to support that GLP-1R (GLP-1 receptor) is expressed in Phox2b-Cre-expressing cells. Taken together, these findings, along with reports from Scott et al. (Scott et al., 2011) and the collection of studies cited throughout our manuscript, lead us to propose the involvement of GLP-1 signaling in the VTA in the regulation of feeding behavior.

      Additionally, the transgenic mice used in this study were created using Bacterial Artificial Chromosome (BAC) technology. Unlike in the case of gene knockins generated by homologous recombination, in the case of transgenics, including BAC transgenics (Heintz, 2001), the introduced foreign gene is randomly inserted into the genome (Beil et al., 2012) and the expression of the transgene is influenced by epigenetic factors and genetic background (Chan et al., 2012). Therefore, the expression of the transgene does not always faithfully mimic endogenous gene expression. Indeed, the three Phox2b-Cre transgenic lines generated in Dr. Elmquist’s laboratory exhibit different expression patterns (Scott et al., 2011). Given these considerations, we did not conclude that Phox2b-Cre was only expressed in GLP-1 neurons or vice versa. As described in the paper, the Phox2b-Cre animals were employed as a tool to interrogate the function of a group of GLP-1 expressing neurons in regulating food intake behavior.

      Due to size limitations, for the full response please refer to (copy the hyperlink address): https://www.dropbox.com/s/0n129f2ugn3tgjz/Response.pdf?dl=0


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Aug 24, Linda Rinaman commented:

      Phox2b is Not Specifically Expressed by Hindbrain GLP-1 Neurons

      M.R. Hayes, University of Pennsylvania, Philadelphia, PA, USA; L. Rinaman, University of Pittsburgh, Pittsburgh, PA, USA; K.P. Skibicka, Sahlgrenska Academy at the University of Gothenburg, Sweden; S.Trapp, University College London, London, U.K.; D.L. Williams, Florida State University, Tallahassee, FL, USA

      The first part of this study sought to extend published work [1-7] supporting a role for central GLP-1 signaling in suppressing palatable food intake. For this purpose, the authors virally expressed DREADDs within the caudal medulla of transgenic Cre-driver line mice, followed by chemogenetic DREADD activation to increase or decrease the activity of transfected neurons. Unfortunately, their experimental design depends on a Phox2b-Cre mouse model [8] that is non-specific for GLP-1 neurons.

      Phox2b is expressed by a diverse set of autonomic-related neurons distributed throughout the nucleus of the solitary tract (NTS), area postrema (AP), dorsal motor nucleus of the vagus (DMV), and other regions [9-13], including catecholaminergic and HSD-2-positive neurons that innervate mesolimbic, hypothalamic, and other central targets [14-16]. The present study includes a supplementary figure (S1) purporting to show co-localization of GLP-1 immunolabeling with mCherry reporter expression, but the depicted coronal section is well rostral to the established location of GLP-1 neurons in rats and mice [17-19].  Thus, the GLP-1 immunolabeling is non-specific, and the authors present no credible evidence that GLP-1 neurons express virally-encoded DREADDs.
      
      It is not surprising that food intake was suppressed in mice in which Phox2b-expressing AP, DMV, and NTS neurons were transfected to express hM3Dq. CNO activation of these neurons should disrupt physiological functions and activate stress-sensitive GLP-1 neurons [19-21] whether or not they express DREADDs. However, other than food intake, no physiological or behavioral measures were performed.  The authors report that the hypophagic effect of CNO was specific to the high-fat diet, with no effect on chow intake, but their experimental design and results are insufficient to support this claim.  Further, i.p. injection of Exendin-9 was reported to block the hypophagic effect of CNO (Figure 1F). The basis for this effect is difficult to understand, because the utilized i.p. dose of Exendin-9 is well below the established threshold for antagonizing GLP-1 receptors in the periphery, let alone within the brain [22,23].  In addition, stereotaxic injections were used to deliver a GLP-1 receptor agonist or CNO into the ventral midbrain of mice just before measuring their food intake (Figure 2), with no consideration of how acute surgery, presumably in anesthetized mice, might affect subsequent feeding behavior.   
      
      In summary, we believe that the present report is seriously flawed.  Although the authors' conclusions are consistent with previous work in rats, their report fails to demonstrate a specific role for endogenous central GLP-1 signaling in the control of palatable food intake in mice.
      

      REFERENCES

      1. Dossat, A.M., et al., J Neurosci, 2011. 31: 14453.

      2. Alhadeff, A.L., L.E. Rupprecht, and M.R. Hayes, Endocrinol, 2012. 153: 647.

      3. Mietlicki-Baase, E.G., et al., Am J Physiol Endocrinol Metab, 2013. 305: E1367.

      4. Dickson, S.L., et al., J Neurosci, 2012. 32: 4812.

      5. Skibicka, K.P., Front Neurosci, 2013. 7: 181.

      6. Richard, J.E., et al., Plos One, 2015. 10(3).

      7. Hayes, M.R., L. Bradley, and H.J. Grill, Endocrinol, 2009. 150: 2654.

      8. Scott, M.M., et al., J Clin Invest, 2011. 12: 2413.

      9. Kang, B.J., et al., J Comp Neurol, 2007. 503: 627.

      10. Lazarenko, R.M., et al., J Comp Neurol, 2009. 517: 69.

      11. Geerling, J.C., P.C. Chimenti, and A.D. Loewy, Brain Res, 2008. 1226: 82.

      12. Mastitskaya, S., et al., Cardiovasc Res, 2012. 95: 487.

      13. Brunet, J.F. and A. Pattyn, Curr Opin Genet Dev, 2002. 12: 435.

      14. Geerling, J.C. and A.D. Loewy, J Comp Neurol, 2006. 497: 223.

      15. Delfs, J.M., et al., Brain Res, 1998. 806: 127.

      16. Mejias-Aponte, C.A., C. Drouin, and G. Aston-Jones, J Neurosci, 2009. 29: 3613.

      17. Llewellyn-Smith, I.J., et al., Neurosci, 2011. 180: 111.

      18. Vrang, N., et al., Brain Res, 2007. 1149: 118.

      19. Rinaman, L., Am J Physiol, 1999. 277: R582.

      20. Maniscalco, J.W., A.D. Kreisler, and L. Rinaman, Front Neurosci, 2013. 6: 199.

      21. Maniscalco, J.W., et al., J Neurosci, 2015. 35: 10701.

      22. Williams, D.L., D.G. Baskin, and M.W. Schwartz, Endocrinol, 2009. 150: 1680.

      23. Kanoski, S.E., et al., Endocrinol, 2011. 152: 3103.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 17, John Tucker commented:

      The authors of this highly publicized petition raise the issue of the untenable rate of increase in the price of oncology drugs. They point out that the average cost of new cancer drugs at launch has increased by 5-fold to 10-fold over the last 15 years, and express concern regarding the effects of these drug costs on patients' financial well-being, treatment decisions, and the financial stability of healthcare delivery systems. They propose a variety of solutions, including price controls, re-importation, reforms to the patent system, and encouraging professional groups to incorporate the cost of medical interventions into guideline development decisions.

      Certainly we can all agree that the cost of healthcare cannot be allowed to perpetually outstrip the rate of economic growth. But ultimately, controlling healthcare spending requires a data-driven examination of what is driving costs, and not just politically expedient finger pointing at other contributors.

      Per CMS (https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/Downloads/highlights.pdf), US healthcare spending increases from 2014 to 2015 included:

      • A $53B increase in hospitalization costs, from $947B to $1000B
      • A $37B increase in physician and clinic fees, from $597B to $635B
      • A $26B increase in drug costs, from $298B to $324B.

      Further, a recent IMS study (available at http://www.imshealth.com/en/thought-leadership/quintilesims-institute/reports/global-oncology-trend-report-2014) has shown that for many cancer drugs, including bevacizumab, cetuximab, pertuzumab, rituximab, and trastuzumab among others, the price paid by patients to hospitals exceeds the average wholesale price of the drug by more than 100%. This is in spite of the increasing number of hospitals that pay far below AWP for these drugs due to 340B discounts.

      In order to control healthcare costs, it will be necessary to put all cost sources on the table, not just those of pharmaceuticals. This will include the very high drug administration fees paid to hospitals and, in the long run, the mid-six-figure to seven-figure salaries of many signatories of the Mayo Petition on Drug Prices.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 24, Vatsalya Vatsalya commented:

      NIAAA Director's Statement for the Record on NIAAA FY 2015 Budget Request, Senate Subcommittee on Labor-HHS-Education Appropriations (Context of Varenicline in the 4th paragraph of NIAAA Research Section): http://niaaa.nih.gov/about-niaaa/our-funding/congressional-testimony/directors-statement-record-fy-2015-budget


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Dec 11, Vatsalya Vatsalya commented:

      The NIAAA director's report for the September 2015 council meeting included the varenicline clinical trial manuscript as one of the research highlights:

      http://www.niaaa.nih.gov/about-niaaa/our-work/advisory-council/directors-reports-council/niaaa-directors-report-institute-10#research-highlights


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 03, Robert M Flight commented:

      I have some concerns about the propagation to GO child terms mentioned as part of the procedure for generating the mutation-GO profile, as this results in new gene-GO annotations that do not exist in the original gene-GO annotation set. A GitHub repo with the results of my investigation of this issue is available, along with copies of the paper, supplementary data, all R and Octave code used, and my correspondence with Sael.

      The authors state that this propagation is necessary for the function of the ONMF procedure, but do not provide any evidence that this is so, and as part of the overall analysis actually perform propagation back to parent terms to determine significant GO terms. I wonder if both sets of propagation might have been avoided by using a GOSlim or another GO subset.
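
      To make the concern concrete, here is a toy illustration (my own sketch, using a hypothetical miniature GO fragment and gene, not the authors' code) of the difference between propagating annotations up to parent terms, which only adds term-gene pairs implied by the true-path rule, and propagating down to child terms, which creates gene-GO pairs that were never asserted:

      ```python
      # Toy comparison of upward vs. downward propagation of GO annotations.
      from collections import defaultdict

      parents = {                      # hypothetical miniature GO fragment
          "GO:child_a": ["GO:mid"],
          "GO:child_b": ["GO:mid"],
          "GO:mid":     ["GO:root"],
          "GO:root":    [],
      }
      children = defaultdict(list)
      for term, ps in parents.items():
          for p in ps:
              children[p].append(term)

      annotations = {"geneX": {"GO:mid"}}   # the only annotation actually asserted

      def propagate(terms, graph):
          """Transitive closure of `terms` along `graph` (parents or children)."""
          out, stack = set(terms), list(terms)
          while stack:
              for nxt in graph.get(stack.pop(), []):
                  if nxt not in out:
                      out.add(nxt)
                      stack.append(nxt)
          return out

      up = {g: propagate(t, parents) for g, t in annotations.items()}
      down = {g: propagate(t, children) for g, t in annotations.items()}
      print(up)    # geneX -> {GO:mid, GO:root}: implied by the true-path rule
      print(down)  # geneX -> {GO:mid, GO:child_a, GO:child_b}: pairs never asserted
      ```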


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 06, Kenneth Witwer commented:

      The figures in this article appear to have been previously published by another group (Tian Q, 2012) in the context of a different miRNA, miR-550a. The miRNA sequence in Figure 2 is also the sequence of miR-550a, not miR-1246 as labeled.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On date unavailable, commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Aug 05, Roger H Reeves commented:

      Response to Benoit Bruneau (Bruneau’s comments in italics)

      Bruneau: The paper by Polk et al purports to demonstrate a genetic interaction between Tbx5 and Ts65Dn. I have a number of comments and questions related to the data used to reach this conclusion. First, the reduced amount of Tbx5 in the Ts65Dn is interesting, but puzzling is the almost complete absence in the Ts65Dn;Tbx5+/- embryos: based on previous investigations, this level of Tbx5 mRNA (almost zero) should result in extremely severe defects in heart formation, which are not observed.

      Response: We show a qPCR analysis at a single developmental time point, the E12.5 heart. Although continuous low Tbx5 levels beginning earlier in heart development would result in severe heart defects, as Bruneau suggests, the ramifications of diminished expression at E12.5 are unclear. Perhaps the increased lethality in Ts65Dn;Tbx5+/- mice is related to this observation. In an unnecessary ad hominem attack, Bruneau intimates that we are incapable of using qPCR appropriately (“This brings into question the quantitation of the mRNA levels”). Instead, it is Bruneau’s supposition that our observation at E12.5 could only be accurate if severe heart malformations were observed, which is incorrect.

      Bruneau: Data are presented only for 2 of the 4 genotypes that would be necessary to derive any conclusion [about genetic association]. In table 3, WT and Ts65Dn genotypes are not present. In Figure 2, only the compound heterozygote Ts65Dn;Tbx5+/- and WT are shown.

      Response: The expected frequencies for these defects in wt and Ts65Dn mice have been published multiple times by us and others, and three relevant studies are cited [1-3] showing that, in a sample of this size, we would expect to see <1 ASD, <1 VSD, 0 AVSD and 0 OA in the same genetic background as this study. That is, we would expect to observe few or no defects, and the same for wt. Perhaps we could have made a more concrete reference to the cited studies with regard to this specific point; however, neither we nor the reviewers found our presentation to be problematic. Even for those who missed the reference, we disagree with the statement that there is “no evidence [in Table 3] for genetic interaction.” The combination of genotypes unequivocally changes the phenotype; what forces not involving genetics might be responsible?

      Bruneau: The same genotypes are missing from Fig 4.

      Response: Fig. 4 is a histological representation. Since the defects don’t occur, and such controls in the same genetic background have already been published, we chose to cite them rather than reproduce them here.

      Bruneau: The in situ hybridization in Fig 4 suggests a reduction in Pitx2 expression in Ts65Dn;Tbx5+/- hearts; despite a very weak signal, this may be true, but this in no way indicates that the mice have atrial isomerism nor that the left-right pathway is involved in the defects shown.

      Response: Here Bruneau overstates our conclusions about left-right patterning in order to criticize them (straw man argument). We show that OA incidence increases in Tbx5;Ts65Dn mice (Tbl. 3) – Bruneau tacitly agrees. We show that Pitx2 expression is lower in LA of Ts65Dn – Bruneau agrees with this as well. We discuss our findings in the context of the literature on OA where one finds frequent references to the relationship between OA and left-right development of the heart (see 12 references to the relationship between OA, Pitx2 and left-right signaling, 2nd paragraph of the Discussion). We point out the established link between altered patterns of Pitx2 expression, left-right isomerism and OA as a justification for doing this experiment in Fig 4 with the results shown. However, we do not state that OA is due to atrial isomerism, nor do we state that any of these mice have atrial isomerism. Bruneau has misstated our conclusion.

      Bruneau: The atria of the mutant mice (in Fig 4) clearly have their normal morphology

      Response: We note that it is not possible to correctly deduce complex pathological relationships from a single histological image. Regardless, it is irrelevant, as we do not assert that left-right isomerism is observed; that is a straw man of Bruneau’s invention.

      Bruneau: I look forward to reading the authors' responses to these issues.

      Response: We expect differences of opinion in interpretation by experts as a useful and necessary part of the scientific enterprise. Here, however, Bruneau has offered a superficial and needlessly aggressive critique replete with mischaracterization of our stated conclusions. We trust that objective readers interested in the complex phenomena resulting in heart defects in Down syndrome will consider our data for its intrinsic value and not mischaracterize our qualified speculations in the Discussion as conclusions.

      Signed: Roger Reeves, Renita Polk, Peter Gergics, Ivan Modkowitz and Sally Camper

      1. Moore CS (2006) Postnatal lethality and cardiac anomalies in the Ts65Dn Down syndrome mouse model. Mamm Genome 17: 1005-1012.
      2. Williams AD, Mjaatvedt CH, Moore CS (2008) Characterization of the cardiac phenotype in neonatal Ts65Dn mice. Dev Dyn 237: 426-435.
      3. Li H, Cherry S, Klinedinst D, DeLeon V, Redig J, et al. (2012) Genetic modifiers predisposing to congenital heart disease in the sensitized Down syndrome population. Circ Cardiovasc Genet 5: 301-308.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Jul 29, Benoit Bruneau commented:

      The paper by Polk et al purports to demonstrate a genetic interaction between Tbx5 and Ts65Dn. I have a number of comments and questions related to the data used to reach this conclusion. First, the reduced amount of Tbx5 in the Ts65Dn is interesting, but puzzling is the almost complete absence in the Ts65Dn;Tbx5+/- embryos: based on previous investigations, this level of Tbx5 mRNA (almost zero) should result in extremely severe defects in heart formation, which are not observed. This brings into question the quantitation of the mRNA levels. This is a very minor point compared to the presentation of the data: data are presented only for 2 of the 4 genotypes that would be necessary to derive any conclusion. In Table 3, WT and Ts65Dn genotypes are not present. In Figure 2, only the compound heterozygote Ts65Dn;Tbx5+/- and WT are shown; how can one judge any genetic interaction if the individual genotypes (Ts65Dn and Tbx5+/-) are not shown? In certain genetic backgrounds we see such defects occasionally in Tbx5+/- neonates; therefore, the conclusions proposed by the authors cannot be reached with the data provided. The same genotypes are missing from Fig 4. The in situ hybridization in Fig 4 suggests a reduction in Pitx2 expression in Ts65Dn;Tbx5+/- hearts; despite a very weak signal, this may be true, but this in no way indicates that the mice have atrial isomerism nor that the left-right pathway is involved in the defects shown. The atria of the mutant mice clearly have their normal morphology, and there is no evidence presented of isomerism (e.g. ectopic or missing sinoatrial node, abnormal venous valve connections), nor are any left-right pathway components (upstream or downstream of Pitx2) explored. The authors' conclusions regarding overriding aorta as a product of defective LR asymmetry, especially that of the atria, are particularly puzzling, as these are opposite poles of the heart. Therefore the conclusions related to disruption of left-right pathways in this genotype are not at all supported. I look forward to reading the authors' responses to these issues.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 27, Andy Collings commented:

      Jawdat Al-Bassam's comment on this article (https://elifesciences.org/content/4/e08811#comment-2953448767) is reproduced below:

      Negative stain EM 3D-reconstructions and their consequent interpretations are often ambiguous due to their low resolution and rely on biochemical data for substantiation, as provided in our manuscript. During the past year, we have used different 3D-reconstruction strategies to re-analyze negative stain data and obtain structures. From this re-analysis, we observed some changes in the features of 3D-reconstructions in Figures 5 and 7 in a program-dependent manner, which could result in changes to the fitted models described in Figures 5 and 7. As such, we would like the community to be aware that there are possible ambiguities and alternative interpretations of the published reconstructions, which likely arose from complex heterogeneity due to different conformational states or deformations from the negative staining process. However, these potential reconstruction differences do not change the general conclusions made in the manuscript regarding the overall organization of the complexes and the sites of binding for tubulin and tubulin cofactor C at low resolution. In addition, we deposited our raw data several months ago (EMPIAR-10034, 10035) and welcome suggestions and input from the community. We are currently focused on high-resolution structural studies using cryo-electron microscopy that will allow us to determine the de novo structure for the Tubulin cofactors-D-E with Arl2 assembly in complex with Tubulin dimer and Tubulin cofactor C.

      Jawdat Al-Bassam (jawdat@ucdavis.edu)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 09, NLM Trainees’ Data Science Journal Club commented:

      This is a preliminary study examining the discoverability of and access to biomedical datasets generated by research funded by the U.S. National Institutes of Health (NIH). More specifically, it focuses on datasets not deposited in a known repository and therefore considered “invisible.” Analysis of NIH-funded journal articles shows that only 12% explicitly mention deposition of datasets in known repositories, leaving a surprising 88% whose datasets are invisible. The authors suggest that approximately 200,000 to 235,000 invisible datasets are generated from NIH-funded research published in one year alone. The study identifies issues that need to be addressed to improve discoverability of and access to research data: the definition of a dataset, determining which data should be preserved and archived, and better methods for calculating the number of datasets of interest.

      Our group had the honor of having two of the authors in attendance – Betsy Humphreys and Lou Knecht – to provide personal insights into the study. Betsy pointed out one of the article’s strengths – the opportunity to share this surprising discovery, which has potential practical benefits for the research community. The study has received a fair amount of positive feedback, and Betsy mentioned that Clifford Lynch from the Coalition for Networked Information (CNI) has distributed this paper and is calling for more studies on this subject. She also recognized the study’s weaknesses: it does not hold up as a model of research methodology, and there was a lack of consensus among the annotators on the definition of a dataset.

      There was a lively discussion in the group about what constitutes an “invisible” dataset – whether links to scientists’ or institutional websites are considered invisible, and how visibility might be detected with better JATS tagging. For clinical researchers, self-archiving on a personal domain is the norm and is considered invisible. Everyone agreed that defining “visible” and “invisible” should be a priority for future research. Another interesting part of the discussion focused on datasets being “in context”, i.e., datasets are meaningful only when considered along with the published paper. It was surprising to learn that a large proportion of the formal repository names mentioned in the full text did not turn out to be in the context of an actual deposit. We discussed that it would be helpful to have guidelines and a consistent way to label the information – database name, date of deposit, accession number – in the acknowledgements for easier discoverability and preservation. We were pleased to see this preliminary study published because it brought to light the large problem of invisible datasets, and we look forward to seeing more research in this area.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Jul 25, Irving I. Gottesman commented:

      This experimental confirmation of a "rumor" about pain insensitivity in patients with schizophrenia is a welcome one. If (when) it is noticed by schizophrenia researchers, the finding could be extended to testing the first degree relatives of patients for such decreased pain sensitivity.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 21, Daniel Corcos commented:

      It is surprising that in situ cancer cumulative incidence after 7 years in the intervention group is only twice that in the control group, as these cancers are usually detected by screening, and a 4-fold difference would be expected instead.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 18, Duke RNA Biology Journal Club commented:

      This comment is the summary of a discussion from our journal club meeting.

      General Impressions: An impressively thorough paper which uses a combination of cell biology, biochemistry and high-throughput sequencing approaches to first identify lncRNAs associated with repressed chromatin, then determine which genes a model lncRNA regulates and, finally, identify a specific structural mechanism by which this occurs. This paper lays the groundwork for future identification and characterization of chromatin-associated lncRNAs. One overarching criticism of this work is that it reads as two separate papers - one identifying the biological role that MEG3 plays in transcriptional regulation and a second developing the triple-helix mechanism of regulation.

      Specific Points:

      A technique, ChRIP-seq, was used to determine the global lncRNA environment of repressive chromatin. The design of the RIP protocol, using two different proteins associated with repressed chromatin, greatly simplified the final analysis by narrowing the pool of lncRNA targets to 70 unique lncRNAs. However, it was interesting that, even though both RIPs used 4SU crosslinking, the enrichment seen for T-to-C (or A-to-G) conversions was within the EZH2 pulldown and not the H3K27me3 pulldown. What are the 440 RNAs that specifically interacted with H3K27me3 but not EZH2, and why don’t they show crosslinking to the protein over input levels? While this was an interesting puzzle, the comparison between the EZH2 crosslink-enriched RNAs and the chromatin-enriched RNAs, which yielded a list of only 70 lncRNAs, was a clever way of finding chromatin-associated RNAs that specifically bind the PRC2 complex.

      From the list of 70 candidate lncRNAs to study, MEG3 was selected. Even though the focus is on one lncRNA, a similar characterization pipeline could be used for the other RNAs identified through this technique. The crosslinks identified through the T-to-C transitions were used as a starting point to identify a clear binding site for the protein. Luckily for them, the 4SU labeling, which usually over-crosslinks to its targets, shows only two identifiable clusters of crosslinks. Starting with crosslinks in the more conserved exon 3, they identified 9 bases that abolish ~50% of binding to EZH2 in vitro and in vivo; however, this also implies that separate sites on MEG3 facilitate the other 50% of the binding. Additionally, the non-conserved sequences in MEG3 could fine-tune the binding to various proteins, including EZH2, in different tissues or organisms. Neither of these points is discussed further in the article, but they would be interesting to delve into if the ChRIP-seq analysis could be modified and applied to tissue samples.

      After identifying MEG3’s putative EZH2-binding site, the group did the obvious experiment: they made knockdowns of both EZH2 and MEG3 and used RNA-seq to determine which genes were affected. An interesting addition might have been to use complete-knockdown cells and a “rescue” with the mutant MEG3 from Fig 2 to provide insight into which genes that binding site specifically affects. They determine that both knockdowns show overlap for the TGF-beta pathway, which is further validated using an orthogonal assay called ChOP-seq.

      To explore the intricacies of the MEG3:TGF-beta gene interaction, Mondal and co-workers looked for sequence motifs within the MEG3-binding regions of the genome and discovered a GA-rich sequence motif. Interestingly, they also found GA-rich sequence motifs in the genomic binding locations of roX2 and HOTAIR, two well-studied lncRNAs known to bind the genome. This information suggests that GA-rich motifs are important for lncRNA localization across the genome.
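
      As an aside for readers unfamiliar with this kind of analysis, the published work presumably relied on dedicated motif-discovery software; the toy Python sketch below only illustrates the underlying idea of flagging GA-rich stretches within a candidate binding region, and the example sequence is invented.

      ```python
      # Toy illustration of flagging GA-rich stretches in a candidate binding region.
      # Not the authors' motif-discovery pipeline; the example sequence is made up.
      def ga_rich_windows(seq, window=20, threshold=0.8):
          """Yield (start, G+A fraction) for windows whose G+A content meets the threshold."""
          seq = seq.upper()
          for i in range(len(seq) - window + 1):
              win = seq[i:i + window]
              frac = sum(base in "GA" for base in win) / window
              if frac >= threshold:
                  yield i, frac

      example = "TTCCAGGAAGAGGAGAAGGAGAAAGGTTCCCATTTTGCA"  # hypothetical sequence
      for start, frac in ga_rich_windows(example):
          print(f"GA-rich window at position {start}: {frac:.0%}")
      ```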

      Several previous studies explored the possibility that lncRNAs form triple-helix structures with their target genome binding site using a combination of computational and experimental techniques; Mondal and co-workers used the Triplexator software, which is based on the binding rules needed for triple-helix formation, as well as RNase digestion assays. The Triplexator software identified several regions in MEG3 with a high probability of forming triple helices, and those with the highest probability were GA-rich sequences. The assays combined GA-rich dsDNA probes and a target GA-rich segment of MEG3. After incubation, these solutions were separately treated with RNase A and RNase H. RNase A does not degrade ssDNA or dsDNA while RNase H degrades only ssRNA. The solution was sensitive to RNase A digestion, but resisted RNase H digestion. This indicates that the RNA present is not single stranded; however, the authors assume that a triple-helix structure is forming. Because they are not directly observing the structure, it is possible that the RNA is displacing one of the DNA strands and forming an RNA:DNA double helix. Mondal and co-workers use an anti-triplex dA.2rU antibody for their in vivo assays to confirm the presence of triple helices, but there is no indication that this antibody was purchased from a commercial source, nor do they explain the method of raising the antibody in-house.

      While this information indicates an RNA:DNA triple-helix structure, direct observation is necessary to confirm it. Numerous methods could be used to assess the triple-helix structure: X-ray crystallography, nuclear magnetic resonance (NMR), small-angle X-ray or neutron scattering (SAXS/SANS), or cryo-electron microscopy (cryo-EM). As initially stated, we believe this work would have benefitted from being split into two separate papers: one exploring the genomic binding of MEG3 to the TGF-beta pathway genes, and the other exploring the potential role of triple-helix formation in lncRNA binding.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 10, Holly G Prigerson commented:

      Data appear to confirm that in some (most?) cases, less chemo is more QoL


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Aug 20, Bill Cayley commented:

      A good example of when "less" is "more" - for more examples, see: https://lessismoreebm.wordpress.com/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 05, Stuart RAY commented:

      I appreciate Prof Pybus' response, and have no objection to this use of the "Simplot" name; I was just providing context.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 01, Oliver Pybus commented:

      I named the method "Deep Simplot" in homage to the very popular Simplot program. I apologise to Prof Ray and his colleagues for (i) not seeking their permission to use this term and (ii) not citing the original Simplot paper. To avoid confusion I recommend that, in future, the method described by our paper is referred to as "deep divergence plotting". My thanks to Prof Ray for bringing this issue to my attention.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jan 31, Stuart RAY commented:

      As the author of Simplot (Lole KS, 1999) I infer that the "Deep Simplot" method described was named after Simplot; it also shares characteristic display elements (as seen in figure 3b). Of course, the bootscanning approach that I implemented in Simplot was pioneered by Mika Salminen et al (Salminen MO, 1995, which is cited by Iles et al.).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 12, Víctor M. Baizabal-Aguirre commented:

      The PLKO1-GSK3beta1 and PLKO1-GSK3beta2 vectors used in our work were successfully used in a previous report by Yoeli-Lerner et al. (2009). As in our study, these authors were also able to fully silence the GSK3beta isoform (PMID: 19258413, see Figures 1A and 3A). Most of the studies on GSK3-dependent regulation of beta-catenin have used nonspecific inhibitors that affect both GSK3 isoforms. Therefore, the contribution of GSK3alpha and GSK3beta to the regulation of beta-catenin is an issue that remains open. In this regard, Yu et al. (2003) reported that specific gene silencing of GSK3alpha or beta by siRNA expression vectors induces the stabilization of beta-catenin in P19 mouse embryonic carcinoma cells (PMID: 12597911, see Figure 5). As to the effect of GSK3 silencing on beta-catenin, results published in 2009 by Mamaghani et al. demonstrated that GSK3alpha or beta inhibition by siRNA increased the stabilization of beta-catenin in pancreatic cancer cells (PMID: 19405981, see Figure 3A). In contrast, Ryu et al. (2012) reported that specific GSK3alpha inhibition by siRNA decreased beta-catenin levels in human gastric cancer cells (PMID: 22328534, see Figure 3E). These findings indicate that complete removal of GSK3alpha or GSK3beta, as in our work, affects the relative abundance of beta-catenin, and that GSK3alpha and GSK3beta alter the stabilization of beta-catenin in different ways, depending on the cell type.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Aug 04, Jim Woodgett commented:

      Figure 3 appears to be missing panel B (beta-catenin blot).

      The RNAi knockdowns here are remarkably efficient, and it looks from the legend as though only one siRNA was used (although in the methods 3 alpha sequences and 2 beta sequences are listed). Figure 5 shows essentially complete removal. Blowing up the figure reveals some image artifacts. Typically, even complete inhibition of either GSK-3alpha or GSK-3beta has no effect on beta-catenin as the other isoform compensates fully (Axin is present at far lower concentrations than either isoform of GSK-3 and is the limiting factor in beta-catenin phosphorylation).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 21, Will Rowe commented:

      SEAR is now available for use as an App on the BaseSpace platform (Illumina).

      This iteration of SEAR has been updated; changes include:

      • The use of the open source VSEARCH (instead of USEARCH).

      • New result output.

      Please try it at:

      https://basespace.illumina.com/apps/2083081/SEAR-Antibiotic-Resistance

      Finally, the SEAR source code (for the App, Docker container image and original code) is available on GitHub:

      https://github.com/wpmr2/SEAR


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 05, Anne Carpenter commented:

      As senior author on this paper, I report an error on page 10: the words "after cell fixation with 16% paraformaldehyde" should be replaced by "after cell fixation with 3.2% formaldehyde".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Jul 25, David Keller commented:

      This just in: pioglitazone proved futile for slowing the progression of Parkinson's disease

      A large double-blind placebo-controlled randomized phase II study has proved pioglitazone futile for slowing the progression of early Parkinson's disease [1]. Pioglitazone is the only widely-prescribed thiazolidinedione ("glitazone") since the FDA placed safety restrictions on the use of rosiglitazone (Avandia).

      Glitazones now join the ranks of disproved neuro-protectants, including creatine, co-enzyme Q10, vitamin E and minocycline. Back to the laboratory....

      Reference:

      1: NINDS Exploratory Trials in Parkinson Disease (NET-PD) FS-ZONE Investigators. Pioglitazone in early Parkinson's disease: a phase 2, multicentre, double-blind, randomised trial. Lancet Neurol. 2015 Aug;14(8):795-803. doi: 10.1016/S1474-4422(15)00144-1. Epub 2015 Jun 23. PubMed PMID: 26116315.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Jul 23, David Keller commented:

      Exenatide, a different kind of diabetes drug, also exhibits activity against Parkinson's disease

      The type-2 diabetes drug exenatide is in the category of incretin mimetics, and is not a thiazolidinedione ("glitazone") like the drugs in this study by Brauer and colleagues. Exenatide lowers blood sugar by raising levels of endogenous insulin, among other effects. It also has exhibited symptomatic benefits in Parkinson's disease, in a prospective randomized interventional trial [1], with persistent motor and cognitive benefits which suggest possible disease modification. Exenatide can be used concomitantly with glitazones, and such use should be accounted for in this glitazone study, to avoid skewed results. For example, if the percentage of glitazone patients taking exenatide was higher than the percentage of placebo patients taking exenatide, then the apparent benefits of glitazone use may have been all or partly due to this imbalance in the use of exenatide. Was this possibility controlled for?

      Reference:

      1: Aviles-Olmos I, Dickson J, Kefalopoulou Z, Djamshidian A, Kahan J, Ell P, Whitton P, Wyse R, Isaacs T, Lees A, Limousin P, Foltynie T. Motor and cognitive advantages persist 12 months after exenatide exposure in Parkinson's disease. J Parkinsons Dis. 2014;4(3):337-44. doi: 10.3233/JPD-140364. PubMed PMID: 24662192.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 28, Raphael Levy commented:

      A response from Chad Mirkin. Well, nearly: a section of William Briley's PhD which starts with: "Though the endosomal escape of SNA nanostructures such as the Nanoflare and stickyflare is evident based upon their ability to provide sequence-specific information regarding RNA levels and locations within cells, one researcher [That’s me!] has concluded that SNAs cannot escape from endosomes.[75] That researcher is ignoring the many papers now that use such architectures for sequence-specific cell-sorting experiments." More here


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 17, Raphael Levy commented:

      David Mason and myself submitted a letter to the Editor of PNAS regarding that article. It was however deemed not to "contribute significantly to the discussion of this paper" by the editorial board, and therefore publication was declined. I leave it to the readers of PubMed Commons to decide: the letter was published as a preprint on bioRxiv. Our article argues that Briley et al's own data show that the Sticky-Flares remain in endosomes, where they are degraded by nucleases (and therefore cannot report on RNA levels and localisation).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Nov 17, Raphael Levy commented:

      The PNAS article itself includes a sentence which could be interpreted as SmartFlare advertising: "As a result, the Nanoflare has grown into a powerful and prolific tool in biology and medical diagnostics, with ∼1,600 unique forms commercially available today (sold under the SmartFlare trade name)." Furthermore, there is a Sticky-Flare patent which was published around a month before the communication of the PNAS article.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2015 Sep 28, George McNamara commented:

      I posted this on PubPeer, https://pubpeer.com/publications/25CC01C366B9593D1686A78B52461F#fb36935

      The Briley et al 2015 paper is deficient in methods - what is the length of the new product? What are the design criteria for specificity? How are the spherical nucleic acids constructed? Is there a mechanism by which the flare gets kicked off the SNA? I suggest PNAS explicitly require self-contained full methods and materials in manuscripts they accept. The details can be in the supplemental file, and can both provide full details and cite -- or even quote -- earlier work.

      The Briley COI statement reads: "The authors declare no conflict of interest." The authors and their University previously commercialized NanoFlare/SmartFlare - is PNAS sure they have not submitted patent applications for Sticky-Flare and intend to make money from it (i.e., a financial interest)? I am fine with commercialization of products, but if this is an advertisement for a future product, the authors should be honest in their COI and PNAS should mark the paper as an advertisement.

      Citation for commercialization: "NanoFlares have been very useful for researchers that operate in the arena of quantifying gene expression. AuraSense, Inc., a biotechnology company that licensed the NanoFlare technology from Northwestern University, and EMD-Millipore, another biotech company, have commercialized NanoFlares. There are now more than 1,700 commercial forms of NanoFlares sold under the SmartFlare name in more than 230 countries." http://www.northwestern.edu/newscenter/stories/2015/07/new-tool-for-investigating-rna-gone-awry.html#sthash.GwI4hbRx.dpuf

      One of their patents is US8507200B2 https://patents.google.com/patent/US8507200B2/en?q=mirkin&q=nanoflare


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 27, Christopher Southan commented:

      There was a tutorial on this theme at the 2017 International Conference on Trends for Scientific Information (ICIC): https://www.slideshare.net/Haxel/icic-2017-tutorial-digging-bioactive-chemistry-out-of-patents-using-open-resources/1


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 25, Dimitrios Tzalis commented:

      The ELF Public Compound Collection (PCC) is a unique collection of compounds that have already been synthesized and are available for screening within the ELF Screening Campaign. The goal of the paper was to compare the PCC with other compounds that are available for biological screening or that have actually been screened. That is why we deliberately analyzed the PCC against part of the PubChem collection (the NIH MLP set), the commercially available Maybridge collection, and already-tested compounds represented by ChEMBL. It might be interesting to collate the PCC with the 15 million patent-extracted compounds with respect to novelty in the broad sense, but we wanted to avoid contaminating our comparison with theoretical compounds; in this way the work is much more consistent. Nevertheless, thank you for your valuable comment, and we may extend our novelty check to theoretical compounds in a future analysis, since our collection is still growing. We believe that by focusing on exploring the underrepresented chemical space of spiro compounds and saturated, fused hetero rings we are offering a very competitive and unprecedented collection.
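
      For readers curious how such a novelty check is commonly quantified, one standard approach (not necessarily the workflow used in the paper) is nearest-neighbour fingerprint similarity between two collections; the RDKit sketch below illustrates the idea with placeholder SMILES strings.

      ```python
      # Minimal sketch of a fingerprint-based novelty check between two compound sets
      # using nearest-neighbour Tanimoto similarity. Generic illustration only; the
      # SMILES strings are placeholders, not compounds from the PCC or ChEMBL.
      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      def fingerprints(smiles_list):
          mols = (Chem.MolFromSmiles(s) for s in smiles_list)
          return [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
                  for m in mols if m is not None]

      collection = fingerprints(["CC1(C)CCC2(CCNCC2)O1", "O=C1NC2(CCOCC2)CN1"])   # e.g. spiro scaffolds
      reference  = fingerprints(["c1ccccc1C(=O)NC2CCCCC2", "CCOC(=O)c1ccncc1"])   # e.g. an existing set

      for fp in collection:
          nearest = max(DataStructs.TanimotoSimilarity(fp, ref) for ref in reference)
          print(f"nearest-neighbour Tanimoto to the reference set: {nearest:.2f}")  # low = more novel
      ```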


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Jul 22, Christopher Southan commented:

      This is part of a special issue "From chemistry to biology database curation" http://www.sciencedirect.com/science/journal/17406749/14/supp/C. The PubMed ID series is 26194580-6.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Jul 22, Ellen M Goudsmit commented:

      As one of the main authors, I note that the PACE trial could not have used the London criteria for ME, as is often claimed: had it done so, there should have been at least one difference between the groups, and none was reported. Ergo, they did not use the criteria for ME correctly, and the results from the trial cannot be extrapolated to this population. The attitude of the authors towards ME and the scientists who specialise in this disease is disappointing.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 03, Seyed Moayed Alavian commented:

      Dear All, I would like to note that receiving medical services may be an important risk factor for acquiring HCV and HBV infections in Middle Eastern countries. Yours, Alavian SM


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 14, Noman Shahzad commented:

      The article provides the highest level of evidence available on the topic. Having read it, I am interested in knowing more about those who developed hernia: group-wise details of how many of them were symptomatic and how many required surgical repair. Was there any statistical difference in the detection methods used to diagnose hernia? Was there a difference in quality of life between those who developed hernia in the traditional versus the small-suture group? This information would help in understanding the impact of the change in technique from the patient's perspective. Clustering leading to an increased risk of alpha error is a frequent problem in multicentre trials; it would be informative to know whether adjustment for clustering was made in the statistical analysis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 10, Noman Shahzad commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Jul 24, Jay Kaufman commented:

      Etsuji Suzuki of Okayama University, Japan contacted the authors to note that on page 3 of the article, the text states that conditioning on CVD in Figure 1c opens four paths that bias the effect of obesity on mortality. However, the first of these paths listed in the text (obesity <- smoking -> CVD -> mortality) does not in fact contain a collider, since no node on the path is entered by arrowheads from both sides. Therefore, the text should state that there are three such paths, not four. This error in the exposition of the problem changes neither the quantitative results nor the conclusions of the paper.
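
      To make the collider logic concrete: a node on a path is a collider only if both of its adjacent path edges point into it. The minimal Python sketch below encodes only the sub-path named above (not the authors' full Figure 1c diagram, which is not reproduced here) and confirms that it contains no collider, so conditioning on CVD blocks this path rather than opening it.

      ```python
      # Toy check that the path obesity <- smoking -> CVD -> mortality contains no
      # collider. The edge set covers only this sub-path, not the full Figure 1c DAG.
      edges = {("smoking", "obesity"), ("smoking", "CVD"), ("CVD", "mortality")}
      path = ["obesity", "smoking", "CVD", "mortality"]

      def colliders_on_path(path, edges):
          """Return the nodes on the path whose two adjacent path edges both point into them."""
          return [node for prev, node, nxt in zip(path, path[1:], path[2:])
                  if (prev, node) in edges and (nxt, node) in edges]

      # Prints [] -> no collider on this path, so conditioning on CVD (a mediator here)
      # blocks the path instead of opening it.
      print(colliders_on_path(path, edges))
      ```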


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 11, Mark Johnston commented:

      Interactive protocols from this article are freely available at protocols.io.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 29, George McNamara commented:

      Why does this paper's authorship section state:

      Conflict-of-interest disclosure: The authors declare no competing financial interests.

      when one of the authors works for the company that supplied the CB-839 drug featured in the abstract and key points?

      http://www.calithera.com/programs/cb-839/

      Genetically mandated alterations in the fundamental metabolic pathways of tumors often cause a dramatic rise in the uptake of the nutrients glucose and glutamine. Removal of glutamine leads to a substantial reduction in cell growth or induces cell death in certain types of cancer cells, indicating that these cells are dependent on, or “addicted” to, glutamine. Normal cells do not show this pronounced dependence on glutamine. The enzyme glutaminase, which converts glutamine to glutamate, has been identified as a critical choke point in the utilization of glutamine by cancer cells. CB-839 is a potent, selective, reversible and orally bioavailable inhibitor of human glutaminase.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 24, Geriatric Medicine Journal Club commented:

      This article was critically appraised at the October 2015 Geriatric Medicine Journal Club (follow #GeriMedJC on Twitter). One of the study authors was also present for the tweet chat discussion! The full discussion can be found at: http://gerimedjc.blogspot.com/2015/10/october-2015-gerimedjc.html?spref=tw This is a very interesting study which distinguishes common beliefs about older people in general from beliefs about older patients in health care settings. This will help inform policies and education strategies to improve care for older adults.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 16, Xiang Ming commented:

      Thanks very much for Murtaugh’s careful review. In response to the comment, we offer the following explanations. 1. The pancreas photos in the first row of Fig. 5A show that the surgically removed tissues did not include intestine. After being photographed, the tissues were embedded in paraffin, so we can be confident that all sections assessed in this paper are from the pancreas. 2. Regarding the suggestion that “the treatment might have induced cancers of an intestine-like pathology”, we should explain that pancreatic cancers contain a great quantity of non-tumor components, including stromal cells and lymphocytes. The immunohistochemical analysis in Figure 5 was consistent with this pathological characteristic, which can resemble an intestine-like pathology. 3. In our study, the malignant cells could secrete a small amount of amylase (Fig. 5A). This also indicates that the treatment did not induce cancers of an intestine-like pathology. On the other hand, Ki67 is mainly expressed in the nuclei of malignant acinar cells, so the high levels of Ki67 expression support our conclusion that Reg3g promotes proliferation in acinar cells.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Jul 30, L Charles Murtaugh commented:

      While this is an interesting study, and the role of REG genes in pancreatic cancer is arguably understudied, the conclusions of this paper are tempered by irregularities in the histological analysis. Only one study group developed pancreatic cancer in this study, namely the high-dose pReg3g + DMBA treatment, referred to as HA10R. Inspection of the histology data presented for the HA10R group (Fig. 5A) reveals that several of the most relevant images (H&E staining, Ki67 and cytokeratin-19), for this group specifically, appear to be taken from sections of the intestine rather than the pancreas. While, in principle, the treatment might have induced cancers of an intestine-like pathology, this would not explain the clear organization of proliferative crypts and non-dividing villi, apparent from Ki67 staining. This raises doubts about the quantitative analysis of these and other variables in Fig. 5B, as well as about the overall conclusion that treated mice developed cancer of the pancreas.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 05, David Keller commented:

      Patient-oriented result: response rate to magnetic stimulation was the same as to placebo

      This study of repetitive transcranial magnetic stimulation (rTMS) was designed to test the hypothesis that rTMS would result in a "statistically significantly greater percentage of responders to treatment in an active rTMS group compared with a placebo rTMS group" [1]. A relatively new metric called the Tinnitus Functional Index (TFI) was used to measure response to treatment. The TFI rated 18 of the 32 subjects actively treated with rTMS as responders to treatment (56%), while only 7 of the 32 subjects treated with sham therapy were rated as responders (22%). These two rates differed significantly, which was pre-specified in the Objectives section as defining a successful outcome.

      However, 7 of the 18 treated subjects rated as "responders to therapy" using the TFI scale nevertheless believed they had received sham therapy, implying that they did not perceive any treatment benefit beyond the placebo effect. When a subject states that his treatments seemed like sham therapy, providing only placebo-strength benefit, this is important information. Since it is a direct expression of the subject's assessment of the efficacy of rTMS therapy, it has more validity than a contrived metric like the TFI, from a patient-oriented perspective.

      The data in e-Table 12 indicate that, of the 32 subjects who received active rTMS treatments, only 11 correctly guessed they had received active therapy at the end of the last treatment, which implies that only 11 out of 32 actively-treated subjects (about 34%) noted perceptible improvement in their tinnitus symptoms. Coincidentally, 11 of the 32 placebo-treated subjects (also 34%) guessed that they had received active rTMS therapy, which equals the placebo effect. Thus, active rTMS treatments had the same response rate as sham therapy, equal to the placebo effect of 34%.
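
      The arithmetic above can be reproduced directly from the counts quoted in this comment (18/32 and 7/32 TFI-defined responders; 11/32 "active" guesses in each arm). The sketch below uses a two-sided Fisher exact test purely as an illustrative choice; it is not necessarily the analysis performed by Folmer et al.

      ```python
      # Recompute the proportions quoted above and compare the two arms with a
      # Fisher exact test (an illustrative choice, not necessarily the trial's test).
      from scipy.stats import fisher_exact

      def compare(label, active_responders, sham_responders, n=32):
          table = [[active_responders, n - active_responders],
                   [sham_responders, n - sham_responders]]
          _, p = fisher_exact(table)
          print(f"{label}: {active_responders}/{n} ({active_responders/n:.0%}) vs "
                f"{sham_responders}/{n} ({sham_responders/n:.0%}), Fisher p = {p:.3f}")

      compare("TFI-defined responders", 18, 7)               # 56% vs 22%
      compare("Guessed they received active rTMS", 11, 11)   # 34% vs 34%
      ```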

      Conclusion: rTMS is no more effective than placebo for treating tinnitus, when assessed by subjects after a full course of treatments, based on their perception of whether they received active or sham therapy. The advantage of this assessment is that it eliminates uncertainty about the accuracy and clinical relevance of the TFI metric, because the assessment of treatment benefit came directly from the subjects themselves.

      Reference

      1: Folmer RL, Theodoroff SM, Casiana L, Shi Y, Griest S, Vachhani J. Repetitive Transcranial Magnetic Stimulation Treatment for Chronic Tinnitus: A Randomized Clinical Trial. JAMA Otolaryngol Head Neck Surg. 2015 Aug;141(8):716-22. doi: 10.1001/jamaoto.2015.1219. PubMed PMID: 26181507.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 29, David Keller commented:

      38% of the reported "responders to therapy" thought they had been randomized to placebo & reply by author

      At the end of this study, only 11 of the 32 subjects who received active treatment for tinnitus guessed that they had received active treatment. The remaining 21 subjects who were actively treated guessed that they had received placebo (sham treatments).

      18 of the 32 actively treated subjects were rated as "responders" to therapy by Folmer et al. Thus, 7 of the actively treated subjects, who were rated as "responders" to therapy, thought they had received sham treatments. Bottom line: 7 of the 18 tinnitus sufferers (38%) who were reported to be "responders to therapy" actually did not perceive any benefit.

      Tinnitus is a subjective phenomenon. I contend that, by definition, a responder to tinnitus therapy cannot believe that he received sham therapy. If a subject thinks he was treated with sham therapy, he did not perceive any benefit, and he cannot be reported to be a "responder to therapy". This is the essence of my criticism of this study, and it has not been addressed.

      Addendum (12/4/2015): Yesterday, in reply to the above comment, Dr. Folmer issued the following statement (start of quotation):

      In our study, participants were categorized as "responders" or “non-responders” to TMS treatment based solely on the change in their TFI score from baseline to post-TMS assessment – this is stated in the article.

      The definition of "responders" or “non-responders” we used had nothing to do with

      1. Whether or not study participants “perceived” any benefit if that was based on anything else but their TFI score
        
      2. Study participants’ guesses that they received active or placebo rTMS  
        

      These are separate issues. You can debate, discuss or disagree with them, but they remain separate issues and definitions as specified in the article.

      --Robert L. Folmer, Ph.D. (end of quotation)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 05, Eiko Fried commented:

      A recent study published in Nature by the CONVERGE consortium [1] identified two Single Nucleotide Polymorphisms (SNPs) for Major Depressive Disorder (MDD) that replicated across two samples of Han-Chinese women with recurrent depression. The report was accompanied by an editorial [2] that hailed the findings as biologically and diagnostically relevant, suggesting that large-scale exploratory genome-wide studies offer enticing prospects towards aiding diagnosis and the development of new drugs.

      We disagree with the editorial’s interpretation (and most of the media coverage) of these CONVERGE results, which also contrasts with the careful phrasing of the authors themselves. Although the two SNPs discovered in the comparatively homogeneous CONVERGE sample did replicate in a similarly ascertained group, the editorial fails to mention that they did not in the more heterogeneous Psychiatric Genomics Consortium (PGC) data also examined by the authors. Moreover, in polygenic risk score analysis, the genetic signal in the PGC sample explained less than 0.1% of disease risk in the CONVERGE data, implying a fundamental lack of overlap in genetic risk signal across samples.
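
      For readers unfamiliar with polygenic risk scoring, the quantity at issue is simply a weighted sum of risk-allele dosages, with weights estimated in a discovery sample and predictive value then assessed in an independent target sample. The schematic numpy sketch below uses simulated data to show the calculation; it is not the CONVERGE/PGC analysis.

      ```python
      # Schematic polygenic risk score (PRS): a weighted sum of risk-allele dosages,
      # evaluated in an independent target sample. All numbers are simulated for
      # illustration; this is not the CONVERGE/PGC analysis.
      import numpy as np

      rng = np.random.default_rng(0)
      n_snps, n_people = 1000, 500
      weights = rng.normal(0, 0.05, n_snps)             # per-SNP effects from a discovery GWAS
      dosages = rng.integers(0, 3, (n_people, n_snps))  # 0/1/2 risk-allele counts per person
      status = rng.integers(0, 2, n_people)             # case/control labels, here independent of genotype

      prs = dosages @ weights                           # each person's score
      r = np.corrcoef(prs, status)[0, 1]
      print(f"variance in case status explained by the PRS: {r**2:.4f}")  # near zero in this null scenario
      ```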

      The laudable effort of the CONVERGE consortium to ensure genetically and phenotypically homogeneous samples confirms the elusiveness of the genetics of MDD. Hailing the results as robust insights into the biology of depression detracts from the true scientific relevance of the study: genetic effects for MDD are, even in large homogeneous samples, small and do not generalize.

      Given the hitherto negative results of genetic MDD studies [4,5], slogging along on this current road of ever-larger samples and discovering at best small effects is not an alluring prospect, especially so considering that these effects are likely not specific to MDD [6]. Instead, we suggest revising complex psychiatric phenotypes such as MDD that were transferred unquestioningly from psychiatry to genetics. Incorporating recently proposed network models [7], symptom- rather than syndrome-level analyses [8], and the development of new instruments that tap variation along the entire continuum [9,10] (i.e., in both "cases" and "controls") offer promising ways forward.

      References

      • 1.Cai, N. et al. Sparse whole-genome sequencing identifies two loci for major depressive disorder. Nature 523, 588–91 (2015).

      • 2.Ledford, H. First robust genetic links to depression emerge. Nature 523, 268–269 (2015).

      • 3.Keener, A. B. Genetic Variants Linked to Depression. Sci. (2015).

      • 4.Hek, K., Demirkan, A., Lahti, J. & Terracciano, A. A Genome-Wide Association Study of Depressive Symptoms. Biol. Psychiatry 73(7), 667–78 (2013).

      • 5.Daly, J. et al. A mega-analysis of genome-wide association studies for major depressive disorder. Mol. Psychiatry 18, 497–511 (2013).

      • 6.Kendler, K. S. ‘A gene for...’: the nature of gene action in psychiatric disorders. Am. J. Psychiatry 162, 1243–52 (2005).

      • 7.Cramer, A. O. J., Kendler, K. S. & Borsboom, D. Where are the Genes? The Implications of a Network Perspective on Gene Hunting in Psychopathology. Eur. J. Pers. 286, 270–271 (2011).

      • 8.Fried, E. I. & Nesse, R. M. Depression sum-scores don’t add up: why analyzing specific depression symptoms is essential. BMC Med. 13, 1–11 (2015).

      • 9.Lee, S. H. & Wray, N. R. Novel genetic analysis for case-control genome-wide association studies: quantification of power and genomic prediction accuracy. PLoS One 8, e71494 (2013).

      • 10.Van der Sluis, S., Posthuma, D., Nivard, M. G., Verhage, M. & Dolan, C. V. Power in GWAS: lifting the curse of the clinical cut-off. Mol. Psychiatry 18, 2–3 (2012).

      Authors

      • EI Fried, University of Leuven, Belgium

      • S van der Sluis, VU Medical Center, Amsterdam, The Netherlands

      • AOJ Cramer, University of Amsterdam, The Netherlands

      PDF of this commentary (DOI: 10.13140/RG.2.1.3480.4963) available at: http://eiko-fried.com/wp-content/uploads/Nat_Correspondence_blog.pdf


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On date unavailable, commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Sep 23, Angelo Gaitas commented:

      The entire response appears in the PLOS ONE comment section under "response": http://www.plosone.org/article/comments/info:doi/10.1371/journal.pone.0127219

      In two recent articles [1, 2] two techniques for removing or inactivating blood-borne pathogens were introduced. The initial experiments were performed in vitro under simplified conditions. First, the primary achievement of the PDT work deserves clarification [1]. PDT is a powerful therapeutic modality, but its clinical application has been hampered by the inability of light to penetrate deep layers of the tissue, which is mainly due to hemoglobins in the blood readily absorbing photons. Utilizing a millimeter-diameter transparent tube for extracorporeal blood circulation allows PDT to function well despite the presence of hemoglobins in blood. Another point that deserves clarification is that the tube capturing device is not a microfluidic device [2]. This technique can be adapted using existing medical tubing without the need for complicated microfluidics and micro-fabrication. The device is a medical tube that has been chemically modified using simple steps to adapt the internal surface for cell capturing.
      
      We would like to take this opportunity to respond to concerns brought up in [3]. We start off by addressing concern (1), which speculates about the possibility of overheating during the use of near-IR light. Our control data (Fig. 3 and Fig. 4 of [3]) confirmed that controls illuminated without photosensitizer-antibody conjugates did not undergo cell death, whereas those with photosensitizer-antibody conjugates underwent significant cell death under identical conditions. Thus it is clear from our data that temperature did not affect the outcome. It has been shown that 660 nm irradiation is safe and effective [4-6].
      
      Moving on to concern (2), part (a) brings up the problem of using the CD-44 antigen as a target. Limitations of antibody specificity are common knowledge and are not unique to CD-44; they apply to all antibodies. To our knowledge, a targeting method that binds exclusively to cancer cells does not yet exist, making the use of such a compound an unreasonable standard for publication. We used the CD-44 antibody to demonstrate feasibility. As targeting methodologies advance and better selectivity for target cells becomes available, this technique will have improved selectivity. Our experiments were designed to avoid non-specific damage to other cells by pre-staining pure cancer cells with the photosensitizer-antibody conjugates and subsequently removing extra free conjugates before spiking into blood (described in detail in [1]). This elimination of the possibility of side effects due to undesired binding to other blood cells and excess free photosensitizer-antibody conjugates precluded the need for a toxicity study, particularly because we were at the proof-of-principle stage.
      
      Part (b) of concern (2) suggests that we may have caused non-specific damage to non-cancerous cells through convection of ROS in the bloodstream. We believe that this is highly unlikely. One of the authors has been conducting research focusing on ROS and PDT for years, in collaboration with other researchers [7-15]. This research demonstrated that PDT is extremely selective to targeted cells [13].
      
      Part (c) of concern (2) states that we should have used additional cytotoxicity assays, such as Annexin V, TUNEL, and MTT. However, because none of these techniques are cell-type specific, they would be useless for the particular objective for which they were suggested. Once our line of investigation reaches a more mature stage, we plan to undertake more useful studies, such as applying separate fluorescent tags or radiolabels in addition to a cell viability assay, and analyzing cell death with a cell-sorting technology such as FACS, MACS, density gradient centrifugation, etc.
      
      Concern (3) is that the capturing work [2] lacked purity confirmation concerning non-specific capturing of blood cells. Though purity confirmation is critical in diagnostic testing, our work was strictly limited to in vitro conditions, using spiked pure PC-3 cells as a model. To visualize and quantify PC-3 cells in the presence of whole blood, PC-3 cells were pre-labeled using a fluorescent tag (Calcein AM) and the extra free dye was subsequently removed before spiking PC-3 cells into blood. Because only PC-3 cells can fluoresce in the blood mixture, and because quantification was based on fluorescing cells, false-positive results from other blood cells can be reasonably excluded. Furthermore, if other blood cells were captured but not identified by our detection method, our data would then indicate that the simple tube captured cancer cells despite being blocked by other blood cells. If our technique were applied to CTC diagnosis, independent isolation procedures could be used to ensure the purity of captured cells. In contrast, if used for therapy, the purity of captured cells would not be as critical, provided that CTCs are effectively removed. If, by chance, capturing is hampered by accumulation of non-specific binding when filtering the entire blood volume, this issue can be addressed with strategies such as scaling up the tube and carefully determining the tube dimensions, flow rate, frequency of tube replacements, etc.
      
      Finally, concern (4) points out that the experimental conditions were not translatable to clinical applications. Part (a) regards scaling up the system to show high throughput. The concept of extracorporeal cleansing of the entire blood volume has been used for years in settings such as hemodialysis. We are already working on optimizing the technique for processing larger blood volumes. Part (b) of concern (4) discusses the static no-flow condition as being unrealistic. This issue was brought up during the review process, and we provided our results showing data under constant-flow conditions using a peristaltic pump (to be published in a future publication). The reviewers agreed that the use of a no-flow condition as a conservative approach during a proof-of-concept stage was appropriate.
      
      Despite its preliminary nature, we believe that our work communicates novel ideas, an important objective of research and publication. Given the number of research articles dealing with diagnostics and microfluidics, perhaps a further point of confusion came about by thinking of our work in those terms. We want to clarify that diagnostics were not the primary objective of our work. Furthermore, as is evident from this response, our experimental design was carefully devised to minimize unnecessary interference. We hope that this response mitigates any confusion and addresses the concerns raised.
      
      1. Kim G, Gaitas A. PLoS One. 2014;10(5):e0127219.
      2. Gaitas A, Kim G. PLoS One. 2015;10(7):e0133194. doi: 10.1371/journal.pone.0133194.
      3. Marshall JR, King MR. DOI: 101007/s12195-015-0418-3. 2015;First online.
      4. Ferraresi C, et al. Photonics and Lasers in Medicine. 2012;1(4):267-86.
      5. Avci P, et al. Seminars in cutaneous medicine and surgery; 2013.
      6. Jalian HR, Sakamoto FH. Lasers and Light Source Treatment for the Skin. 2014:43.
      7. Ross B, et al. Biomedical Optics, 2004
      8. Kim G, et al. Journal of biomedical optics. 2007;12(4):044020--8.
      9. Kim G, et al Analytical chemistry. 2010;82(6):2165-9.
      10. Hah HJ, et al. Macromolecular bioscience. 2011;11(1):90-9.
      11. Qin M, et al. Photochemical & Photobiological Sciences. 2011;10(5):832-41.
      12. Wang S, et al. Lasers in surgery and medicine. 2011;43(7):686-95.
      13. Avula UMR, et al.Heart Rhythm. 2012;9(9):1504-9.
      14. Kim G, et al. R. Oxidative Stress and Nanotechnology, 2013. p. 101-14.
      15. Lou X, et al. E. Lab on a Chip. 2014;14(5):892-901.
      16. https://www.roswellpark.org/patients/treatment-services/innovative-treatments/photodynamic-therapy.
      17. Yin H, et al. Artificial organs. 2014;38(6):510-5.
      18. Yin H, et al. Journal of Photochemistry and Photobiology B: Biology. 2015.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Aug 29, Michael King commented:

      Our recent commentary discusses this paper:

      http://link.springer.com/article/10.1007/s12195-015-0418-3

      In cancer research, the discovery and study of circulating tumor cells (CTCs) have seemed to open a world of possibilities. We now have the potential to gain cellular and molecular understanding of individual cases of metastatic cancer without invasive procedures. This area of research is, however, not without some basic pitfalls. In this commentary, we address some of these pitfalls by considering two recent examples in the published literature and discuss ways to overcome their limitations with the hope of informing those who may be entering the growing field of CTC research. Careful research design should always be followed to prevent incomplete or misleading studies from entering the literature, and thereby avoid setting back this burgeoning field.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.