10,000 Matching Annotations
  1. Jul 2018
    1. On 2017 Apr 06, Randi Pechacek commented:

      Holly Bik, a new faculty member at UC Riverside and first author of this paper, wrote a blog post on microBEnet describing the background for this research. Read about it here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 08, NephJC - Nephrology Journal Club commented:

      This commentary on CKD staging and Precision Medicine was discussed on December 6th and 7th in the open online nephrology journal club, #NephJC, on twitter. Introductory comments written by Tom Oates and Kevin Fowler are available at the NephJC website here and here. The journal also kindly made the commentary free to access for this month. The discussion was quite detailed, with over 100 participants, including nephrologists, fellows and patients as well as author Jonathan Himmelfarb. The highlights of the tweetchat were:

      • The authors have written a richly referenced, thoughtful and thought-provoking commentary, which is a must-read for anyone interested in gaining a perspective in this area.

      • The advent of eGFR reporting and CKD staging has resulted in many advances, including improved recognition and diagnosis, therapy planning, epidemiological estimates and public messaging. Nonetheless, the categorical staging system is not perfect (it groups together diverse diseases), and opinion was sharply divided on whether CKD in the elderly is a true phenomenon or an ageing effect.

      • The section on precision medicine in kidney disease was also quite nuanced, with a lot of optimism and ideas discussed, such as new trial designs and personalized care.

      Transcripts of the tweetchats, and curated versions as storify are available from the NephJC website.

      Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visiting the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 07, Katherine S Button commented:

      Thanks, Erick H Turner; this reference and the others you provide are very helpful.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 03, Erick H Turner commented:

      Please see... Turner EH. Publication bias, with a focus on psychiatry: causes and solutions. CNS Drugs 2013;27:457–68. doi:10.1007/s40263-013-0067-9 ...which cited earlier proposals along this line. My article proposed a related approach, but it differs in that the subject of the review is the study protocol, which is written before--not after--the study results are known.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 03, Lydia Maniatis commented:

      Could the authors please provide a citation(s) for the following introductory comments?

      "Over much of the dynamic range of human cone-mediated vision, light adaptation obeys Weber's law. Raw light intensity is transformed into a neural response that is proportional to contrast...where ϕW is the physiological response to a flash of intensity ΔI, and I is the light level to which the system is preadapted. Put another way, the cone visual system takes the physical flash intensity ΔI as input and applies to this input the multiplicative Weber gain factor to produce the neural response (Equation 1). This transformation begins in the cones themselves and is well suited to support color constancy when the illumination level varies."

      Does this statement, assuming relevant though missing citations, apply in general, or is it a description of results collected under very narrow and special conditions, and if so, what are they?

      As in many psychophysical studies, the very small number of subjects included an author: in experiments 1 and 2, one of the two observers was an author. Why isn't this considered a problem with respect to bias?

      Also similar to many other psychophysical papers, the "hypothesis" being tested is the tip of a bundle of casually-made, rather complex, rather vague, undefended assumptions which the experiments do not, in fact, test. For example:

      1. "As our working hypothesis, we assume that the observer’s signal-to-noise ratio for discriminating trials in which an adapting field is presented alone from trials with a superimposed small, brief flash is [equation].

      2. "The assumption that visual sensitivity is limited by such multiplied Poisson noise has been previously proposed (Reeves, Wu, & Schirillo, 1998) as an explanation of why visual sensitivity is less than would be expected if threshold was limited by the photon fluctuations from the adapting field (Denton & Pirenne, 1954; Graham & Hood, 1992)."

      I note that the mere fact that Reeves, Wu and Schirillo proposed an assumption does not amount to an argument.

      Roughly, what researchers are doing is similar to this:

      Let's assume that how quickly a substance burns is a function of the amount of (assumed) phlogiston (possessing a number of assumed characteristics) it contains. So I burn substance "a", and I burn substance "b", and conclude that, since the former burns faster than the latter, it also contains more assumed phlogiston having the assumed characteristics. The phlogiston assumptions (and the authors here bundle together layers of assumptions) get a free ride, and they shouldn't. The title of this paper is tantamount to "Substance "a" contains more phlogiston than substance "b." It can only be valid if all of the underlying assumptions based on which the data was interpreted are valid, and that's unknown at best. We can even make the predictions a little more specific, and thus appear to test among competing models (which I think is actually what is going on here). For example, one model might predict a faster burn function than another, allowing us to "decide" between two different phlogiston models neither of which will actually have been tested. (Helping to avoid this type of fruitless diversion is what Popper's epistemology was designed to accomplish.)

      Also, it seems odd for the authors to be testing a tentative theory from the 1940s, which was clearly premature and inadequate, and apparently to be choosing to test a less-informed version of it:

      "In presenting the theory in this way, we have adhered more closely to the original presentation of Rose—an engineer who was interested in both biological and machine vision—than to that of de Vries, who was a physiologist and who introduced supplementary assumptions about the spatiotemporal summation parameters in human rod vision. We adopt Rose’s approach because the relevant neural parameters are still not well understood, and we wish to clearly distinguish between the absolute limits on threshold set by physics and the still incompletely understood neural mechanisms."

      In addition, the authors seem to have adopted the attitude that various selected contents of perception can be directly correlated with the activity of cells at any chosen level of the visual system (even when the neural parameters are still not well understood!), and that the rest of the activity leading to the conscious percept can be ignored, and that percepts that can't be directly correlated with the activity of the chosen cells can be ignored, via casual assumptions such as N. Graham's "the brain becomes transparent" under certain conditions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 10, Jose M. Moran commented:

      I think the authors have not correctly addressed the analysis of their results. They have correctly performed intragroup comparisons, but fail to analyze the between-group results. At the final time point there are no statistically significant differences between G1 (massage + rest) and G2 (massage + reiki) (P = 0.366 for LSS and P = 0.641 for IDATE-state), so no effect of the reiki intervention was detected at all in this study. Likewise, the effect sizes do not differ between G1 and G2 for either LSS or IDATE-state; the authors have overlooked that the 95% CIs for the calculated Cohen’s d overlap completely. Obviously both G1 and G2 differ significantly from the G3 group (no intervention), but both by the same amount.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 12, Nikhil Meena commented:

      In our experience, patients who don't appear to be candidates for a pleuroscopy may also be poor candidates for sclerosant therapy. http://journals.sagepub.com/doi/10.1177/1753465817721146

      Abstract BACKGROUND: Indwelling tunneled pleural catheters (TPCs) are increasingly being used to treat recurrent pleural effusions. There is also an increased interest in early pleurodesis in order to prevent infectious complications. We studied the time to removal and other outcomes for all the TPCs placed at our institution. METHODS: After institutional review board approval, records of patients who had had a TPC placed between July 2009 and June 2016 were reviewed; the catheters were placed in an endoscopy suite or during pleuroscopy, with or without a sclerosant. The catheters were drained daily or less frequently and were removed after three drainages of less than 50 ml. RESULTS: During the study period 193 TPCs were placed. Of these, 45 (23%) were placed for benign diseases. The commonest malignancy was lung cancer, 70 (36%). Drainage 2-3 times a week without a sclerosant (n = 100) led to pleurodesis at 57 ± 78 days, while daily drainage after TPC + pleuroscopy + talc (n = 41) achieved the same result in 14 ± 8 days (p < 0.001). TPC + talc + daily protocol achieved pleurodesis in 19 ± 7 days; TPC + rapid protocol achieved the same result in 28 ± 19 days (p = 0.013). TPCs + sclerosant had an odds ratio of 6.01 (95% confidence interval: 2.1-17.2) of having a complication versus TPC without sclerosant. CONCLUSIONS: It is clear that TPCs placed with a sclerosant had a significantly shorter dwell time; however, they were associated with higher odds of complications. One must be aware of these possibilities when offering what is essentially a palliative therapy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 08, Koene Van Dijk commented:

      CAUTION: Something went terribly wrong during the peer-review process of this manuscript

      The study by Molfino et al. (2017, doi: 10.1002/jcsm.12156) included extremely small samples (n=9 and n=4 for the patient groups and n=2 for the control group).

      According to the text of the manuscript, the BOLD fMRI data that were collected did not undergo any pre-processing to remove noise; instead, raw values from a hand-drawn region of interest were exported from the Siemens scanner console and imported into Microsoft Excel.

      No statistical analysis was applied to measure a contrast between the different conditions; instead, raw BOLD values before (time frames 0-50), during (time frames 51-261), and after (single time frame 262) nutritional ingestion were calculated/extracted.

      I recommend the authors and readers who want to learn more about BOLD fMRI data collection and analyses to read "Handbook of Functional MRI Data Analysis" by Poldrack, Mumford, and Nichols.
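      For readers wanting a concrete picture of the kind of condition contrast the comment says is missing, here is a minimal sketch in plain NumPy (hypothetical ROI values and timing, not the authors' data or any standard package's pipeline) of a boxcar regression contrasting the "during ingestion" frames against baseline:

      ```python
      import numpy as np

      # Hypothetical ROI time series: 263 frames of mean BOLD signal (stand-in data only).
      n_frames = 263
      rng = np.random.default_rng(0)
      roi_signal = 100 + rng.normal(0, 1.5, n_frames)

      # Boxcar regressor: 0 for baseline (frames 0-50), 1 during ingestion (frames 51-261).
      during = np.zeros(n_frames)
      during[51:262] = 1.0

      # Design matrix: [intercept, during-ingestion boxcar]; ordinary least squares fit.
      X = np.column_stack([np.ones(n_frames), during])
      beta, _, _, _ = np.linalg.lstsq(X, roi_signal, rcond=None)

      # t-statistic for the "during vs. baseline" contrast on the boxcar coefficient.
      residuals = roi_signal - X @ beta
      dof = n_frames - X.shape[1]
      sigma2 = residuals @ residuals / dof
      c = np.array([0.0, 1.0])
      se = np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
      print(f"during-vs-baseline t({dof}) = {(c @ beta) / se:.2f}")
      ```

      A real analysis would additionally need motion correction, drift and confound regressors, HRF convolution and correction for temporal autocorrelation, which is exactly the material covered in the handbook recommended above.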

      DISCLAIMER: I am an employee of Pfizer. The statements or opinions expressed on this site are my own and do not necessarily represent those of Pfizer.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 01, Ricardo Pujol-Borrell commented:

      This confirms the rare but interesting nature of autoimmune hypophysitis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 26, Janet Kern commented:

      Bonferroni is a 'multiple comparisons adjustment' for reducing the risk of false-positive findings when engaging in statistical 'fishing expeditions' among many unrelated associations. It is appropriate only when all of the following are true: 1. the associations are equally important, equally likely, and expected to be zero (absent) based on external (a priori) considerations; 2. the cost of any false negative is minor compared to the cost of any false positive; and 3. the associations are independent of (unrelated to) one another. In return for the reduced risk of false positives, multiple comparison adjustments like Bonferroni dramatically increase the risk of missing real associations (false negatives). So, even if there were no other objections, Bonferroni as used by the authors (with N = 8) is simply erroneous.

      Using Bonferroni in this study was wrong for several other reasons. First, the authors specifically wanted to test whether influenza vaccination during pregnancy was a risk factor for ASD; this was not a 'fishing expedition' as assumed by Bonferroni (violating '1' above). Second, the overall association of influenza vaccination anytime during pregnancy depends completely on the associations within each trimester, which violates the Bonferroni assumption of independence (violating '3' above). Third, the first trimester is expected to be the period of greatest vulnerability for the developing fetus, and so is a pre-specified hypothesis; in other words, before the study, the stakeholders expected (a priori) an association, which also violates '1'. Finally, we need to be confident that vaccines are safe: the cost of wrongly concluding that the influenza vaccine is safe rivals the cost of wrongly concluding that it causes harm, which violates the Bonferroni assumption ('2') that wrongly concluding harm is more costly than wrongly concluding safety.
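      As a concrete illustration of the trade-off described above, here is a minimal sketch (hypothetical p-values, not those of the study) of how a Bonferroni adjustment with N = 8 comparisons raises the bar for each individual test:

      ```python
      import numpy as np

      # Hypothetical unadjusted p-values for N = 8 associations (illustration only).
      p_values = np.array([0.012, 0.030, 0.048, 0.090, 0.150, 0.300, 0.600, 0.850])
      n_tests = len(p_values)
      alpha = 0.05

      # Bonferroni: compare each p to alpha/N, or equivalently inflate each p by N (capped at 1).
      per_test_alpha = alpha / n_tests                    # 0.00625 for N = 8
      p_adjusted = np.minimum(p_values * n_tests, 1.0)

      print("significant before adjustment:", int(np.sum(p_values < alpha)))    # 3 of 8
      print("significant after adjustment: ", int(np.sum(p_adjusted < alpha)))  # 0 of 8
      ```

      The stricter per-test threshold is only justified when the assumptions listed above hold; with few, dependent, pre-specified comparisons it mainly buys additional false negatives.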


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 30, Lydia Maniatis commented:

      Do you know what "p-hacking" means? I think that's what you're labelling as "data-driven." Are you aware that there have been six news reports amplifying the uncorroborated claims in your title? Sorry, but your title should have been, "We collected a lot of data."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 30, Antoine Coutrot commented:

      Dear Lydia,

      Thank you for highlighting our limitations section; it is indeed quite important. You are absolutely right that our method is too confounded to allow us to draw any general conclusion, as are all experiments in cognitive science: they are all limited by the sample size, by the participant profile, by the task... But we try to do our best. For instance, we collected 400+ participants of 58 nationalities, more than any eye-tracking experiment ever published. The main points of the paper are: 1) gaze contains a wealth of information about the observer; 2) with a big and diverse eye-tracking database it is possible to capture, in a data-driven fashion, which demographics explain different gaze patterns. Here, it happens to be gender, hence the title.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 29, Lydia Maniatis commented:

      Anyone familiar with the vision science literature should know by now that the best place to start reading a published paper is the section at the tail end titled “Limitations of the study.” This is where we can see whether the titular claims have any connection to what the study is actually entitled to report in terms of findings.

      Compare, for example, the title of this study with its “limitations” section, quoted below in full (caps mine):

      “The authors would like to make clear that THIS STUDY DOES NOT DEMONSTRATE THAT GENDER IS THE VARIABLE THAT MOST INFLUENCES GAZE PATTERNS DURING FACE EXPLORATION in general. [i.e. our method is too confounded to allow us to draw any such conclusion in principle].Many aspects of the experimental design might have influenced the results presented in this paper. The actors we used were all Caucasian between 20 and 40 years old with a neutral expression and did not speak—all factors that could have influenced observers' strategies (Coutrot & Guyader, 2014; Schurgin et al., 2014; Wheeler et al., 2011). Even the initial gaze position has been shown to have a significant impact on the following scanpaths (Arizpe et al., 2012; Arizpe et al., 2015). In particular, the task given to the participants—rating the level of comfort they felt with the actor's duration of direct gaze—would certainly bias participants' attention toward actors' eyes. One of the first eye-tracking experiments in history suggested that gaze patterns are strongly modulated by different task demands (Yarbus, 1965). This result has since been replicated and extended: More recent studies showed that the task at hand can even be inferred using gaze-based classifiers (Boisvert & Bruce, 2016; Borji & Itti, 2014; Haji-Abolhassani & Clark, 2014; Kanan et al., 2015). Here, gender appears to be the variable that produces the strongest differences between participants. But one could legitimately hypothesize that if the task had been to determine the emotion displayed by the actors' face, the culture of the observer could have played a more important role as it has been shown that the way we perceive facial expression is not universal (Jack, Blais, Scheepers, Schyns, & Caldara, 2009). Considering the above, the key message of this paper is that our method allows capturing systematic differences between groups of observers in a data-driven fashion.”

      In other words, this is not a research paper, but a preliminary application of a method that might be useful for a research study. As far as the reported results go, it is an exercise in p-hacking. The title is purely cosmetic. As such it seems to have been rather effective, insofar as the article has already been the subject of six news stories, including reports in the Daily Mail and Le Monde.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 11, Christopher Southan commented:

      While cogently reported and Open Access, the journal has allowed the publication of an irreproducible study. This is because of the non-disclosure of the key inhibitor structure, DCC-3014 from Deciphera (it might be exemplified in WO2014145023, 025 or 029).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 05, Yuwei Fan commented:

      To analyze Zr using EDS, the specimen should not be sputter-coated with gold. The result in Fig. 7 therefore seems questionable.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 02, Alessandro Rasman commented:

      The problem in this study lies in using a percentage for veins rather than an absolute area as a measure of physiological flow problems in veins. Please read this article (http://www.pagepressjournals.org/index.php/vl/article/view/5012) and see Figure 3.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 30, Monica Green commented:

      It is useful for Pařízek and colleagues to have presented this hypothetical scenario of an alleged Caesarean section. It was surprising, however, that the study did not engage with the other published literature on the medieval history of C-section (a bibliography is available here: https://www.academia.edu/30089387/Bibliography_on_Caesarean_Section_in_the_Middle_Ages). There is also considerable literature on the history of surgery in medieval Europe and the history of anesthesia.

      What is puzzling about this study is that it is nothing but a hypothetical scenario. The authors have found no testimony contemporary with Beatrice herself to confirm that she had any complications at all with the birth, let alone that it ended in a C-section. The only hint they have found that anything was amiss is her (or her scribe's) use of the phrase salva incolumitate in referring to herself after the birth. Incolumis (and derivative forms) is not a common word in medieval medical texts, but it is not at all rare in diplomatic documents. In my searches (DuCange's Glossarium, http://www.uni-mannheim.de/mateo/camenaref/ducange.html; the Epistolae collection of medieval women's letters: https://epistolae.ccnmtl.columbia.edu/), the phrase comes up commonly simply to confirm one's general health and fitness for office. In other words, there is nothing at all unusual here. Given that obstetrical mishaps were common in the Middle Ages (Green MH, 2008), the principle of lex parsimoniae would have asked that analysis be given first to other complications. Given Beatrice's age at the time of the birth (19), obstetric fistula would likely be high on that list.

      It is a separate question why a legend surrounding Wenceslaus' birth arose, which this study has traced back no further than the 15th century, nearly 100 years after the birth itself. Stories of Caesar were very popular in royal circles at that time, and his birth (by C-section, allegedly, because of a medieval misunderstanding of classical sources) was often depicted in quite elaborately decorated manuscripts. A more interesting question, therefore, is why the legend arose, and why the vernacular histories of the Caesars might have been so influential in this imaginary.

      Finally, it may be important for readers of this post to note that most work in the history of medicine is never registered in the PubMed database. Most historians publish in Humanities venues, and those are not indexed here. So please remember to look beyond PubMed if you are researching historical questions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 19, Stuart RAY commented:

      Not noted on this PubMed entry (yet, perhaps), this paper has been retracted. Of particular note, the retraction statement makes an excellent case for the authors' dedication to data sharing - it was reanalysis of the raw data by another group of scientists that revealed an unexpected finding. Without data sharing it would have been very difficult to discover the problem of mixed species in the sample. Kudos to the authors, and data sharing for reliable science!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 08, NephJC - Nephrology Journal Club commented:

      This trial on early steroid tapering was discussed on November 29th and 30th 2016 in the open online nephrology journal club, #NephJC, on twitter. Introductory comments written by Hector Madariaga and Kevin Fowler are available at the NephJC website here and here.

      The discussion was quite detailed, with over 60 participants, including general and transplant nephrologists, fellows and patients. The highlights of the tweetchat were:

      • The authors should be commended for designing and conducting this important trial, with funding received from the industry.

      • The trial results generated a lot of discussion, though the trial was thought to be underpowered for the outcome of biopsy-proven acute rejection, given the small difference observed (9.9% in the ATG arm versus 10.6 to 11.2% in the basiliximab arms) relative to the generous sample size assumptions (6.7% and 17%, respectively). The high rate of new-onset diabetes observed overall (compared to lower rates in other trials such as SYMPHONY) was explained by the explicit evaluation with glucose tolerance tests.

      • Overall, the trial did not change any opinions amongst the discussants: practitioners favoring steroid-free regimens were comforted by these results, but others would like to see stronger data with long-term graft outcomes before embracing steroid-free regimens.

      Transcripts of the tweetchats, and curated versions as storify are available from the NephJC website.

      Interested individuals can track and join in the conversation by following @NephJC or #NephJC on twitter, liking @NephJC on facebook, signing up for the mailing list, or just visiting the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 31, Robin P Clarke commented:

      That nutrient deficiencies are factors in autism causation was predicted by the antiinnatia theory of autism (Clarke, 1993; Clarke, 2016), which stated that autism is caused by a high level of "antiinnatia factors" (factors tending to cause a sort of "general" reduction of gene-expression).

      It was stated therein that "Gene-expression depends on processes that have many possibilities for malfunction, with many common factors underlying (for instance) all transcription from DNA, all being dependent on, for example, supply of nutrients....", and that thus the autism-causing antiinnatia would tend to result from nutrient deficiencies (though of course nutrient deficits would also produce their own specific symptoms, such as bone problems in respect of vitamin D).

      The extent to which supplementation later in life can reverse the effects of deficiency in earlier developmental periods would depend on to what extent irreversible effects have been caused, such as perhaps neurons not migrating in neurotypical ways, or learning processes delayed too long.

      Future studies should perhaps look for the possibility that the supplementation has more effect on younger children and less (or no) effect on older ones. It would further be expected that the improvements would be relatively permanent rather than ceasing on discontinuation of the supplementation.

      Clarke RP (1993) A theory of general impairment of gene-expression manifesting as autism. Personality and Individual Differences 14, 465-482.

      Clarke RP (2016) A theory of evolution-biased reduction of gene-expression manifesting as autism (updated presentation of the preceding; PDF file).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 21, Donald Forsdyke commented:

      THE RNA WORLD AND DARRYL REANNEY

      The title of historian Neeraja Sankaran's paper in a "special historical issue" of the Journal of Molecular Evolution implies that the RNA world idea was formulated 30 years ago (i.e. 1986) by a single author, Walter Gilbert (1). Yet the paper traces the story to authors who wrote at earlier times. Missing from the author list is Darryl Reanney who, like Gilbert, documented a "genes in pieces" hypothesis in February 1978 and went on to explore the RNA world idea with the imperative that error-correcting mechanisms must have evolved at a very early stage (2). Much of Reanney's work is now supported (3).

      However, Sankaran cites the video of a US National Library of Medicine meeting organized by historian Nathaniel Comfort on 17th March 2016 (4). Here W. F. Doolittle, who had consistently cited Reanney, discusses the evolutionary speculation triggered by the discovery of introns in 1977, declaring that "several things came together at that time," things that "a guy named Darryl Reanney had been articulating before that." Furthermore, "it occurred to several of us simultaneously and to Darryl Reanney a bit before – before me anyway – that you could just recast the whole theory in terms of the RNA world."

      Gilbert himself thought that "most molecular biologists did not seriously read the evolution literature; probably still don’t." Indeed, contemporary molecular biologists writing on "the origin of the RNA world," do not mention Reanney (5). Thus, we look to historians to put the record straight.

      1. Sankaran N (2016) The RNA world at thirty: a look back with its author. J Mol Evol. doi: 10.1007/s00239-016-9767-3

      2. Reanney DC (1987) Genetic error and genome design. Cold Spring Harb Symp Quant Biol 52:751-757

      3. Forsdyke DR (2013) Introns first. Biological Theory 7:196-203

      4. Comfort N (2016) The origins of the RNA world. Library of Congress webcast (NLM webcast).

      5. Robertson MP, Joyce GF (2012) The origins of the RNA world. Cold Spring Harb Perspect Biol 4:a003608


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 21, Erik Shapiro commented:

      Interesting work! It may be a subtle detail, but did you use iron pentacarbonyl or iron acac as the starting material? In the list of chemicals you say iron pentacarbonyl, and in the methods you say iron acac. It is somewhat important because the synthesis of iron oxide using the two different starting materials is often different; one is a hot-injection method (pentacarbonyl), the other is what you described (iron acac).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 11, Jean-Pierre Bayley commented:

      Should we screen carriers of maternally inherited SDHD mutations?

      Jean-Pierre Bayley (1), Jeroen C Jansen (2), Eleonora P M Corssmit (3) and Frederik J Hes (4)

      1. Department of Human Genetics, 2. Department of Otorhinolaryngology, 3. Department of Endocrinology and Metabolic Diseases, 4. Department of Clinical Genetics, Leiden University Medical Center, Leiden, the Netherlands

      We wish to comment on the above paper by Burnichon and colleagues: Burnichon N, et al. Risk assessment of maternally inherited SDHD paraganglioma and phaeochromocytoma. J Med Genet 2017;54:125-133. (3)

      In this paper a prospective study is presented that identified and described development of pheochromocytoma in a carrier of an SDHD mutation. Although at first sight not an uncommon occurrence in carriers of these mutations, this case is unusual because the mutation was inherited via the maternal line. This is now only the third reported case of confirmed phaeochromocytoma development following maternal transmission of an SDHD mutation. (1-3) The patient in question was identified amongst a cohort of 20 maternal mutation carriers who underwent imaging surveillance. Based on the identification of one patient in this cohort (5%), the authors make recommendations for the clinical care of carriers of a maternally inherited SDHD mutation. They advise targeted familial genetic testing from the age of 18 in families with SDHD mutations, and that identified carriers undergo imaging and biochemical workup to detect asymptomatic tumours. If the first workup is negative, the authors suggest that patients be informed about paraganglioma-phaeochromocytoma (PPGL) symptoms and recommend an annual clinical examination and blood pressure measurement, with a new workup indicated in case of symptoms suggestive of PPGL. Although this paper is a meaningful contribution to the literature, we are concerned that the authors base their subsequent clinical recommendations on a relatively small cohort. In a recent study, we described one confirmed case of maternal transmission and concluded that “we consider the increase in risk represented by these reports to be negligible.” (2)

      Two reasons underlie this statement. Firstly, the somatic rearrangements underlying the maternal cases identified to date are far more complex (loss of the paternal wild-type SDHD allele by mitotic recombination, followed by loss of the recombined paternal chromosome containing the paternal 11q23 region and the maternal 11p15 region) than the molecular events seen in paternal cases (loss of whole chromosome 11). Secondly, our conclusions were based, implicitly, on many previous studies at our centre over the past three decades in which we described various aspects of the large SDHD cohort collected by us over that period. Genetic aspects of this cohort, and 601 patients with paternally transmitted SDHD mutations, were described by Hensen and co-workers in 2012. (4) As all previous studies suggest that mutations are equally transmissible via the paternal or maternal line, our identification of a single maternal case amongst this cohort suggests that the penetrance of maternally transmitted mutations is very low. Using the calculation employed by Burnichon and colleagues and assuming that at least 600 maternal mutation carriers are alive in the Netherlands, we arrive at an estimate of 0.17% (1/601 = 0.17%), rather than their figure of 5%. In addition to our own cohort, thousands of SDHD mutation carriers have been identified worldwide. If 1 in 20 maternally transmitted mutations resulted in tumours, many more maternally inherited cases would have come to our attention, even without surveillance.

      In our opinion, the question of management of maternally inherited SDHD mutations comes down to a risk-benefit analysis. The most obvious implication of the recommendations made by Burnichon and colleagues in our patient population would be the institution of surveillance, with all the attendant practical, financial and psychological burdens, for 600 carriers of maternally inherited SDHD mutations in order to identify a single case. Furthermore, SDHD-associated PPGL mortality and survival in a Dutch cohort of SDHD variant carriers did not differ substantially from the general population. (5) In practice, carriers of maternally inherited SDHD mutations at our centre are not advised to undergo surveillance. Instead, we reassure them that their risk of developing PPGL is exceptionally low (described three times worldwide), but that they should be aware, more so than the general population, of symptoms that are suggestive of paraganglioma or phaeochromocytoma. Many families have been in our care for over 25 years and in that time we have found no evidence to suggest that this policy should be revised.

      NB. A version of this comment has been posted on the Journal of Medical Genetics website and has been commented on in turn by Burnichon and colleagues.

      References

      1.Yeap PM, Tobias ES, Mavraki E, Fletcher A, Bradshaw N, Freel EM, Cooke A, Murday VA, Davidson HR, Perry CG, Lindsay RS. Molecular analysis of pheochromocytoma after maternal transmission of SDHD mutation elucidates mechanism of parent-of-origin effect. J Clin Endocrinol Metab 2011;96:E2009-E2013.

      2.Bayley JP, Oldenburg RA, Nuk J, Hoekstra AS, van der Meer CA, Korpershoek E, McGillivray B, Corssmit EP, Dinjens WN, de Krijger RR, Devilee P, Jansen JC, Hes FJ. Paraganglioma and pheochromocytoma upon maternal transmission of SDHD mutations. BMC Med Genet 2014;15:111.

      3.Burnichon N, Mazzella JM, Drui D, Amar L, Bertherat J, Coupier I, Delemer B, Guilhem I, Herman P, Kerlan V, Tabarin A, Wion N, Lahlou-Laforet K, Favier J, Gimenez-Roqueplo AP. Risk assessment of maternally inherited SDHD paraganglioma and phaeochromocytoma. J Med Genet 2017;54:125-33.

      4.Hensen EF, van DN, Jansen JC, Corssmit EP, Tops CM, Romijn JA, Vriends AH, Van Der Mey AG, Cornelisse CJ, Devilee P, Bayley JP. High prevalence of founder mutations of the succinate dehydrogenase genes in the Netherlands. Clin Genet 2012;81:284-8.

      5.van Hulsteijn LT, Heesterman B, Jansen JC, Bayley JP, Hes FJ, Corssmit EP, Dekkers OM. No evidence for increased mortality in SDHD variant carriers compared with the general population. Eur J Hum Genet 2015;23:1713-6.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 18, Darko Lavrencic commented:

      I believe that implantable systems for continuous liquorpheresis and CSF replacement could also be used successfully for the intracranial hypotension-hypovolemia syndrome, as it could be caused by decreased CSF formation. See: http://www.med-lavrencic.si/research/correspondence/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Aug 01, Daniel Quintana commented:

      We thank Dr. Grossman for his comments on our manuscript. We are cognisant of the study’s limitations, which we highlighted in our original manuscript. Despite these limitations, we believe our conclusions are still valid but also understand that Dr. Grossman may not agree with our interpretation.

      We will now address Dr. Grossman’s two comments in turn, which we have reprinted for clarity:

      • Comment 1 from Dr. Grossman: The authors report respiration frequency as the peak frequency in a band range between 0.15-0.40 Hz. Resting respiration rate (i.e. frequency), however, is rarely a constant phenomenon for most people within a resting period of several minutes: some breaths are longer, some are shorter, and the peak frequency does not necessarily reflect average breathing rate; in fact, there are very likely to be different peaks, and only the highest peak would have been used to estimate (or misestimate) average respiratory frequency. Spectral frequency analysis, therefore, is a highly imprecise method to calculate mean breathing frequency (perhaps the difference in relations found between mentally ill vs. healthy people was merely due to increased variability of respiratory frequency among the ill individuals; see Fig. 1F). In any case, this may be sufficient to disqualify the main conclusions of the study.

      Response: We recognize that spectral peak frequency may not be an optimal method to calculate mean respiration frequency, given the intraindividual variation of respiration rate. However, Levene's test of equality of variances shows that the variances in the clinical and healthy groups are not significantly different [F(1,202) = 1.6, p = 0.21], which suggests that we cannot reject the null hypothesis that the group variances are equal. Thus, differing variability in mean respiration rates between the groups is unlikely to have confounded our results.
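      For readers who want to run the same kind of check on their own data, a minimal SciPy sketch of Levene's test (hypothetical respiration-rate vectors, not the study data) is:

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      # Hypothetical mean respiration rates (Hz) for a clinical and a healthy group.
      clinical = rng.normal(loc=0.26, scale=0.05, size=100)
      healthy = rng.normal(loc=0.25, scale=0.04, size=104)

      # Levene's test for equality of variances (median-centred, i.e. the Brown-Forsythe variant).
      stat, p = stats.levene(clinical, healthy, center='median')
      dof_between, dof_within = 1, len(clinical) + len(healthy) - 2
      print(f"Levene F({dof_between},{dof_within}) = {stat:.2f}, p = {p:.3f}")
      ```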

      • Comment 2 from Dr. Grossman: However, there may be an even more serious problem that invalidates the conclusions of this investigation. As already mentioned, the authors examined respiration frequencies only between 0.15-0.40 Hz; this corresponds to a range between 9 and 24 breaths/minute. Already in 1992, we demonstrated among a group of healthy participants that a sizable proportion of participants manifest substantial proportions of resting breathing cycles below 9 cycles per minute: among 16 healthy individuals carefully assessed for respiration rate during a 10-minute resting period, we found that half of the participants showed 1/5 of their total breathing cycles to be slower than 9 cycles/minute (cpm); over 60% of participants showed >10% of their cycles to be slower than 9 cpm (it is also very likely that a substantial proportion of breaths occurred beyond 24 cpm). Thus, accurate estimation of mean resting respiration frequency is also seriously compromised by the insufficient range of frequencies included in the analysis. See Grossman (1992, Fig. 5): Grossman, P. Biological Psychology 34 (1992) 131-161

      Response: To confirm that the mean respiration frequency was not missed or misattributed to non-respiratory frequencies, we re-analysed the data including the participants whom we originally excluded because they fell outside the 0.15-0.4 Hz range. In total, 2 participants (both from the patient group) had a mean respiratory frequency < 0.15 Hz and 4 participants (3 from the patient group and 1 from the healthy control group) had a mean respiratory frequency > 0.4 Hz. We also re-analysed absolute high-frequency HRV and adjusted the frequency bands accordingly (0.1-0.4 Hz for the 2 participants with a lower than average respiratory rate, and 0.15-0.5 Hz for the 4 participants with a higher than average respiratory rate).

      We found that including these participants did not change the overall conclusions of the study. While we reported an estimated correlation (ρ) of −0.29 between HF-HRV and respiration in the patient group [95% CI (−0.53, −0.03)], our updated analysis demonstrated a slightly stronger estimated association of −0.47 [95% CI (−0.66, −0.26)]. For the healthy controls, we originally reported an estimated correlation (ρ) of −0.04 between HF-HRV and respiration [95% CI (−0.21, 0.12)]; our updated analysis demonstrated a close to equivalent estimated association of −0.04 [95% CI (−0.20, 0.12)]. We also originally reported that computing the posterior difference of ρ between these two tests revealed a 94.1% probability that ρ was more negative in the clinical group compared to the control group. Running this analysis modestly increased this probability to 99.7%.

      Daniel S. Quintana (on behalf of study co-authors)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Apr 25, Paul Grossman commented:

      Quintana et al. (2016) suggest that individual difference in respiration rate is only correlated with high-frequency heart-rate variability (HF-HRV), i.e. respiratory sinus arrhythmia (RSA) among seriously mentally ill people, but not among healthy individuals. The data presented has several methodological problems that seem very likely to severely compromise the authors' conclusions:

      1. The authors report respiration frequency as the peak frequency in a band range between 0.15-0.40 Hz. Resting respiration rate (i.e. frequency), however, is rarely a constant phenomenon for most people within a resting period of several minutes: some breaths are longer, some are shorter, and the peak frequency does not necessarily reflect average breathing rate; in fact, there are very likely to be different peaks, and only the highest peak would have been used to estimate (or misestimate) average respiratory frequency. Spectral frequency analysis, therefore, is a highly imprecise method to calculate mean breathing frequency (perhaps the difference in relations found between mentally ill vs. healthy people was merely due to increased variability of respiratory frequency among the ill individuals; see Fig. 1F). In any case, this may be sufficient to disqualify the main conclusions of the study.

      2. However, there may be an even more serious problem that invalidates the conclusions of this investigation. As already mentioned, the authors examined respiration frequencies only between 0.15-0.40 Hz; this corresponds to a range between 9 and 24 breaths/minute. Already in 1992, we demonstrated among a group of healthy participants that a sizable proportion of participants manifest substantial proportions of resting breathing cycles below 9 cycles per minute: among 16 healthy individuals carefully assessed for respiration rate during a 10-minute resting period, we found that half of the participants showed 1/5 of their total breathing cycles to be slower than 9 cycles/minute (cpm); over 60% of participants showed >10% of their cycles to be slower than 9 cpm (it is also very likely that a substantial proportion of breaths occurred beyond 24 cpm). Thus, accurate estimation of mean resting respiration frequency is also seriously compromised by the insufficient range of frequencies included in the analysis. See Grossman (1992, Fig. 5): Grossman, P. Biological Psychology 34 (1992) 131-161

      https://www.researchgate.net/profile/Paul_Grossman2/publication/21689110_Respiratory_and_cardiac_rhythms_as_windows_to_central_and_autonomic_biobehavioral_regulation_Selection_of_window_frames_keeping_the_panes_clean_and_viewing_the_neural_topography/links/5731a22708ae6cca19a2d221/Respiratory-and-cardiac-rhythms-as-windows-to-central-and-autonomic-biobehavioral-regulation-Selection-of-window-frames-keeping-the-panes-clean-and-viewing-the-neural-topography.pdf

      It is also unfortunate that the authors merely cited a single investigation that unusually showed no relation between individual differences in respiration frequency and RSA magnitude (i.e. Denver et al., 2007), but none of the many studies that have found correlations in the range of r's= 0.3-0.5; e.g. https://www.researchgate.net/publication/279615441_Respiratory_Sinus_Arrhythmia_and_Parasympathetic_Cardiac_Control_Some_Basic_Issues_Concerning_Quantification_Applications_and_Implications

      http://journal.frontiersin.org/article/10.3389/fphys.2016.00356/full

      https://pdfs.semanticscholar.org/6e44/e75dd2061a43cc69a4354171540e8a98e6a5.pdf

      The Denver et al. study, additionally, used the same methods inaccurately to calculate respiration rate.

      Paul Grossman Pgrossman0@gmail.com


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 15, KEVIN BLACK commented:

      We proposed that "the most likely cause for the" excess prevalence of depression in PD was "that both syndromes arise from similar causes, with either appearing first in a given individual. ... [O]ne may reasonably search for such shared causative factors among the known risk factors for" either disease, "such as genes (probably plural), aging, chemical toxins, or psychologically stressful life events" (Black KJ, Pandya A. Depression in Parkinson disease. Pp. 199-237 in Gilliam F, Kanner AM, Sheline YI: Depression and Brain Dysfunction. New York: Taylor & Francis, 2006, at pp. 216-217).

      Arabia and colleagues (2007) found that depressive and anxious disorders were much more likely in first-degree relatives of PD patients than of controls (doi: 10.1001/archpsyc.64.12.1385). One gene that may contribute to that finding is the serotonin transporter, discussed in the review cited above. Cagni et al here identify an additional gene that may also be a shared risk factor: the G/G phenotype of the Val66Met polymorphism of the BDNF gene.

      Studies such as these may contribute useful information not only to the etiology of depression and anxiety but also to the etiology of Parkinson disease.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 04, Lydia Maniatis commented:

      Let's say we ask the proverbial “man in the street” the following question: Do you think chickens will be better at discriminating between the colors of tiled food containers if the tiles are many or large, or if they are few or small?

      I think that most, without too much thought, would answer the former. “More and bigger” of anything is generally more salient than “fewer and smaller.” Would those who made correct guesses be licensed to claim that their pet theory about chicken vision had been corroborated? It should be obvious that predictable predictions do not constitute rigorous tests of any hypothesis. This is the type of hypothesis-testing Olsson et al (2017) engage in in this study.

      Furthermore, the hypothesis that the authors are supposed to be testing doesn’t consist of a straightforward, coherent, intelligible set of assumptions, but of a hodgepodge of uncorroborated assumptions and models spanning over fifty years. The “success” of the authors’ simple experiment implies corroboration of all of these subsidiary models and assumptions. Obviously, the experiments are being tasked with far too much heavy-lifting, and the conclusions that hinge on them are not credible.

      Here is a sampling of the assumptions and models that chickens’ greater sensitivity to “more and bigger” is presumed to corroborate:

      The main hypothesis: “Chickens use spatial summation to maintain color discrimination in low light intensities.”

      Supporting models/assumptions: “Color differences delta S in the unit of just-noticeable differences (JND) were calculated using the receptor noise limited (RNL) model (Vorobyev and Osorio, 1998) as….” I.e. the RNL model is assumed to be valid.

      “Spectral sensitivities, R, were derived by fitting a template (Govardovskii, Fyhrquist, Reuter, Kuzmin, & Donne, 2000)…” I.e. the model template is assumed to be valid.

      “We assumed the same standard deviation of noise for all cone types such that the Weber fraction for the L channel was 0.06, based on the color discrimination thresholds measured in a previous study (Olsson et al 2015)” Note that, according to a Pubmed Commons comment by the lead author, Olsson et al (2015) “figure out the equivalent Weber fraction which describe these limits. Whether that is actually caused by noise or not we can not say from our experiment..." Yet the main hypothesis of Olsson et al (2017) uncritically assumes a “noisy” process.

      “The same simple model (SM) of calculating the absolute quantum catch as in a previous study (Olsson et al 2015)” Again, the authors cannot say that the results of that previous study were “actually caused by noise or not”, i.e that the “simple model” is actually modeling what they are claiming.

      “We modeled increasing levels of spatial summation, assuming that absolute quantum catches….are summed linearly…” Should we even ask why?

      “From ….cone densities in the dorso-temporal retina of chickens (Kram et al, 2010) we estimated the number of cones that viewed a single color tile of a stimulus.” This last assumption obviously doesn’t consider the fact of chicken eye movements, which would make the number of cones involved much larger. The idea of simple pooling is also problematic from the point of view that chickens do exhibit constancy under varying illumination, so in the context of sunshine and shadow, pooling across an illumination boundary would arguably produce unreliable estimates that would undermine constancy.

      “We derived intensity thresholds by fitting a logistic psychometric function to the choice data of each experimental group of chickens and of individual chickens, using the MATLAB toolbox Palamedes (Prins & Kingdom, 2009).” We assume that Prins and Kingdom’s hypothesized quantitative link between choice and thresholds, as well as all of those authors’ underlying assumptions, e.g. that signal detection theory is an appropriate model for vision, are valid.
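      For context, fitting a logistic psychometric function of the kind described above is conceptually straightforward; a minimal SciPy sketch (hypothetical choice data and parameterization, not the authors' Palamedes analysis) is:

      ```python
      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical two-alternative choice data: stimulus intensity and proportion correct.
      intensity = np.array([-3.0, -2.5, -2.0, -1.5, -1.0, -0.5, 0.0])
      prop_correct = np.array([0.52, 0.55, 0.64, 0.78, 0.90, 0.97, 0.99])

      def psychometric(x, alpha, beta):
          """Logistic function with a 0.5 guess rate and no lapse term."""
          guess = 0.5
          return guess + (1.0 - guess) / (1.0 + np.exp(-(x - alpha) / beta))

      # alpha is the threshold (the 75%-correct point here), beta the slope parameter.
      (alpha_hat, beta_hat), _ = curve_fit(psychometric, intensity, prop_correct, p0=[-1.5, 0.5])
      print(f"estimated intensity threshold: {alpha_hat:.2f}, slope: {beta_hat:.2f}")
      ```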

      I would note, finally, that the authors’ current description of the findings of Olsson, Lind and Kelber (2015) differs significantly from what is implied by the title of the latter publication (“Bird color vision: behavioral thresholds reveal receptor noise”). As mentioned above, the lead author of that study has acknowledged that the title goes further than was licensed by the experiment. Here, the Olsson et al (2015) study is described as having shown that “the intensity threshold for color discrimination in chickens depends on the chromatic contrast between the stimuli and on stimulus brightness.” This result, i.e. that “higher contrast, brighter = more salient”, is, if anything, even more predictable than the prediction of Olsson et al (2017).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 01, Trevor Bell commented:

      The source code of the pipeline described in this paper is now available online at the following address:

      https://github.com/DrTrevorBell/CuratedGenBank


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jul 25, Trevor Bell commented:

      Multiple sequence alignments containing only full-length sequences, for each genotype, are now also available for download from the alignments page. These sequences are a subset of the alignments already available.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jan 26, Trevor Bell commented:

      A comma character was inadvertently added to the query provided under the "GenBank download" section during post-production of this article. The comma in the number "99,999" should be deleted, so that the number reads "99999".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 Jan 26, Trevor Bell commented:

      The authors would like to clarify a point made in the abstract. Although multiple sequence alignments of HBV are publicly available, as far as we are aware, ours is the first to include both full length and subgenomic fragments of HBV in the same alignment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 08, Raha Pazoki commented:

      Supplementary Table 6 of this article is supposed to provide GERA results for previously identified blood pressure loci. This table is, however, exactly the same as Supplementary Table 3 and does not include the SNPs flagged as "P" (previously identified) in Supplementary Table 4.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 14, David Mage commented:

      The Sudden Infant Death Syndrome (SIDS) and other causes of human respiratory failure appear to be X-linked (PMID 27188625). OMIM shows human TASK-1 to be autosomal, which would imply that, if TASK-1 is involved in SIDS, an interaction with an X-linkage might also be considered.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 14, Pavel Nesmiyanov commented:

      What strain was used, and what was the exact procedure for the microbiological assessment? The disc diffusion method is not the most accurate method; the authors should have used standard strains and a dilution method to determine MICs.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 20, Alessandro Rasman commented:

      Bernhard H. J. Juurlink MD, Dario Alpini MD, Giampiero Avruscio MD, Miro Denislic MD, Attilio Guazzoni MD, Laura Mendozzi MD, Raffaello Pagani MD, Adnan Siddiqui MD, Pierluigi Stimamiglio MD, Pierfrancesco Veroux MD and Pietro Maria Bavera MD

      We read with interest the consensus statement titled "The central vein sign and its clinical evaluation for the diagnosis of multiple sclerosis: a consensus statement from the North American Imaging in Multiple Sclerosis Cooperative" (1). We wonder why the authors have not cited any paper by Dr. Paolo Zamboni of the University of Ferrara, Italy, in particular his earliest paper, "The big idea: iron-dependent inflammation in venous disease and proposed parallels in multiple sclerosis", published in November 2006 (2). In that paper he clearly showed the histology of the CVS and explicitly reported the possibility of imaging it by means of MR as well.

      References: 1) Sati, Pascal, et al. "The central vein sign and its clinical evaluation for the diagnosis of multiple sclerosis: a consensus statement from the North American Imaging in Multiple Sclerosis Cooperative." Nature Reviews Neurology (2016). 2) Zamboni, Paolo. "The big idea: iron-dependent inflammation in venous disease and proposed parallels in multiple sclerosis." Journal of the Royal Society of Medicine 99.11 (2006): 589-593.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 24, Kiyoshi Ezawa commented:

      [Alert by the author]

      The web-page (or XML) version of this Erratum (at the BMC Bioinformatics web-site: https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-016-1282-4) contained errors during two periods: one from its initial publication on Nov 10th, 2016 till around Nov 18th, 2016, and the other from around March 3rd, 2017 till the release of the latest version on April 7th, 2017. The latest version does not contain these errors.

      In consequence, the Erratum at the PubMed Central web-page (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105235/) also contained the same errors, since its initial release till around April 13th, 2017, when it was updated.

      (It should be noted that, since its initial publication, the PDF version of the Erratum has been nearly error-free, containing at most one relatively harmless error in Eq.(R4.6) before the correction.)

      Therefore, if you visited the Erratum only around any of the aforementioned periods but did not download its PDF, I strongly urge you to re-visit the Erratum, and hopefully to download the PDF.

      And I would be grateful if you could inform your colleagues of the release of the new version of this Erratum, so that these errors in its previous versions will be eradicated eventually.

      Incidentally, most of the errors discussed in this Erratum apply only to the web-page (or XML) version of the original article (PMID: 27638547; DOI: 10.1186/s12859-016-1105-7).

      There are only two exceptions: one is the error in Eq.(R5.4), and the other is the update on the reference information (reference [2] in the Erratum, or PMID: 27677569); they apply to both the XML and the PDF.

      (In the proofreading process, I was allowed to proofread only the PDF but not the web-page version. Therefore, I had no control over those errors in the web-page that were not in the PDF.)

      Kiyoshi Ezawa, Ph.D, the author of the Erratum (PMID: 27832741).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 24, Lydia Maniatis commented:

As should be evident from the corresponding PubPeer discussion, I have to disagree with all of Guy's claims. I think logic and evidence are on my side. Not only is there not "considerable evidence that humans acquire knowledge of how depth cues work from experience," the evidence and logic are all on the opposite side. The naïve use of the term "object" and the reference to how objects change "as we approach or touch them and learn about how they change in size, aerial perspective, linear perspective etc" indicate a failure to understand the fundamental problem of perception, i.e. how the proximal stimulus, which does not consist of objects of any size or shape, is metamorphosed into a shaped 3D percept. Perceiving 3D shape presupposes depth perception.

As Gilchrist (2003) points out in a critical Nature review of Purves and Lotto's book, "Why we see what we do": "Infant habituation studies show that size and shape are perceived correctly on the first day of life. The baby regards a small nearby object and a distant larger object as different even when they make the same retinal image. But newborns can recognize an object placed at two different distances as the same object, despite the different retinal size, or the same rectangle placed at different slants. How can the newborn learn something so sophisticated in a matter of hours?"

Gilchrist also addresses the logical problems of the "learning" perspective (caps mine): "In the 18th C, George Berkeley argued that touch educates vision. However, this merely displaces the problem. Tactile stimulation is even more ambiguous than retinal stimulation, and the weight of the evidence shows that vision educates touch, not vice versa. Purves and Lotto speak of what the ambiguous stimulus "turned out to signify in past experience." But exactly how did it turn out thus? WHAT IS THE SOURCE OF FEEDBACK THAT RESOLVES THE AMBIGUITY?"

"Learning" proponents consistently fail to acknowledge, let alone attempt to answer, this last question. As I point out on PubPeer, if touch helps us to learn to see, then the wide use of touchscreens by children should presumably compromise 3D perception, since the tactile feedback is presumably indicative of flatness at all times.

The confusion is evident in Guy's reference to the "trusted cue – occlusion implying depth." Again, there is a naïve use of the term "occlusion." Obviously, the image observers see on the screen isn't occluded, it's just a pattern of colored points. With respect to both the screen and the retinal stimulation, there is no occlusion because there are no objects. Occlusion is a perceptual, not a physical, fact as far as the proximal stimulus is concerned. So the cue itself is an inferred construct intimately linked to object perception. So we're forced to ask, what cued the cue…and so on, ad infinitum. Ultimately, we're forced to go back to brass tacks, to tackle figure-ground organization via general principles of organization. Even if we accepted that there could (somehow) be unambiguous cues, we would still have the problem that each retinal image is unique, so we would need a different cue - and thus an infinite number of cues - to handle all of the ambiguity, which makes the use of "cues" redundant.

So the notion that "one might not need much to allow a self-organising system of cues to rapidly 'boot-strap' itself into a robust system in which myriad sensory cues are integrated optimally" is clearly untenable if we try to actually work through what it implies. The concept of 'cue recruitment' throws up a lot of concerns only because even its provisional acceptance requires that we accept unacceptable assumptions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 21, Guy M Wallis commented:

Lydia raises an important question. Surely we can't learn everything! We need something to hang our perceptual hat on to get the ball rolling. After all, in the experiments described in our paper, Ben and I relied on the presence of a trusted cue - occlusion implying depth - to allow the observer to harness the new cue which we imposed - arm movement. But where did knowledge of the trusted depth cue come from? Did we have to learn that too? Well, there is considerable empirical evidence that humans do acquire knowledge of how depth cues work from experience. We observe objects as we approach or touch them and learn about how they change in size, aerial perspective, linear perspective, etc. But it also seems likely that some cues have been acquired in phylogenetic time due to their reliability and utility. The apparently in-built assumption that lighting in a scene comes from above and the left may be an example of this. In the end, though, one might not need much to allow a self-organising system of cues to rapidly 'boot-strap' itself into a robust system in which myriad sensory cues are integrated optimally.

      Lydia and my co-author, Benjamin Backus, have been engaged in a lively and informative exchange on PubPeer which I recommend to those interested in this debate. The concept of cue recruitment throws up a lot of concerns and queries.

      https://pubpeer.com/publications/2622B45C885243AFCB5C604CB0638B


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 12, Lydia Maniatis commented:

      It occurs to me that the "cue recruitment theory" is susceptible to the problem of infinite regress. If percepts are by their nature ambiguous, and require "cues" to disambiguate, then aren't the cues, which are also perceptual articles, also in need of disambiguation? Don't we need to cue the cue? And so on....


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Nov 12, Lydia Maniatis commented:

      Two (probably) final points regarding the authors' conclusion quoted below:

      "In conclusion, the present study presents evidence that a voluntary action (arm movement) can influence visual perceptual processes. We suggest that this relationship may develop through an already functional link between motor behavior and the visual system (Cisek & Kalaska, 2010; Fagioli et al., 2007; Wohlschläger & Wohlschläger, 1998). Through the associative learning paradigm used here, this relationship can be modified to enable arbitrary relationships between limb movement and perceived motion of a perceptually ambiguous stimulus. "

      First, most stimuli are not perceptually ambiguous (i.e. they are not bistable or multistable), so the relevance of this putative finding is questionable in practice, and would require much more development in theory.

      Second, the claim that it is possible to construct "arbitrary relationships between limb movement and perceived motion of a perceptually ambiguous stimulus" is a radical behaviorist claim, of a type that has consistently been falsified both logically and empirically.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Nov 12, Lydia Maniatis commented:

      The degree of uncertainty incorporated into this study in the form of confounds means that the claims at the front end carry no weight.

Essentially, the authors apparently are employing a forced-choice paradigm. (They don't refer to it as such, but rather as a "dichotomous perceptual decision.") Their stimulus is bistable, unstable, briefly presented, and temporally decaying, and the response relies on memory, as it occurs after the image has left the screen. Their training procedure likely produces expectations that may bias outcomes.

      The highly unstable nature of the Necker cube, even in static form, is self-evident. I don’t know if this is mitigated by motion, but I doubt it. I would expect the uncertainty to be even greater when the square face of the figure isn’t in a vertical/horizontal orientation.

      In their discussion, the authors address the possibility of response bias in their study: “Firestone and Scholl (in press)…include a section on action-based influences on perception. The authors argue that much of this literature is polluted with response bias and that suitable control studies have undermined many of the earlier findings.”

      Wallis and Backus counter this possibility with a straw man. “If participants were trying to respond in a manner they thought we might expect, there is no reason why they would not have done so in the passive conditions…”

      However, the question isn’t only whether participants were trying to meet investigator expectations, but whether they had developed expectations of their own based on the “training” procedures.

In the so-called passive training condition, an arrow, either congruent or incongruent, was associated with the rotation of a disambiguated Necker cube. However, in this condition observers have no incentive to pay attention to this peripheral form and its connection with the area of interest. In the active condition, in contrast, it is necessary to attend to the arrows and to act on them. This obligation to act on the arrows while observing the figure ensures that attention is paid to their connection with cube rotation.

      The conceptual and methodological uncertainty is compounded by the fact that the authors themselves can’t explain (though they presumably expected it) the failure of the arrows alone to produce a perceptual bias. As with the previous issue, they dispense too casually with the problem:

"So why did the participants in the passive conditions show little or no cue recruitment? As mentioned in the Introduction, Orhan et al. (2010) have argued that there must be a mechanism for determining which cues can be combined to create a meaningful interpretation of the sensory array. In the context of this study it would appear that passive viewing of the rotating object and the contingent arrows, does not satisfy this mechanism's requirements. This is perhaps because the arrows are regarded as extrinsic to the stimulus and hence unfavored for recruitment (Jain et al., 2014)."

This is as weak and evasive an argument as could possibly be made in a scientific paper. The authors ask why the arrow "cue" itself didn't have an effect. They answer that it didn't have an effect because it doesn't satisfy the unknown requirements of an unknown mechanism that is nevertheless presumed to exist. So if a putative cue "works," it proves the mechanism exists, and if a putative cue doesn't work, it shows the mechanism is uninterested in it. Thus the cue theory is a classic case of an unfalsifiable, untestable proposition. It is merely assumed and data uncritically interpreted in that light.

The bottom line here is that the failure of the arrows to act as "cues" contradicts the investigators' predictions, and they don't know why. This begs the question of why they planned an experiment containing what, at the beginning, they must have considered a serious confound. The failure of the arrows to cue the percept constitutes a serious challenge to their underlying assumptions, and needs to be addressed.

      The authors’ further rationalization, that “This is perhaps because the arrows are regarded as extrinsic to the stimulus and hence unfavored for recruitment” begs the question, regarded as extrinsic by whom? The conscious observer? This leads, again, to the possibility of response bias.

      But Wallis and Backus have their own response bias to the suggestion of response bias in their subjects: “We regard cue-recruitment as a cognitively impenetrable, bottom-up process….”

      Thinking this is one thing, corroborating it another. The use of perceptually unstable stimuli producing temporally limited effects reliant on memory and forced choice responses isn’t a method designed to guard against potential response bias, but rather one that offers fertile ground for it. The convenience of dichotomous responses for data analysis can’t offset these disadvantages.

      Short version: The possibility of response bias has in no way been excluded.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 01, Amy Donahue commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 01, Amy Donahue commented:

I am using the information in this article for a presentation on assisted reproduction technology, and it's very helpful. I appreciate that it's open access, too! But as a medical librarian, I couldn't help but note that the search strategy could be improved. Even excluding articles published after 8/1/16, the following strategy retrieves more articles than the authors noted finding:

      ((("Oocytes"[Majr] OR oocyte*[tiab])) AND ("Cryopreservation"[Majr] OR freez*[tiab] OR "Vitrification"[Majr] OR vitrif*[tiab])) AND ("Pregnancy"[Mesh] OR pregnan*[tiab] OR survival[tiab] OR birth[tiab] OR "quality embryo"[tiab] OR "quality embryos"[tiab] OR "embryo quality"[tiab] OR "viable embryo"[tiab] OR "viable embryos"[tiab])

Not limiting to humans yields roughly 1,500 results; limiting to humans (which is helpful, but does restrict results to MEDLINE-indexed articles and excludes some human studies that just aren't indexed as such; maybe that was the authors' intent) brings it down to almost 1,000.
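
As a side note for readers who want to check such counts themselves, the strategy above can also be submitted programmatically. The sketch below is only illustrative: it assumes the NCBI E-utilities esearch endpoint, uses an arbitrary lower date bound, and the count it returns may differ slightly from a PubMed web search run on a given day.

import json
import urllib.parse
import urllib.request

# The search strategy quoted above, passed verbatim as the PubMed query term.
strategy = (
    '((("Oocytes"[Majr] OR oocyte*[tiab])) AND ("Cryopreservation"[Majr] OR freez*[tiab] OR '
    '"Vitrification"[Majr] OR vitrif*[tiab])) AND ("Pregnancy"[Mesh] OR pregnan*[tiab] OR '
    'survival[tiab] OR birth[tiab] OR "quality embryo"[tiab] OR "quality embryos"[tiab] OR '
    '"embryo quality"[tiab] OR "viable embryo"[tiab] OR "viable embryos"[tiab])'
)

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": strategy,
    "datetype": "pdat",       # filter on publication date
    "mindate": "1900/01/01",  # arbitrary lower bound (assumption)
    "maxdate": "2016/08/01",  # cutoff mentioned above
    "retmax": 0,              # only the hit count is needed, not the record IDs
    "retmode": "json",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as response:
    count = json.load(response)["esearchresult"]["count"]

print("PubMed records matching the strategy:", count)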

      Additionally, searches should probably be done in other databases, not just PubMed (and note that Medline is the subset of articles in PubMed that are indexed with MeSH terms), for the sake of being comprehensive, although that certainly adds time and effort to screening and deduplicating the results (but librarians can also help with that). There should be librarians at some of the authors' institutions, if not all - getting some search help next time would make your work even stronger.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 08, Christopher Tench commented:

Could you possibly provide the coordinates analysed? Otherwise it is difficult to interpret the results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 18, Jack Gilbert commented:

We have been following some of the comments about this paper and accept that the wording of parts of our paper could be interpreted in ways we did not intend and that do not reflect the work performed. We want to make it clear that for this paper we made predictions about nitrate, etc., based on analysis of rRNA amplicon sequences and matching them to known genomes. We did not directly measure these genes involved in nitrate metabolism (nitrate reductase, nitrite reductase, etc.), or know for certain that the strains present in the samples have such functions (although they are widely distributed in the matching phylogenetic groups). Some of the wording (e.g., of the title in the abstract) did not come across as we intended, and could be interpreted as implying that we made direct measurements. We want to note that we believe the predictions we made are useful, but acknowledge that they have limitations. We also want to stress that to test these hypotheses and advance clinical practice, we would need to perform extensive validation through intervention studies in carefully controlled clinical populations, which is obviously considered beyond the scope of the Observation format. However, we are currently performing ongoing studies that we believe will advance this research, including some work based on public comments made about the lack of validation of the specific claims of the paper.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 28, Lydia Maniatis commented:

      Cherniawsky and Mullen’s (2016) article lies well within the perimeter of a school of thought that, despite its obvious intellectual and empirical absurdity, is popular within the vision science community.

      The school persists, and is relentlessly prolific, because it has insulated itself from the possibility of falsification, mainly by ignoring both fact and reason.

      Explanatory schemes are concocted with respect to a narrow set of stimuli and conditions. Data generated under this narrow set of conditions are always interpreted in terms of the narrow scheme of assumptions, via permissive post hoc modeling. When, as here, results contradict expectation, additional ad hoc assumptions are made with reference to the specific, narrow type of stimuli used, which then, of course, may subsequently be corroborated, more or less, using those same stimuli or mild variants thereof.

      The process continues ad infinitum via the same ad hoc route. This is the reason that, as Kingdom (2011) has noted, the study of lightness, brightness and transparency (and I would add, vision science in general) is divided into camps “each with its own preferred stimuli and methodology” and characterized by “ideological divides.“ The term “ideological” is highly appropriate here, as it indicates a refusal to face facts and arguments that contradict or challenge the preferred view. It is obviously antithetical to the scientific attitude and, unfortunately, very typical of virtually all of contemporary vision science.

The title of this paper, "The whole is other than the sum...", indicates that a prediction of "summation" failed even under the gentle treatment it received. The authors don't quite know what to make of their results, but a conclusion of "other" is enough by today's standards.

      The ideological camp to which this article belongs is a scandal on many counts. First, it adopts the view that there are certain figures whose retinal projections trigger visual processes such that the ultimate percept directly reflects local “low-level” processes. More specifically, it reflects “low-level” processes as they are currently (and crudely) understood. The figures supposed to have this quality are those for which the appropriate “low-level” story du jour has been concocted.

The success of the method is well-described by Graham (1997, discussed in PubPeer), who notes that countless experiments were "consistent" with the behavior of V1 neurons at a time when V1 had only begun to be explored and when researchers were unaware not only of the complexities of V1 but also of the many hierarchically higher levels and processes that intervene between retina and percept. This amazing success is rationalized (if we may use the term loosely) by Graham, who with magical thinking reckons that under certain conditions the brain becomes "transparent" down to the initial processing levels. Teller (1984) had earlier (to no apparent effect) described such a view as "the nothing mucks it up proviso," and pointed out the obvious logical problems.

      Cherniawsky and Mullen premise their article on this view with their opening sentence: “Two-dimensional orthogonal gratings (plaids) are a useful tool in the study of complex form perception, as early spatial vision is well described by responses to simple one-dimensional sinusoidal gratings…” In fact, the “one-dimensional sinusoidal gratings” in question typically produce 3D percepts of light and shadow, and the authors’ plaids in Figure 1 appear curved and partially obscured by a foggy overlay. So as illogical as the transparent brain hypothesis is to begin with, the stimuli supposed to tap into lower level processes aren’t even consistent with a strictly “low-level” interpretive process.

The uninitiated might wonder why the authors use the term "spatial vision." It is because they have uncritically adopted the partner of the transparent brain hypothesis, the view that the early visual processes perform a Fourier analysis on the retinal projection. It is not clear that this is at all realistic at the physiological level, but there is also no apparent functional reason for such a challenging process, as it would in no way further the achievement of the goal of organizing the incoming light into figures and grounds as the basis for further interpretation leading to a (usually) veridical representation of the environment. The Fourier conceit is, of course, maintained by employing sinusoidal gratings while ignoring their actual perceptual effects. That is, the sinusoidal gratings and combinations thereof are said to tap into the low-level frequency channels, which then determine contrast via summation, inhibition, etc. (whatever post hoc interpretation the data of any particular experiment seem to require). These contrast impressions, though experienced in the context of, e.g., impressions of partially-shadowed tubes, are never considered with respect to these complex 3D percepts. Lacking necessary interpretive assumptions, investigators are reduced to describing their results in terms of "other," precisely described, but theoretically unintelligible and tangled effects.

      The idea that “summation” of local neural activities can explain perception is contradicted by a million cases, and counting, including the much-loved sinusoidal gratings and their shape-from-shading effects. But ideology is stronger and, apparently, good enough for vision science today.

      Finally, the notion of “detectors” is a staple of this school and the authors’ discussion; for a discussion of why this concept is untenable, please see Teller (1984).

P.S. As usual, I'll ask why it's OK for an author to be one of a small number of subjects, the rest of whom are described as "naïve." If it's important to be naïve, then…

      Also, why use forced choices, and thus inject more uncertainty than necessary into the results? It’s theoretically possible that observers never see what you think they’re seeing…Obviously, if you’re committed to interpreting results a certain way, it’s convenient to force the data to look a certain way…

      Also, no explanation is given for methodological choices, e.g. the (very brief) presentation times.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 25, Christopher Miller commented:

      The clonEvol package has changed slightly, requiring an update to the "run.R" example script contained in Additional File 2. The updated script can be found here: https://gist.github.com/chrisamiller/f4eae5618ec2985e105d05e3032ae674


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 11, Martin Mayer commented:

      Cut the fat: Putting the risks of hypertriglyceridemia into context

      A brief response to “Nonfasting mild-to-moderate hypertriglyceridemia and risk of acute pancreatitis”

      In their article, Pedersen and colleagues present findings from their prospective cohort study on hypertriglyceridemia and its association with both acute pancreatitis and myocardial infarction.<sup>1</sup> With a median follow-up of 6.7 years (interquartile range, 4.0 to 9.4 years) among 116,550 "white individuals of Danish descent from the Danish general population"<sup>1(p1835)</sup> selected randomly from two similar prospective studies (the Copenhagen City Heart Study and the Copenhagen General Population Study), this is a sizable study with respectable follow-up, even if generalizability of the findings might be at least somewhat limited. They rightly note “there is no consensus on a clear threshold above which triglycerides are associated with acute pancreatitis,”<sup>1(p1835)</sup> and others have highlighted important issues with the evidence base.<sup>2</sup> Pedersen and colleagues also cite a review<sup>3</sup> on triglycerides and cardiovascular disease, but here too the evidence is not entirely clear; the review only concludes evidence “is increasing”<sup>3(p633)</sup> and recommends high-intensity statin therapy. The review also considers the future potential of add-on triglyceride-lowering therapy for those already on a statin, pointing to two ongoing trials of ω-3 fatty acids (REDUCE-IT and STRENGTH). However, the currently-available evidence - particularly that with patient-relevant outcomes - does not support such a strategy for ω-3 fatty acids or other agents that can substantially lower triglycerides (such as fibrates and niacin).<sup>2,4,5</sup>

      Even if their study reflects an underlying truth, Pedersen and colleagues unfortunately demonstrate a relative inattention to absolute risks and the implications thereof. They devote a small amount of text to absolute risks and report absolute numbers in the figures, but they repeatedly state their findings show “high risk” for acute pancreatitis, a perspective seemingly driven by the magnitude of the hazard ratios (HRs). In their concluding statements, they even remark: “Mild-to-moderate hypertriglyceridemia at 177 mg/dL (2 mmol/L) and above is associated with high risk of acute pancreatitis in the general population, with HRs higher than for myocardial infarction.”<sup>1(p1841)</sup>

      When caring for individual patients, relative metrics such as HRs are most useful when appropriately applied to corresponding baseline absolute risks. Conversely, disproportionate focus on relative metrics or failure to adequately contextualize relative metrics with corresponding absolute risks is considerably less informative and can contribute to a distorted sense of reality. Even if one accepts research findings as being likely reflective of an underlying truth, one must always carefully appraise absolute risks to gain a finer appreciation of the quantitative implications of the research findings. This practice is still useful even if one finds weaknesses in methodology, as one can simply consider the estimates increasingly uncertain in a manner qualitatively proportional to the weaknesses in methodology. A tool customized for this study is available here (TinyURL: http://tinyurl.com/JAMAIMhypertrigcalctool).

      According to their own data, comparing the lowest triglyceride level group (<89 mg/dL or <1 mmol/L) to the highest triglyceride level group (≥443 mg/dL or ≥5 mmol/L), one finds an absolute risk difference (ARD) for acute pancreatitis of 0.93% over 10 years if using the absolute numbers reported in Figure 1 to estimate absolute risks, and an ARD of 2.05% over 10 years (95% confidence interval [CI], 0.73% to 4.99%) if using the absolute risk in the lowest triglyceride level group and the multivariable-adjusted HR estimate for the highest triglyceride level group (HR 8.7; 95% CI, 3.7 to 20). Repeating this for myocardial infarction, one finds an ARD of 5.6% over 10 years or an ARD of 5.08% (95% CI, 3.00% to 7.73%) over 10 years. This demonstrates at least one reason why it is important to put relative metrics into context: Although the HRs for acute pancreatitis may be “higher than for myocardial infarction”,<sup>1(p1841)</sup> the absolute risks and absolute risk differences are higher for myocardial infarction. Additionally, it is more informative to provide risk estimates in absolute terms than in relative terms. Indeed, as aforementioned, absolute risks give better insight into what research might mean for a patient if one accepts the findings as being reflective of an underlying truth. Unfortunately, The New York Times' coverage of the study exacerbates the issue, with the only attempt to contextualize the relative metrics being a quote from one of the study’s authors. (Such mishandling of evidence is not uncommon in the media, but that is not the focus of this commentary. Including The New York Times’ coverage is not meant to single them out as uniquely bad or good in this regard; it simply serves as an example.) It is ultimately a disservice to say the risk of pancreatitis was 770% higher in patients with triglycerides ≥443 mg/dL (≥5 mmol/L) compared to patients with triglycerides <89 mg/dL (<1 mmol/L) without contextualizing such a metric with absolute risks. More technically, and as discussed in the tool, HRs are also not quite the same as relative risks.
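
To make the arithmetic behind such figures easy to check, here is a minimal sketch of the conversion from a baseline absolute risk and a hazard ratio to an absolute risk difference. It assumes proportional hazards and the usual survival-based conversion, and the baseline 10-year risk used below is purely illustrative; the study's actual figures, or the customized tool linked above, should be used for real estimates.

# Convert a baseline absolute risk plus a hazard ratio into an absolute risk difference,
# using the survival-based relation: risk_exposed = 1 - (1 - risk_baseline) ** HR.
def absolute_risk_difference(risk_baseline, hazard_ratio):
    risk_exposed = 1.0 - (1.0 - risk_baseline) ** hazard_ratio
    return risk_exposed, risk_exposed - risk_baseline

# Illustrative only: a hypothetical 10-year baseline risk of acute pancreatitis of 0.3%
# in the lowest triglyceride group, combined with the reported HR of 8.7 (95% CI, 3.7 to 20).
baseline_risk = 0.003
for hr in (3.7, 8.7, 20.0):
    risk, ard = absolute_risk_difference(baseline_risk, hr)
    print(f"HR {hr:>4}: risk in highest group {risk:.2%}, absolute risk difference {ard:.2%}")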

Lastly, while management was not a focus of Pedersen and colleagues’ article, sensible lifestyle changes should be emphasized wherever poor lifestyle factors exist. As for interventions beyond lifestyle changes, a medication that can reduce cardiovascular risk – such as a statin – might be instituted after shared decision-making concerning a person’s cardiovascular risk estimate; importantly, however, a person’s cardiovascular risk estimate is not dependent on triglyceride levels, and pharmaceutical intervention targeted at lowering triglycerides per se is not clearly supported by currently-available evidence examining cardiovascular, pancreatic, or other patient-relevant outcomes.

      References

      (1) Pedersen SB, Langsted A, Nordestgaard BG. Nonfasting mild-to-moderate hypertriglyceridemia and risk of acute pancreatitis. JAMA Intern Med. 2016 Dec 1;176(12):1834-1842. doi: 10.1001/jamainternmed.2016.6875.

      (2) Lederle FA, Bloomfield HE. Drug treatment of asymptomatic hypertriglyceridemia to prevent pancreatitis: where is the evidence? Ann Intern Med. 2012 Nov 6;157(9):662-664. doi: 10.7326/0003-4819-157-9-201211060-00011.

      (3) Nordestgaard BG, Varbo A. Triglycerides and cardiovascular disease. Lancet. 2014;384(9943):626-635.

      (4) Rizos EC, Ntzani EE, Bika E, Kostapanos MS, Elisaf MS. Association between omega-3 fatty acid supplementation and risk of major cardiovascular disease events: a systematic review and meta-analysis. JAMA. 2012 Sep 12;308(10):1024-1033. doi: 10.1001/2012.jama.11374.

      (5) Keene D, Price C, Shun-Shin MJ, Francis DP. Effect on cardiovascular risk of high density lipoprotein targeted drug treatments niacin, fibrates, and CETP inhibitors: meta-analysis of randomised controlled trials including 117,411 patients. BMJ. 2014 Jul 18;349:g4379. doi: 10.1136/bmj.g4379. (Note about this reference: Although the title implies focus on HDL as a therapeutic target, this study nevertheless provides meaningful insight into whether there is any cardiovascular or mortality benefit from adding either niacin or a fibrate to statin therapy, and both these agents can substantially lower triglycerides.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 26, Su-Fang Lin commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 26, Su-Fang Lin commented:

      Now the link in Oncotarget is back.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Jan 18, Stephen Maher commented:

      Comments for this article in PubPeer (https://pubpeer.com/publications/27816970) suggest that the statistical data and some of the figures in this article are exactly the same and therefore unsubstantiated. As of late November 2016, the article can no longer be found on the Oncotarget website. No retraction has been reported.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 08, Peter Hajek commented:

One problem with interpretation is that in these studies, very few if any people actually stopped smoking. The provision of stop smoking treatments (as opposed to actually stopping smoking) does not seem to undermine concurrent substance use treatments, but the questions of whether actually stopping smoking helps with or undermines concurrent efforts to stop using other drugs, and whether sequential treatments yield better results than doing this concurrently, have not been well answered so far.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 23, Harri Hemila commented:

      Vitamin E may increase and decrease all-cause mortality in subgroups of males

      Galli F, 2017 claimed that supplementation with vitamin E may have no effect on all-cause mortality even at supra-nutritional doses. They did not consider the strong evidence from the ATBC Study, which indicates that the effects of vitamin E on all-cause mortality appear to be heterogeneous.

      The ATBC Study investigated 29 133 male smokers, and Hemilä H, 2009 showed that the effect of vitamin E on all-cause mortality was simultaneously modified by age and dietary vitamin C intake with P = 0.0005 for the test of heterogeneity. Vitamin E had no influence on mortality in males who had a low dietary intake of vitamin C. However, among males who had a high intake of vitamin C, supplementation with vitamin E increased mortality by 19% among those who were 50-62 years at the baseline of the trial, whereas it decreased mortality by 41% among those who were 66 years and older. The decrease in mortality amongst the oldest participants suggested that vitamin E might increase life span, and indeed, men that were administered vitamin E lived for half a year longer at the upper end of the follow-up age range, see Hemilä H, 2011.

Galli F, 2017 further claimed that vitamin E intake is unlikely to affect mortality regardless of dose, and they referred to the Bayesian meta-analysis on vitamin E by Berry D, 2009. However, Galli et al. overlooked that the Bayesian meta-analysis was based on between-trial analysis, whereas the evidence for heterogeneity in the vitamin E effect in the ATBC Study was based on individual participant level analysis, a much more reliable approach (Hemilä H, 2009). Between-study analysis may suffer from the ecological fallacy. Galli et al. also disregarded other detailed criticisms of the Berry et al. meta-analysis on vitamin E by Greenland S, 2009 and Miller ER 3rd, 2009.

Galli F, 2017 concluded that since an indiscriminate vitamin E supplementation is not supported by the available evidence, future efforts are necessary to establish biomarkers and selection criteria to predict who is likely to benefit from vitamin E supplementation. However, the ATBC Study analyses indicate that age and responses to lifestyle questionnaires may characterize people who benefit from vitamin E administration. It seems illogical therefore that the variables already identified in the ATBC Study analyses were not considered in the review by Galli et al.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 18, Mohamed Fahmy commented:

      not commonly recognised, but significantly important anomalies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 09, Yang K Xiang commented:

In their commentary, Dr. Santulli has noted the differences in glucose tolerance tests between two studies (1, 2). In our study, 5-6 week old WT and β2AR -/- mice were fed a high-fat diet (60% fat) for 6 months; both strains develop diabetes and glucose intolerance when compared to animals of the same genotypes fed a control chow (10% fat). We did not observe differences in glucose homeostasis between WT and β2AR -/- mice fed the control chow. This contrasts with the Santulli study, in which the β2AR -/- strain develops diabetes and glucose intolerance at 6 months of age when fed a chow diet. Several factors may contribute to the differences in diabetic phenotypes observed.

1. In the Santulli study, β2AR -/- mice were backcrossed to the C57Bl6/N strain. In our study, the β2AR -/- line is backcrossed into the C57Bl6/J strain.
2. Our study used a defined control chow with 10% fat whose composition, with the exception of fat and sucrose content, matched that of the high-fat diet. The possibility therefore exists that the "chow" diet in the Santulli study, whose composition is not described in detail, could contribute in part to some of the metabolic changes observed. In addition, our study does not exclude that β2AR -/- mice may have metabolic issues relative to WT after feeding with the defined control chow.

The primary focus of our work was to understand the cardiac response to obesity and long-term hyperinsulinemia. In this regard, the β2AR -/- mice on a high-fat diet developed hyperglycemia and hyperinsulinemia, which therefore enabled us to determine if the absence of β2ARs in the heart could modulate the cardiac maladaptation that develops in wild-type animals. We reported fasted insulin concentrations to demonstrate the existence of hyperinsulinemia in response to high-fat feeding. However, we did observe, in data not presented in the manuscript, that insulin concentrations in β2AR -/- mice after intraperitoneal administration of glucose were statistically lower than those in high-fat-fed WT mice, suggesting a reduced insulin release from islets, consistent with the conclusions of the Santulli study. The Muzzin study mentioned in the commentary used an animal with complete absence of all three β-adrenergic receptors, and as such caution is advised in comparing that model to mice with selective loss of the β2AR.

A study published by Jiang and colleagues was also discussed, which reported that β2AR -/- mice display a diabetic retinopathy phenotype. Although the authors of this study did not provide background information on glucose and insulin levels, they suggest that β-adrenergic signaling is essential for maintaining retinal Müller cell viability. Thus the observed retinopathy might not be related to diabetes per se. Taken together, these data suggest that β2AR signaling is associated with glucose metabolism and its complications, which may be modulated in a tissue-specific manner in diabetes. Ultimately, transgenic approaches with tissue-specific deletion of β2AR may offer more insight into the underlying mechanism of these tissue-specific phenotypes.

      Reference

      1) Inhibiting Insulin-Mediated β2-Adrenergic Receptor Activation Prevents Diabetes-Associated Cardiac Dysfunction. Circulation. 2017;135:73-88.

      2) Age-related impairment in insulin release: the essential role of β2-adrenergic receptor. Diabetes. 2012;61:692-701.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 28, Gaetano Santulli commented:

      In the present article, Wang, Liu, Fu and colleagues report that β2-adrenergic receptor (β2AR) plays a key role in hyperinsulinemia-induced cardiac dysfunction (1). Overall, the data are very interesting and compelling. However, we noticed that in this paper β2AR-/- mice do not exhibit glucose intolerance; in fact, they seem to have a response to intraperitoneal glucose that is even better than wild-type mice (though a statistical analysis comparing these two groups is not provided). Although surprisingly not reported by the Authors, mounting evidence indicates that the deletion of β2AR has detrimental effects on glucose metabolism (2-4). Indeed, we have demonstrated that β2AR-/- mice display impaired insulin release and significant glucose intolerance (2). Muzzin and colleagues found that the ablation of βARs mechanistically underlies impaired glucose homeostasis (3). Other groups have confirmed these results, also showing that β2AR-/- mice develop diabetic-related microvascular complications (i.e. retinopathy)(4). Nonetheless, the Authors fail to at least discuss previous relevant literature describing the alterations in glucose metabolism observed in β2AR-/- mice and do not accurately circumstantiate their findings. Furthermore, the Authors do not provide any measurement (not in vivo nor in isolated islets) of insulin levels following glucose challenge, showing just baseline serum levels. We believe that for the sake of scientific appropriateness the Readers of Circulation will appreciate a clarification, in particular regarding the fact that pertinent literature in the field has been overlooked.

      A formal e-Letter has been published by Circulation.

      Competing Interests: None.

References

1) Inhibiting Insulin-Mediated beta2-Adrenergic Receptor Activation Prevents Diabetes-Associated Cardiac Dysfunction. Circulation. 2017;135:73-88.

      2) Age-related impairment in insulin release: the essential role of β2-adrenergic receptor. Diabetes. 2012;61:692-701.

      3) The lack of beta-adrenoceptors results in enhanced insulin sensitivity in mice exhibiting increased adiposity and glucose intolerance. Diabetes. 2005;54:3490-5.

      4) Beta2-adrenergic receptor knockout mice exhibit a diabetic retinopathy phenotype. PLoS One. 2013;8:e70555.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 09, Serina Stretton commented:

Prasad and Rajkumar’s editorial on conflicts of interest (COI) published in the Blood Cancer Journal [1] explores how management of financial COI in academic oncology influences treatment decisions away from best patient care. We share Prasad and Rajkumar’s concerns about the potential negative influence of COI, irrespective of its source, but disagree that banning industry-funded professional medical writers (PMWs) is a reasonable or practical solution.

This year (2017) three leading professional organizations, the International Society for Medical Publication Professionals (ISMPP), the American Medical Writers Association (AMWA), and the European Medical Writers Association (EMWA), released a joint position statement reaffirming PMWs’ obligations to be transparent about their contributions and sources of funding, and to clearly delineate the respective roles of authors and PMWs [2]. Prasad and Rajkumar claim that publications written with assistance from industry-funded PMWs may not reflect authors’ views and that authors may feel unable to challenge inappropriate sponsor influence. These statements undermine the clear responsibilities and accountability that authors should uphold when publishing clinical data [3,4]. For example, as required by the International Committee of Medical Journal Editors [3] and upheld by the AMWA-EMWA-ISMPP joint position statement, authors must do all of the following: provide early intellectual input to a publication, be involved in the drafting, approve the final version for publication, and agree to be accountable for all aspects of the work. It is the latter two requirements that counter Prasad and Rajkumar’s premise that authors have little opportunity to control the content of the manuscript. In contrast, PMWs, who often do not meet authorship criteria, assist authors to disclose findings from clinical studies in a timely, ethical, and accurate manner; ensure that authors and sponsors are aware of their obligations; and document author contributions to the development of a publication [2,4]. To contribute value in these roles, PMWs regularly receive mandatory training on ethical publication practices from their employers and industry funders [5-7].

Of concern, Prasad and Rajkumar present misleading data to support their arguments for banning industry-funded PMWs. First, they state that “writing assistance” is common, citing prevalence data from a survey of honorary or ghost authorship by Wislar et al [8]. Honorary or ghost authorship occurs when an individual who merits authorship is excluded from the author byline. This is quite distinct from medical writers who do not meet authorship criteria and (i) declare their involvement in the acknowledgements (PMWs) [3] or (ii) keep their involvement hidden (ghostwriters) [9]. Indeed, the prevalence of ghostwriting in the Wislar et al survey was 0.2% of articles, far lower than the 21% cited by Prasad and Rajkumar for ghost authorship. Second, Prasad and Rajkumar state that ghost authorship in industry-funded trials is far worse, citing a study by Gøtzsche et al [10]. However, Gøtzsche et al used a nonstandard definition of ghost authorship by extending the definition to undeclared contributions (either as authors or in the acknowledgments) from individuals who wrote the trial protocol and those who conducted the statistical analyses.

      As acknowledged by Prasad and Rajkumar, there are multiple benefits to engaging a PMW in terms of time and readability [1]. More importantly, publications involving PMWs are of higher quality – they have a shorter acceptance time [11], are more compliant with international reporting guidelines [12, 13], contain significantly fewer non-prespecified outcomes [14], and have a lower rate of retraction due to misconduct [15] than publications without PMWs or with those that are not funded by industry. As such, it is entirely unreasonable to exclude PMWs as an option on the basis of their funding. We strongly advocate that PMWs are selected on the basis of a proven track record and commitment to ethical and transparent publication practices. In addition, we strongly recommend that authors become familiar with reporting guidelines and be aware of, and fully comply with their obligations and roles as authors.

      The Global Alliance of Publication Professionals (www.gappteam.org)

Serina Stretton, ProScribe – Envision Pharma Group, Sydney, NSW, Australia; Jackie Marchington, Caudex – McCann Complete Medical Ltd, Oxford, UK; Cindy W. Hamilton, Virginia Commonwealth University School of Pharmacy, Richmond; Hamilton House Medical and Scientific Communications, Virginia Beach, VA, USA; Art Gertel, MedSciCom, LLC, Lebanon, NJ, USA

      GAPP is a group of independent individuals who volunteer their time and receive no funding (other than website hosting fees from the International Society for Medical Publication Professionals). All GAPP members have held, or do hold, leadership positions at associations representing professional medical writers (eg, AMWA, EMWA, DIA, ISMPP, ARCS), but do not speak on behalf of those organisations. GAPP members have, or do provide, professional medical writing services to not-for-profit and for-profit clients.

      REFERENCES [1] Prasad V, Rajkumar SV. Blood Cancer J 2016;6:e489 [2] www.ismpp.org/assets/docs/Inititives/amwa-emwa-ismpp joint position statement on the role of professional medical writersjanuary 2017.pdf 2017 [accessed 08.06.17] [3] www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html; 2016 [accessed 08.06.17] [4] Battisti WP et al. International Society for Medical Publication Professionals. Good Publication Practice for communicating company-sponsored medical research: GPP3. Ann Intern Med. 2015;163(6):461-4 [5] www.ismpp.org/ismpp-code-of-ethics [accessed 08.06.17] [6] www.amwa.org/page/Codeof_Ethics [accessed 08.06.17] [7] Wager E et al. BMJ Open. 2014;4(4):e004780 [8] Wislar JS et al. BMJ 2011;343:d6128.4-7 [9] Stretton S. BMJ Open 2014;4(7): e004777. [10] Gøtzsche PC et al. PLoS Med 2007;4:0047-52 [11] Bailey, M. AMWA J 2011;26(4):147-152 [12] Gattrell W et al. BMJ Open. 2016;6:e010329 [13] Jacobs A. Write Stuff 2010;19(3):196-200 [14] Gattrell W et al. ISMPP EU Annual Meeting 2017 [15] Woolley KL et al. Curr Med Res Opin 2011;27(6)1175-82


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 28, Robert West commented:

This is a very useful article that will be of value to those working in other disciplines and interdisciplinary fields such as addiction. We are just at the beginning of an era in which we use ontologies and AI to build behavioural science. The Human Behaviour Change Project (www.humanbehaviourchange.org) is an ambitious attempt to take this forward, headed up by Prof Susan Michie with the collaboration of IBM, computer and information scientists at UCL, and behavioural scientists in the Universities of Aberdeen and Cambridge.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 10, Lydia Maniatis commented:

      The authors assert that their findings: “support the idea that there are density-selective channels in the visual system, and that perceived density is in part based on a comparison of these channel responses across space. “

Looking at the stimuli, I would suggest another interpretation. The areas with more closely grouped dots tend to be seen as figure, and the more dilute ones as ground. This is my impression. If this is the case, we should expect the dense areas to appear even denser, and the less dense areas to appear even less dense, because it has been understood since Rubin that figure appears more dense and ground less so: "According to Rubin (1915/1921), figures…adhere or cling together (are compact)…In comparison, the ground…has a 'loose' structure...." (Wertheimer/Spillman, Ed., 2012, On perceived motion and figural organization, MIT Press).

      There is no obvious functional rationale for positing “density channels” that compare the densities (how is this evaluated?) of adjacent or overlapping surfaces.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 04, Alessandro Rasman commented:

Increased levels of coagulation factors in Multiple Sclerosis: a defensive mechanism against microbleedings?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 12, Martine Crasnier-Mednansky commented:

      The statement by Vincent Detours "Our interest, as scientists and citizens, is to reject the totalitarian reduction of human activities to numbers, and adopt policies acknowledging the diversity of human talents and promoting individuals’ autonomy." is echoing the statement by Lewis Mumford "The test of maturity, for nations as well as individuals, is not the increase in power, but the increase of self-understanding, self-control, self-direction, and self-transcendence. For in a mature society, man himself and not his machines or his organizations is the chief work of art."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 09, Donald Forsdyke commented:

      Marketing in science

It is ironic that Vincent Detours' insightful analysis of the "managers" who outdo the "competent" comes at a time when the triumph of marketing over ability is so evident on the political scene. For any who might think this could not happen in science, two accounts of the career of Niels Jerne will perhaps provide helpful reading (1, 2).

1. Soderqvist T (2003) Science as Autobiography: The Troubled Life of Niels Jerne (Yale Univ. Press, New Haven).

2. Eichmann K (2008) The Network Collective: Rise and Fall of a Scientific Paradigm (Birkhauser, Berlin).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 07, Vincent Detours commented:

      Who wants to be a number?

Sinatra et al. present another metric to predict scientific impact. Like their predecessors, they fail to discuss the wider consequences of (i) equating impact and scientific quality, and (ii) reducing scientists' activity to a number.

The number of times a paper is cited—the basic quantity behind impact metrics—is in essence a measure of popularity, not a direct measure of truth and novelty. Intelligent thinking, together with the availability of resources required to work, effective communication and access to high-circulation venues, all contribute to popularity. Building impact often means less work in the lab and more networking with those in charge of science funding and dissemination, presence on social media, etc. In this context, the form of communication increasingly takes precedence over its content, aggravating the current reproducibility crisis.

Impact metrics routinely guide hiring and funding. This spares decision makers the hurdle and risk of exerting sound scientific judgment: they simply promote the most popular folks. It conveniently shortens debates arising from diverse expert viewpoints. And who can argue against it? Aren't the winners 'elected' by the community? As a result, scientifically unremarkable managers gain control at the expense of competent active scientists and, incidentally, the rich get richer. Yet the vast majority of scientists are losing control over resources and over their own destiny, being herded to the same 'high-impact' topics, for example.

The proliferation of impact metrics, social-network 'likes' and other audience measures fuels the increasing tyranny of rankings in society. While alienating and isolating individuals in narcissism and permanent competition, rankings ultimately benefit those who aggregate information and control communication. Our interest, as scientists and citizens, is to reject the totalitarian reduction of human activities to numbers, and adopt policies acknowledging the diversity of human talents and promoting individuals' autonomy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 25, Lydia Maniatis commented:

      "A simple alternative model is developed that is consistent with the results."

I.e., an ad hoc, data-fitted proposal is submitted that, as such, is experimentally uncorroborated and carries no theoretical weight. This is because experimental conditions must always be informed by theoretical assumptions (which they are supposed to test), so that the experiment can select which variables to hold constant, and which to allow to vary so as to test their hypothesized role. Post hoc explanations cannot distinguish between causal variables and confounds, and thus the experiments which they are evaluating in hindsight cannot serve as tests of a hypothesis so derived.

I suppose the authors will test their proposal in the future, but at the present stage it was not worth reporting. Testing it would, of course, require much more extensive and detailed conceptual development, precisely so that anyone wanting to test it could exercise the necessary experimental control over the presumed causal variables, and minimize researcher degrees of freedom, i.e. the degree of uncertainty, in interpreting results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 08, Seung Gyu Yun commented:

First, I agree with your opinion regarding the definition of saliva. We seem to have misrepresented saliva by confusing it with oral fluid.

I will request a correction from JCM as soon as possible.

Second, we do not know exactly whether the levels of virus in oral fluid would be sufficient to cause infection, because we did not perform viral culture testing on oral fluid. However, we generally found that virus levels in oral fluid were low. Perhaps your opinion is correct, but unfortunately I have not found accurate data on respiratory viral culture in oral fluid.

Thank you for your comment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jan 26, Ronald Eccles commented:

      Dear Authors

      I was very pleased to read your recent article in Virology- a very interesting project.

      I am often asked if saliva can be a source of common cold and influenza infections and I usually reply NO

      Respiratory viruses such as rhinovirus and influenza replicate in the respiratory epithelium of the nose, larynx and trachea, and do not replicate in the oral mucosa.

      However, your study clearly demonstrates that these respiratory viruses can be found in fluid samples taken from the oral cavity. I say fluid samples because saliva in the mouth may be contaminated with respiratory mucus on sneezing and coughing. The fluid you sampled in the mouth was not pure saliva.

      I agree that adenoviruses may replicate in the oral cavity but the respiratory viruses by definition do not.

      I believe that the levels of virus you have detected in oral fluid with your very sensitive PCR technique demonstrate the presence of viruses in oral fluid rather than saliva. It is a fine point, but in order to obtain pure saliva you would need to cannulate the parotid duct, and I doubt you would find any respiratory viruses in this pure saliva.

      I found your research very interesting and congratulate you on a very important project.

      One final point: do you believe the levels of virus you found in oral fluid would be sufficient to cause infection? I am doubtful, as your PCR technique would detect very small amounts of RNA or DNA, and the viral titre of infectious particles would be very low.

      I would be interested in your comments

      Kind Regards

      Professor Ron Eccles, Director, Common Cold Centre & Healthcare Clinical Trials, Cardiff School of Biosciences, Sir Martin Evans Building, Cardiff University, Museum Avenue, Cardiff CF10 3AX, United Kingdom



      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 19, Harri Hemila commented:

      Shortcomings in the meta-analysis on vitamin C and atrial fibrillation

      In their meta-analysis on vitamin C and postoperative atrial fibrillation (POAF), Baker WL, 2016 stated that they restricted their analysis to randomized trials; however, they included the Carnes CA, 2001 study, which was not randomized. Furthermore, their meta-analysis did not include data from three US trials, two of which were substantially larger than the included POAF trials; see Hemilä H, 2017.

      The conclusion by Baker WL, 2016 that vitamin C does have effects on POAF is reasonable, yet their analysis did not reveal the significant heterogeneity in the effects. Although five trials in the USA found no benefit, several studies in less wealthy countries have found a significant benefit of vitamin C against POAF, indicating that further research should be carried out in less wealthy countries; see Hemilä H, 2017.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 07, Peter Hajek commented:

      There was no significant association between respiratory symptoms and current vaping when controlling for smoking status, and this is the key analysis which should have been reported, rather than unadjusted results that reflect the fact that most vapers are smokers.

      Past experimentation with vaping remained linked to some symptoms, but it was also significantly associated with reduced wheezing. Given the absence of any effect of current vaping, these links are likely to be flukes, as there is no obvious mechanism for them. Here is a link to a more detailed critique of the way these findings were reported: http://www.ecigarette-research.org/research/index.php/whats-new/whatsnew-2016/248-bronch


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 21, MICHAEL SIEGEL commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 19, Helmi BEN SAAD commented:

      The correct name of the last author is: "Ben Saad H".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 24, Helmi BEN SAAD commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 30, Yuichi Hongoh commented:

      Dear Dr. Price, your comment is very helpful to us. I agree with your suggestions. Thank you very much.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Sep 25, Morgan Price commented:

      The authors suggest that D. trichonymphae cannot synthesize threonine or methionine, but I think pathways for synthesizing both amino acids are present. First, the only gap in threonine synthesis appears to be a missing homoserine kinase. In Desulfovibrio vulgaris Miyazaki F, a misannotated shikimate kinase (DvMF_0971) provides the missing homoserine kinase activity (https://www.biorxiv.org/content/early/2017/09/23/192971). RSDT_0983 from D. trichonymphae is over 50% identical to DvMF_0971 and is also probably a homoserine kinase. Second, I found plausible candidates for all steps in methionine synthesis. D. trichonymphae has putative genes for converting aspartate to homoserine (RSDT_0709, RSDT_1035, RSDT_0549), for activating homoserine (RSDT_0485), for sulfhydrylation of the activated homoserine (RSDT_0816), and for B12-dependent methionine synthase (RSDT_0316). Thus, D. trichonymphae contains the genes to synthesize threonine and methionine.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 06, Christopher Southan commented:

      The structure mapping analysis, including SciFinder intersects, has now been updated, one year on https://cdsouthan.blogspot.se/2017/09/osm-antimalarial-series-1-findability.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 02, Christopher Southan commented:

      Blog post on PubChem mappings for the structures https://cdsouthan.blogspot.se/2016/09/series-1-antimalarials-publication.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 23, Joshua L Cherry commented:

      NO EVIDENCE THAT SELECTION DRIVES SWITCHING

      This article claims that “the great majority of codon set switches proceed by two consecutive nucleotide substitutions…and are driven by selection”. The data in fact support a predominance of simultaneous switches that are not driven by selection. Even if we assume that sequential switches predominate, the implication that selection increases their rate by a factor of ~50 is unjustified. Moreover, selection against the non-serine intermediate is expected to decrease the rate of sequential switches, not to drive them. The authors’ argument to the contrary is analogous to arguing that a mountain range between two locations speeds the journey between them because it accelerates the downhill portion of the trip.

      Inappropriate standard of comparison

      The usual way to establish that an evolutionary process is driven by selection is to show that it is faster than some process that is largely unaffected by selection. The authors instead made comparisons to “expectations” derived from nonsynonymous substitution rates, which are greatly decreased by selection. This comparison cannot establish that switching is driven by selection, and would greatly overestimate any such effect.

      A more appropriate comparison would be to expectations derived from synonymous rates. These are several-fold higher than nonsynonymous rates, and “expectations” involve products of two rates. Thus, the claimed acceleration by selection mostly or entirely disappears with a proper standard of comparison.

      Unjustified rejection of simultaneous switching

      The authors reject a significant role for simultaneous double mutation in switching because the rate of switching is higher than the rate of analogous double changes in noncoding sequences by a factor of 5-10. This argument would be valid if non-coding regions were evolving nearly neutrally, but this is far from the case: rates of non-coding transversions (Fig. 3) are comparable to the rates of some nonsynonymous transversions (Fig. 2).

      I have determined that the rates of the relevant synonymous transversions are higher than the corresponding single-base non-coding changes by a factor of >5. This presumably reflects purifying selection in non-coding regions. The effect of selection on simultaneous tandem changes is expected to be larger. Thus, the excess of serine switches over non-coding tandem changes can easily be explained by selection in non-coding regions. Put differently, we can estimate a lower limit on the rate of simultaneous serine switches, and it corresponds to the majority of the observed switches.

      Slow- vs. fast-evolving genes

      The article claims to have shown that the rate of switching is “higher in conserved genes than in nonconserved genes in full agreement with the selection hypothesis”. The results (Fig. 6) in fact demonstrate just the opposite: the rate of switching in “nonconserved” genes (0.0032 or 0.0022) is about three times higher than that in “conserved” genes (0.0010 or 0.0008).

      The authors considered the ratio of the switching rate to a sum of products of nonsynonymous rates. This ratio is higher in “conserved” genes only because the nonsynonymous rates are lower in “conserved” genes. This is true almost by the definition of “conserved” (low dN/dS), and has nothing to do with serine codon switches.

      Theoretical expectation

      Under the simple selection scheme considered by the authors, selection will actually decrease the rate of sequential switching. After fixation of a deleterious Ser->Thr or Ser->Cys mutant, selection will indeed increase the fixation probability of a mutant that restores Ser. However, selection always decreases the fixation probability of the initial deleterious mutant by a larger factor. As illustrated here, the product of the two relative fixation probabilities, and hence the relative probability of a switch during a short interval, is always less than one (selection slows switching) for nonzero s, and it decreases monotonically and approaches zero as the strength of selection (|Nes|) increases.

      The above implicitly assumes weak mutation, but the same conclusion holds outside of this regime (Kimura, 1985).
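
      To make this concrete, the following minimal numerical sketch (illustrative only, not taken from the paper) uses the standard diffusion approximation in which the fixation probability of a new mutant, relative to a neutral one, is R(S) = S / (1 − exp(−S)) for scaled selection coefficient S (proportional to Ne·s); a sequential switch requires fixing the deleterious intermediate (−S) and then the compensating change (+S), so its rate relative to neutrality is R(−S)·R(S):

```python
# Illustrative check (assumed scaling and function names; not from the paper):
# the product R(-S) * R(S) is 1 only when S = 0 and shrinks as selection
# against the intermediate strengthens, i.e. selection slows, rather than
# drives, sequential codon-set switching.
import math

def relative_fixation(S):
    """Fixation probability of a new mutant relative to neutral,
    for scaled selection coefficient S (diffusion approximation)."""
    if S == 0.0:
        return 1.0
    return S / (1.0 - math.exp(-S))

for S in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]:
    ratio = relative_fixation(-S) * relative_fixation(S)
    print(f"S = {S:5.1f}   R(-S) * R(S) = {ratio:.4f}")
```

      The printed ratio equals 1 only at S = 0 and decreases monotonically toward zero as S grows, matching the argument above.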

      Conclusion

      Neither the data nor the authors’ model supports the claim that serine codon switching is driven by selection or has an especially “high frequency”. In fact, both data and theory point to the opposite conclusion.

      References

      Kimura, M (1985) The role of compensatory neutral mutations in molecular evolution. Journal of Genetics 64(1):7-19.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 03, Donald Forsdyke commented:

      CONSIDERATION OF NUCLEIC ACID LEVEL SELECTION?

      The authors set out "to investigate the evolutionary factors that affect serine codon set switches" (i.e. between TCN and AGY). Their "findings imply unexpectedly high levels of selection" (1). Indeed, the data strongly support the conclusion that codon mutations "are driven by selection." It is conjectured that the codon mutation "switch would involve as an intermediate either threonine ACN or cysteine TGY, amino acid residues with properties substantially different from those of serine, so that such changes are unlikely to be tolerated at critical functional or structural sites of a protein."

      However, it does not follow that the unsuitability of the interim amino acids drove the rapid tandem substitutions. Choice of "coincident codons" has long been seen as influenced by pressures acting at the nucleic acid level (2-4). These pressures evolve in parallel with, and sometimes dominate, protein pressures. One example is purine-loading pressure (3). If this cannot be satisfied by changes at third codon positions, then sometimes the organism must accept a less favorable amino acid. With serine codons, a change from TCN to AGY (i.e. first and second codon positions) can increase purine-loading pressure without compromising the amino acid that is encoded (see Ref. 3).

      1. Rogozin IB, Belinky F, Pavlenko V, Shabalina SA, Kristensen DM, Koonin EV (2016) Evolutionary switches between two serine codon sets are driven by selection. Proc Natl Acad Sci USA www.pnas.org/cgi/doi/10.1073/pnas.1615832113 Rogozin IB, 2016

      2. Bains W (1987) Codon distribution in vertebrate genes may be used to predict gene length. J Mol Biol 197:379-388. Bains W, 1987

      3. Mortimer JR, Forsdyke DR (2003) Comparison of responses by bacteriophage and bacteria to pressures on the base composition of open reading frames. Appl Bioinf 2:47-62. Mortimer JR, 2003

      4. Forsdyke DR (2016) Evolutionary Bioinformatics, 3rd edition (Springer, New York).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 15, Darko Lavrencic commented:

      There are three levels of research on intracraniospinal anatomy and fluids:

      1) Research at the basic level: intracraniospinal anatomy, cells, barriers, biochemical fluids exchange, flow patterns, composition, etc.

      2) Physiological intracraniospinal hydrodynamics

      3) Pathological hydrodynamic changes and anatomic adaptations

      The contemporary stage of research is still predominantly at the first level. The problems of the first level limit the solutions at the second and third levels.

      Hypothesis "The Intracraniovertebral Volumes, the Cerebrospinal Fluid Flow and the Cerebrospinal Fluid Pressure, Their Homeostasis and Its Physical Regulation" is at the second level of research: http://www.med-lavrencic.si/research/the-intracraniovertebral-volumes/.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 18, David Mage commented:

      Any candidate model for SIDS causation must explain the 50% male excess in SIDS and its 4-parameter lognormal age distribution. OMIM shows orexin, pPERK and ATF4 are all autosomal with no X-linkage involved. Therefore this line of research appears to be "barking up the wrong tree." Do the authors have any other explanation for the universal 0.61 male fraction of SIDS other than a recessive X-linkage or pure happenstance?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 27, Louise B Andrew MD JD commented:

      Medscape has published a synopsis of this article at www.medscape.com/viewarticle/869777. The principal author of this study has more recently published a formal study of licensure application questions, which is Open Access and can be found at https://www.ncbi.nlm.nih.gov/pubmed/28633174. They should really be read together. I have written a comprehensive article on Physician Suicide for Medscape: http://emedicine.medscape.com/article/806779. Medscape requires a free subscription. However, all materials and many more resources can be accessed freely through www.Physiciansuicide.com. If you care about mental health and physicians, please learn more and help to publicize and address it. A doctor a day (in the US alone) is too many to lose to an eminently treatable disease.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 11, Tania M O Abe commented:

      After reviewing the entire paper, we noticed an error in the data of the last column of Table 2. During the registration of information in Table 2, the last column mistakenly recorded an incorrect monthly number of deaths from myocardial infarction. The correction will be made this week. Since these are government data, they can be found at http://tabnet.datasus.gov.br/cgi/tabcgi.exe?sim/cnv/obt10SP.def


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 27, Clive Bates commented:

      The reporting of this study is highly misleading. The abstract confidently asserts:

      RESULTS: We observed a reduction in mortality rate (-11.9% in the first 17 months after the law) and in hospital admission rate (-5.4% in the first 3 months after the law) for myocardial infarction after the implementation of the smoking ban law.

      In fact, those with access to the full study will not find these observed reductions anywhere. Both myocardial infarction (heart attack) deaths and hospital admissions increased substantially after the smoking ban came into effect in August 2009. See graphs of the study data for myocardial infarction deaths and hospital admissions courtesy of Chris Snowden's blog post on these findings: Brazilian Smoking Ban Miracle.

      The supposed 'decrease' emerges from modelling that adjusts for other factors that may influence heart attacks to create a counterfactual (what would have been expected to happen without the smoking ban). These factors give predicted rates of heart attack deaths and hospital admissions over the period studied. But why would the predicted rates suddenly shoot up to levels unprecedented in the dataset, and so high that the large observed increases represent a decline compared to the even-higher prediction?

      The explanation given in the paper is as follows:

      The Autoregressive Integrated Moving Average with exogenous variables (ARIMAX) method was used to analyse the effect of the smoking ban law, modelled as a dummy variable, in the mortality rate and hospital admission rate data for myocardial infarction. The ARIMAX models were also adjusted to other parameters, including ‘total hospital admission’, CO, minimum temperature and air relative humidity. The ARIMAX method allows to estimate lag effects of input series and to forecast output series, as a function of a linear filter of the input series (transfer function) and of the noise (ARIMA filter) and by controlling for the autocorrelations. It enables us to compare the predicted rate of hospital admission and mortality with the real observed rate.

      But how these corrections are made and whether they are valid is barely justified in the paper; they are buried in the black-box model used and reported uncritically. Given that these adjustments reverse the observed effects, turning a sharp rise into a decline, surely the authors should have asked themselves harder questions and not just trusted the model and their choice of inputs to it. However, they do not even remark on this change in the sign of the effect in the paper, as if the actual observations are an embarrassment to be ignored rather than discussed. Yet this is the most striking feature of the paper. Had the authors wished to explain their work transparently, they could have plotted the counterfactual (the predicted values for deaths and admissions with no ban) against the actual admissions and shown the decrease that way. But that would have raised the question: what is causing the very steep predicted rise? Or raised the possibility of modelling error or rogue assumptions.
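
      To illustrate the general approach (this is a hypothetical sketch with synthetic data and invented parameter values, not the study's data or its actual model specification), an intervention analysis of this kind, with the ban entered as a dummy exogenous regressor and a counterfactual derived for comparison with the observed series, might look roughly like this:

```python
# Hypothetical sketch only: a synthetic monthly MI-rate series, a post-ban
# dummy as the exogenous regressor in a SARIMAX (ARIMAX-style) model, and
# a rough "no-ban" counterfactual obtained by removing the estimated ban
# effect from the fitted path. Names, dates and values are invented.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 60                                    # 60 months of synthetic data
t = np.arange(n)
ban = (t >= 36).astype(float)             # hypothetical ban from month 36
y = 10 + 0.05 * t - 0.8 * ban + rng.normal(0, 0.3, n)   # synthetic MI rate
data = pd.DataFrame({"mi_rate": y, "ban": ban})

model = SARIMAX(data["mi_rate"], exog=data[["ban"]],
                order=(1, 0, 0), trend="ct")
res = model.fit(disp=False)

# Because the dummy enters the model linearly, a simple counterfactual is
# the fitted series minus the estimated ban coefficient times the dummy.
counterfactual = res.fittedvalues - res.params["ban"] * data["ban"]

print("estimated ban effect:", res.params["ban"])
print(pd.DataFrame({"observed": data["mi_rate"],
                    "no_ban_counterfactual": counterfactual}).tail())
```

      Publishing a plot of the observed series against such a counterfactual, together with the data behind it, would make it immediately visible whether the claimed reduction rests on a plausible prediction or on an implausibly steep modelled rise.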

      Surely, confronted with this highly counterintuitive result, the editors and peer-reviewers should have demanded more explanation for the choice of confounding variables, a sensitivity analysis to flex whatever opaque assumptions have been made, publication of the data used to make adjustments, and a plausible narrative to explain the reversal of an increase to a decrease and the implicit massive underlying increase in background hospital admission and MI mortality rate that apparently coincided with the smoking ban. Finally, whatever the methodology, it is highly misleading to report these adjusted figures as an observed reduction in the abstract, especially with the faux precision of one decimal point.

      I would like to suggest the following rewording of the results for inclusion a revised abstract:

      RESULTS: We observed a substantial increase in mortality and hospital admissions for myocardial infarction after the implementation of the smoking ban law in Sao Paulo in August 2009. However, it is possible that other factors are responsible for this increase. After hand-picking a small number of possible confounding variables, and applying opaque statistical adjustments to account for their effect though without providing the data necessary for verification, we have been able to demonstrate that these increases could represent a modelled reduction in mortality attributable to the smoking ban (−11.9% in the first 17 months after the law) and in hospital admission rate (−5.4% in the first 3 months after the law).

      CONCLUSIONS: Hospital admissions and mortality rate for myocardial infarction were increased in the first months after the comprehensive smoking ban law was implemented. However, it is possible that factors other than the smoking ban accounted for some or all of this.

      One must be concerned about the role of the journal Tobacco Control. Is this journal really an easy conduit for admitting studies of dubious quality to the peer-reviewed literature simply because the findings appear to provide support for certain tobacco control policies? I would welcome the editors' comments as well as that of the authors.

      Please see original commentary from Dr Michael Siegel, Professor in the Department of Community Health Sciences, Boston University School of Public Health on his blog here and here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 17, José L Oliver commented:

      We are glad to announce that, after correcting the serious problem suffered by the backend over the past weeks, NGSmethDB is now running again: http://bioinfo2.ugr.es/NGSmethDB. Sorry for any inconvenience the downtime may have caused.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Aug 11, José L Oliver commented:

      The main backend of NGSmethDB, a no-SQL database coupled to an API server, has suffered a serious and, for the moment, unrecoverable problem and is therefore currently unavailable. Sorry for any inconvenience this may cause. We are applying our best efforts to recover it as soon as possible.

      In the meantime, we would like to remind users that NGSmethDB implements a second mode of access to the data: the NGSmethDB track hubs at UCSC, which are fully operative these days: http://bioinfo2.ugr.es:8080/NGSmethDB/data-access/ Track hubs, together with the coupled Table Browser and Data Integrator tools, provide standard and efficient ways to visualize, retrieve, combine and compare NGSmethDB data with any third-party annotation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Aug 03, Julia Romanowska commented:

      The link seems to lead to a nonexistent page. Is it just a short maintenance issue?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 03, Darren L Dahly commented:

      Because BMC has not actually linked the original reviews to this paper on their site, I have posted my original review here. Some but not all of my concerns were addressed in the final manuscript.

      Major Compulsory Revisions

      The paper is entirely exploratory and this should be emphasized by the authors. There are no specific hypotheses being tested here, nor are there any theory-based predictions that the observed results can be compared against. While exploratory analyses can of course be helpful, the utility here is seriously limited by the observational nature of the data, the crudeness with which key covariates are measured (by survey) and represented (by dichotomization), and the use of a modelling technique that will be unfamiliar to most readers. Consequently, almost any result obtained from this analysis could be explained, or explained away, fairly easily. Given the results obtained, the authors claim the research can inform our understanding of the aetiology and prevention of obesity, but notably fail to provide even one concrete example of how.

      My concerns are amplified in light of how the modelling of BMI SD scores has been reported in the paper. First and foremost, the paper doesn’t report the exact form of the growth mixture model (i.e. exactly which set of parameters is being estimated), or the values of any of the estimates obtained (nor any indication of the uncertainty in these estimates). It is thus not possible to fully evaluate the work that has been done, and this must be corrected before any final decision on the paper could be made.

      Based on what I can infer about the model from the text, I have some additional concerns for the authors that I hope are useful. The authors only state that the variances of the latent growth factors are “fixed.” This could mean they are fixed to any specific value, or that they are estimated but fixed to be equal across classes. Based on the text, I will assume they were fixed at zero. This means that 100% of the variance in BMI SD scores is explained by group membership, and the final model reported includes two groups, each with similar intercepts but different slopes (one increasing and one decreasing). The model being reported (assuming I am correctly guessing the exact form of the model) precludes any variation in the degree of these changes. Thus, if the model were a faithful, complete representation of how these children are growing, then the exact same grouping could be discovered by simply dividing the sample into children with increasing BMI SDs and those with decreasing scores. I find this hard to believe. Ironically, to justify the use of growth mixture modelling, the authors state that it is useful for better understanding heterogeneity in growth – but they then go on to describe a model that captures all of that heterogeneity with a binary classification. I predict that the variability in the BMI SD scores at any single time point is more informative than the binary classification resulting from the “complex” model being reported.

      The authors should report how differences in the exact ages of measurement were handled. There are several options. The authors might have assumed everyone was measured at the same age at times 1, 2, and 3, which could be a considerable source of error depending on the variances of the ages of measurement. It seems more likely they have used the Mplus time-scores option, and if so, this needs to be described along with other details of the model. The authors might have also smoothed the individual curves prior to modelling, in which case the details of the procedure used should be reported.

      The use of BMI SD scores could be problematic. On the one hand, it has the advantage of normalising the BMI measures and simplifying the functional form of the growth curve (though BMI change is fairly linear from the age of 4 years anyway). However, it introduces a new challenge to interpreting the result, as it is hard to distinguish the degree to which the model is describing changes in BMI within children over time from differences between the observed sample and the reference population.

      The two-stage modelling process employed is sub-optimal. It ignores the uncertainty inherent in the classification, and thus the subsequent standard errors of the estimated relationships between class membership and other covariates are artificially reduced. Mplus is very capable of estimating the models being reported here in a single model that avoids this limitation (and has several other options for relating class membership to covariates that are also likely more appropriate).

      The authors state that the “clinical interpretation” of the models was an important factor in determining the number of classes included in the final, reported models, but give no indication or example of what this term means in this specific context. This should be clarified.

      There are some serious limitations, independent of the growth mixture modelling, that bear consideration by the authors. The first is that there is no consideration of the role of puberty, or recognition of the distinction between developmental time vs calendar time. Second, there is no consideration of the children’s heights, and even if BMI is defined as a measure of mass that is roughly independent of height, it’s hard to say anything useful about a child’s growth while ignorant of how tall he/she is.

      The description of how missing data was handled is insufficient and I would point the authors to several guidelines that I hope they find helpful (doi 10.1186/1471-2288-12-96).

      The sample is not described in sufficient detail. At the very least, the overall response rate should be provided. I would suggest that the authors refer to an established reporting guideline (e.g. STROBE) to help avoid this kind of reporting error.

      The paper overemphasizes the novelty of this analysis based solely on the use of growth mixture models. There are hundreds of existing population-based studies of BMI in children, and no reason to think this particular analysis is more informative than many or most of these.

      The term “confounding” is found nowhere in the paper. To have any utility regarding identified risk factors, the authors should have something to say about the exchangeability of groups being compared.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 10, Richard Holliday commented:

      As highlighted by the previous comments there are several weaknesses with this study. I would like to add a few further observations:

      -The choice of e-liquids and controls is inadequate. The cells were exposed to three conditions: air control, e-liquid [tobacco, 16 mg nicotine] and e-liquid [menthol, 0 mg nicotine]. There are two variables here (nicotine concentration and flavour), making any analysis of the results impossible.

      -In the methods, a menthol, 13-16 mg nicotine e-liquid is mentioned but this is not mentioned anywhere else in the paper, nor presented in the results.

      -The main conclusion of this paper is that ‘flavoured e-cigs’ gave a ‘greater response’. As there was no unflavoured control this conclusion is invalid. Likewise, if the authors are trying to say ‘menthol flavour’ gave a greater response than ‘tobacco flavour’ this is again invalid as nicotine concentration is a confounding factor in their study design.

      The UK E-cigarette Research Forum (an initiative developed by Cancer Research UK in partnership with Public Health England and the UK Centre for Tobacco and Alcohol Studies) recently reviewed this paper. The full review can be found here (see review 5 with further comments in the final paragraph of the overview).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 27, Clive Bates commented:

      There are several weaknesses in this paper and the surrounding commentary, and it caused unwarranted alarmist news coverage, which has not been corrected: New e-cigarettes alert as experts warn that the exposure to chemicals could trigger severe gum disease and even increase the risk of mouth cancer, Daily Mail, 17 December 2016.

      Comparators. The study lacks adequate comparators - for example, coffee and/or cigarette smoke could have been used. That would have allowed these observations to be placed in a meaningful comparative context. Given that the vast majority of users or potential users are smokers, the comparison with smoking is most relevant. Further, if the effect of vaping is no different to that arising from an everyday habit like drinking coffee then we would be correspondingly reassured.

      Interpretation of cell study. These effects on cells in vitro will not necessarily translate into material oral health risks in live subjects, and the authors do not show evidence that the effects observed at the magnitudes measured are a realistic proxy for human gum disease risk. If human cells are exposed to any aversive environment, some effect is likely. But it is heroic to extrapolate from that to a human disease risk associated with normal use of the product. Ames (Ames BN, 2000) explains why:

      Humans have many natural defenses that buffer against normal exposures to toxins and these are usually general, rather than tailored for each specific chemical. Thus they work against both natural and synthetic chemicals. Examples of general defenses include the continuous shedding of cells exposed to toxins. The surface layers of the mouth, esophagus, stomach, intestine, colon, skin and lungs are discarded every few days; DNA repair enzymes, which repair DNA that was damaged from many different sources; and detoxification enzymes of the liver and other organs which generally target classes of chemicals rather than individual chemicals.

      Professor Brad Rodu elaborates further: Imaginary Hobgoblins From E-Cigarette Liquid Lab Tests, February 2016.

      Methodology. The lead author's commentary speaks of 'burning' e-liquid and 'smoking' e-cigarettes. This does not inspire much confidence that the authors understand this non-combustible technology or that they have operated the device under realistic conditions for humans. There have been other studies where the devices have been operated at higher temperatures than would be possible for human users, and then measurements of unrealistic levels of thermal decomposition products reported; see, for example, Jensen RP, 2015. The authors provide little reassurance that they have not fallen into the same methodological pit. The discussion of the methodology followed is thin and insufficient to allow replication.

      Overpromoting results. So in the absence of useful comparators, and with no link from these observations to disease risk, it is hard to see what this study adds. Further, it is unclear how it justifies the alarmist, over-confident commentary, First-ever Study Shows E-cigarettes Cause Damage to Gum Tissue, that accompanied it and led to the news coverage cited above. This promotes the implicit claim (in the absence of an explicit caveat) that e-cigarette use would damage the gums in the mouth of a living person.

      Failure to provide a rounded view when communicating with the public. Other studies suggest that switching from smoking to vaping has a beneficial effect on oral health. For example, see Tatullo M, 2016:

      At the end of the study, we registered a progressive improvement in the periodontal indexes, as well as in the general health perception. Finally, many patients reported an interesting reduction in the need to smoke. In the light of this pilot study, the e-cigarette can be considered as a valuable alternative to tobacco cigarettes, but with a positive impact on periodontal and general health status.

      And this study, Wadia R, 2016, which found an increase in gingival inflammation when tobacco smokers switched from smoking to vaping for two weeks, but noted that this was similar to the effects observed when people quit smoking:

      The clinical findings from the current study are similar to those that occur following verified smoking cessation. For example, during a successful period of quitting smoking, gingival bleeding doubled from 16% to 32% in a group of 27 smokers followed for 4–6 weeks, even though there were some improvements in the subjects’ plaque control. Results from this study are also consistent with studies that suggest a fairly rapid recovery of the inflammatory response following smoking cessation. (emphases added)

      The benefits of switching from smoking to vaping are pervasive and include improvements in oral health.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 09, Lydia Maniatis commented:

      "The former observation was expected, as human vision prioritizes processing as a function of distance from fixation (23⇓–25). "

      Reference 25 is to Rovamo & Virsu (1979), "An estimation and application of the human cortical magnification factor," Exp Brain Res 37(3):495–510. The authors seem to be making a statement of fact, based on this and two other references, that "human vision prioritizes processing as a function of distance from fixation."

      However, according to Strasburger, Rentschler and Juttner, (2011):

      "The strong, all-embracing hypothesis put forward by Rovamo & Virsu (1979) is hardly, if ever, satisfied." In other words, to the extent that it rests on the Rovamo citation, the first statement quoted above appears to me to be false. At best, it requires qualification. I don't know how reliable the other two citations are, I haven't checked.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 12, Konstantinos Fountoulakis commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 12, Konstantinos Fountoulakis commented:

      This is a study of 70 patients suggesting that depressed people exposed to high early life stress (ELS) had a greater likelihood of remission when their amygdala showed hyperreactivity to socially rewarding stimuli, whereas for those with low ELS exposure, amygdala hyporeactivity to both rewarding and threat-related stimuli predicted remission (1). The main problem with this study is that the data are uncontrolled. It is important to note that depressed patients have a high placebo response rate (up to 40%) in the short term (2). We also know that the acute response to placebo is equal to up to 80% or more of the response to active drugs (3, 4), meaning that, if we accept the additivity hypothesis (this is of course a matter of debate) (5, 6), most patients who acutely responded to the active drug might in reality be placebo responders. Therefore the results of the study under discussion should not be accepted without caution. Adding to the above concerns is the report from the STAR*D study that one third of patients who remitted after step 1 will relapse within 4-5 months, while up to 50% will relapse from later steps (7). It is unknown what these patients represent; however, one could argue that at least some of them are patients unresponsive to antidepressants who nevertheless manifested a placebo response. Additional problems are that the patients included had low depression severity (mean HDRS = 21) and that low dosages of antidepressants were used (on average 10 mg of escitalopram, 50-62.5 mg of sertraline and 87.5-90.8 mg of venlafaxine). Also of prime importance is that determining early life stress (ELS) with a self-report questionnaire alone could be misleading, since many depressed patients, especially those with character pathology, might tend to over-report such events. These problems exist mainly because the study under discussion tries to elucidate an issue concerning a mechanism of action without using an adequate control methodology. What the study suggests is that patients who acutely remit (no matter the reason) might present with the characteristics reported, but any inference concerning the role of pharmacotherapy per se is problematic.

      References

      1. Goldstein-Piekarski A, et al. (2016) Human amygdala engagement moderated by early life stress exposure is a biobehavioral target for predicting recovery on antidepressants. PNAS.
      2. Furukawa TA, et al. (2016) Placebo response rates in antidepressant trials: a systematic review of published and unpublished double-blind randomised controlled studies. The lancet. Psychiatry.
      3. Khan A, Leventhal RM, Khan SR, & Brown WA (2002) Severity of depression and response to antidepressants and placebo: an analysis of the Food and Drug Administration database. Journal of clinical psychopharmacology 22(1):40-45.
      4. Gibbons RD, Hur K, Brown CH, Davis JM, & Mann JJ (2012) Benefits from antidepressants: synthesis of 6-week patient-level outcomes from double-blind placebo-controlled randomized trials of fluoxetine and venlafaxine. Archives of general psychiatry 69(6):572-579.
      5. Yang H, Novick SJ, & Zhao W (2015) Testing drug additivity based on monotherapies. Pharmaceutical statistics 14(4):332-340.
      6. Lund K, Vase L, Petersen GL, Jensen TS, & Finnerup NB (2014) Randomised controlled trials may underestimate drug effects: balanced placebo trial design. PloS one 9(1):e84104.
      7. Rush AJ, et al. (2006) Acute and longer-term outcomes in depressed outpatients requiring one or several treatment steps: a STAR*D report. The American journal of psychiatry 163(11):1905-1917.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 30, Lydia Maniatis commented:

      Comment 3: The speciousness of the interpretation may perhaps be better grasped if we imagine that the centroids contained patches differing in ways other than color. If, for example, some had been shaped like rectangles and others stars, would we have been justified in concluding that we were measuring the activities of rectangle and star "filters"? Or if some had been x's and some had been o's...etc. Color might seem like a simpler property than shape, but given that it is wholly mediated by the organization of the visual field and the resulting shape properties, this intuition is in error (the tendency of vision science publications to refer to color as a "low-level" property notwithstanding.)

      In fact, while we're talking about shape, there can be little doubt that the arrangement (e.g. symmetrical vs asymmetrical) of the differently colored patches in the present type of experiment will affect the accuracy of the responses. The effects might, perhaps, be averaged out, but this doesn't mean that these "high-level" effects of organization aren't mediating the purportedly "low-level" effects of color at all times.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 29, Lydia Maniatis commented:

      Comment 2:

      There seems to have been a kind of natural selection in vision science (and of course not only vision science) in which the following practice has come to dominate: The results of ad hoc measurements made under arbitrary (poorly rationalized) conditions and fitted to “models” with the help of ad hoc post hoc mathematical adjustments are treated as though they amounted to, or could amount to, functional principles.

      Thus, here, the data generated by a particular task are reified; the data patterns are labelled “attention filters,” and the latter are treated as though they corresponded to a fundamental principle of visual processing. But principles have generality, while the "attention filter" moniker is here applied in a strictly ad hoc fashion:

      First, the model is based on an arbitrary definition of color in terms of isolated "colors" on a "neutral" background (i.e. conditions producing the perception of particular color patches on a neutral background), whose attributes we are told are “fully described by the relative stimulation of long, medium and short wavelength sensitive retinal cones.” These conditions and, thus, the specific patterns of stimulation correlated with them, constitute only one of an infinite number of possible conditions and thus of patterns of stimulation. (The naive equating of cone activity with color perception is a manifestation of the conceptual problems discussed in my earlier comment.)

      Second, the model is ad hoc (“particularized”); “The inference process is illustrated by the model of selective attention illustrated in Fig. 1B particularized for the present experiments.” What would the generalized form of the model look like?

      Third, the results only apply to individual subject/context combinations: “The model’s optimally predictive filter f_k(i) is called the observed attention filter. It typically is a very good [post hoc] predictor of a subject’s observed centroid judgments. Therefore, we say for short that f_k(i) is the subject’s attention filter for attending to color C_k in that context.”

      It is the case that different colors vary in their salience. We could perform any number of experiments under any number of conditions with any number of observers, and generate various numbers that reflected this fact. Our experiments would, hopefully, succeed in reproducing the general facts, but the actual numbers would differ. Unless underpinned by potentially informative rationalizations guiding experimental conditions, none of these quantifications would carry any more theoretical weight than any of the others (the value-added via quantification would be zero). There is, in other words, nothing special about the numbers generated by Sun et al (2017). They make no testable claims; their specific "predictions" are all post hoc. Their results are entirely self-referential.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 May 27, Lydia Maniatis commented:

      "The visual images in the eyes contain much more information than the brain can process. An important selection mechanism is feature-based attention (FBA)."

      There is something very wrong here. There are no visual images in the eyes, if by images we mean organized percepts - shaped figures with features such as colors, relative locations, etc. Such things are the products of the whole perceptual process, i.e. the products of a process that begins with the effects of the point stimulation of light striking photoreceptors on the retina, setting into motion dynamic interactions of the integrated neural elements of the visual system, ultimately leading to conscious percepts.

      Thus, the mechanism being referenced (if it exists) is selecting from features of the conscious products of these perceptual processes, not from the initial point information or early stages of processing in the retina.

      "...a color-attention filter describes the relative effectiveness with which each color in the retinal input ultimately influences performance."

      Again, there are no colors in the retinal input, color being a perceptual property of the organized output. So we are missing a retinal-state-based description of what the proposed "filters" are supposed to be attending to. This is a problem since, as is well-known, the physical (wavelength) correlate of any perceived color can have pretty much any composition, because what is perceived locally is contingent on the global context.

      The use of the term "filter" here seems inappropriate; its misuse is linked to the failure to distinguish between the proximal stimulation and perceptual facts. The implication seems to be that we are dealing with a constraint on what will be perceived, whereas on the contrary we are dealing with selection from available perceptual facts.

      The theoretical significance of measuring jnd's is not clear, as they are known to be condition-sensitive in a way not predictable on the basis of available theory. The failure to discriminate between physical/perceptual facts also means it isn't clear which of these potential differences is being referred to.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 17, Matthew Romo commented:

      In their article, Kelling and colleagues identified clinical preventive services recommended by the US Preventive Services Task Force (USPSTF) that can be offered by community pharmacists, including folic acid supplementation, smoking cessation, and screening for osteoporosis and HIV (1). Clinical services are typically thought of in the context of individual patient-provider interaction, but adapting a population-level approach is particularly important for pharmacy-based preventive care services. Specifically, pharmacies located in high poverty areas have a tremendous opportunity to not only meet their communities’ public health needs, but to also reduce health disparities.

      New York City exemplifies the need for this type of population-level thinking because health differs greatly by neighborhoods, which often have both poverty and racial/ethnic distinctions. For example, there are major disparities in HIV incidence, with a median annual HIV diagnosis rate of 75.6 per 100,000 population in the highest poverty neighborhoods vs. 13.7 per 100,000 population in the lowest poverty neighborhoods (2). Pharmacy-based HIV testing is indeed feasible in the city (3) and could be a service targeted to residents of high-risk neighborhoods. Nonprescription syringe sales by pharmacies to injection drug users present a propitious opportunity to promote screening for HIV.

      Smoking prevalence in the highest poverty neighborhoods of New York City is more than double that of the lowest poverty neighborhoods (29.7% vs. 14.3%) (4). Availability of tobacco in pharmacies is germane if community pharmacies are to be regarded by the public as health promoting institutions. Tobacco bans in pharmacies are, of course, strongly advisable but do not appear to have a real impact on tobacco availability in poorer neighborhoods where smoking prevalence is highest. In an analysis of 240 census tracts in Rhode Island (5), tobacco retail outlet density was positively associated with neighborhood poverty and when excluding pharmacies as tobacco retailers, this association did not change. Of course, the availability of nicotine replacement therapy on pharmacy shelves allows pharmacists to counsel patients on their use. However, as mentioned by Kelling and colleagues, simple frameworks like “Ask, Advise, Refer” can help pharmacists connect patients to a telephone quitline, which can provide counseling and linkage to programs offering free or low cost nicotine replacement therapy and medication. Linkage to quitlines could be coupled with existing services, such as administering seasonal influenza vaccines or screening for drug-tobacco interactions (which are numerous). These opportunities could give a non-intrusive opportunity to “ask” and “advise.”

      Pharmacy services, like other healthcare services, can differ by neighborhood. In New York City, higher poverty neighborhoods are characterized as having significantly more independent (vs. chain) pharmacies and pharmacies that are more likely to have medications out of stock (6). Nevertheless, it appears that community pharmacists support providing clinical preventive services, regardless of the neighborhood poverty level where their pharmacy is located. This was suggested by a study assessing New York City pharmacists’ attitudes about providing vaccinations to their patients when state legislation was passed allowing them to do so (7).

      As highlighted by Kelling and colleagues, community pharmacists are highly accessible (and often underutilized) healthcare professionals who are clearly capable of implementing USPSTF recommendations, among others. Pharmacies are also attractive conduits for improving public health, as demonstrated by successes in immunization uptake and most recently with expansion of non-prescription naloxone access. Because of their focus on population health, local health departments should partner with community pharmacists and pharmacy owners, if they are not doing so already, to better meet the public health needs of their communities. Community pharmacists not only have the potential to positively impact public health, but because of where they work in the community, they are ideally positioned to reduce health disparities.

      Matthew L. Romo, PharmD, MPH

      Department of Epidemiology and Biostatistics, CUNY Graduate School of Public Health and Health Policy; CUNY Institute for Implementation Science in Population Health; matthew.romo@sph.cuny.edu

      REFERENCES

      1. Kelling SE, Rondon-Begazo A, DiPietro Mager NA, Murphy BL, Bright DR. Provision of clinical preventive services by community pharmacists. Prev Chronic Dis 2016;13:160232. DOI: http://dx.doi.org/10.5888/pcd13.160232.
      2. Wiewel EW, Bocour A, Kersanske LS, Bodach SD, Xia Q, Braunstein SL. The association between neighborhood poverty and HIV diagnoses among males and females in New York City, 2010-2011. Public Health Rep. 2016;131(2):290-302.
      3. Amesty S, Crawford ND, Nandi V, Perez-Figueroa R, Rivera A, Sutton M, et al. Evaluation of pharmacy-based HIV testing in a high-risk New York City community. AIDS Patient Care STDS. 2015;29(8):437-44.
      4. Perlman SE, Chernov C, Farley SM, Greene CM, Aldous KM, Freeman A, et al. Exposure to secondhand smoke among nonsmokers in New York City in the context of recent tobacco control policies: Current status, changes over the past decade, and national comparisons. Nicotine Tob Res. 2016;18(11):2065-74.
      5. Tucker-Seeley RD, Bezold CP, James P, Miller M, Wallington SF. Retail pharmacy policy to end the sale of tobacco products: What is the impact on disparity in neighborhood density of tobacco outlets? Cancer Epidemiol Biomarkers Prev. 2016;25(9):1305-10.
      6. Amstislavski P, Matthews A, Sheffield S, Maroko AR, Weedon J. Medication deserts: survey of neighborhood disparities in availability of prescription medications. Int J Health Geogr. 2012;11:48.
      7. Crawford ND, Blaney S, Amesty S, Rivera AV, Turner AK, Ompad DC, et al. Individual- and neighborhood-level characteristics associated with support of in-pharmacy vaccination among ESAP-registered pharmacies: pharmacists' role in reducing racial/ethnic disparities in influenza vaccinations in New York City. J Urban Health. 2011;88(1):176-85.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 15, Victoria MacBean commented:

      Plain English summary:

      Parasternal intercostal electromyography (EMGpara) is a method used to measure breathing function by monitoring signals sent from the brain, to the parasternal intercostal muscles (muscles between the ribs). These muscles, together with the diaphragm (a thin muscle under the lungs) and some others, move together to control your breathing. EMGpara can be used to measure a person's neural respiratory drive (NRD), which is an indication of the strength of the respiratory (breathing) muscles under a certain amount of strain (how hard the muscles may have to work when they are coping with different diseases or environments). This method is an alternative to other more traditional practices that, for example, may involve the use of needles. Therefore, EMGpara is less invasive and ideal for monitoring the breathing muscles in many groups of people.

      In the case of this study EMGpara was measured in healthy adults in order to discover what factors determine normal EMGpara readings. The participants were over the age of 18, and were of different body types and sexes.

      In preparation for the EMGpara tests, each participant's body size, shape and composition was measured – this included taking note of their height, weight, hip and waist size, body fat percentage and body mass index – as well as tests to confirm that each person had normally functioning lungs.

      Electrode stickers were placed on the chest to measure the EMGpara signals as the subjects breathed normally and effortfully. The tests were repeated at a later date to make sure the results could be reproduced, thereby checking that the EMGpara technique is consistent.

      The study suggests that sex is the most important factor in determining EMGpara; a higher value for EMGpara was observed in the women who took part. This may be because, in general, women have smaller lungs and narrower airways compared to men, and their respiratory muscles are usually not as strong. Age did not seem to have a significant effect on the readings; however, this could have been because the average age of those involved was only 31, and those who were older were quite athletic, meaning their respiratory health was very good.

      The results of this study can be used as a reference for what a normal EMGpara reading is, and therefore they can be used when assessing patients in the future. The study included many people from different backgrounds, so it is quite representative of the population. The study was also important in working out which methods and techniques are best for measuring EMGpara, as well as for highlighting possible areas of further research for future studies.

      This summary was produced by Djenné Oseitwum-Parris, Year 12 student from Burntwood School, London, UK, as part of the authors' departmental educational outreach programme.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 18, Martin Hofmeister commented:

      Physical activity is an underestimated modifiable factor

      I thank Wat et al. for their very interesting review article "Associations between diabetic retinopathy and systemic risk factors" in the December 2016 issue of the Hong Kong Medical Journal. I agree with the authors, but there is one lifestyle aspect worth mentioning. Recent studies suggest that regular physical activity can be a protective factor against the development of diabetic retinopathy (DR) and microvascular diabetes-related complications [1-6]. Physical activity and exercise have simultaneous antihyperglycemic (reduction in haemoglobin A1c of about 0.6%), antihyperlipidemic, antihypertensive, antioxidative, anti-inflammatory, and cardioprotective effects.

      An example of neurobiological adaptations to exercise is the increased expression of neuroprotective factors such as brain-derived neurotrophic factor (BDNF) in the brain, blood, and muscles. Decreased plasma levels of BDNF were detected as an independent risk factor for DR and vision-threatening DR in Chinese type 2 diabetic patients [7-8]. The downregulation of BDNF probably has an important role in the complex and multifactorial pathogenesis of DR [9].

      In an initial study, Loprinzi also observed a positive association between sedentary behavior and DR [10]. The American Diabetes Association has recently updated its evidence-based recommendations on physical activity and exercise for diabetic patients. The main innovation is that diabetics should minimize the total amount of daily sedentary time. Prolonged sitting time should be interrupted every 30 min with brief (≤5 min) bouts of standing or light activity to improve glycemic control [11].

      In the case of severe nonproliferative and unstable proliferative retinopathy, it is recommended that vigorous-intensity activities be avoided [11].

      REFERENCES

      1) Dirani M, Crowston JG, van Wijngaarden P. Physical inactivity as a risk factor for diabetic retinopathy? A review. Clin Exp Ophthalmol 2014;42(6):574-81.

      2) Loprinzi PD, Brodowicz GR, Sengupta S, Solomon SD, Ramulu PY. Accelerometer-assessed physical activity and diabetic retinopathy in the United States. JAMA Ophthalmol 2014;132(8):1017-9.

      3) Gutiérrez Manzanedo JV, Carral San Laureano F, García Domínguez G, Ayala Ortega C, Jiménez Carmona S, Aguilar Diosdado M. High prevalence of inactivity among young patients with type 1 diabetes in south Spain. Nutr Hosp. 2014;29(4):922-8.

      4) Loprinzi PD. Concurrent healthy behavior adoption and diabetic retinopathy in the United States. Prev Med Rep. 2015;2:591-4.

      5) Li Y, Wu QH, Jiao ML, Fan XH, Hu Q, Hao YH, Liu RH, Zhang W, Cui Y, Han LY. Gene-environment interaction between adiponectin gene polymorphisms and environmental factors on the risk of diabetic retinopathy. J Diabetes Investig. 2015;6(1):56-66.

      6) Praidou A, Harris M, Niakas D, Labiris G. Physical activity and its correlation to diabetic retinopathy. J Diabetes Complications. 2016 Jun 29. pii: S1056-8727(16)30256-2. doi: 10.1016/j.jdiacomp.2016.06.027. [Epub ahead of print].

      7) Liu SY, Du XF, Ma X, Guo JL, Lu JM, Ma LS. Low plasma levels of brain derived neurotrophic factor are potential risk factors for diabetic retinopathy in Chinese type 2 diabetic patients. Mol Cell Endocrinol 2016;420:152-8.

      8) Guo M, Liu H, Li SS, Jiang FL, Xu JM, Tang YY. Low serum brain-derived neurotrophic factor but not brain-derived neurotrophic factor gene Val66Met polymorphism is associated with diabetic retinopathy in Chinese type 2 diabetic patients. Retina. 2016 Jun 27. [Epub ahead of print].

      9) Behl T, Kotwani A. Downregulated Brain-Derived Neurotrophic Factor-Induced Oxidative Stress in the Pathophysiology of Diabetic Retinopathy. Can J Diabetes. 2016 Nov 29. pii: S1499-2671(16)30079-X. doi: 10.1016/j.jcjd.2016.08.228. [Epub ahead of print].

      10) Loprinzi PD. Association of Accelerometer-Assessed Sedentary Behavior With Diabetic Retinopathy in the United States. JAMA Ophthalmol 2016;134(10):1197-8.

      11) Colberg SR, Sigal RJ, Yardley JE, Riddell MC, Dunstan DW, Dempsey PC, Horton ES, Castorino K, Tate DF. Physical activity/exercise and diabetes: a position statement of the American Diabetes Association. Diabetes Care 2016;39(11):2065-79.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 04, Alessandro Rasman commented:

      Increased levels of coagulation factors in Multiple Sclerosis: a defence mechanism against microbleeds?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 08, P P Wolkow commented:

      It is true that culture negative blood samples are difficult to work with and the interpretation of the obtained results is not easy. We believe that we have taken every care to avoid contamination and to analyze our data accordingly. However, we understand that you might not fully concur with our results.

      Indeed, the NTC and blood samples contain the same taxa, as seen in Fig. 3a. However, on the basis of this observation alone one cannot claim that these samples are similar. In fact they are different, as the following examples at the order level show. Bifidobacteriales constituted 73.0% of reads in healthy blood vs. 12.8% in NTC samples (p = 2.76 x 10^-7). In our opinion this level of significance confirms that the groups are different. A few other examples based on the Figure 4 data (healthy vs. NTC vs. sepsis): Actinomycetales: 2.0% vs. 7.7% vs. 30.9%, p = 0.04; Pseudomonadales: 6.7% vs. 0.0% vs. 4.4%, p = 0.006; Sphingomonadales: 0.2% vs. 11.4% vs. 7.3%, p = 3 x 10^-7.

      It is true that NTC samples cluster with clinical samples in the PCoA analysis; however, they cluster with septic samples, not with healthy ones. Clustering with the latter could potentially mean that the results for the healthy people are spurious, reflecting sequencing of contamination only. Please note that only 3 out of 5 NTC samples passed the analytical threshold, and only these are depicted in Fig. 2. In the two NTC samples filtered out from further analysis, the numbers of reads were very low, 480 and 73, respectively. Also, contamination of simultaneously processed NTC samples should result in a similar abundance of phyla in these samples, which is not the case (Fig. 3).

      PCR conditions are provided in Table 1. The negative control procedure was exactly the same for all samples analyzed and should have no impact on the results. All samples were prepared in one batch, so we did not expect a batch effect.

      The idea of retrospective contaminant read removal is, in our opinion, controversial. There are deep inter-individual differences between the NTC samples. Removing reads on the basis of the mean read number would skew the results and produce negative read numbers in some samples.
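
      To illustrate the arithmetic problem with mean-based subtraction (a minimal sketch; the read counts below are hypothetical, not values from the study):

          # Hypothetical counts only: subtracting the mean NTC count for a taxon
          # from each sample can produce negative (impossible) read numbers.
          ntc_counts = [400, 80]                                  # assumed contaminant reads in two NTCs
          mean_contaminant = sum(ntc_counts) / len(ntc_counts)    # 240.0
          sample_counts = [1200, 150]                             # assumed reads in two clinical samples
          corrected = [c - mean_contaminant for c in sample_counts]
          print(corrected)                                        # [960.0, -90.0] -> negative count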

      We expected that the results depicted in Fig. 5 would be self-explanatory. However, we admit that providing qPCR data would be superior. We are grateful for your drawing our attention to this issue, which will have implications for our future work.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 27, Susannah Salter commented:

      The paper by Gosiewski et al (PMID27771780) draws strange conclusions from the data. Culture-negative blood samples are certainly difficult to work with as they are more susceptible to the influence of DNA contamination.

      In the text attention is drawn to the sequenced controls (water), but some assertions are patently untrue: for example, stating that Bifidobacteriales were a noteworthy constituent of healthy blood when they also make up a >10% proportion of the negative controls (Fig 4), or stating that the control profiles are "completely different" from the blood samples despite appearing to contain most of the same taxa (Fig 4) and clustering with the clinical samples on PCoA (Fig 2).

      The authors provide no detailed information about the number of PCR cycles, the negative control procedure, kit batching of samples, retrospective contaminant read removal etc, which would lend confidence that the described patterns are not artefacts of sample processing. qPCR would also help to clarify the background contaminant DNA levels and allow more robust conclusions to be drawn.

      The supplemental figure has the most potentially interesting information but unfortunately it is not labelled or described in text. If the contaminant taxa are removed, there may be some nice signals hiding in there.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 24, Peter Hajek commented:

      The title and the conclusion are misleading because they imply that the study followed up everyone who tried e-cigarettes (EC) to stop smoking. In fact, it only followed up people who tried but failed. Only people who tried e-cigarettes but reverted to smoking were included at baseline; successful quitters were no longer smokers and so were excluded.

      Here is an analogy: Football scouts go round 100 schools and remove talented kids. Some time later they go round the same schools and also 100 new ones. The old schools now produce significantly less talent than the new ones. This is not because the earlier scout visit somehow damaged talent, but because talent was simply removed, in the same way successful quitting with EC removed good quitting prospects here.

      The conclusion does not mention that the same effect applied to past use of stop-smoking medications; that result is also predictable, for the same reasons.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Dec 26, Elena Gavrilova commented:

      Detailed answers to all of E.V. Dueva's comments regarding this article have already been published and can be found here: https://www.ncbi.nlm.nih.gov/pubmed/28036118. Briefly, Anaferon for children is not a homeopathic drug; the manufacturing process of the drug preparation was described concisely in the article; detailed answers to the concerns regarding the experimental design have been provided; and information about financial support was presented in the article.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Dec 13, Evgenia V Dueva commented:

      Anaferon for children (AC) is a homeopathic preparation. The authors state that «AC contained RAF of Abs to IFN-γ is a mixture of 12, 30, and 50 centesimal dilutions of antibodies to IFN-γ». RAF (release-active form) of Abs (antibodies) is not an accepted scientific concept, and the term appears only in articles involving the commercial products of «MATERIA MEDICA HOLDING». According to Avogadro's law, 12 or more centesimal dilutions leave no active substance in any amount of solution that a mouse can drink. It seems that AC is a disguised version of homeopathy and that the authors have confused the reviewers with their vague description of AC.
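
      As a back-of-the-envelope illustration of the Avogadro point (a minimal sketch; the starting concentration and daily water intake below are hypothetical round numbers, not values taken from the article):

          # Illustration only: expected antibody molecules remaining after a 12C dilution.
          AVOGADRO = 6.022e23            # molecules per mole
          start_molar = 1e-6             # assumed 1 micromolar antibody stock (generous)
          dilution_factor = 100 ** 12    # twelve centesimal (1:100) dilution steps
          molecules_per_litre = start_molar * AVOGADRO / dilution_factor   # ~6e-07
          molecules_per_day = molecules_per_litre * 0.005                  # ~3e-09, for ~5 mL drunk per day
          print(molecules_per_litre, molecules_per_day)

      On these assumptions the expected number of antibody molecules ingested per day is many orders of magnitude below one, which is the point being made about 12C and higher dilutions.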

      Given that there is no accepted mechanism of action for any treatment at such dilutions as in the case of AC, and that the composition of the initial AC solutions is unknown, the simpler explanation for the observed antiviral effects is bias introduced by a lack of proper randomization and blinding, or the influence of undeclared contaminants.

      In addition, this statement contradicts itself: «The authors declare no conflict of interest. Four authors have an affiliation to the commercial funders of this research study (OOO «NPF «MATERIA MEDICA HOLDING»)». «MATERIA MEDICA HOLDING» produces and markets AC, so the authors do have a conflict of interest. Moreover, Oleg I. Epstein is the CEO of OOO «NPF «MATERIA MEDICA HOLDING».

      The critical comment on this paper was published and can be found here: http://onlinelibrary.wiley.com/doi/10.1002/jmv.24761/full


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 25, Anders von Heijne commented:

      Among the suggested candidate measurements, an established mechanism for providing feedback to previous clinicians about their diagnoses should provide feedback on all diagnoses, not only when a significant change in diagnosis has occurred, in order to improve the sense of diagnostic accuracy for the individual, the team, and the caregiver. We need both positive and negative feedback!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 27, Richard Boyce commented:

      The code is indeed available on github: https://github.com/OHDSI/StudyProtocols/tree/master/PGxDrugStudy

      The SQL code is in a subfolder: https://github.com/OHDSI/StudyProtocols/tree/master/PGxDrugStudy/inst/sql/sql_server

      It is an R package that should work with data in the OMOP common data model V4.5 or V5. Please contact me through GitHub if you have questions about the code.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 21, Vojtech Huser commented:

      This is an interesting study. The code to execute OHDSI studies is sometimes available on GitHub. Are there any plans to release the study code? The solution for excluding topical drugs is something our team, and the OHDSI collaborative, could re-use for other studies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 20, Daniel Mietchen commented:

      The article has been annotated as part of a journal club: https://via.hypothes.is/http://journals.plos.org/plosntds/article?id=10.1371/journal.pntd.0005023 .


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 24, Sean Ekins commented:

      This article was written to assist in finding additional collaborators that could participate. Please tweet us @openzika @collabchem @carolinahortago or email etc..


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 24, Sean Ekins commented:

      More details and press release are here http://www.collabchem.com/2016/05/19/zika-open-becomes-openzika-on-ibm-world-community-grid/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Oct 24, Sean Ekins commented:

      These attached slides also bring this project up to date: http://www.slideshare.net/ekinssean/open-zika-presentation


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 22, Atanas G. Atanasov commented:

      It was a great conference, compliments to the organizers.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 30, Zvi Herzig commented:

      The Holm et al. study was cited to show that Scandinavian smokeless tobacco (snus) delivers nicotine at speeds and quantities similar to those of smoking. Meta-analyses indicating that snus is not a significant cause of smoking-related disease are cited further below.

      These should indicate that nicotine is not among the top risk compounds in tobacco smoke.

      MOE may be a standard approach to minimizing potential toxicological risks. But where there is epidemiological evidence from NRT and certain forms of smokeless tobacco showing that nicotine is not one of the major toxicological issues with cigarette smoke, there is less reason to rely on MOEs.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 27, Dirk Lachenmeier commented:

      Sorry, but I cannot find any evidence in your cited studies that would refute our conclusions. Especially the Holm et al. (1992) study is absolutely unsuitable for drawing such a conclusion. This was not even a short-term trial; blood nicotine was studied on a single day. Holm et al. (1992) conclude: "The snuff takers and cigarette smokers reported similar levels of subjective dependence on tobacco. Epidemiological study of Swedish snuff users could clarify whether the cardiovascular risks of tobacco are attributable to nicotine or to other smoke components". The long-term and chronic effects of nicotine were obviously not studied. As a toxicologist, it is also difficult to accept why a potentially toxic substance with a clear dose-response effect, such as nicotine, may not be assessed using internationally accepted indicators such as the margin of exposure. Obviously, benchmark dose data from epidemiology would be preferable over animal data, but as we have detailed in our article, none of the epidemiology studies provided suitable dose-response information.
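
      For readers unfamiliar with the metric being debated, the margin of exposure (MOE) is conventionally the ratio of a toxicological reference point, such as a benchmark dose lower confidence limit (BMDL) derived from dose-response data, to the estimated human exposure; the worked numbers below are purely illustrative and not taken from the article:

          \mathrm{MOE} = \frac{\mathrm{BMDL}}{\text{estimated human exposure}},
          \qquad \text{e.g. } \mathrm{MOE} = \frac{5\ \mathrm{mg/kg\ bw/day}}{0.5\ \mathrm{mg/kg\ bw/day}} = 10

      Smaller MOE values indicate a narrower margin between doses associated with adverse effects in the underlying data and actual exposure, which is why the choice of reference point (animal benchmark dose versus human epidemiology) matters in this exchange.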


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 26, Zvi Herzig commented:

      The cited sources refute the report's conclusion that "nicotine is among the top risk compounds in tobacco smoke". They show that nicotine consumption in the context of snus (which delivers as much or more nicotine than smoking, and at similar absorption speeds; Holm H, 1992) is not significantly associated with smoking-related disease.

      The MOE approach shouldn't be used to estimate risk where direct epidemiological evidence for the relevant dose is available. This should be self-evident.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Oct 24, Dirk Lachenmeier commented:

      Thank you for providing some further references on nicotine. However, none of these provides the dose-response information required for quantitative comparative risk assessment. We have carefully screened the literature to include any usable study, including human data (see Table 1). It should also be noted that the European Food Safety Authority (EFSA) also used the Lindgren et al. study (which we have included) as the point of departure for their risk assessment of nicotine.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Oct 22, Zvi Herzig commented:

      It is puzzling that the authors use the indirect margin of exposure (MOE) approach to evaluate harms, when epidemiological evidence is available, showing minimal risk of nicotine (at levels of consumption) in relation to cancer Lee PN, 2009, cardiovascular diseases Hansson J, 2012 Hansson J, 2014, other diseases Lee PN, 2013 or acute poisonings Royal College of Physicians, 2016.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 30, Erick H Turner commented:

      Only published trials were examined in this study. This restriction seems problematic, considering the outcome of interest is time to publication. This departure from several past (and cited) studies on this topic may be a key reason for the authors' obtaining different conclusions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 23, University of Kansas School of Nursing Journal Club commented:

      Team Members: Miranda Hanchett, Katie Bolin, Lizzy Lothamer, Molly Meagher, Kathryn Noble, Alisa Schemmel, Amy Toth. [Class of 2017]

      Background

      In class, we learned that shared governance encompasses four main principles for nurses and other professionals. These principles include partnership, accountability, equity, and ownership. Each of these is necessary for team-based decision making in the realms of research, clinical matters, quality improvement, and others. However, one topic we did not explore in class was the impact that the level of nurse engagement in shared governance has on patient outcomes. Kutney-Lee et al.'s (2016) article explored the different levels of nurse engagement across a variety of hospitals and how these levels affected patient satisfaction through appraisal of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey data. This information is valuable to us as new graduate registered nurse (RN) professionals, helping us understand that nurse involvement within the organization (beyond patient care) is vital to improving patient outcomes. This study also adds another dimension to our understanding of the four principles of shared governance and whom they affect.

      Methods

      Our team decided to select this article because it described the effects shared governance has on our patients in addition to how it affects nurses. Knowing these effects enables nurses to understand that involvement in hospital affairs can influence the quality of care and outcomes of our patients. The purpose of this study is to "examine differences in nurse engagement in shared governance across hospitals and to determine the relationship between nurse engagement and patient and nurse outcomes" (Kutney-Lee et al., 2016, p. 605). This article is a cross-sectional observational study of three secondary data sources: the 2006-2007 Penn Multi-State Nursing Care and Patient Safety Survey of RNs from four states (California, New Jersey, Pennsylvania, and Florida), the 2007 American Hospital Association (AHA) Annual Survey of Hospitals, and the HCAHPS patient survey data from October 2006 to June 2007. The nurse survey was collected from large, random samples of RNs who were licensed in the four previously mentioned states. The AHA survey provided information about hospital characteristics and the HCAHPS survey provided information about overall patient experiences during hospitalization. Shared governance was measured using three items from the "Participation in Hospital Affairs" subscale of the Practice Environment Scale of the Nursing Work Index (PES-NWI). Nurse job outcomes and quality of care were measured using the Penn Multi-State Nursing Care and Patient Safety Survey of RNs. The HCAHPS survey provided information on patient measures while the AHA Annual Survey provided information about hospital characteristics, such as population density, teaching status, ownership, technology status, and size. Magnet recognition status was acquired from the American Nurses Credentialing Center (ANCC) website.

      Findings

      Shared governance has a large impact on patient outcomes. This study found that not only did shared governance contribute to more positive patient outcomes, but patients also received a higher quality of care from their nurses. Nurses are in direct contact with patients for the greatest amount of time and thus are intimately in touch with patients' wants and needs. They forge a personal relationship with the patient that no other health care provider can or will. Nurses are therefore a valuable asset to hospital administration in directing resources and strategies to increase flow efficiency.

      With the inclusion of floor nurses in the shared governance model, reports of poor patient outcomes, poor safety, and poor quality of care have been shown to decline. In effect, this has lowered the costs of penalties against the hospital for patient care problems, mismanaged discharge instructions and subsequent home care, and readmissions, and has increased reimbursement for quality of care. Hospitals have also saved millions of dollars by lowering their rate of nurse turnover (Kutney-Lee et al. 2016). In addition, the shared governance model has been shown to increase nurse satisfaction and reduce nurse burnout and the intent to leave. By incorporating floor nurses into the shared governance model, they feel more invested in contributing to a system-level approach to improving both patient and nurse outcomes (Kutney-Lee et al. 2016).

      Nursing Implications

      Shared governance is extremely important to the nursing profession because it has been shown to increase employee engagement, which is related to an increase in job satisfaction, retention, profitability, and performance (Kutney-Lee et al. 2016). This study shows that nurses who worked at institutions where they had a greater opportunity to be engaged in shared governance were more likely to report better patient experiences and superior quality of care (Kutney-Lee et al. 2016). In our program, professionalism has been emphasized as a key factor in being a BSN-prepared nurse. A huge factor that relates to being a professional nurse is being actively involved in nursing boards, policies, and interprofessional teams, along with being leaders at the bedside. These factors help hospitals attain Magnet Recognition because they lead to structural empowerment for nurses in the workplace (American Nurses Credentialing Center, 2017). Through improving patient outcomes and satisfaction, nursing satisfaction and reimbursement rates increase as well. From a micro- and macrosystem level, this is important because nurse turnover and low patient satisfaction scores tend to increase hospital costs. Hiring and training new staff is expensive and time-consuming and can lower patient outcomes. These lower patient outcomes and satisfaction levels can result in lower HCAHPS scores, which reduces reimbursement amounts to the hospital. The study shows that nurses at hospitals where shared governance was promoted were less likely to report "poor confidence in their patients' ability to manage their care after discharge," thus reducing readmission costs for the hospital (Kutney-Lee et al. 2016, p. 610). Increasing shared governance in hospitals is therefore more fiscally responsible for the hospital and benefits the nursing profession as a whole.

      This information can benefit us as future nurses by helping us realize the importance of shared governance when looking at future employers. Being able to have a say and make decisions about how we are allowed to practice gives us greater autonomy. This ability to practice autonomously and feel empowered to practice in a meaningful way leads to structural empowerment and greater job satisfaction (Laschinger, Finegan, Shamian, & Wilk, 2001). It will be important as new graduate nurses and as future nurse leaders to keep this information in mind for the well-being of our patients, employees, and ourselves.

      References

      American Nurses Credentialing Center (2009). Announcing a new model for ANCC’s magnet recognition program. Retrieved from http://www.nursecredentialing.org/Magnet/NewMagnetModel.aspx

      Kutney-Lee, A., Germack, H., Hatfield, L., Kelly, S., Maguire, P., Dierkes, A., Del Guidice, M., & Aiken, L. H. (2016). Nurse engagement in shared governance and patient and nurse outcomes. The Journal of Nursing Administration, 46(11), 605-612. doi:10.1097/nna.0000000000000412

      Laschinger, H.K.S., Finegan, J., Shamian, J., & Wilk, P. (2001). Impact of structural and psychological empowerment on job strain in nursing work settings: Expanding Kanter’s model. The Journal of Nursing Administration, 31(5), 260-272. doi: 10.1097/NNA.0000000000000080


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 13, David Reardon commented:

      This analysis of perinatal psychiatric episodes by Munk-Olsen T, 2016<sup>1</sup> is flawed by the failure to examine the effects of prior pregnancy losses. Numerous studies have shown that prior fetal loss, either from miscarriage, stillbirth, or induced abortion, increases the risk of psychiatric disorders during and after subsequent pregnancies.<sup>2-7</sup> There is even a dose effect, with multiple losses associated with elevated rates compared to a single loss.<sup>2</sup>

      Notably, the heightened risk of mental illness following miscarriage and abortion has also been confirmed by several of Munk-Olsen’s own studies.<sup>8-10</sup> Unfortunately, while abortion was used as a control variable in two cases, the effects were not described.<sup>8,10</sup>

      In light of the literature, Munk-Olsen T, 2016's conclusion that it is not possible to “predict which women will become ill postpartum”<sup>1</sup> is an overstatement. There is strong evidence that prior fetal loss is a risk factor.

      It is strongly recommended that the authors of this most recent study<sup>1</sup> should publish a reanalysis showing the effects of prior pregnancy loss relative to (a) one or more abortions and (b) one or more miscarriages or other natural losses. These results could lead to improved screening to identify women who may benefit from additional care.

      Editors and peer reviewers should be alert to the recommendation that all studies relative to the intersection between mental and reproductive health should always consider the effects of prior pregnancy loss.<sup>11-13</sup> In particular, both the Royal College of Psychiatrists<sup>14</sup> and the American Psychological Association<sup>15</sup> have lamented the lack of high quality studies examining the statistical associations between abortion and mental health. Record linkage studies from national data sets, such as that examined by Munk-Olsen, can help to fill this gap of knowledge . . . but only if they include analyses examining these effects.

      References

      1) Munk-Olsen T, Maegbaek ML, Johannsen BM, et al. Perinatal psychiatric episodes: a population-based study on treatment incidence and prevalence. Transl Psychiatry. 2016;6(10):e919. doi:10.1038/tp.2016.190.

      2) Giannandrea SAM, Cerulli C, Anson E, Chaudron LH. Increased risk for postpartum psychiatric disorders among women with past pregnancy loss. J Womens Health (Larchmt). 2013;22(9):760-768. doi:10.1089/jwh.2012.4011.

      3) Gong X, Hao J, Tao F, et al. Pregnancy loss and anxiety and depression during subsequent pregnancies: data from the C-ABC study. Eur J Obstet Gynecol Reprod Biol. 2013;166(1):30-36. doi:10.1016/j.ejogrb.2012.09.024.

      4) Blackmore ER, Côté-Arsenault D, Tang W, et al. Previous prenatal loss as a predictor of perinatal depression and anxiety. Br J Psychiatry. 2011;198(5):373-378. doi:10.1192/bjp.bp.110.083105.

      5) Räisänen S, Lehto SM, Nielsen HS, Gissler M, Kramer MR, Heinonen S. Risk factors for and perinatal outcomes of major depression during pregnancy: a population-based analysis during 2002-2010 in Finland. BMJ Open. 2014;4(11):e004883. doi:10.1136/bmjopen-2014-004883.

      6) Montmasson H, Bertrand P, Perrotin F, El-Hage W. Facteurs prédictifs de l’état de stress post-traumatique du postpartum chez la primipare [Predictive factors of postpartum post-traumatic stress disorder in primiparous women]. J Gynecol Obstet Biol Reprod (Paris). 2012;41(6):553-560. doi:10.1016/j.jgyn.2012.04.010.

      7) McCarthy F, Moss-Morris R, Khashan A, et al. Previous pregnancy loss has an adverse impact on distress and behaviour in subsequent pregnancy. BJOG An Int J Obstet Gynaecol. 2015;122(13):1757-1764. doi:10.1111/1471-0528.13233.

      8) Munk-Olsen T, Bech BH, Vestergaard M, Li J, Olsen J, Laursen TM. Psychiatric disorders following fetal death: a population-based cohort study. BMJ Open. 2014:1-6. doi:10.1136/bmjopen-2014-005187.

      9) Meltzer-Brody S, Maegbaek ML, Medland SE, Miller WC, Sullivan P, Munk-Olsen T. Obstetrical, pregnancy and socio-economic predictors for new-onset severe postpartum psychiatric disorders in primiparous women. Psychol Med. 2017:1-15. doi:10.1017/S0033291716003020.

      10) Munk-Olsen T, Agerbo E. Does childbirth cause psychiatric disorders? A population-based study paralleling a natural experiment. Epidemiology. 2015;26(1):79-84. doi:10.1097/EDE.0000000000000193.

      11) Reardon DC. Lack of pregnancy loss history mars depression study. Acta Psychiatr Scand. 2012;126(2):155. doi:10.1111/j.1600-0447.2012.01880.x.

      12) Sullins DP. Abortion, substance abuse and mental health in early adulthood: Thirteen-year longitudinal evidence from the United States. SAGE Open Med. 2016;4(0):2050312116665997. doi:10.1177/2050312116665997.

      13) Coleman PK. Abortion and mental health: Quantitative synthesis and analysis of research published 1995-2009. Br J Psychiatry. 2011;199(3):180-186.

      14) National Collaborating Centre for Mental Health. Induced Abortion and Mental Health: A Systematic Review of the Mental Health Outcomes of Induced Abortion, Including Their Prevalence and Associated Factors. London, UK: Academy of Medical Royal Colleges; 2011. http://www.aomrc.org.uk/wp-content/uploads/2016/05/Induced_Abortion_Mental_Health_1211.pdf.

      15) Major B, Appelbaum M, Beckman L, Dutton MA, Russo NF, West C. Report of the APA Task Force on Mental Health and Abortion. Washington, DC: American Psychological Association; 2008. http://www.apa.org/pi/women/programs/abortion/mental-health.pdf.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 06, David Reardon commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Dec 05, David Reardon commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 23, Christopher Southan commented:

      With a reported IC50 of 28 μM, this compound can be neither potent nor selective.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 24, GARRET STUBER commented:

      • This post-publication peer review was written for a group assignment for NBIO733: Circuits and Behavior – A journal club course organized by Dr. Garret Stuber at the University of North Carolina at Chapel Hill. This critique was written by students in the course, and edited by the instructor.

      The recent work by Kim, J. et al. provides excellent evidence of genetically, spatially, and functionally distinct cell types in the basolateral amygdala (BLA). Studies aimed at molecular marker discovery often turn to less precise experiments such as bulk mRNA sequencing or proteomics in heterogeneous tissues to profile cell markers. In this study, the authors first identify changes in the transcriptional profiles of Fos+ (activated) cells following controlled delivery of appetitive or aversive stimuli in order to identify correlated transcriptional markers. The authors convincingly demonstrate that two distinct cell populations marked by the expression of unique genes (Ppp1r1b+ and Rspo2+) are preferentially activated by reward-related or aversion-related stimuli.

      In open discussion, we considered the use of Cartpt-Cre to represent Ppp1r1b+ cells (as opposed to using/generating a Ppp1r1b-Cre mouse). While the supplementary figures demonstrate that this Cre-driver mouse also labels several Ppp1r1b- cells (~23%), the authors demonstrate consistent functional properties of Cartpt-Cre cells across multiple behavioral and electrophysiological paradigms. This provides strong evidence that the use of this animal is justified and informative of their proposed circuit. Additional experiments to further verify the use of this animal could test positive and negative valence in the context of other sensory modalities (olfactory, gustatory, etc.) comparable to the initial c-Fos expression experiments.

      An additional concept that would be interesting to explore is the relative ‘strength’ each population of neurons appears to have with respect to positive and negative valences. Behaviorally, it appears Rspo2 neurons may have a greater influence on their respective valence (Figure 4b-e) as well as having a larger antagonistic effect (Figure 5a-f). This is a difficult claim to make considering the opposing behavioral paradigms cannot be considered to have equal strength in their respective valence. However, stronger antagonistic silencing illustrated by c-Fos (Figure 5g-i) and cell recordings (Figure 6a-h) brings up the possibility. Importantly, the fact that Rspo2 neurons outnumber Ppp1r1b neurons (Table 1) in the BLA may contribute to this. The potential for antagonistic microcircuits through local inhibitory interneurons is an additional avenue to explore. Another factor in this circuitry is whether Rspo2 cells form direct synapses with other Rspo2 cells (and likewise for Ppp1r1b cells) to form a synchronous circuit upon stimulation. Establishing whether these cell types share connectivity with neurons of the same (or similar) identity would be informative of the dynamics of the circuit.

      An important note the authors highlight is the likelihood that different cell subpopulations reside within the Rspo2+ and Ppp1r1b+ neuron groups. Further exploration of markers found in the initial microarray analysis may shed light on these subpopulations and provide insight as to how BLA cells are programmed to function. Additionally, next generation single cell RNA-sequencing techniques could provide the necessary acuity in transcriptomic profiling of these potentially heterogeneous cell types.

      The diversity of projection termini from these neurons also suggests cell heterogeneity and highlights what are possibly the most interesting findings of this article. Kim, J. et al. build on previous evidence that diverging circuits and cell populations encode positive and negative valence information separately in the BLA [1, 2]. Here, the authors successfully label and characterize these cell populations, but note important differences in projection targets not realized in previous studies. Firstly, Kim, J. et al. found that positive valence-associated neurons project to the medial nucleus of the central amygdala (CeM), contrary to previous findings that projections to this area are largely associated with aversive stimuli [1, 2]. Secondly, Kim et al. found that both positive and negative valence BLA neurons project to the nucleus accumbens (NAc). This builds on previous work by Namburi et al., who showed that inhibiting NAc-projecting BLA neurons did not affect fear or reward behavior in the context of conditioned learning [1]. The proposed heterogeneity of NAc-projecting BLA neurons described in the current paper may account for this.

      To conclude, the recent findings of structurally, spatially, and functionally antagonistic neurons in the BLA provide an interesting and important avenue to further dissect circuit architecture underlying complex behaviors as well as providing a genetic entry point into better understanding these circuits.

      [1] Namburi, P. et al. (2015). A circuit mechanism for differentiating positive and negative associations. Nature. 520, 675-678. [2] Beyeler, A. et al. (2016). Divergent routing of positive and negative information from the Amygdala during memory retrieval. Neuron. 90, 2, 348-361.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 15, Stephen Strum commented:

      Important paper needing follow-up re durability of PSA response post RT. Would be helpful to have seen predictions of local vs systemic disease based on neural nets, nomograms & how they related to Axumin findings.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 29, Steve Alexander commented:

      Oleamide (https://pubchem.ncbi.nlm.nih.gov/compound/5283387, called 9-octadecenamide here) has previously been investigated as a ligand at all three PPARs in vitro (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3414753/). It is interesting to see the presence of other primary amides in the brain, notably palmitamide (https://pubchem.ncbi.nlm.nih.gov/compound/69421, called hexadecanamide here). Essential next steps will be to identify the synthetic and degradative pathways associated with these ligands, and how/whether these compounds change with patho/physiological influences.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 24, Sean Ekins commented:

      And here is a post about why it took so long to reach PubMed: http://www.collabchem.com/2016/10/18/zika-homology-models-paper-makes-it-to-pubmed-6-months-after-publishing-in-f1000research/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 24, Sean Ekins commented:

      More details and press release are here http://www.collabchem.com/2016/05/19/zika-open-becomes-openzika-on-ibm-world-community-grid/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 24, Sean Ekins commented:

      Here is an update to the article in the form of slides - also goes into far more detail http://www.slideshare.net/ekinssean/open-zika-presentation


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Oct 18, Christopher Southan commented:

      Ancillary information: https://cdsouthan.blogspot.se/2016/02/med-chem-starting-points-for-zika.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 08, Jeffrey Gross commented:

      Please be informed that the submission of our manuscript was cancelled within 24 hours of its initial submission to Experimental and Molecular Pathology. Unfortunately, and despite our many subsequent reminders, the journal went ahead and sent our paper for review and then published it online without our consent.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 12, Meg Waraczynski commented:

      Thank you for your insights. In hindsight, we should have been much clearer in indicating that we were only strongly inferring the involvement of CaV1.3 channels in our behavioral results, but that we have no direct evidence for this. The inference was based on the facts reviewed in the Introduction, that (1) only CaV1 type channels have been linked to the activity of basal forebrain medium spiny neurons (reference 4); (2) of the CaV1 family, only 1.2 and 1.3 type channels are abundant in the brain (reference 3); and (3) the activation dynamics of 1.3 channels correspond much more closely to the activity state dynamics of medium spiny neurons than do the activation dynamics of 1.2 channels (reference 26). We hoped to use 1.3-specific drugs but, as noted in the Introduction, all such drugs we could find required the use of brain-toxic solvents. The dosages we used were selected based on dosages used by others who intracerebrally injected these drugs to affect behavior (references 1, 6, 7, and 16). We did not intend to focus on implicating CaV1.3 channels specifically in our observations, nor did we intend to have others use our paper as evidence that diltiazem and verapamil are CaV1.3-specific. We regret if this occurs. Our tentative conclusions as to the reward-relevant function of the system we are studying would remain the same even if it were found that our drug injections acted on mechanisms other than CaV1.3 channels specifically. We will be much more cautious when referring to this work in the future to emphasize these functional conclusions and not to implicate CaV1.3 channels specifically.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jan 03, Joerg Striessnig commented:

      From a pharmacological point of view this is a very poor paper. It is common knowledge that verapamil and diltiazem have never been shown to be selective for Cav1.3 channels. Differential interaction with subdomains of the channel (their ref 19) has been studied with skeletal muscle channels and later with Cav1.2 (class C) L-type channels. Moreover, due to the higher concentrations required for L-type channel block, verapamil and diltiazem also tend to inhibit other ion channels, such as Cav2 channels (PMIDs: 8574653, 10385261), at concentrations also inhibiting L-type channels. Here concentrations of 5 micrograms drug/0.5 microliter were infused, corresponding to about 20 mM concentrations, 10 times higher than the extracellular calcium concentration. There is no published evidence justifying their interpretation of a specific involvement of Cav1.3, as misleadingly mentioned even in the title. This interpretation by the authors and the failure by the reviewers to point out these limitations confuse readers and may even trigger further misleading experiments citing this paper as evidence for Cav1.3-selectivity of diltiazem and verapamil.
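
      As a rough check of the ~20 mM figure (a minimal sketch; the molar masses below are assumed values for the hydrochloride salts and are not taken from the paper):

          # 5 micrograms of drug in 0.5 microliters of vehicle = 10 g/L
          conc_g_per_l = 5e-6 / 0.5e-6                   # grams per liter
          molar_mass = {"verapamil HCl": 491.1,          # g/mol (assumed)
                        "diltiazem HCl": 451.0}          # g/mol (assumed)
          for drug, mw in molar_mass.items():
              print(drug, round(conc_g_per_l / mw * 1000, 1), "mM")
          # ~20-22 mM, i.e. roughly ten-fold above the ~1-2 mM extracellular calcium concentration.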


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 26, Lydia Maniatis commented:

      Natural scenes

      The use of “natural scene statistics” is popular in current vision science and directly linked to its conceptual confusion.

      In the words of the authors, “Biological systems evolve to exploit the statistical relationships in natural scenes….”

      I want to first address the authors’ use of the term “natural scenes” and its implications, and move on to the problem of the validity and implications of the above quote in a subsequent comment.

      “Natural scenes” is a very broad category, even broader given that the authors include in it man-made environments. In order to be valid on their own terms, the “statistics” involved – i.e. the correlations between “cues” and physical features of the environment – must hold across very different distances and orientations of the observer to the world, and across very different environments, including scenes involving close-ups of human faces.

      Describing 96 photographs taken of various locations on the University of Texas campus from a height of six feet, with the camera perpendicular to the ground, at distances of 2-200 meters, as a theoretically meaningful, representative sample of “natural scenes” seems rather flaky. If we include human artifacts, then what counts as “non-natural scenes”?

      The authors themselves are forced to confront (but choose to sidestep) the sampling problem when they note that “previous studies have reported that surfaces near 0° of slant are exceedingly rare in natural scenes (Yang & Purves, 2003), whereas we find significant probability mass near 0° of slant. That is, we find—consistent with intuition—that it is not uncommon to observe surfaces that have zero or near-zero slant in natural scenes (e.g., frontoparallel surfaces straight ahead).”

      (Quite frankly, the authors’ intuition is causing them to confuse cause and effect, since we have a behavioral tendency to orient ourselves to objects so that we are in a fronto-parallel relationship to surfaces rather than in an oblique relationship to them, thus biasing the “statistics” in this respect).

      They produce a speculative, technical and preliminary rationalization for the discrepancy between their distributions and those of Yang and Purves, leaving clarification to “future research.”

      What they don’t consider is the sampling problem. Is there any doubt WHATSOEVER that different “natural scenes” - or different heights, or different angles of view, or different head orientations - will produce very different “prior probabilities”? If this is a problem, it isn’t a technical one.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 26, Lydia Maniatis commented:

      Orientation isn’t prior to shape

      The idea that slant/tilt are judged prior to shape, which Burge et al have adopted, suffers from the problems discussed above, and from empirical evidence that contradicts it.

      The notion dates back at least to Marr’s 2.5D sketch. As Pizlo (2008) observes, “Marr assumed that figure-ground organization was not necessary for providing the percept of the 3D shape of an object” and asks “How could he get away with this so long after the Gestalt psychologists had revolutionized perception by demonstrating the importance of figure-ground organization?”

      Pizlo references experiments using wire objects (e.g. Rock & DiVita, 1987) that have shown that figure-ground organization is key to the shapes that actual 3D objects produce in perception, “even when binocular disparity or other depth cues are available.”

      In general, if Marr had been correct in assuming that depth orientations of edges are prior to, and sufficient or necessary for, shape perception, then monocular perception would have no objective content, and 3D pictorial percepts would not occur, unless some special mechanisms had evolved just for this purpose.

      In short, tilt-to-shape is not a principled, credible premise.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 26, Lydia Maniatis commented:

      the right angle, and is linked to each of the other two, and each of the latter sit at the apex of acute angles, and are connected to each other. (What is being grouped are all the points inside of the perceptually constructed triangular outline).

      If we now add a single point to the group, so that the set is compatible with a square, our rule will require that the connection between the two latter points be discarded, as both become the apex of right angles bounding a square and both are linked to the new point. This is, in fact, what happens in perception. So applying a rule to the three points, and a rule to the single point, locally, would not add up to the square contour, in principle; and does not, in perception.

      Thus, when the authors assert that…

      “the visual system starts with local measurements then combines those local measurements into the global representations; the more accurate the local measurements, the more accurate the global representation,”

      …they are making an assertion that might sound simple and commonsensical to a layman (which is perhaps why it is so tenacious) but which is not justified for a vision scientist, any more than it is to say that we can build a house of cards one card at a time, or hear the sound of one hand clapping.

      The use of “local cues” is a contemporary version of the reductionist approach to perception known as structuralism/introspectionism with its “sensory elements.” This approach couldn’t address the logical problem of organization of the visual field into shaped objects, discussed above, without invoking “experience” in a paradoxical and inconsistent fashion. “Cues” are similarly impotent.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Oct 25, Lydia Maniatis commented:

      As discussed, the best the authors could achieve in predicting measured “3D tilt” using their cues was not very good. Nevertheless, they describe their results as “complex,” “rich” and “detailed” in the sense that they feel able to discern some patterns in the generally inaccurate data that might be theoretically important or useful. For example, they say performance was often better when the three cues were in agreement. They propose to go on to compare performance of the model to performance of humans in psychophysical experiments. It seems to me that an important step to take prior to psychophysical testing is to test the model on its own terms; that is, to take a second set of “natural” images (perhaps of a different campus, or a national park) and test whether the ad hoc model derived from the first set will produce a qualitatively similar dataset. Will the two datasets, in all their richness and complexity, be mutually statistically consistent? How will the authors compare them? If the data do not prove qualitatively repeatable, then p/p experiments would seem premature.

      p.s. The open-endedness of the term "natural scene," in which the authors include man-made environments, imposes quite a serious replicability burden on the model. (The sampling problem (assuming the inductive approach was viable) includes the fact that arguably more time is spent by humans looking at human faces and bodies than at trees and shrubs). How many "scenes" should we test? Nevertheless, at least one attempt seems a minimum.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Oct 22, Lydia Maniatis commented:

      Part 1

      This paper is all too similar to a large proportion of the vision literature, in which fussy computations thinly veil a hollow theoretical core, composed of indefensible hypotheses asserted as fact (and thus implicitly requiring no justification), sometimes supported by citations that only weakly support them, if at all. The casual yet effective (from a publication point of view) fashion in which many authors assert popular (even if long debunked) fallacies and conjure up other pretexts for what are, in fact, mere measurements without actual or potential theoretical value is well on display here.

      What is surprising in, perhaps, every case, is the willful empirical agnosia and lack of common sense, on every level – general purpose, method, data analysis - necessary to enable such studies to be conducted and published. A superficial computational complexity adds insult to injury, as many readers may wrongly feel they are not competent to understand and evaluate the validity of a study whose terms and procedures are so layered, opaque and jargony. However, the math is a distraction.

      Unjustified and/or empirically false assumptions and procedures occur, as mentioned, at every level. I discuss some of the more serious ones below (this is the first of a series of comments on this paper).

      1. Misleading, theoretically and practically untenable, definitions of “3D tilt” (and other variables).

      The terms slant and tilt naturally refer to a geometrical characteristic of a physical plane or volume (relative to a reference plane). The first sentence of Burge et al’s abstract gives the impression that we are talking about tilt of surfaces: “Estimating 3D surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues …should be combined to estimate 3D tilt in natural scenes.” As it turns out, the authors perform a semantic but theoretically pregnant sleight of hand in the switch from the phrase “3D surface orientation (slant and tilt)” to the phrase “3D tilt” (which is also used in the title).

      The obvious inference from the context is that the latter is a mere short-hand for the former. But it is not. In fact, as the authors finally reveal on p. 3 of their introduction, their procedure for estimating what they call “3D tilt” does not allow them to correlate their results to tilt of surfaces: “Our analysis does not distinguish between the tilt of surfaces belonging to individual objects and the tilt (i.e. orientation [which earlier was equated with “slant and tilt”]) of depth discontinuities…We therefore emphasize that our analysis is best thought of as 3D tilt rather than 3D surface tilt estimation.”

      “3D tilt” is, in effect, a conceptually incoherent term made up to coincide with the (unrationalised) procedure used to arrive at certain measures given this label. I find the description of the procedure opaque, but as I am able to understand it, small patches of images are selected, and processed to produce “3D tilt” values based on range values collected by a range finder within that region of space. The readings within the region can be from one, two, three, four, or any number of different surfaces or objects; the method does not discriminate among these cases. In other words, these local “3D tilt values” have no necessary relationship to tilt of surfaces (let alone tilt of objects, which is more relevant (to be discussed) and which the authors don’t address even nominally). We are talking about a paradoxically abstract, disembodied definition of “3D tilt.” As a reader, being asked to “think” of the measurements as representing “3D tilt” rather than “3D surface tilt” doesn’t help me understand either how this term relates, in any useful or principled way, to the actual physical structure of the world, nor to the visual process that represents this world. The idea that measuring this kind of “tilt” could be useful to forming a representation of the physical environment, and that the visual system might have evolved a way to estimate these intrinsically random and incidental values, is an idea that seems invalid on its face - and the authors make no case for it.

      They then proceed to measure three other home-cooked variables, in order to search for possible correlations between these and “3D tilt.” These variables are also chosen arbitrarily, i.e. in the absence of a theoretical rationale, based on: “simplicity, historical precedence, and plausibility given known processing in the early visual system” (p. 2). Simplicity is not, by itself, a rationale – it has to have a rational basis. At first glance, at least the third of these reasons would seem to constitute a shadow of a theoretical rationale, but it is based on sparse, premature and over-interpreted physiological data, primarily of V1 neuron activity. Furthermore, the authors’ definitions of their three putative cues (disparity gradient, luminance gradient, texture gradient) are very particular, assumption-laden, paradoxical, and unrationalised.

      For example, the measure of “texture orientation” involves the assumption that textures are generally composed of “isotropic [i.e. circular] elements” (p. 8). This assumption is unwarranted to begin with. Given, furthermore, that the authors’ measures at no point involve parsing the “locations” measured into figures and grounds, it is difficult to understand what they can mean by the term “texture element.” Like tilt, reference to an “isotropic texture element” implies a bounded, discrete area of space with certain geometric characteristics and relationships. It makes no sense to apply it to an arbitrary set of pixel luminances.

      Also, as in the case of “3D tilt,” the definition of “texture gradient” is both arbitrary and superficially complex: “we define [the dominant orientation of the image texture] in the Fourier domain. First, we subtract the mean luminance and multiply by (window with) the Gaussian kernel above centered on (x, y). We then take the Fourier transform of the windowed image and compute the amplitude spectrum. Finally, we use singular value decomposition ….” One, two, three…but WHY did you make these choices? Simplicity, historical precedence, Hubel and Wiesel…?
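
      For what it is worth, the quoted recipe can be made concrete. The sketch below is my own illustrative reconstruction of a windowed-Fourier, SVD-based dominant-orientation estimate; the patch size, Gaussian width and the particular SVD summary step are all my assumptions, not Burge et al.'s code.

          import numpy as np

          def dominant_spectral_orientation(patch, sigma=8.0):
              # Subtract the mean luminance and window the patch with a Gaussian.
              h, w = patch.shape
              y, x = np.mgrid[0:h, 0:w]
              cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
              gauss = np.exp(-(((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2)))
              windowed = (patch - patch.mean()) * gauss
              # Amplitude spectrum of the windowed patch.
              amp = np.abs(np.fft.fftshift(np.fft.fft2(windowed)))
              # Summarize the spectrum-weighted cloud of frequency coordinates with an SVD;
              # the first right singular vector points along the dominant direction of spectral energy.
              coords = np.stack([(x - cx).ravel(), (y - cy).ravel()], axis=1)
              _, _, vt = np.linalg.svd(coords * amp.ravel()[:, None], full_matrices=False)
              return np.degrees(np.arctan2(vt[0, 1], vt[0, 0])) % 180.0

      Note that the dominant direction of spectral energy is orthogonal to the dominant orientation of the texture in the image, a detail a reader cannot check from the quoted description alone; which is exactly the problem: a list of operations is not a rationale.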

      If, serendipitously, the authors’ choices of things to measure and compare had led to high correlations, they might have been justified in sharing them. But as it turns out, not surprisingly, the correlations between “cues” and “tilt” are “typically not very accurate.” Certain (unpredicted) particularities of the data to which the authors speculatively attribute theoretical value (incidentally undermining one of their major premises) will be discussed later.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 19, Jonathan Eisen commented:

      Note - I am one of the authors of this paper.

      There is a sentence in the paper that could be worded more carefully. I was pointed to this by a colleague. The sentence is "Gordonia sp. strain UCD-TK1 contains 5,032 coding sequences, and 64 noncoding RNAs."

      It would be more accurate to say "The annotation of Gordonia sp. strain UCD-TK1 contains 5,032 predicted coding sequences and 64 putative noncoding RNAs."

      This would make it more clear that we do not have experimental evidence regarding how many coding sequences or ncRNAs are present in this organism.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 02, Karsten Suhre commented:

      I think these PubMed pages should be a bit more self-explanatory - I came to this page while searching for "Metabolomic profiles delineate potential role for sarcosine in prostate cancer progression."

      On first view this looked to me as if there was a research misconduct problem with this paper. However, after reading through the linked PMC document, I learned that the contrary was the case: The papers listed here have been plagiarized in an NIH grant application.

      These papers are the victims, not the perpetrators - but on first glance this web site suggests otherwise.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 02, Eric Robinson commented:

      The method and analysis adopted in this research are flawed and the conclusions are incorrect.

      Action before commenting on pubmed: I contacted the journal and the authors in question with my concerns. The journal ignored a number of my emails and after an internal review decided not to issue a correction or retraction. I requested the results of the internal review, but the journal has not responded. The first author appears to have been a Masters student at the senior author’s institute and I was unable to contact the first author. The senior author (BM Corfe) informed me on the telephone that he was not prepared to discuss the research and instead advised me to raise any points in a public domain. The senior author also advised me that he would not be considering a correction or retraction of the work and that the research team stood by all of the conclusions made. Given the journal’s position on the study and my frustration with their handling of this, I was not prepared to support the journal by publishing a letter to the editor.

      Research question: The researchers wanted to examine whether there is a link between self-reported appetite (self-reported subjective feelings of hunger) and energy intake; do participants who report feeling hungry eat more than participants who report feeling less hungry? They conducted what they described as a ‘systematic’ review and examined over 400 articles.

      Conclusions drawn: The authors conclude that self-reported appetite (e.g. subjective feelings of hunger) ‘does not predict energy intake’ (title of article) and an associated University press release stated that ‘there is no link between how hungry we feel and the amount of calories we consume’.

      Is this a spoof or hoax article? At first I thought this article may be a hoax, because the conclusion that self-reported hunger is in no way predictive of how much a person eats is odd. Previous research shows that self-reported hunger/appetite does predict how much a person eats, but as you might expect, the correlation between self-reported hunger and energy intake is not perfect. Recently, Sadoul et al. (1) show this to be the case in an analysis of 23 studies that assessed self-reported appetite and ad-libitum meal energy intake. Robinson et al. (2) show this to be the case in an analysis of 31 studies that assessed self-reported hunger and ad-libitum intake of snack foods.

      Flawed method: There are a number of texts on best practice for conducting systematic reviews and synthesising data from multiple studies. Rather than using standard meta-analytic methods (e.g. combining weighted correlation coefficients between self-reported hunger and energy intake from studies), the researchers scored each study in their review as either providing evidence of a ‘link’ or evidence of ‘no link’ between self-reported hunger and energy intake. The scoring system used was inappropriate in a number of ways. For example, if an experimental manipulation in a study led to a change in energy intake without a change in self-reported hunger, in the present review this constituted evidence that self-reported hunger does not predict energy intake. This line of reasoning is a logical fallacy because it is based on the premise that a) energy intake can only be affected by self-reported hunger and b) energy intake being affected by anything other than subjective feelings of hunger proves that subjective appetite is in no way related to energy intake. Energy intake can be increased or decreased by a multitude of factors and many of these will not act on energy intake by altering self-reported hunger.

      Flawed analysis: The authors’ main analysis was dependent on counting studies that provided statistically significant findings vs. those that did not, which is not considered best practice as it ignores considerations of sample size, statistical power and how heavily each study should be weighted in analyses. The above points aside, the authors went on to report that approximately 49% of studies they surveyed found a ‘link’ and 51% found ‘no link’ between subjective self-reported hunger and energy intake. This is actually highly convincing evidence that there is a link between self-reported hunger and energy intake, because if there was ‘no link’ we would expect to see closer to only 5% of all studies finding a ‘link’ (typical alpha level of .05), as opposed to the 49% reported.
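
      To make the arithmetic behind this point explicit, here is a minimal sketch using hypothetical round numbers (100 scored studies, 49 reporting a 'link'); the exact study counts would need to be taken from the review itself.

          from math import comb

          # Under a true null with a per-study false-positive rate of alpha = .05,
          # how surprising would 49 or more 'link' findings out of 100 studies be?
          n, k, alpha = 100, 49, 0.05
          p_tail = sum(comb(n, i) * alpha**i * (1 - alpha)**(n - i) for i in range(k, n + 1))
          print(f"P(>= {k} significant results out of {n} if there were truly no link) = {p_tail:.1e}")
          # The probability is vanishingly small: a ~49% 'hit rate' is evidence for a link, not against one.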

      Invalid conclusions: A combination of flawed methods and analyses results in incorrect conclusions.

      A sobering experience: I noticed this study because of the bizarre conclusion it made; hunger in no way relates to how much we eat. I reached out to the authors several times over email. In the end I had to ring the senior author’s office phone to speak to him, but as noted the senior author was not prepared to discuss his research or revise his position on this research. This to me is direct experiential evidence that some scientists do not appear to care about the quality and accuracy of research they conduct and publish.

      (1) Sadoul BC, Schuring EA, Mela DJ, Peters HP. The relationship between appetite scores and subsequent energy intake: an analysis based on 23 randomized controlled studies. Appetite 2014; 83: 153-159

      (2) Robinson E, Haynes A, Hardman CA, Kemps E, Higgs S, Jones A. The bogus taste test: Validity as a measure of laboratory food intake. Appetite 2017; 116: 223-231.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 20, Prajak Barde commented:

      One of the objectives of the meta-analysis by Bai B et al. is to assess the relationship between human cytomegalovirus (HCMV) and colorectal cancer (CRC) risk. The authors analyzed four studies to demonstrate that tumor tissues had a significantly higher rate of HCMV infection (OR = 6.59, 95% CI = 4.48–9.69), thus confirming that CRC tissue carries a significantly higher burden of HCMV DNA than adjacent normal tissue. However, it is not clear how the conclusion of an increased risk of CRC due to HCMV infection is drawn. As detection of HCMV DNA in CRC tissues by itself does not provide adequate evidence of a causal role of HCMV in CRC, further explanation is needed to justify the claim of an increased risk of CRC due to HCMV infection. In addition, in light of the multiple etiological factors responsible for causation of CRC (e.g. insufficient activity, high-fat diets, smoking and living in a developed country)2, the increased risk associated with HCMV relative to these other risk factors also needs to be ascertained and established.

      There are diagnostic challenges in detecting HCMV due to the “hit and run” mechanism of the virus. Although the authors considered the PCR technique, which showed a higher positive rate than in situ hybridization (ISH) and immunohistochemistry (IHC),3 the validity of the “negative results” of the studies considered for the meta-analysis should be ascertained before drawing any conclusions based on these results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 18, Daniel Corcos commented:

      Welch et al. assume wrongly that breast cancer incidence has been stable after the advent of screening mammography, although they find 132 more cases of BC a year per 100,000 women in the period following screening implementation. To justify this strange assumption they argue: “Those who postulate such substantial increases in underlying incidence, however, must explain why the increase coincides temporally with the introduction of screening, and why the incidence of the most aggressive form of the disease — metastatic breast cancer — remains essentially unchanged”. It is perfectly possible to explain both observations by the fact that early treatment is protective against metastasis, as expected and known for a century, and by the fact that mammography screening induces breast cancer at a much higher rate and with a shorter delay than usually expected.

      References

      Bleicher RJ, Ruth K, Sigurdson ER, et al. Time to Surgery and Breast Cancer Survival in the United States. JAMA Oncol 2016;2:330-9.

      Mathews FS. The Ten-Year Survivors of Radical Mastectomy. Ann Surg 1933;98:635-43.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 20, Miguel Lopez-Lazaro commented:

      Pancreatic cancer formation is gradual

      It is widely accepted that cancer development requires the sequential accumulation of DNA changes over years or decades. In this article, Notta et al. challenge this dogma. They analysed the genomes of more than 100 pancreatic tumours and found that many DNA changes occur simultaneously as a consequence of massive genomic rearrangements associated with catastrophic mitotic events. The authors argue that the formation of advanced pancreatic cancers is not gradual, and propose a new model in which the simultaneous accumulation of genetic alterations arising from mitotic errors rapidly leads to the development of invasive disease. However, cancer incidence data by age indicate that the time frame required for the formation of invasive pancreatic cancers is similar to that of other cancers in which these mitotic errors are rare, thereby indicating that the high frequency of catastrophic mitotic events in pancreatic tumours may be a consequence of the disease rather than a cause. In addition, the extremely low rates of pancreatic cancer in young people and the striking increase in its incidence with age strongly suggest that the formation of most invasive pancreatic cancers requires the gradual accumulation of DNA changes over several decades. This means that there is time and opportunity to detect and stop pancreatic carcinogenesis before the development of advanced disease.

      Full text at http://dx.doi.org/10.13140/RG.2.2.16865.92009


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 28, William Davies commented:

      Further evidence for a link between STS activity and Ctgf/Ccn2 expression has recently been obtained from in vivo and in vitro models of colorectal cancer (Gilligan et al., Estrogen Activation by Steroid Sulfatase increases Colorectal Cancer proliferation via GPER J Clin Endocrinol Metab. 2017 doi: 10.1210/jc.2016-3716)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 08, Christopher Tench commented:

      Could you possibly provide the coordinates analysed? Otherwise it is difficult to interpret the results.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 02, Suzy Chapman commented:

      ICD-11 Beta draft: Rationale for Proposal for Deletion of proposed new category: Bodily distress disorder

      March 8, 2017

      Full text:

      http://wp.me/pKrrB-4dc

      References:

      1 Creed F, Guthrie E, Fink P, Henningsen P, Rief W, Sharpe M, White P. Is there a better term than “medically unexplained symptoms”? J Psychosom Res. 2010 Jan;68(1):5-8. doi:10.1016/j.jpsychores.2009.09.004. [PMID: 20004295]

      2 Fink P, Schröder A. One single diagnosis, bodily distress syndrome, succeeded to capture 10 diagnostic categories of functional somatic syndromes and somatoform disorders. J Psychosom Res. 2010 May;68(5):415-26. [PMID: 20403500]

      3 Creed F, Gureje O. Emerging themes in the revision of the classification of somatoform disorders. Int Rev Psychiatry. 2012 Dec;24(6):556-67. doi: 10.3109/09540261.2012.741063. [PMID: 23244611]

      4 Gureje O, Reed GM. Bodily distress disorder in ICD-11: problems and prospects. World Psychiatry. 2016 Oct;15(3):291-292. doi: 10.1002/wps.20353. [PMID: 27717252]

      5 American Psychiatric Association. (2013). Somatic Symptom and Related Disorders. In Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.

      6 Frances A, Chapman S. DSM-5 somatic symptom disorder mislabels medical illness as mental disorder. Aust N Z J Psychiatry. 2013 May;47(5):483-4. [PMID: 23653063]

      7 Lam TP, Goldberg DP, Dowell AC, Fortes S, Mbatia JK, Minhas FA, Klinkman MS. Proposed new diagnoses of anxious depression and bodily stress syndrome in ICD-11-PHC: an international focus group study. Fam Pract. 2013 Feb;30(1):76-87. doi: 10.1093/fampra/cms037. Epub 2012 Jul 28. [PMID: 22843638]

      8 Ivbijaro G, Goldberg D. Bodily distress syndrome (BDS): the evolution from medically unexplained symptoms (MUS). Ment Health Fam Med. 2013 Jun;10(2):63-4. [PMID: 24427171]

      9 Goldberg DP, Reed GM, Robles R, Bobes J, Iglesias C, Fortes S, de Jesus Mari J, Lam TP, Minhas F, Razzaque B et al. Multiple somatic symptoms in primary care: A field study for ICD-11 PHC, WHO’s revised classification of mental disorders in primary care settings. J Psychosom Res. 2016 Dec;91:48-54. doi:10.1016/j.jpsychores.2016.10.002. Epub 2016 Oct 4. [PMID: 27894462]

      10 Medically Unexplained Symptoms, Somatisation and Bodily Distress: Developing Better Clinical Services, Francis Creed, Peter Henningsen, Per Fink (Eds), Cambridge University Press, 2011.

      11 Frances Creed and Per Fink. Presentations, Research Clinic for Functional Disorders Symposium, Aarhus University Hospital, May 15, 2014.

      12 Rief W, Isaac M. The future of somatoform disorders: somatic symptom disorder, bodily distress disorder or functional syndromes? Curr Opin Psychiatry September 2014 – Volume 27 – Issue 5 – p315–319. [PMID: 25023885]

      13 Chalder, T. An introduction to “medically unexplained” persistent physical symptoms. Presentation, Department of Psychological Medicine, King’s Health Partners, 2014. [Accessed 27 February 2017]

      14 Schumacher S, Rief W, Klaus K, Brähler E, Mewes R. Medium- and long-term prognostic validity of competing classification proposals for the former somatoform disorders. Psychol Med. 2017 Feb 9:1-14. doi: 10.1017/S0033291717000149. [PMID: 28179046]

      15 Fink P, Toft T, Hansen MS, Ornbol E, Olesen F. Symptoms and syndromes of bodily distress: an exploratory study of 978 internal medical, neurological, and primary care patients. Psychosom Med. 2007 Jan;69(1):30-9. [PMID: 17244846]

      16 Carroll L. Alice’s Adventures in Wonderland. 1885. Macmillan.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 17, Suzy Chapman commented:

      Firstly, it is to be welcomed that authors Gureje and Reed have published this progress report on the work of the ICD-11 Somatic Distress and Dissociative Disorders Working Group (S3DWG) in an open access journal. The revision of ICD cannot be described as a "transparent and inclusive" process when ICD revision Topic Advisory Groups and sub working groups publish progress reports and rationales for their proposals behind paywalls.

      I note the paper discusses the S3DWG's rationale for not including the word "somatic" in the name it proposes for its prototype disorder.

      There is, however, no discussion within the paper of the sub working group's rationale for proposing to use the disorder term "Bodily distress disorder (BDD)" when this term is already being used interchangeably in the literature [1-4] with "Bodily distress syndrome (BDS)" - a divergent construct and criteria set already operationalized in Denmark, in clinical and research settings [5].

      Omission of consideration within this paper of the potential impact for maintaining construct integrity within and beyond ICD-11 is troubling.

      The S3DWG's "Bodily distress disorder" construct, as defined for the ICD-11 core version, has strong conceptual congruency and characterization alignment with DSM-5's "Somatic symptom disorder (SSD)" and poor conceptual and characterization alignment with Fink et al (2010) "Bodily distress syndrome."

      It is noted that "Somatic symptom disorder" is also inserted into the ICD-11 Beta draft under Synonyms for BDD.

      In sum:

      ICD-11's proposed BDD is more closely aligned with DSM-5's SSD (Gureje and Reed, 2016).

      The term "BDD" is already used interchangeably in the field for the operationalized "BDS" disorder construct [1-4].

      That DSM-5's SSD and Fink et al's (2010) BDS are "very different" concepts, with different criteria sets, capturing different patient populations has been acknowledged by SSD work group chair, Joel E Dimsdale, and by Per Fink, Peter Henningsen and Francis Creed [6][7].

      The unsoundness of introducing into ICD a new disorder category that proposes to use terminology that is already closely associated with a different (and already operationalized) construct/criteria set and the potential for conflation between the two has yet to be acknowledged or addressed by the sub working group responsible for this recommendation.

      The S3DWG's choice of nomenclature needs referral back to the ICD-11 Revision Steering Group (RSG) and Joint Task Force (JTF) for urgent consideration of the implications of this proposed name for disorder integrity.

      References:

      1 An introduction to "medically unexplained" persistent physical symptoms, Presentation, Professor Trudie Chalder, Department of Psychological Medicine, King’s Health Partners, 2014, Slide #3 http://www.kcl.ac.uk/ioppn/depts/pm/research/imparts/Quick-links/Seminar-Slides/Seminar-7/Trudie-Chalder-intro.pdf

      2 Rief W, Isaac M. The future of somatoform disorders: somatic symptom disorder, bodily distress disorder or functional syndromes? Curr Opin Psychiatry September 2014 - Volume 27 - Issue 5 - p 315–319 Rief W, 2014

      3 Ivbijaro G, Goldberg D. Bodily distress syndrome (BDS): the evolution from medically unexplained symptoms (MUS). Ment Health Fam Med. 2013 Jun;10(2):63-4. Ivbijaro G, 2013

      4 Fink P, Toft T, Hansen MS, Ornbol E, Olesen F. Symptoms and syndromes of bodily distress: an exploratory study of 978 internal medical, neurological, and primary care patients. Psychosom Med. 2007 Jan;69(1):30-9. Fink P, 2007

      5 Fink P, Schröder A. One single diagnosis, bodily distress syndrome, succeeded to capture 10 diagnostic categories of functional somatic syndromes and somatoform disorders. J Psychosom Res. 2010 May;68(5):415-26. Fink P, 2010

      6 Medically Unexplained Symptoms, Somatisation and Bodily Distress: Developing Better Clinical Services, Francis Creed, Peter Henningsen, Per Fink (Eds), Cambridge University Press, 2011

      7 Francis Creed and Per Fink. Presentations, Research Clinic for Functional Disorders Symposium, Aarhus University Hospital, May 15, 2014.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 20, David Keller commented:

      Effects of Rheumatoid Arthritis, and Its Treatments, on Parkinson Disease

      Sung and colleagues derived two new and independent hypotheses from this study. First, they observed an inverse association between rheumatoid arthritis (RA) and the risk of subsequent development of Parkinson disease (PD), consistent with the hypothesis that RA is protective against PD. Second, they observed that the risk of developing PD was reduced even more in RA patients treated with biological disease-modifying anti-rheumatic drugs (DMARDs) but not in patients treated without DMARDs or with non-biological DMARDs. [1] This led to their second hypothesis, that "biologic DMARDs appear to further reduce the PD risk" in RA patients (more so than treatment with non-biological DMARDs, or no DMARDs), suggesting a possible "role of biologic DMARDs in PD treatment".

      In summary, the authors of this study propose two new hypotheses to fully explain their results:

      Hypothesis #1: Rheumatoid Arthritis disease is protective against the onset of Parkinson disease.

      Hypothesis #2: Biological DMARDS, as treatment for RA, confer additional protection against the onset of PD.

      Sung and colleagues point out that Hypothesis #1 contradicts "the hypothesis that chronic inflammation in RA may increase the risk of developing PD", citing a recent study that, similarly, "identified an inverse association between PD and systemic lupus erythematosus", and another confirmatory study that "reported a 30% reduction in the risk of developing PD in patients with RA and systemic involvement" [Sung's references #23 and #24]. Sung mentioned that RA patients are more likely to take NSAIDs, and that certain NSAIDs have been found to be protective against PD, but Sung claims that the association of RA with protection from PD withstood controlling for NSAID use. In a separate comment, I will address the errors I believe Sung and colleagues made when correcting their data for NSAID use, and how those errors could falsely support Hypothesis #1, above.

      However, suppose hypothesis #1 is true. The association of additional reduction of PD incidence with the use of biological DMARDs might not be due to their having an intrinsic neuroprotective effect, but, rather, to the fact that they are reserved for use in the most severe or refractory cases of RA. The increased level of RA disease activity and severity associated with the use of biological DMARDs could be the cause for the additional decreased risk of PD. The inability to distinguish whether the additional protection from PD was due to neuroprotective benefits of biological DMARDs, or due to the more severe RA which caused biological DMARDs to be "indicated" (medically needed), is an example of confounding by "indication bias".

      Biological DMARDs have likely been compared with non-biological DMARDs in randomized clinical trials for treatment of rheumatoid arthritis. If these trials included data on the new onset of PD, they could be pooled in a meta-analysis to test Sung's Hypothesis #2.

      Reference

      Sung YF, Liu FC, Lin CC, et al. Reduced risk of Parkinson disease in patients with rheumatoid arthritis: A nationwide population-based study. Mayo Clin Proc. 2016;91(10):1346-1353.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 19, David Keller commented:

      Only ibuprofen is associated with reduced PD risk - controlling for use of any NSAID introduces error

      The following quote is from the above paper by Sung [1]:

      "Previous studies [2,3] have reported that nonaspirin NSAIDs, particularly ibuprofen, may be associated with a reduced risk of developing Parkinson disease. However, after controlling for all comorbidities and NSAID use, patients with RA still exhibited a reduced risk of PD compared with patients without RA..." However, the two cited studies actually demonstrated that ibuprofen use is associated with significantly reduced risk of PD, but other commonly-used NSAIDs are not.

      In a landmark 2011 paper, not cited by Sung, Gao and colleagues reported a new observational trial plus a comprehensive meta-analysis, which concluded that ibuprofen, but not other NSAIDs, is associated with a significant 38% reduction in risk for PD [4]. Therefore, the quoted phrase should be corrected to read: "ibuprofen, but not other NSAIDs, is associated with a significantly reduced risk of developing PD".

      Sung and colleagues controlled for NSAID use, but not separately for ibuprofen use; their Table 3 presents 11 baseline variables, and the adjusted HR of each. NSAIDs are discussed together as a single group, exhibiting a small but significant 9% protective effect against PD, the result of diluting the larger 38% protection associated with ibuprofen with the lack of significant protection associated with other NSAIDs.

      Thus, the HR of 0.91 used by Sung to control for the use of any NSAID systematically under-corrects for the protection from PD for patients who took ibuprofen, while systematically over-correcting when the NSAID used was not ibuprofen. To eliminate these systematic errors, the study data should be reanalyzed, and corrected specifically for ibuprofen use, rather than for the use of any NSAID.
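
      A back-of-the-envelope sketch, using the published effect sizes but an ibuprofen usage share that is purely an assumption solved for here, illustrates how large this systematic error can be:

          import math

          # If ibuprofen carries HR = 0.62 (the 38% reduction in Gao et al.) and other
          # NSAIDs carry HR = 1.00, what share of NSAID use would have to be ibuprofen
          # for the pooled NSAID estimate to come out near Sung's HR = 0.91?
          hr_ibuprofen, hr_other, hr_pooled = 0.62, 1.00, 0.91
          share = (math.log(hr_pooled) - math.log(hr_other)) / (math.log(hr_ibuprofen) - math.log(hr_other))
          print(f"Implied ibuprofen share of NSAID use: {share:.0%}")  # roughly 20%
          # Applying the single pooled HR of 0.91 to everyone then under-corrects ibuprofen
          # users (true effect ~0.62) and over-corrects users of other NSAIDs (true effect ~1.00).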

      Until this correction is made, it is unclear how much of the apparent protection associated with RA disease or with biological DMARDs was actually attributable to the use of ibuprofen.

      In an unpublished reply to these arguments, Sung's group wrote: "[Keller's] criticism focuses on the issue whether [any] non-aspirin NSAID or ibuprofen only, has the truly protective effect against the development of PD". Sung and colleagues agreed that "ibuprofen was associated with decreased risk of PD, but not aspirin or other NSAIDs" and concluded that "ibuprofen use should be considered as an important covariable in future correlational research in PD." [5]

      References

      1: Sung YF, Liu FC, Lin CC, Lee JT, Yang FC, Chou YC, Lin CL, Kao CH, Lo HY, Yang TY. Reduced Risk of Parkinson Disease in Patients With Rheumatoid Arthritis: A Nationwide Population-Based Study. Mayo Clin Proc. 2016 Oct;91(10):1346-1353. doi: 10.1016/j.mayocp.2016.06.023. PubMed PMID:27712633.

      2: Chen, H., Jacobs, E., Schwarzschild, M.A. et al, Nonsteroidal antiinflammatory drug use and the risk for Parkinson's disease. Ann Neurol. 2005;58:963–967.

      3: Rees, K., Stowe, R., Patel, S. et al, Non-steroidal anti-inflammatory drugs as disease-modifying agents for Parkinson's disease: evidence from observational studies. Cochrane Database Syst Rev. 2011;:CD008454.

      4: Gao X, Chen H, Schwarzschild MA, Ascherio A. Use of ibuprofen and risk of Parkinson disease. Neurology. 2011 Mar 8;76(10):863-9. doi:10.1212/WNL.0b013e31820f2d79. PubMed PMID: 21368281; PubMed Central PMCID: PMC3059148.

      5: Sung YF, Lin CL, Kao CH, and Yang TY. Reply to Keller's unpublished letter to Mayo Clinic Proceedings. Received by email on November 22, 2016.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Feb 02, Christopher Korch commented:

      In this article by Binder et al. in PLOS ONE (Binder NK, 2016), the authors describe using the cell line ECC-1 as a model of endometrial tissue to study embryo implantation. Using recombinant human placental growth factor they found that treatment of ECC-1 cells increased their cellular adhesion to fibronectin-coated tissue culture plates. I wish to point out two major concerns about the authenticity of this cell line.

      First, two of the three references which they cite for having characterized ECC-1 are not correct. References 23 and 24 refer to the cell line HES, which, as my colleagues and I showed in 2012, was actually the HeLa subline WISH, i.e., a cervical adenocarcinoma cell line (Korch et al. Korch C, 2012), and our finding was confirmed in 2014 by the originator of HES (Kniss & Summerfield Kniss DA, 2014). WISH was shown by SM Gartler in 1966-1970 to be HeLa cells (Gartler SM, 1967, Gartler SM, 1968, Auersperg N, 1970) and this has been confirmed numerous times thereafter (e.g., by Nelson-Rees during the 1970s-1980s Nelson-Rees WA, 1980, Nelson-Rees WA, 1981; Lavappa at the ATCC in 1978 Lavappa KS, 1978; and Masters et al. 2001, Sandler AN, 1992). No authentic sample of WISH is known to exist (see the register of misidentified cell lines at the websites of the International Cell Line Authentication Committee (http://ICLAC.org) and of the Expasy Cellosaurus (https://web.expasy.org/cellosaurus/)). In fact, ECC-1 is also listed on both websites as a misidentified cell line.

      Next, the authors do not indicate whether they genetically authenticated their sample of ECC-1 (currently the reference method is STR genotyping). This cell line was developed by PG Satyaswaroop's group at the Hershey Medical Center in about 1985-1987 when it was established from a xenograft sample of the endometrial tumor EnCa101, which was being maintained by passaging in mice. The cell line ECC-1 was deposited at the ATCC by Bruce Lessey, who had received it from Dr. Satyaswaroop. It was STR genotyped by the ATCC. As described earlier by me and my colleagues (Korch C, 2012) from STR genotyping of numerous samples of ECC-1 and Ishikawa cells from various sources and comparison to the STR profile of the ATCC sample of this cell line, ECC-1 was found to be one of three cell lines - a derivative of the endometrial cell line Ishikawa developed by Nishida (see Nishida M, 2002 for history and dissemination of various subclones), the breast cancer cell line MCF-7, or a mixture of Ishikawa and MCF-7 cells. A complicating fact is that ISHIKAWA / ECC-1 is an MSI unstable cell line, giving rise to variable STR genotypes.

      In an attempt to determine the expected STR profile of ECC-1 / EnCa101, the Hershey Medical Center was approached, but no samples of the original tumor (paraffin block, etc) could be found as the Satyaswaroop lab had been closed in 2002. Furthermore, none of the ECC-1 or Ishikawa samples matched any of the four samples of xenografts of the tumor EnCa101. This tumor has been maintained by VC Jordan since about 1987 (Gottardis MM, 1988). The earliest tumor sample of EnCa101 that I could obtain was from 1987 and was found to be genetically unrelated to the three tumor xenograft samples of EnCa101 from 2009 (unpublished data), and the STR genotypes of these four samples did not match the profile of any known cell line in several databases.

      Therefore, these results using the cell line ECC-1 should be re-interpreted and used cautiously, because (1) this cell line is a misidentified cell line as shown earlier (Korch C, 2012) and is listed as a misidentified cell line on the ICLAC and Cellosaurus websites; (2) the true identity of the sample of ECC-1 that was used by the authors was not determined; (3) its provenance is confusing because two of the three references for its characterization are incorrect and refer to a different cell line; and (4) since the genetic identity of the ECC-1 cell line was not checked, it could be one of at least five different cultures (three variants of Ishikawa, the breast cancer cell line MCF-7, or a mixture of Ishikawa and MCF-7).

      Hopefully, this will encourage others to authenticate their cell lines. Below are some suggestions of how this problem could be avoided in the future. They are based on ideas put forth earlier by Dr. Amanda Capes-Davis in a PubMed Commons Comment on another article of concern (see Xu HT, 2016).

      Authors & Reviewers could use the aforementioned resources since: • STR genotyping is effective for authentication of human cell lines and is the consensus method for comparison of human cell line samples (American Type Culture Collection Standards Development Organization Workgroup ASN-0002., 2010, American Type Culture Collection Standards Development Organization Workgroup ASN-0002., 2010); and

      • Checklists for the identity of cell lines used in manuscripts and grant applications are readily available at http://iclac.org/resources/cell-line-checklist/ and through Expasy Cellosaurus https://web.expasy.org/cellosaurus/.

      Journals, their Editors, and Funding Organizations could:

      • Implement more stringent criteria for publication and funding of research because encouragement of authentication testing, although a step forward, is insufficient to stop use of misidentified cell lines.

      • Develop an authentication policy for the sake of reproducible research. For example, to meet this need the NIH has recently implemented requirements for authentication of key resources as part of grant applications (e.g. see NOT-OD-16-011, NOT-OD-16-012).

      • Require testing using an accepted method of cell line authentication, which is effective as illustrated by the cell line authentication policy of the International Journal of Cancer (Fusenig NE, 2017). If the authors, the reviewers, or the journal editors had followed PLOS ONE's own guidelines (http://journals.plos.org/plosone/s/submission-guidelines#loc-cell-lines) and checked for the identity of this cell line on either the ICLAC website (http://iclac.org/databases/cross-contaminations/) or on the Cellosaurus website (https://web.expasy.org/cellosaurus/), the journal would have detected and avoided this problem prior to publication.

      • Regularly review the efficacy of their policy on cell authentication testing to see whether it is adequate (as was done by the International Journal of Cancer), especially in light of such examples having been published by a journal that requires authentication of cell lines prior to submission of manuscripts.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 15, IRWIN FEINBERG commented:

      Kurth et al base their study on a faulty premise, stated in the first sentence: “Brain networks respond to sleep deprivation or restriction (emphasis added) with increased sleep depth which is quantified as slow-wave activity (SWA) in the sleep electroencephalogram (EEG)…” In fact, there is now abundant evidence that sleep restriction does not increase SWA in subsequent sleep, although it produces strong increases in behavioral sleepiness.

      We first observed this result in a 1991 study of the effects of acute termination of the last 3.5 h of sleep. This decrease in sleep duration did not produce the expected increase in either visually scored or computer measured SWA Feinberg I, 1991. This result so surprised us that we immediately repeated the experiment in a new group of subjects and obtained the same result Travis F, 1991.

      Since it was well established that a night of total sleep deprivation increases SWA, we sought to determine how much sleep restriction is needed to increase SWA. We limited sleep to 100 min and found that this restriction increased SWA. However, the increased SWA after 100 min of sleep differed from that after a night of total sleep deprivation (TSD). Whereas TSD reduced both the amplitude and incidence of slow waves, restriction to 100 min of sleep increased SW incidence but not amplitude Feinberg I, 1988.

      Subsequently, an extensive study by van Dongen et al showed that sleep restriction to 4 h/night for 14 consecutive nights does not increase SWA although it produces intense sleepiness and impaired vigilance Van Dongen HP, 2003. The previous studies were done with young adults so that it remained possible that the young subjects Kurth et al studied might have a different SWA response to sleep restriction. This is not the case. Sleep restriction in late childhood also failed to increase SWA Campbell IG, 2016.

      An incidental but reliable observation in our previous studies was that TSD Feinberg I, 1979 and partial sleep deprivation by restricted sleep (studies cited above) both reduce eye movement density in REM sleep.

      None of this previous work is cited or discussed by Kurth et al. The omissions are especially regrettable for two reasons. First, the failure of sleep restriction to increase SWA illustrates one of the major predictive failures of recovery models of SWA (both our own Feinberg I, 1974 and the similar two-process model Borbély AA, 2016). Second, the suppressive effects of sleep loss on eye movement density hold major biological implications for interrelations among sleep depth, NREM and REM sleep. Kurth et al's disregard of previous, highly relevant, well-substantiated data impedes research progress.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 26, Harri Hemila commented:

      Antioxidants are not equal: an example of the apples and oranges problem

      In their network meta-analysis on treatments for contrast medium–induced acute kidney injury (CIAKI), Su X, 2017 pooled different vitamins into a single group of “vitamins and analogues”, but in doing so ignored the fact that vitamin C is water soluble, whereas vitamin E is fat soluble, and therefore their effects might be quite different. This is an example of the apples and oranges problem. By analogous reasoning, studies on ciprofloxacin and penicillin might be pooled together on the basis that they are “antibiotics”, though they have quite different mechanisms and indications.

      Su X, 2017 calculated an odds ratio (OR) = 0.64 for the effect of “vitamins and analogues” but they did not calculate the specific effects of vitamins E and C. From 9 vitamin C trials, Sadat U, 2013 concluded that vitamin C may prevent CIAKI and calculated a risk ratio (RR) = 0.67. From 3 vitamin E trials, Rezaei Y, 2017 calculated that vitamin E decreased the incidence of CIAKI with RR = 0.38 (95%CI 0.24-0.62). On the OR scale, the effect of vitamin E corresponds to OR = 0.34 (95%CI 0.20-0.58).
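
      For transparency, a risk ratio maps onto an odds ratio only at a given control-group risk, via OR = RR(1 - p0)/(1 - RR*p0). A minimal sketch, assuming a control-group CIAKI risk of roughly 16% (an illustrative figure, chosen only because it approximately reproduces the numbers quoted above, not a value taken from the trials):

          def rr_to_or(rr, p0):
              # Convert a risk ratio to an odds ratio at control-group event risk p0.
              return rr * (1 - p0) / (1 - rr * p0)

          p0 = 0.16  # assumed control-group CIAKI risk (illustrative)
          for rr in (0.38, 0.24, 0.62):  # vitamin E point estimate and 95%CI limits (Rezaei Y, 2017)
              print(f"RR = {rr:.2f}  ->  OR = {rr_to_or(rr, p0):.2f}")
          # Prints approximately 0.34, 0.21 and 0.58, close to the OR = 0.34 (95%CI 0.20-0.58) quoted above.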

      Thus, vitamin E seems to have a greater effect against CIAKI compared with vitamin C. These two different vitamins should therefore not be pooled into a single group of “vitamins and analogues”, but they should be analyzed separately. The point estimates also suggest that there is greater justification for further research on vitamin E than on vitamin C. Furthermore, vitamins E and C may also interact under some conditions; for example, Hemilä H, 2009 found that vitamin E decreased mortality of older males only when they had a high vitamin C intake, but vitamin E had no effect when vitamin C intake was low.

      Finally, Su X, 2017 estimated the effect of “vitamins and analogues” on the OR scale. However, Altman DG, 1998 pointed out that OR should be avoided when events are common. In many CIAKI studies, the proportion of CIAKI cases has been so high that OR gives an exaggerated impression of treatment effect. Su et al. calculated that high-dose statin plus NAC decreased the risk of CIAKI by OR = 0.31 (95%CI 0.14-0.60) and they concluded that “high-dose statin plus NAC or high-dose statin alone were likely to be ranked the best or the second best for preventing CIAKI”. However, the upper 95%CI limit for the effect of vitamin E on the OR scale (0.58) is lower than the upper 95%CI limit for the effect of high-dose statin plus NAC (0.60). Thus, on the basis of trials published so far, there is no reason to consider that these treatments actually differ in efficacy. Furthermore, half of the patients in the three vitamin E trials were concomitantly administered statins, and thus the effect of vitamin E may be at least partly independent of the effects of statins.

      Thus, had Su X, 2017 analyzed the vitamin E trials separately, they might have concluded that there is as strong evidence to further study vitamin E as to further study high-dose statin plus NAC.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 07, Cicely Saunders Institute Journal Club commented:

      The Cicely Saunders Institute journal club discussed this paper on 02/11/2016

      The need to address early rehabilitation in individuals with critical illness is an important clinical priority. In meeting the needs of this group, clinicians need to consider who is likely to benefit and to what extent; it is also important to address possible harms that may arise from intervention (see AVERT, Lancet 2015). This paper addresses an important area of practice in the surgical intensive care environment.

      We valued our discussion of the paper, which generated a number of reflections on the methods used and the wider applicability of the intervention. In rehabilitation and palliative care practice, goal-setting to target intervention is a widely used approach. In many rehabilitation contexts, goals are set in conjunction with patients, carers and the clinical team. The method used here was somewhat different and used the Surgical ICU Optimal Mobilisation Score (SOMS), which in our view was more akin to a protocol. Nevertheless, the SOMS facilitated targeting of intervention at the individual ability level and seemed appropriate to the ICU environment, where patient engagement in goal setting may be less practical in some instances.

      We noted that a significant difference was identified in delirium-free days in favour of the experimental group and that the mean difference was three days. We wondered if the authors had any further comment on this and its relationship to the beneficial outcomes identified, particularly reduced length of stay. We reflected that reduced delirium may result in improved ability to engage in rehabilitation practice.

      Commentary by Lucy Fettes & Stephen Ashford


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 14, Attila Csordas commented:

      The authors finish the article with the following suggestion: "To further extend human lifespan beyond the limits set by these longevity-assurance systems would require interventions beyond improving health span, some of which are currently under investigation (15). Although there is no scientific reason why such efforts could not be successful, the possibility is essentially constrained by the myriad of genetic variants that collectively determine species-specific lifespan(16).”

      The authors seem to be suggesting that all current experimental efforts won't be enough to increase maximum lifespan beyond a biological limit that has already been reached, or to overcome or counterbalance the collective effect of those “myriad of genetic variants”.

      In fact, there is at least one experimental result that leaves this question definitely open: the clearing-out of senescent cells from the aging organism. See the mouse studies summarised in http://www.nature.com/nature/journal/v530/n7589/full/nature16932.html where, concerning maximum lifespan, the results are mixed: “Maximum lifespan was significantly increased for mixed AP-treated males and females combined (P = 0.0295), but not for females and males individually. Maximum lifespan was not extended for C57BL/6 AP-treated animals, either combined or separately”. Indeed this is the approach taken by biotech companies like http://unitybiotechnology.com/ amongst others.

      I wonder how a study also appearing in Nature earlier this year (February 2016) could have slipped past the attention of the authors.

      The assumption of my argument is that mouse studies can be relevant for human trials. After all, mice have those “myriad of genetic variants” too.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 22, Marco Lotti commented:

      Please, see Video Comment at:

      https://youtu.be/ZJnt8wIvK14


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 09, Preben Berthelsen commented:

      Levosimendan for the Prevention of Acute Organ Dysfunction in Sepsis

      Gordon AC et al. NEJM 2016.

      A foregone conclusion: levosimendan does not prevent organ dysfunction in sepsis - that is, if you wait long enough before administering it.

      Successful treatment of sepsis is based on the prompt administration of antibiotics, volume repletion and perhaps surgery.

      In this RCT, levosimendan was first deployed a median of 15-16 hours after a diagnosis of sepsis had been established. In my opinion, it is very unlikely that any intervention - delayed for so many hours - will substantially influence the course of sepsis in a meaningful way.

      This trial does not inform on a well-timed use of levosimendan in sepsis. Historically, however, single interventions in sepsis have a poor track record.

      P.G.Berthelsen MD, MIA, DCAH. Charlottenlund, Denmark


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 28, Lydia Maniatis commented:

      It seems that the model being proposed by Snow et al (2016) was obsolete even as it was being constructed, as they decided to base it on assumptions of V1 neuron behavior known for decades to be invalid.

      The choice is signaled by the authors when they explain in their introduction that they are going to “address primary visual cortex (V1) as a paradigmatic example and focus on orientation adaptation phenomena that are within the classical receptive field (RF).”

      I think this is the only mention made in the article of the term “classical receptive field,” but it bears some elaboration.

      The obsolescence of the concept is reflected in quotes from two articles, below.

      1. Graham (2011) "Beyond multiple pattern analyzers modeled as linear filters (as classical V1 simple cells): Useful additions of the last 25 years" Vision Research.

      “The classical receptive field of a V1 simple cell is very small relative to the distances over which visual perception has to function. Indeed, the classical receptive field is typically composed of only a few inhibitory and excitatory sub-sections…

      …non-classical responses of V1 simple cells can occur over a substantially larger area than the classical receptive field. This is one reason that non-classical receptive fields are now frequently invoked in explanations for perceptual phenomena.

      …these non-classical receptive fields have been invoked to account for a number of psychophysical phenomena as well

      …In addition to the possibility of non-classical suppressive (or facilitatory) effects from outside the classical receptive field, there is also a possibility that the same non-classical effects extend inside the classical receptive field. And perhaps there are different non-classical processes that exist inside the classical receptive field but not outside.”

      2. Angelucci and Bullier (2003) Reaching beyond the classical receptive field of V1 neurons: horizontal or feedback axons? (Journal of Physiology)

      “It is commonly assumed that the orientation-selective surround field of neurons in primary visual cortex (V1) is due to interactions provided solely by intrinsic long-range horizontal connections. We…conclude that horizontal connections are too slow and cover too little visual field to subserve all the functions of suppressive surrounds of V1 neurons in the macaque monkey. We show that the extent of visual space covered by horizontal connections corresponds to the region of low contrast summation of the receptive field center mechanism. This region encompasses the classically defined receptive field center and the proximal surround. Beyond this region, feedback connections are the most likely substrate for surround suppression. We present evidence that inactivation of higher order areas leads to a major decrease in the strength of the suppressive surround of neurons in lower order areas, supporting the hypothesis that feedback connections play a major role in center–surround interactions.”

      Naturally, as the authors point out, the model can’t explain many things (that it should be able to explain):

      “We have shown that our modeling approach can explain some classical adaptation effects as well as a more recent phenomenon of equalization. However, clearly the approach has limitations and does not capture the full set of phenomena for adaptation.

      First, the model in its present form does not include surround influences and cannot capture disinhibition of the surround nor interesting data on facilitation and attractive shifts of tuning curves (Solomon & Kohn, 2014; Webb et al., 2005; Wissig & Kohn, 2012). It would be interesting to consider extensions of the model to capture both spatial (Coen-Cagli et al., 2012) and temporal contextual influences.”

      The model is extremely involved. Evaluating it would require a significant amount of effort and time on the part of colleagues. Since even its authors already know its assumptions are false (i.e. lead to false predictions), why should anyone bother? Adding to a failed, ad hoc model is usually not the best way to achieve a better one.

      The publication of models based on invalid assumptions and empirically falsified a priori is common in the vision literature and reflects a culture in which piecemeal, ad hoc (with respect to facts and techniques) and logically and/or empirically false models are treated as equivalent to coherent and insightful hypotheses that first have to be tested before we know they’ve failed.

      The former, being much easier than the latter, overwhelmingly dominates the literature, which has become almost entirely self-referential and unprogressive, because its “theorists” don’t allow themselves to be challenged by inconvenient facts, but rather are satisfied with avoiding them, occupying themselves instead with mathematical prestidigitations.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On date unavailable, commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 24, David Eidelberg commented:

      We thank Dr. Black for his interest in our work. To answer his basic questions:

      1. The minimum time interval between the start of the levodopa infusion and the start of O15- water PET was 60 minutes. As detailed in the original paper by Feigin et al. (Neurology 2001; 57(11): 2083-2088), scanning under infusion commences when UPDRS motor ratings are stable within 5% on two successive examinations separated by 30 minutes. The levodopa infusion is maintained at a constant rate at that point.

      2. As for the test-retest cohort, these PD subjects took their usual morning PD medications on the day of the scan. They maintained their routine antiparkinsonian medication regimen during the 8-week interval between test and retest imaging. The majority of the subjects were on daily carbidopa/levodopa, though a minority took a combination of levodopa and a dopamine agonist. Unfortunately, the exact dosing at the time of the scans (performed before 2008) is not currently available.

      3. All levodopa infusion subjects reported in the study were scanned according to the protocol described in Hirano et al., J Neurosci 2008; 28(16): 4201-4209.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Apr 08, KEVIN BLACK commented:

      Authors: Please tell us the time interval between the start of the levodopa infusions and the start of the [O-15]water and FDG PET acquisitions.

      Also, can you provide information about carbidopa intake on the day of the study? The supplement says only that the 8 PD test-retest subjects took "their usual morning dose of oral levodopa/carbidopa." The dose of LD and of CD is not given for those subjects. The reader is referred to the Ma et al 2007 paper, but I can't find dosing information there, either (nor how many were on CD-LD only and how many were on other dopaminergic drugs). In the infusion subjects, can you clarify whether the newly enrolled subjects had the same carbidopa dosing regimen as described in the Hirano et al 2008 methods?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 17, Jean-Jacques Letesson commented:

      In the starting paragraph of the discussion the authors wrote "Metabolism and usage of erythritol are regulated by the 7.7‑kb ery operon, which consists of four genes eryA, eryB, eryC and eryD (EryA, 519 AA; EryB, 502 AA; EryC, 309 AA and EryD, 316 AA). The functions of these four proteases are similar to xylulose kinase (E. coli xylB), glycerol‑3‑phosphate"

      I have two major comments on these sentences:

      1- THESE GENES ARE NOT PROTEASES: eryA is a kinase, eryB is a dehydrogenase, eryC an isomerase and eryD a transcriptional regulator.

      2- The EryABCD ery operon is by itself not sufficient to catabolize erythritol; as published recently (Barbier T. et al., 2014. PNAS 111:17815–17820), two more isomerases (eryH and eryI) are needed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 14, Martine Crasnier-Mednansky commented:

      In Escherichia coli, CRP stands for Cyclic AMP Receptor Protein, not Catabolite Repressor Protein. CRP is also known—and rightly so—as CAP, the Catabolite gene Activator Protein. Repression by CRP-cAMP was recognized as early as 1972 (Prusiner S, 1972) and later on emphasized. See, for an elaborate discussion, Kolb A, 1993. In this context, the designation 'Cyclic AMP Receptor Protein' is fully appropriate and has been used preferentially over the more specific designation 'Catabolite gene Activator Protein'.

      Glucose is not "a known inhibitor of CyaA activity". In fact, transport of glucose prevents activation of CyaA by a phosphorylated component of the phosphotransferase system (PTS). Such regulation is relevant to the present work because, upon glucose exhaustion (or some other PTS-transported carbon sources), there is a sharp increase in exogenous cAMP, which may act as a trigger at the onset of the infection.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 04, Peter Hajek commented:

      It is misleading to classify people who tried one puff on an e-cigarette several weeks ago and never touched it again as 'current e-cigarette users'. The common practice of misreporting experimentation with e-cigarettes as 'current use' is responsible for the myth that large numbers of non-smokers are becoming daily vapers, when in fact this is extremely rare.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 05, Kausik Datta commented:

      Congratulations to the authors for undertaking this informative study of physician behavior in Germany. I have a question about a study parameter: an important differentiating variable in this study seems to be the nature of the prescribed medication, pharmacological vs. phyto-pharmacological. However, the paper as published seems to lack any information on what medications the study considered pharmacological or phyto-pharmacological. It may likely be beyond the scope of this paper, but I'd like to have some information on the "phyto-pharmacological" medications prescribed by German GP-NPs evaluated in this study. May I request a pointer from the authors?

      COI Disclosure: I have no direct competing interest with this study, the authors or their sponsors, but I am personally interested in science and ethnobotany, and am known to be skeptical of wide-ranging claims made by CAM practitioners.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 14, Daniel Jarosz commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 14, Daniel Jarosz commented:

      We considered this possibility extensively, and expected to find it commonly. However, we did not observe a change in protein levels for the hit proteins that we checked (see supplement), and it seems unlikely that such feedback mechanisms would be transmissible to other cells through protein alone, so strongly sensitive to transient inhibition, or stable over hundreds of generations, through freeze/thaw etc. Investigating whether any of the remaining phenotypic states that we did not test in this way could arise from feedback mechanisms like those described in this comment stands as a goal for our future studies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 12, Peter Ellis commented:

      The authors note that many of the proteins identified in this screen were transcription factors and RNA-binding proteins. The paper does not report whether they checked the mRNA and/or protein expression levels of the endogenous gene copy before and after transgenic overexpression of a given gene. In cases where a transcription factor promotes its own expression, or an RNA-binding protein promotes the translation of its own transcript, there is an obvious feedback mechanism through which the memory could be maintained.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 02, Lukasz Antoniewicz commented:

      Thank you for your opinion.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 May 02, Zvi Herzig commented:

      None of the sources with COIs relate to original data. The relevance of snus to this discussion is that it is the cleanest form of nicotine for which a wealth of long-term epidemiological evidence is available. As the American Heart Association notes Bhatnagar A, 2014:

      Because most of the toxicity from cigarette smoking derives from combustion products, the health effects of smokeless tobacco could be examined to assess potential long-term adverse effects of nicotine without exposure to combustion products. Smokeless tobacco users take in as much nicotine as cigarette smokers, although not by the pulmonary route. The most extensive and rigorous epidemiological studies on smokeless tobacco use come from Scandinavia, where a large percentage of men use snus, a smokeless tobacco product that contains nicotine but relatively low levels of carcinogens and other toxins.

      Rodu and Phillips do not present original data and their arguments cannot be refuted simply by citing their COIs. Similarly, Lee's meta-analyses do not present original data and are peer-reviewed. If any other meta-analysis refutes Lee's work, I'd be happy to cite it.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 May 02, Lukasz Antoniewicz commented:

      As our study does not focus on snus I will not continue this debate. I do not question the scientific credibility of your citations, but I find it very interesting that among your citations some researchers clearly stated the following:

      Rodu B, 2015 Dr Rodu is supported by unrestricted grants from tobacco manufacturers to the University of Louisville, and by the Kentucky Research Challenge Trust Fund. Dr Phillips is partially supported by an unrestricted grant from British American Tobacco.

      Lee PN, 2013 The author is a long-term consultant to the tobacco industry

      Lee PN, 2009 PNL, founder of PN Lee Statistics and Computing Ltd., is an independent consultant in statistics and an advisor in the fields of epidemiology and toxicology to a number of tobacco, pharmaceutical and chemical companies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2017 May 01, Zvi Herzig commented:

      Snus users are exposed to equal or more nicotine than smokers Holm H, 1992. If snus does not cause MI or stroke, this indicates that nicotine doesn't. This inference is also made by Benowitz and Burbank, as cited by Bates above.

      The conclusions of the Arefalk study have been refuted by Rodu B, 2015 (see also Rodu's follow-up link).

      The other possible effects of snus are unrelated to EPC mobilization. In any case, the link to type 2 diabetes is inconsistent, e.g. Rasouli B, 2017. Rodu shows that the correlation is consistent at >6 cans per week link, but "more research is needed to confirm a link" in his opinion, because of other known factors. With regards to cancers see Lee PN, 2009 Lee PN, 2013.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2017 Apr 28, Lukasz Antoniewicz commented:

      I agree that we may speculate that nicotine mobilizes EPCs. But I do not agree with the last statement: I find it very interesting that Zvi Herzig cites studies on the tobacco product called snus and equates it to pure nicotine. As a tobacco product, Swedish snus is not comparable to pure nicotine. Swedish snus increases the risk for type 2 diabetes Carlsson S, 2017 and pancreatic cancer Luo J, 2007 and seems even to increase the risk for other types of cancer Zendehdel K, 2008, Song Z, 2010, Hirsch JM, 2012, Nordenvall C, 2013. As cited by Zvi Herzig, case fatality was increased among patients with myocardial infarction and stroke. This was confirmed by Arefalk, who observed a 50% mortality reduction upon snus cessation following myocardial infarction Arefalk G, 2014. In conclusion: Swedish snus is not the same thing as pure nicotine or a NRT. Swedish snus is probably not as safe as suggested by the comment of Zvi Herzig.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2017 Apr 25, Zvi Herzig commented:

      Farsalinos and Polosa explicitly write:

      Obviously, we are not implying that the elevation in EPCs that follows acute exposure to nicotine-containing e-cigarette reported by Antoniewicz et al. is beneficial to cardiovascular health.

      Rather, they show that EPC mobilization doesn't indicate cardiovascular harm, as it occurs with activities known to be safe. Moreover, they note:

      acute effects on EPC number could be related to a documented direct effect of nicotine.

      See also Heeschen C, 2006:

      Administration of nicotine increased markers of EPC mobilization.

      Thus, the findings of Antoniewicz et al. appear to be a function of nicotine. Decades of epidemiological evidence don't indicate that nicotine causes myocardial infarction Hansson J, 2012 or stroke Hansson J, 2014, even though there might be some detrimental effects, as noted above.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2017 Apr 24, Lukasz Antoniewicz commented:

      A lot of reviews explain the function of endothelial progenitor cells (EPCs). I will repeat some key points from our letter (https://www.ncbi.nlm.nih.gov/pubmed/28159320): EPCs are cells that participate in vascular repair. The trigger that releases EPCs from a pool into the blood stream is hypoxia in the vascular wall. This fact is well described in plenty of reviews and studies. We know that smoking a single cigarette causes a sudden mobilization of EPCs, but this effect is temporary and EPCs return to baseline values within 24 hours (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3938677/). If the vasculature is exposed to chronic stress (as in the case of daily cigarette smoking), this causes frequent releases of EPCs into the blood stream, resulting in a diminished pool of EPCs. So, when analyzing EPCs in chronic smokers (but not directly after smoking a cigarette), the amount of EPCs is lower compared to non-smokers. Upon smoking cessation, this pool seems to be replenished and the amount of EPCs increases. The cited studies investigating the effects of red-wine consumption or the Mediterranean diet examine the pool of EPCs in a "steady state", so the terms long-term and short-term are relative. We investigated the sudden effects (within hours) on EPC release following smoking and e-cigarette inhalation.

      Physical activity causes physiological stress on muscles and vessels resulting in physiological hypoxia causing a sudden mobilization of EPCs following only hours of physical activity. It is important to highlight that this mobilization is triggered as a physiological response following exercise. It is hard to argue that smoking a couple of cigarettes a day with several EPC releases has a beneficial effect on health.

      Our study shows that e-cigarette inhalation has the potential to mobilize EPCs that are needed for vascular repair. We will see if daily mobilization diminishes the pool of EPCs. It remains to be shown if daily e-cigarette inhalation causes chronic changes to the vascular wall. Maybe in 20 or 30 years we will get a clear answer if the number of e-cigarette users continues to increase and epidemiological studies on myocardial infarction and stroke will give us more information. Until then, there will be a lively debate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2016 Dec 23, Clive Bates commented:

      In their published reply to this article, Endothelial progenitor cell release is usually considered a beneficial effect: Problems in interpreting the acute effects of e-cigarette use, Farsalinos and Polosa point out that the measured increase in endothelial progenitor cells (EPCs) is usually associated with beneficial effects, and not necessarily a cause for the concerns expressed by the authors.

      Farsalinos and Polosa point out that several problem conditions are associated with lower EPC levels:

      However, the increase in EPC levels is largely interpreted in the scientific literature as a beneficial effect while a reduction is interpreted as an adverse prognostic marker. Several risk factors for cardiovascular disease, such as ageing, hyperlipidemia, hypertension, obesity and diabetes, are associated with reduced levels and functional impairment of EPCs. Similar associations were found with “non-classic” risk factors such as high C-reactive protein and homocysteine, and low vitamin D levels. Smokers have lower levels of EPCs compared to nonsmokers.

      They also point out that many positive conditions are associated with higher EPCs:

      Various short-term or acute interventions are associated with elevated EPCs. Consumption of red wine, switching to Mediterranean diet and acute exercise are associated with elevated number of circulating EPCs in healthy subjects. EPCs increase shortly after smoking cessation (especially in light smokers), with nicotine patch users having slightly higher (but not statistically significant) elevation in EPCs after smoking cessation compared to non-users. Short-term administration of green tea also caused an increase in EPCs in young healthy smokers. In all the above-mentioned interventions, the increase in EPCs was interpreted as a beneficial effect and there was no suggestion that it was a response to vascular injury caused by the intervention

      This is merely the latest of several recent analyses in which observations of acute effects of nicotine have been uncritically assumed to be a marker for a chronic cardiovascular disease risk (see Vlachopoulos C, 2016 and Carnevale R, 2016 for example), generating alarming news headlines as a result (see: E-cigs: The incendiary truth... Just 10 puffs increases your risk of heart disease, Daily Mail, 3 Dec 2016).

      Please see Benowitz NL, 2016 for a more credible and complete account of the cardiovascular effects of nicotine as they relate to e-cigarettes. Benowitz and Burbank review the relevant evidence and summarise the current state of knowledge as follows:

      The cardiovascular safety of nicotine is an important question in the current debate on the benefits vs. risks of electronic cigarettes and related public health policy. Nicotine exerts pharmacologic effects that could contribute to acute cardiovascular events and accelerated atherogenesis experienced by cigarette smokers. Studies of nicotine medications and smokeless tobacco indicate that the risks of nicotine without tobacco combustion products (cigarette smoke) are low compared to cigarette smoking, but are still of concern in people with cardiovascular disease. Electronic cigarettes deliver nicotine without combustion of tobacco and appear to pose low-cardiovascular risk, at least with short-term use, in healthy users.

      The absence of serious disease risk when nicotine is consumed through NRT or smokeless tobacco (i.e. without the products of combustion of tobacco leaf) should be a basis for reassuring and encouraging smokers considering switching to vaping.

      I hope the authors and journal will take care that any misunderstandings generated by their work and the media attention that followed will be corrected and placed in context. The problem is that alarming but baseless statements about vaping risks can easily have the unintended effect of encouraging continued smoking and cause harm as a result.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 11, Jos Verbeek commented:

      Unfortunately this review did not adhere to the following important Cochrane standards. There was no published protocol. The authors combined the results of all RCTs and case series, thus artificially inflating the effect. They pooled very different interventions in one pooled result even though heterogeneity was as high as 82%. I believe that calculating one pooled effect size for interventions as different as meditation, communication skills or improved work schedules does not make sense. It is also not a methodological standard to loosely assess the quality of the evidence as moderate without further justification and without using the GRADE approach.

      Another point of interest is the claim that the results are 'clinically meaningful reductions'. It is not clear what the authors refer to. With patient-reported outcomes one would be interested in a minimally clinically relevant difference, or in reductions that patients perceive as an improvement. However, for the Maslach Burnout Inventory this difference has never been established as far as we know. Thus we do not know what the clinical meaning of reductions on this scale is. It is very well conceivable that these would not be perceived as improvements by an individual health care worker.

      West et al. also do not discuss the problem of small studies, which is apparent given that the average number of participants in the 15 included trials is less than 50, but again loosely refer to the funnel plot as not indicating publication bias. I believe that physicians are put on the wrong foot by the conclusion of moderate-quality evidence of clinically meaningful reductions in burnout based on this review.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 08, Lydia Maniatis commented:

      The authors state in their conclusion that: "At each contrast level, the perceived depth first increases with the magnitude of disparity modulation up to a critical value and then decreases gradually with further increases in the magnitude of disparity modulation. "

      As is well-known, perceived depth is largely mediated by stimulus structure. The stimuli used by Chen et al (2016) have a structure that produces a 3D impression. Do the "single-cycle" and the "corrugated" versions produce the same depth impression as forms, i.e. viewed in outline monocularly? Then this would have to be factored into the conclusions re disparity and contrast. Are the authors claiming that they have controlled for this factor? Then this should be stated. Otherwise, their results can only be said to hold for the particular stimuli they employed, not for perception in general. In this case, there are no principles being investigated here; the results have no predictive value; the conclusions are purely ad hoc.

      The simplest illustration of the role of luminance structure (combined with the principles instantiated in the visual system) is what happens in a completely homogeneous visual field, produced by a flat surface - in other words one with zero contrast. In this case, we perceive a cloudy, 3D space. A small luminance variation on that surface collapses the perceived fog into a flat perceived surface. If we converted that surface into an image that we would refer to as trompe l'oeil, we would have a very strong impression of 3D structure with zero disparity. At low contrast or high contrast, if structure implies depth, we'll see depth, and vice versa.

      With respect to subjects, as is very common in psychophysics, there were very few - three here - and one of them was an author. The participation of authors is odd especially in light of the fact that we're told explicitly that the other two subjects were "naive to the purpose of this study." If it's important that the observers be naive, then why is an author a subject; if it's not important, why mention it?

      Other points: The authors fail to consider the distinction between luminance contrast and perceived contrast. They modulate the luminance contrast between dots and background. We know that small elements, like thin lines, tend to produce assimilation with the background, i.e. to lower perceived contrast.

      The authors mention that previous studies, using various stimuli and conditions, have produced inconsistent results. Despite these studies, we're told, " it is still difficult to infer the effect of luminance contrast effect on perceived depth in a scene..." So the question the authors seem to be asking is "what is the effect of luminance contrast on perceived depth in a scene?" But if the previous studies show anything, it is that conditions matter; so a search for "the effect" seems inappropriate. Picking a set of conditions out of a hat and testing them will only tell us about the luminance contrast effect under those conditions. The set of possible conditions is infinite.

      The authors state in their intro that "as shown in signal detection theory (Green & Swets, 1966; Chen & Tyler, 2001), the threshold measurement constituting stereoacuity depends not only on the intensity of the stimulus but also on the internal noise." Neither Green & Swets (1966) nor Chen & Tyler (2001) have shown that there is noise in the visual system, and they have certainly not shown that this putative internal noise is maintained and expressed in the percept. The "internal noise" claim has generated no evidence (it is not even clear what this evidence would look like, including in terms of physiological measurements), and is maintained on the basis of studies employing a narrow set of conditions generating crude datasets whose results are highly overinterpreted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 11, Lydia Maniatis commented:

      The authors of this study transparently misrepresent and/or misunderstand the theoretical situation with respect to the subject they are investigating.

      The most blatant oversight involves the failure to note that the type of stimulus being used here – a checkerboard configuration – was shown decades ago to reduce, rather than enhance, perceived contrast in adjacent surfaces. (see demo here: https://www.researchgate.net/figure/225158523_fig5_Fig-5-The-DeValois-and-DeValois-Checkerboard-stimulus). This fact was similarly overlooked by Maertens, Wichmann and Shapely (2015), who simply assumed the opposite.

      The checkerboard configuration is clearly a special and unusual case, in the sense that it elicits the impression of adjacent surfaces rather the more typical experience of surfaces on top of backgrounds (figure/ground relationships). It is unlikely that an ad hoc model that involves averaging of luminances would work for both the former and the latter sets of conditions.*

      The Missing Segmentation Step

      In general, Wiebel et al (2016) have chosen not to address the fundamental factor mediating color and lightness, i.e. the structure of the stimulus and the principles by which the visual system organizes the retinal projection to form the percept. Yet Zeiner and Maertens (2014), whose “normalized contrast model” the authors are applying here, acknowledged this gap in retrospect:

      “The important piece of information that is still missing, and which we secretly inserted, was the knowledge about regions of different contrast range. Here we simply used the values that we knew to originate from the checks within the regions corresponding to plain view, shadow, or transparent media, but for a model to be applicable to any image this segmentation step still needs to be elucidated.”

      But the entire problem - the ability to predict the perceived lightness of any surface - lies precisely in the "segmentation step." Furthermore, since they haven't taken the connection of structure to assimilation and contrast into account, it is, as noted above, doubtful that their model would work in general, even if they were to similarly sneak segmentation in through the back door.

      Wiebel, Singh and Maertens sidestep the issue, simply assuming “that the visual system is sensitive to differences in contrast range and can use them to detect regions seen in plain view, because they have the highest contrast range.”

      The problem, of course, is that the contrast ranges of different “regions” of an image depend on how the image is divided up in the perceptual process; again, they depend on the “segmentation step” that Wiebel, Singh and Maertens, like Zeiner and Maertens (2014), sneak in unanalyzed.

      The failure to address structure and principles of organization is also reflected in the fact that their definition of contrast depends on comparing luminances in an arbitrary, local area of the total image, rather than everything the observer could see, both on and off the screen. Again, the consequences of this global image depend on the “segmentation” step.

      Model Failures Can't Be Redeemed By Ad Hoc Successes

      The “model” the authors leave us with is not only ad hoc, it fails tests within this study. The abstract indicates that the Zeiner and Maertens model fails even for the narrow (and very unusual) set of conditions chosen, but that “model extensions” “fit the observed data well.” In the discussion we find that the fits weren't all that good: “For the normalized contrast model, significant differences between model predictions and observed data were shown for Reflectance 6 (p < 0.05) in the light transparency. For the dark transparency, significant differences were found for Reflectances 3, 5, 6, 8, and 9 (p < 0.05).” In other words, even for the highly selective conditions used, fits were hit-and-miss. The authors don't have the theoretical tools to explain why (though the tools are arguably available). But they speculate, and make the following very odd statement.

      In contrasting their “normalized contrast model” with the “contrast ratio model,” they note that one works better for the “light transparency” conditions and the other for the “dark transparency” conditions. This, they argue, is due to the anchoring assumptions of each model. They propose to create new stimuli to “decide between the two models.”

      But each of the models has already failed. These failures won’t be undone by any future “successes.” Any such successes would obviously be ad hoc, another theoretically and factually agnostic exercise such as the one we are discussing. No amount of such structure-blind, ad hoc model-building could take us even to the point of knowledge already available, though ignored.

      Short Summary: 1. The authors construct ad hoc models of lightness perception without taking into account the fundamental mediator of lightness, i.e. stimulus structure and the principles by which the visual system organizes the retinal projection.

      2. Their model fails a number of tests in this study, but they propose to keep on testing it, in order to “decide” between it and an alternative, which has also failed a number of the tests put to it in this study.

      It is not clear what the purpose is of testing two failed, ad hoc models. Clearly, they are both capable of “succeeding” and of failing in an infinite number of cases.

      *The assimilation that we see in the checkerboard demo is linked to the fact that contrast enhancement is highly correlated with perceived figure-ground relationships, making the borders of the figure more visually salient. Perceived figure-ground relationships in turn hinge on conditions indicating figural overlap, such as intersecting continuous edges of potentially closed figures. In the case of the checkerboard, the perceptual relationship between squares is one of adjacency rather than overlap. Luminance changes interpreted as occurring within a single surface tend to produce assimilation, as for example in the case of the Koffka ring.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 30, Clive Bates commented:

      There is nothing at all in these findings to justify the conclusion. In fact, the findings are more likely to support the opposite - that such social media activity is helpful in reducing smoking. If vaping companies are "enticing consumers" it is almost always to stop smoking and to vape instead. So if these tweets were actually affecting behaviour, it is likely that it would be in a way that reduces smoking and is beneficial to public health.

      The authors cannot know if these tweets do actually cause changes in smoking or vaping status. However, it seems likely (by which I mean 'obvious') that vaping-related tweets would be seen almost exclusively by people who already vape. The key design feature of twitter is that users opt into the content they wish to see by choosing to follow other users. Non-vapers are unlikely to follow a vaping company or vaping review feed. Equally, advertising is targeted algorithmically to reach users and potential users (i.e. smokers). So the likelihood is that, if these tweets have any impact at all, it will be in shaping preferences among those already vaping for particular flavours, brands or retailers, or in advertising an alternative to smoking, albeit one that the authors appear to disapprove of.

      The authors seem concerned that only 3% mention vaping as a way of quitting smoking. Why should that matter at all? If people are attracted to vaping for different reasons but give up smoking as a result, what's the problem? Furthermore, the paper points out that claims about quitting smoking are usually banned or deemed irresponsible Kavuluru R, 2016. Other authors have managed to be worried about finding the opposite van der Tempel J, 2016.

      There is no justification for doing this work in the first place, no case to publish such flawed interpretation of the findings, and no need to spend any time or money following its self-evidently false conclusion and inappropriate policy recommendation.

      Disclosure: I am a longstanding advocate for 'harm reduction' approaches to public health. I was director of Action on Smoking and Health UK from 1997-2003. I have no competing interests with respect to any of the relevant industries.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 07, John Sotos commented:

      In assessing the possible impact of machine learning on clinical medicine, Obermeyer and Emanuel(1) describe the narrowing gap between human vs. computer analysis of images, and declare that "machine learning will displace much of the work of radiologists and anatomical pathologists."

      We hesitate to agree, owing to the Jevons paradox and elastic demand for medical imaging.

      In 1865, the economist William Jevons predicted that more efficient coal-burning in manufacturing plants would not lower the nationwide consumption of coal. Instead, the lower cost per unit of energy would increase demand for coal energy and thereby increase consumption(2).

      Thus, assuming machine interpretation lowers the cost per imaging study, future human case loads will depend on the quantitative balance between a Jevonsonian increase in imaging (if any(3)) and the fraction of cases where computers completely exclude humans (e.g. only 25% for contemporary computerized Pap smear interpretation(4)).

      Clearly, major changes are coming, but, given healthcare's tangled economics, it is premature to affirm that computerized image interpretation will decimate physician workloads.

      John Sotos, MD

      Lester Russell, BM DRCOG MRCGP MBA

      (1) Obermeyer Z, Emanuel EJ. Predicting the future -- big data, machine learning, and clinical medicine. N Engl J Med. 2016; 375: 1216-1219.

      (2) Jevons WS. The Coal Question. London: Macmillan and Co., 1865. Pages 102-104.

      (3) Polimeni JM, Mayumi K, Giampetro M, Alcott B. The Jevons Paradox and the Myth of Resource Efficiency Improvements. New York: Earthscan Routledge, 2008.

      (4) Bengtsson E, Malm P. Screening for cervical cancer using automated analysis of Pap-smears. Computational and Mathematical Methods in Medicine. 2014; Article ID 842037. http://dx.doi.org/10.1155/2014/842037


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 10, Øjvind Lidegaard commented:

      Thanks to Chelsea Polis for her interest in and comments on our study.

      First, the group of non-users, which was the reference group in the main analysis, includes the users of copper IUDs that Chelsea Polis calls for, in addition to users of barrier methods, possibly combined with natural methods such as coitus interruptus or calendar methods. The important point here is that women who have previously used, or will in the future use, hormonal contraception are among these women. Therefore, it is not correct to consider the control group a group of women not using contraceptive methods.

      Our recommendation of further studies on this issue was not primarily due to uncertainty about our methods or results, but rather to the fact that scepticism is sometimes easier to overcome when several study groups reach the same results as those achieved in one sound, large cohort study such as the Danish study.

      We also made assessments including pregnant women. For all users of oral contraceptives, the relative risk of first antidepressant use actually increased with the inclusion of pregnant and delivering women, from 1.23 (1.22-1.25) to 1.31 (1.29-1.32), and for the 15-19 year old users the relative risk of antidepressant use was unchanged at 1.8 (1.75-1.84). The increase is due to those women who begin taking oral contraceptives within the first six months after delivery, and who already have an increased risk of depression due to their delivery.

      In Denmark 4% of women of reproductive age are pregnant, while 40% are current users of some kind of hormonal contraception. That is another good reason why the inclusion of pregnant women does not have much impact on the risk of depression in users of hormonal contraception. And remember that the majority of delivering women get pregnant because they want to, not because of contraceptive failure. Women who become unintentionally pregnant and choose to terminate their pregnancy generally do not get depressed, as demonstrated in another large Danish prospective study (1), despite frequent claims to the contrary, especially from opponents of legal abortion.

      The biggest weakness of our study is in our opinion the comparison group of non-users in the main analysis. A more correct comparison group would have been never-users. The influence of hormonal contraception on depression risk increases from 1.2 to 1.7 with this change in comparison group. So if anything, our relative risk figures are underestimated.

      1. Munk-Olsen T, Laursen TM, Pedersen CB, Lidegaard Ø, Mortensen PB. Induced first-trimester abortion and risk of mental disorder. NEJM 2011; 364; 332-9.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 07, Chelsea Polis commented:

      This analysis by Skovlund et al. (1) suggests that Danish women who currently or recently used various types of hormonal contraception may be at greater risk of being diagnosed with depression or initiating use of antidepressants, as compared against those who formerly or never used hormonal contraception. As the authors conclude, this study suggests that “further studies are warranted to examine depression as a potential adverse effect of hormonal contraceptive use”. This is particularly important since the study was unable to provide information on these outcomes among women using non-hormonal contraceptive methods, such as a copper IUD, which may have helped to clarify whether the observed associations were related to factors common to women choosing to use contraception, or to the hormonal content of the methods assessed.

      Importantly, the investigators note that they censored person-time during pregnancy and through six months post-partum. The authors characterize this as a strength of the study, noting that it was done to reduce the influence of postpartum depression on the results. However, women not using highly effective methods of contraception are presumably more likely to become unintentionally pregnant, which also has implications for women’s mental health. (2)

      A sensitivity analysis not excluding pregnant and post-partum person-time could be useful in better understanding the potential competing risks faced by women in their day-to-day lives. It would be helpful for the authors to present pregnancy rates by contraceptive status, and to replicate the main analyses without excluding pregnant and post-partum person-time.

      Sincerely,

      Chelsea B. Polis, PhD, Senior Research Scientist, Guttmacher Institute

      Ruth B. Merkatz, PhD, RN, FAAN, Director, Population Council

      1. Skovlund CW, Mørch LS, Kessing LV, Lidegaard Ø. Association of hormonal contraception with depression. JAMA Psychiatry 2016 (Epub ahead of print).
      2. Abajobir AA, Maravilla JC, Alati R, Najman JM. A systematic review and meta-analysis of the association between unintended pregnancy and perinatal depression. J Affect Disord 2016;192:56-63.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 04, Quinn Capers commented:

      Thank you for your interest in our work. Briefly, we are aware of theories that the IAT may measure something other than racial bias. However, regardless of what they are actually measuring, IAT results predict discriminatory behavior. Secondly, it was beyond the scope of the paper to control for the Hawthorne effect, but we acknowledge that behaviors could have changed because committee members were aware that they were being observed. Finally, we agree that an analysis of the composition of the class following the IAT should be accompanied by full admissions statistics pre- and post- the exercise. This is provided in the paper and commented on (Table 2). Word count restrictions prevented us from including this information in the abstract. Feel free to email me after reading the full length paper.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 30, Thomas Heston commented:

      Does the black-white implicit association test measure racial bias? Or something else? (Kaufman SB. Psych Today 28-Jan-2011 https://goo.gl/h3jvw1). Suffice it to say that there does exist at least some controversy. The authors also failed to control for the Hawthorne Effect (BMJ 2015;351:h4672 http://bit.ly/2dqvD4p). The statement that the class that matriculated following the IAT exercise was the most diverse is really meaningless without examining the applicant pool. Examine bias? Yes, by all means. But don't ignore potential test bias and researcher bias which may make the results unscientific.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 26, Martine Crasnier-Mednansky commented:

      This paper reinforces –perhaps validates– the authors’ previous work (Mondal M, 2014) indicating PTS-transported GlcNAc is utilized by Vibrio cholerae in the mucus layer, as further explained.

      Data in Meibom KL, 2004, particularly supplemental figure 7, clearly indicate chiA2 (VCA0027) is not upregulated by GlcNAc. Therefore, in agreement with the present work, GlcNAc utilization by V. cholerae in the mucus may rely on the periplasmic activation of ChiS for production of ChiA2. Because chiS mutant strains do not produce extracellular chitinases in the presence of chitin oligomers (Li X, 2004), they are unlikely to produce chitinases in the presence of mucin (the authors report ChiS is activated in the presence of mucin). Thus, both chiA2 and chiS mutations may prevent colonization of the intestine because the mutant strains are unable to carry out mucin hydrolysis by ChiA2 and the subsequent release of GlcNAc, which, according to the authors' original 2014 proposal, is necessary for growth and survival in the mucus. The mucin-derived 'inducer' for ChiS activation is possibly (GlcNAc)3, as the authors reported (GlcNAc)3 is released upon mucin hydrolysis by ChiA2.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 12, Maria Sammartino commented:

      "The interaction of the electron beam, emitted by the gun, with the sample induces an excitation, energy is lost and a single X-ray is emitted that is characteristic of the element hit." A very bad description of the EDS! Anyway the author Gatti A.M. improved her knowledge on the subject; really in one of her oldest article ( Liver and kidney foreign bodies granulomatosis in a patient with malocclusion, bruxism, and worn dental prostheses.By: Ballestri, M; Baraldi, A; Gatti, AM; et al. GASTROENTEROLOGY Volume: 121 Issue: 5 Pages: 1234-1238 Published: NOV 2001)she defined the X-ray microprobe "radiograph microprobe" May be due to the scarce knowlege of the EDS mechanism, that imply an elemental analysis, the error usual in almost all the articles by Gatti is to state that what she find in the sample are non-biodegradable metals (The particles detected showed to contain highly-reactive, non-biocompatible and non-biodegradable metals. It is not specified to which of the white particles the spectra in fig. 1 refer. In my opinion, from the SEM images of the same figure, i.e. at such magnitude, it is almost impossible to measure the particles dimension. Further, even if the total surface occupied by the sample on the acetate filter is not declared, in my opinion, it is anyway almost impossible to count all the particles in a reasonable time; as an example the SEM image show an area of about 50x50 micrometers; how many images they have to acquire to cover a 10x10 mm area? The PCA is at all not explained, and not correctly graphicated. First of all the graph of the two Principal Components must be isometric and squared; the graph of the Loadings lacks and, looking at the data reported in tables and hystogram, it is unclear what they are; the first two components account for less than 47% of the total variance and the graph of the % variance as a function of the components lacks


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 01, Davide Radice commented:

      Visani G et al. compared the occurrence of nanoparticles and aggregates in the peripheral blood samples of AML patients and healthy subjects in a matched case-control study, and based on their findings they argue that nanoparticles could lead to AML (that is, that nanoparticles could be a risk factor for AML). However, I found a number of issues regarding both the design and the statistical analysis. Here, in brief, are the most relevant ones:

      1. apart from the small number of subjects included, they say that AML patients were matched with healthy subjects; however, they do not specify which confounders they matched and controlled for (Table 1 only describes the characteristics of the AML patients, not those of the matched healthy subjects)

      2. the primary analysis compares the average particle and aggregate counts, element by element (Table 4), through a series of two-sample t-tests; it should be kept in mind that:

        a) the t-test is not suitable for count data because counts violate the underlying assumptions, thus any inference based on the significance of the t-test must be considered as possibly wrong

        b) although they call each t-test 'independent' because they consider controls as independent of cases, the individual tests are not independent of one another, since they were all conducted on the same sample of subjects. This is a well-known additional issue, the 'multiple comparison problem', which they would have to take into account even if they used a more appropriate non-parametric alternative to the t-test. For example, taking the 19 raw p-values in Table 4, it can be shown that after adjusting for multiplicity using the Holm method [1] (see the sketch below), the only statistically significant comparisons are those for the average counts for Al (p = 0.019) and Ca (p = 0.019).
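
      As an illustration of the adjustment referred to in point b), the following is a minimal sketch in Python using statsmodels; the 19 p-values shown are placeholders, not the actual values from Table 4 of the paper.

          # Holm adjustment of a family of raw p-values
          from statsmodels.stats.multitest import multipletests

          raw_p = [0.001, 0.002, 0.010, 0.020, 0.040, 0.050, 0.080, 0.100, 0.150, 0.200,
                   0.250, 0.300, 0.400, 0.500, 0.600, 0.700, 0.800, 0.900, 0.950]  # 19 placeholder values

          reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
          for p_raw, p_adj, significant in zip(raw_p, p_holm, reject):
              print(f"raw p = {p_raw:.3f}  Holm-adjusted p = {p_adj:.3f}  significant: {significant}")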

      Moreover, a statistically significant difference in the average counts between AML and healthy subjects does not imply that aggregates and particles can be considered possible risk factors, just as no other variable can in general be inferred to be a risk factor solely on the basis of a significant difference between two means (consider, for example, a significant difference in average shoe size observed by chance). To draw correct conclusions about a risk you do not compare two means; you must estimate and test the risks. The authors should have properly analyzed their data using a conditional logistic regression model [2]

      Taking the data in Table 2, and assuming that each row shows the counts of a case-control pair properly matched for the unspecified confounders, I ran the appropriate statistical analysis (SAS 9.3) as described above. Taking the controls as the reference, the (multivariable) conditional logistic regression analysis clearly shows that neither aggregates nor particles are significant risk factors for AML. Here are the results:

      • particles : OR = 1.19 (95% CI: 0.83,1.69) p = 0.34
      • aggregates: OR = 0.58 (95% CI: 0.09,3.83) p = 0.57

      Legend: OR = Odds Ratio, CI = Confidence Interval
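
      For readers without SAS, the same type of matched-pair analysis can be sketched in Python with statsmodels' ConditionalLogit. The data frame below (column names, pairing structure and counts) is purely illustrative and does not reproduce the Table 2 data.

          # Conditional logistic regression on matched case-control pairs
          import numpy as np
          import pandas as pd
          from statsmodels.discrete.conditional_models import ConditionalLogit

          # Hypothetical long format: one row per subject, 'pair' identifies the
          # matched case-control pair, 'case' is 1 for AML patients and 0 for controls.
          df = pd.DataFrame({
              "pair":       [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
              "case":       [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
              "particles":  [12, 9, 20, 15, 7, 8, 18, 14, 10, 11, 16, 13],
              "aggregates": [2, 1, 3, 3, 1, 2, 4, 2, 2, 2, 3, 1],
          })

          model = ConditionalLogit(df["case"], df[["particles", "aggregates"]], groups=df["pair"])
          fit = model.fit()

          # Odds ratios and 95% confidence intervals, analogous to the OR/CI reported above
          print(np.exp(fit.params))
          print(np.exp(fit.conf_int()))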

      [1] Holm, S. (1979). "A simple sequentially rejective multiple test procedure". Scandinavian Journal of Statistics. 6 (2): 65–70

      [2] Breslow, N. E., et al. (1978). "Estimation of multiple relative risk functions in matched case-control studies.". American Journal of Epidemiology. 108 (4): 299–307


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.