  1. Jul 2018
    1. On 2016 Jan 24, Claudiu Bandea commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 19, Emmanuelle Charpentier commented:

      I regret that the description of my and collaborators’ contributions is incomplete and inaccurate. The author did not ask me to check statements regarding me or my lab. I did not see any part of this paper prior to its submission by the author. And the journal did not involve me in the review process.



    3. On 2016 Jan 18, JENNIFER DOUDNA commented:

      From Cell editor: “…the author engaged in substantial fact checking directly with the relevant individuals.”

      However, the description of my lab’s research and our interactions with other investigators is factually incorrect, was not checked by the author and was not agreed to by me prior to publication.



    1. On 2017 Oct 21, Anju Anand commented:

      This is a commentary from our online Twitter #rsjc discussions



    1. On 2017 Oct 18, Ian Small commented:

      We are in the process of setting up a new site to make this data accessible. In the meantime, you can contact me directly (ian.small@uwa.edu.au) for help.



    2. On 2017 Oct 13, Kristian Ullrich commented:

The link to the web portal http://www.plantppr.com is not working! If it has moved to another location, perhaps the authors could add a comment on where this resource is now located.



    1. On 2016 Jan 21, Mayer Brezis commented:

      None



    2. On 2016 Jan 21, Mayer Brezis commented:

      The paper by Dr. Macy in fact says: "Physician-documented cephalosporin-associated anaphylaxis was about 2.9-fold more common in individuals with histories of penicillin allergy than in individuals with no history of drug allergy." A remote risk of anaphylaxis supports using recommended first line antibiotics for cystitis such as nitrofurantoin or trimethoprim-sulfamethoxazole - see international guidelines (Clin Infect Dis. 2011;52: e103-e120) which add: "The β-lactams [such as cephalexin] generally have inferior efficacy and more adverse effects, compared with other UTI antimicrobials".



    3. On 2016 Jan 14, Eric Macy commented:

Recent research has shown no increased risk of anaphylaxis with cephalosporin use in individuals with a history of penicillin allergy. J Allergy Clin Immunol. 2015 Mar;135(3):745-52.e5. doi: 10.1016/j.jaci.2014.07.062. Epub 2014 Sep 26.



    1. On 2016 Sep 17, ROBERT COMBES commented:

      Professor Michael Balls and Dr Robert Combes respond to Dr Coral Gartner regarding concerns about possible conflicts of interest.

We thank Dr Gartner for her comments, and for the opportunity to clarify potential conflicts of interest relating to our paper [1]. This was written by us as independent individuals, free of any commercial influence or funding, and after both of us had ceased having close ties with FRAME. FRAME is a scientific charity that has openly received financial support from the chemical, cosmetic, household product, pharmaceutical and tobacco industries, to enable it to undertake independent research into the development, validation and acceptance of alternatives to animal experiments. Some of this work included the development, characterisation and preliminary assessment of in vitro models of inhalation toxicology. While we are not in a position to say anything about FRAME’s current policy on industrial funding, we must stress that the tobacco industry funding enabled FRAME to investigate ways to replace highly invasive and complex animal experiments with urgently needed alternatives with the potential for producing more-relevant and more-reliable data for assessing human safety.

As far as personal remuneration is concerned, RDC has acted as an external consultant for the tobacco industry since retiring in 2007 from FRAME. This work was conducted under standard contract research agreements, the last of which terminated over 12 months prior to the writing of our article. The work referred to by Dr Gartner, which was co-authored by RDC with a named individual as lead, relates to research undertaken when this individual and RDC were employed by Inveresk Research International (IRI, now Charles River Laboratories), a contract research establishment. This can be directly verified by opening the authors’ affiliations in PubMed (http://www.ncbi.nlm.nih.gov/pubmed) for each of the four respective abstracts (PMID: 9491389; PMID: 1600961; PMID: 1396612; and PMID: 7968569). This work was entirely funded by the US Government, as was acknowledged in each of the papers, and also by the inclusion of another co-author, then based at NIEHS (the National Institute of Environmental Health Sciences, Research Triangle Park, USA), who acted as project leader. It should be noted that the lead author of the publications arising from the work conducted at IRI subsequently went to work at BAT, and this might have added to any confusion.

MB has never been a paid consultant for any industrial company. He was honorary Chairman of the FRAME Trustees from 1981 to 2013, and has been honorary Editor of FRAME’s journal, Alternatives to Laboratory Animals, since 1983. He no longer has any influence on FRAME’s policies on the tobacco industry or on any other issue. None of FRAME’s industrial supporters ever attempted to dictate or limit FRAME’s activities, or to influence the circulation and/or publication of the results of any FRAME research. While MB was head of the FRAME Alternatives Laboratory at the University of Nottingham Medical School, no tobacco product, chemical, or other material or product of interest to the tobacco industry was involved in FRAME’s research. He left the University of Nottingham in 1993, to become the first head of the European Commission’s European Centre for the Validation of Alternative Methods, a position from which he retired in 2002. We consider that there is a distinction between the above situation, in which, despite previous links of various kinds with the tobacco industry, we wrote our critique [1] without any form of external influence, and the situation we referred to, involving alleged conflicts of interest in the MCDA study. However, while we acknowledge that conflicts of interest and their consequences are complex, we hope that we have taken into account as much relevant information as possible to permit a fair and balanced appraisal of the information on which PHE’s policy on electronic cigarettes is based. We consider it crucial that scientific opinions, and the policies which result from them, are based on freely available evidence of high quality, which has been openly conducted and independently assessed. We know of no such evidence to support PHE’s claim that e-cigarettes are 95% safer than tobacco cigarettes.

We welcome Dr Gartner’s comment, and we hope that others will address the scientific arguments that we have used to justify our position, since the validity, or otherwise, of these should be unaffected by any conflicts of interest. There is a great deal at stake, including the future well-being of those who have opted for vaping as an alternative to tobacco smoking. We stand by our belief, expressed in a letter published in The Times on 18 February 2016, that “The human respiratory system is a delicate vehicle, on which the length and quality of our lives depend. For governments and companies to condone, or even suggest, the regular and repeated inhaling of a complex mixture of chemicals with addictive and toxic properties, but without comprehensive data, is irresponsible and could have serious consequences.”

      1. Combes, R D. & Balls, M. On the safety of e-cigarettes: "I can resist anything except temptation". ATLA. (2015) 43, 417-425.



    2. On 2016 Sep 15, Coral Gartner commented:

      Combes and Balls raise the issue of potential conflicts of interest in their critique of the PHE-commissioned report on e-cigarettes.

      Indeed, links between scientists and industry are regularly raised in the debate concerning tobacco harm reduction, and Tobacco Industry funding is of particular concern generally. For example, Simon Chapman writes, "Tobacco-funded research and the conduct of the industry which oversees it has arguably the worst of all reputations. This explains why that industry is unique among all others in being barred from funding research and scholarships at many universities."

Given this context and the general sensitivity around tobacco-industry funded researchers, I was somewhat surprised that Robert Combes didn't address his own history of being a paid consultant for British American Tobacco over at least two decades (Combes R, 2013; Combes R, 2012; Dillon D, 1994; Dillon D, 1998; Dillon D, 1992; Dillon D, 1992) when raising the issue of conflicts of interest in this article.

      It would be very helpful if both authors could clarify the nature of their past and present relationships with BAT, and any other tobacco companies they have performed consultancy work for, and confirm when they last performed consultancy work for a tobacco company. Also could they advise what their current position is on undertaking further consultancy work for the tobacco industry in the future?

      Similarly, it would be helpful to know if FRAME currently or previously has received funds from a tobacco company and what the organisation's current official policy on receipt of tobacco industry funds is.



    1. On 2016 Apr 13, NephJC - Nephrology Journal Club commented:

This trial was discussed on March 15th and 16th 2016 in the open online nephrology journal club, #NephJC, on twitter. Introductory comments are available at the NephJC website. The discussion was quite dynamic, with more than 75 participants, including nephrologists, gastroenterologists, geriatricians, clinical pharmacists, fellows, residents and patients, and included one of the authors (F. Perry Wilson). The transcript of the entire tweetchat is available on the NephJC website. Some of the highlights of the tweetchat were:

      • The team of investigators should be commended for designing and conducting this trial, and the NIH for funding the investigators, specifically the NHLBI for funding the ARIC study.

• The research team leveraged an existing study (ARIC) to test the association between PPI use and kidney disease, and went on to validate the findings in another cohort (Geisinger). Along with the detailed statistical analysis and the robustness across various subgroups, these data do suggest a link between PPI use and chronic kidney disease (the link with acute kidney injury has been made before, and was also found in this study) with a very concerning number needed to harm (~30 in the ARIC cohort and 59 in the Geisinger cohort).

• However, given the wide prevalence of PPI use, the likelihood of some residual confounding from unmeasured factors (such as frailty), and the lack of a firm biological basis for chronic kidney damage, most participants felt further replication of these findings would be more convincing. Despite the relatively low number needed to harm, PPIs have a remarkably lower number needed to treat (e.g. an NNT of 4, compared with ranitidine, for reflux esophagitis). Nevertheless, deprescribing PPIs, especially with a useful algorithm such as this, could be incorporated into practice.
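The NNH and NNT figures quoted above are simply reciprocals of absolute risk differences. As a minimal sketch, with entirely hypothetical incidence values chosen only to land near the ~30 NNH quoted for the ARIC cohort:

```python
# Illustrative only: NNH (and NNT) are reciprocals of absolute risk differences.
# These incidence values are hypothetical, not taken from the ARIC data.
risk_ppi = 0.150      # assumed CKD incidence among PPI users
risk_no_ppi = 0.117   # assumed CKD incidence among non-users

nnh = 1 / (risk_ppi - risk_no_ppi)   # number needed to harm
print(round(nnh))                    # ~30
```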

Interested individuals can track and join in the conversation by following @NephJC on twitter, liking #NephJC on facebook, signing up for the mailing list, or visiting the webpage at NephJC.com.



    1. On 2016 Jan 20, Rafael Najmanovich commented:

      Anyone interested in creating annotated human kinome images similar to those presented in this article may wish to take a look at the free kinome render tool: Chartier M, 2013.



    1. On 2017 Feb 08, Tony Gardner-Medwin commented:

      This paper (Gomes et al., 2016) raises a dilemma for readers. If well founded, it clearly merits much work to understand it and its implications. But the conclusions conflict so strongly with conventional wisdom that it is tempting to dismiss it as probably somehow incorrect. All credit therefore to Barbour (2016) for critiquing it and pinpointing questions that clearly need answering. Hopefully, both the authors and others with their own perspectives may contribute to clarify the situation.

      Broadly I concur with the points Barbour raises. I would add however what seems possibly a key oversight in the papers from Gomes' group. This is in the argument that observed filter characteristics and cell input impedances that fall off with the square root of frequency at high frequencies are indicative of diffusion processes, rather than R-C elements as in conventional modelling. Cable equations for the input impedance of even the simplest dendritic model (with uniform characteristics and a length of many space constants: V'' = Λ<sup>-2</sup> (1+ jωτ) V, I= -V'/R, Z = RΛ / √(1+jωτ), |Z| = RΛ (1+ω<sup>2</sup> τ<sup>2</sup> )<sup>-0.25</sup> ) predict just such a relation (Rall & Rinzel, 1973: see equations A13, A15 for the more general solution with dendrites of any length). So the argument of Gomes et al. that the data implicate diffusion processes (which can also lead to square root relationships) seems to collapse.
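As a quick numerical check of this point (a minimal sketch, with arbitrary placeholder values for RΛ and τ, not parameters taken from any of the papers), the Rall & Rinzel input impedance indeed falls off as 1/√ω at high frequency, with phase approaching -45°, just like a diffusion process:

```python
import numpy as np

# Input impedance of a semi-infinite uniform cable, Z = R*Lambda / sqrt(1 + j*w*tau)
# (Rall & Rinzel, 1973). Parameter values below are arbitrary placeholders.
R_lambda = 100e6    # R*Lambda: DC input resistance, ohms (assumed)
tau = 20e-3         # membrane time constant, seconds (assumed)

f = np.logspace(0, 4, 200)          # 1 Hz to 10 kHz
w = 2 * np.pi * f
Z = R_lambda / np.sqrt(1 + 1j * w * tau)

mag = np.abs(Z)                     # falls off as w**-0.5 at high frequency
phase = np.degrees(np.angle(Z))     # tends to -45 degrees at high frequency
```

Plotting `mag` on log-log axes shows the square-root fall-off directly; no diffusion term is needed to produce it.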

      Though the external pathway for current generated by neurons is usually regarded as largely within interstitial space, it is not exclusively so, even at low frequencies or DC. Around 6% of DC current passed through rat cortex is accounted for by K<sup>+</sup> flux (Gardner-Medwin, 1983). Since current in interstitial space would only account for a K<sup>+</sup> flux of 1.2%, the difference is presumably due to trans-cellular passage of at least 5% of long distance current flow, probably largely through the astrocytic syncytium. This is a small adjustment to the notion that low frequency currents are largely extracellular, but it does represent a 5-fold enhancement of K<sup>+</sup> flux driven by an electro-chemical gradient, which when applied to chemical (concentration) gradients implies greatly enhanced K<sup>+</sup> dispersal around regions of build up in interstitial space, compared with diffusion alone - the so-called 'spatial buffer' mechanism for K<sup>+</sup> .

      An additional, larger, component of macroscopic cortical conductance appears to arise from extracellular but not interstitial pathways, possibly via perivascular tissue. This may not have been studied in detail, but is indicated by the fact that measured cortical impedance is in at least some circumstances only around half what would be expected on the basis of measurements of the volume and tortuosity of local interstitial space around a microelectrode (Gardner-Medwin, 1980; Nicholson & Phillips, 1981). Barbour (2016) points out that these two ways of approaching impedance both give an order of magnitude hugely below that of Gomes et al. (2016). Taking account of interstitial tortuosity shows, however, that they do differ by a factor of about 2.

      Barbour B. (2016) Analysis of claims that the brain extracellular impedance is high and non-resistive. https://arxiv.org/abs/1612.08457

      Gardner-Medwin A.R. (1980) Membrane transport and solute migration affecting the brain cell microenvironment. Neurosci. Res. Progr. Bull. 18:208-226

      Gardner-Medwin A.R. (1983) A study of the mechanisms by which potassium moves through brain tissue in the rat. J Physiol 335:353-374

      Gomes J.-M., C. Bédard, S. Valtcheva, M. Nelson, V. Khokhlova, P. Pouget, L. Venance, T. Bal, and A. Destexhe (2016) Intracellular impedance measurements reveal non-ohmic properties of the extracellular medium around neurons. Biophysical Journal, 110(1):234-246

      Nicholson C. & Phillips J.M. (1981) Ion diffusion modified by tortuosity and volume fraction in the extracellular microenvironment of the rat cerebellum. J. Physiol. 321:225-257

      Rall W. & Rinzel J. (1973) Branch input resistance and steady attenuation for input to one branch of a dendritic neuron model. Biophysical Journal 13(7):648-688



    2. On 2018 Feb 01, Alain Destexhe commented:

I see your points but I have no idea whether they are valid or not. Why don't you try to publish your model? At the moment, you have a model drawn on the back of an envelope, and we don't know whether it is valid or flawed. You use this model to criticize a series of papers published in journals like Biophysical Journal and Physical Review, which have thus been reviewed by biophysicists and physicists. I suggest you do the same to give more credibility to your criticism; otherwise it seems a bit easy...

About the issue of ohmic versus non-ohmic: as said above, this paper is not alone; it is part of a series of papers which bring evidence based on other signals, such as intracellular recordings, LFP, EEG, MEG... This is the impedance-measurement part of the story. You can of course say, in non-peer-reviewed media, that these peer-reviewed papers are all wrong, but frankly it is not very credible.

      (and also not very constructive - as you do not propose anything to go forward)

If you read our review paper published in J. Integrative Neurosci. (preprint copy at https://arxiv.org/abs/1611.10047), we review all these elements and propose new experiments to go forward. In my opinion, this is the only way to go.



    3. On 2018 Jan 31, Tony Gardner-Medwin commented:

      Destexhe suggests here (and in his published response to Barbour) that conventional modelling of their data must predict a resistive impedance at high frequencies, which they did not see. The argument is that a capacitative membrane impedance must become negligible at sufficiently high frequencies compared with a series extracellular resistance. This is an incomplete argument and misleading, as I thought was clear to Destexhe early in our correspondence.

      The conclusion would indeed be true (though only evident at frequencies orders of magnitude higher than the 1kHz maximum in the data) if it were possible to directly measure an impedance between cytoplasm and an extra-cellular site. However, real measurements require electrodes with resistance and capacitance that need to be included in a model. Gomes et al. used a single patch electrode for both current passage and voltage measurement, which further limits the interpretation. When capacitance of electrode walls and amplifier input are included, impedance phase tends towards -90deg at moderately high frequencies, while compensation in the amplifier using negative capacitance can push it towards +90deg, as may explain the steeply increasing phase in Fig. 8. These effects can be explored in the model I provide, and I thought they were acknowledged by Destexhe many months ago.
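A minimal sketch of this point (a hypothetical circuit with assumed component values, not the authors' model: an electrode series resistance, an R-C membrane, and a net stray capacitance at the recording node) shows how residual versus over-compensated capacitance pushes the measured phase towards -90° or +90°:

```python
import numpy as np

# Hypothetical recording circuit: electrode resistance Re in series with an
# R-C membrane, shunted by a net stray capacitance C_eff at the input node.
# Negative C_eff models over-compensation by the amplifier's negative-capacitance circuit.
Rm, Cm = 200e6, 100e-12   # membrane resistance and capacitance (assumed values)
Re = 10e6                 # electrode series resistance (assumed value)

def measured_Z(f, C_eff):
    """Impedance seen by the amplifier at frequency f (Hz)."""
    w = 2 * np.pi * f
    Z_cell = Rm / (1 + 1j * w * Rm * Cm)
    Z_series = Re + Z_cell
    if C_eff == 0:
        return Z_series
    Z_stray = 1 / (1j * w * C_eff)
    return Z_series * Z_stray / (Z_series + Z_stray)

f = 1e4  # 10 kHz
under = np.degrees(np.angle(measured_Z(f, +5e-12)))   # residual stray C: phase driven towards -90
over = np.degrees(np.angle(measured_Z(f, -5e-12)))    # over-compensated: phase driven towards +90
```

With `C_eff = 0` the phase simply follows the electrode plus membrane; the stray/compensation term dominates at high frequency, in either direction depending on its sign.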

My model of course makes simplifying assumptions. The parameters set in the download are only examples of sets that can fit well to the data of Figs 2,8 in the paper. Indeed they rather arbitrarily use equal time constants for soma and dendrite membranes, since Destexhe thought for some unstated reason that this should be a constraint. It is not at all clear that implausible parameters are needed to fit the data using conventional electrophysiological analysis, though of course there are many unknown details about the preparations used. Destexhe comments (26 Jan) that more experiments could settle matters. It is indeed possible of course that new experiments could overturn conventional electrophysiological understanding. But the point at issue at present is not whether conventional understanding is wrong, but whether the paper of Gomes et al. (2016) shows that it is wrong, as claimed in the title.

      Gomes J.-M. et al. (2016) Intracellular impedance measurements reveal non-ohmic properties of the extracellular medium around neurons. Biophysical Journal, 110(1):234-246



    4. On 2018 Jan 26, Alain Destexhe commented:

Thanks Tony for sharing your model, which indeed reproduces part of our measurements. As pointed out in our discussions, this match concerns the impedance amplitude (modulus), which can be reproduced by different resistive models. However, this is not true for the phase: no resistive model can account for the phase converging to -45 degrees that we observed. These points were detailed in our commentary in Biophys. J:

      http://www.cell.com/biophysj/fulltext/S0006-3495(17)30914-1

So, to your question of whether resistive models can fit our data, our answer is no.

      As we said before, there is no point in discussing this any further now. The only way we can agree is to do the right experiment that everybody will fully accept.



    5. On 2018 Jan 26, Tony Gardner-Medwin commented:

      I think for most people with some knowledge of biophysics it will be fairly clear already that the paper Gomes et al. 2016 is quite inadequate to provide any sort of challenge to conventional electrophysiological modelling.

I had an extensive and constructive correspondence with Destexhe last year. It was clear to me (and at least to some extent to Destexhe) that conventional theory can closely and quite simply fit the data of Figs 2,8 in Gomes et al.. At first Destexhe and colleagues argued that such a model was too simplistic, leaving out certain factors. When these were included and more complete fits could be made, there have been no clear suggestions from them that critical parameters were actually wrong or implausible (for example, over-compensated negative capacitance to explain phase data in Fig. 8). I had hoped to publish these fits jointly with Destexhe and colleagues, without getting involved in the more detailed disputes over Barbour's comments. Destexhe didn't agree to this, and I have not so far felt it worth publishing on my own what might be seen as a rather unnecessary attack on an already somewhat discredited paper.

Since the issues seem to be continuing in a manner reminiscent of tweets, I have put up the model that I developed in the correspondence with Destexhe at http://tmedwin.net/docs/Gomes_fits_v4.xlsx and encourage anyone interested to play around with it. I have a more sophisticated version (in Labview) that takes account of the distributed e-c field potentials generated by dendritic current in the model - but the differences are pretty negligible in their effect on what Gomes et al. measured. Comments and queries are of course welcome, either here or to me at ucgbarg@ucl.ac.uk .

Tony Gardner-Medwin



    6. On 2018 Jan 26, Alain Destexhe commented:

Not at all - this paper uses NEURON, so we can't include any complex extracellular impedance. As we said previously, we are waiting for appropriate experiments to settle this issue of extracellular impedance, evaluate the importance of the short-cut, and determine which model should be considered in which circumstance. The previous discussion has shown that each party has its own logic and supporting data, so there is no point in continuing the debate in the absence of these new experiments. So no, the discussion is not "closed" at all; it is "on hold", waiting for new experiments.



    7. On 2018 Jan 25, Boris Barbour commented:

      Alain Destexhe is an author on a recently posted preprint using a low and resistive value for the extracellular impedance.

      https://www.biorxiv.org/content/early/2018/01/05/243808

      This is completely inconsistent with the position he puts forward here. If he has changed his opinion, it would be helpful to readers to close this discussion by making that clear.



    8. On 2017 Oct 04, Boris Barbour commented:

      My preprint has been published as a letter to the editor

      http://www.cell.com/biophysj/fulltext/S0006-3495(17)30913-X

      Bédard and Destexhe reply

      http://www.cell.com/biophysj/fulltext/S0006-3495(17)30914-1

I have since found a metaphor that might offer a useful intuition to newcomers to the subject. Imagine that the extracellular impedance is represented by wallpaper and the membrane impedance by a castle wall. You wish to measure the thickness of the wallpaper (extracellular impedance). You must choose one of the following measurement methods. 1) Measure the wallpaper thickness directly, before sticking it to the wall. 2) Measure the thickness of the paper AND the castle wall to which it is stuck. Obviously it is much, much more difficult to extract an accurate estimate of the wallpaper thickness if it is measured together with that of the wall. Yet this is essentially the approach that Gomes et al. have chosen by placing one of their electrodes in the intracellular compartment and measuring membrane and extracellular impedances in series. The paper contains no independent validation or justification of their ability to perform the separation of membrane and extracellular components of the impedance to the required accuracy.
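The metaphor can be put into (entirely made-up) numbers: when the small extracellular term is measured in series with the large membrane term, even a tiny relative error in the membrane model destroys the estimate of the extracellular one.

```python
# Toy numbers, not from the paper: a "castle wall" membrane impedance of
# 100 MOhm in series with a "wallpaper" extracellular impedance of 0.1 MOhm.
Z_membrane = 100.0e6
Z_extra = 0.1e6

Z_total = Z_membrane + Z_extra           # what the intracellular electrode measures
model_error = 0.01                       # a mere 1% error in the membrane model...
Z_extra_est = Z_total - Z_membrane * (1 + model_error)

# ...makes the recovered "wallpaper" estimate wrong by a factor of ten,
# and with the wrong sign.
rel_error = abs(Z_extra_est - Z_extra) / Z_extra
```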



    9. On 2017 Mar 08, Alain Destexhe commented:

The discussion is becoming interesting... we are very happy that our paper has triggered so many comments. Barbour's last reply contains several simplifications and errors about the 1/f scaling, in particular the claim that cable filtering explains the 1/f scaling, while it only partially explains it. Barbour focuses on the evidence for non-resistive media from impedance measurements, and we agree that those are subject to controversy and to multiple interpretations (but all of them, not only ours...). A more detailed reply is obviously needed here; we are preparing it and it will take some time.

The evidence for non-resistive media goes well beyond impedance measurements, and because Barbour does not mention this evidence, although it is important for the discussion, we summarize it here. We hope that it will be apparent that our view is coherent, perhaps even more coherent than the traditional view.

(1) The first point is the observation that LFP and EEG can scale as 1/f. This 1/f is only seen in the so-called "desynchronized EEG" states, with no slow waves (with slow waves it scales as 1/f<sup>2</sup>). In 2006, in a paper in PRL, by relating LFPs with unit activity, we suggested for the first time that there is a 1/f filter somewhere. Three explanations were proposed for this: first, a random arrangement of capacitive and resistive elements (as in extracellular space) is known to create a 1/f filter; second, cable filtering (Pettersen and Einevoll, 2008); and third, the influence of ionic diffusion (Bedard et al., Biophys J, 2009). All generate 1/f, but cable filtering generates 1/f only at high frequencies (the kernel is flat at low frequencies), and this is also true for the very recent measurements discussed by Barbour. On the other hand, ionic diffusion predicts 1/f over the whole frequency spectrum, and thus accounts better for the LFP and EEG, which also scale as 1/f at low frequencies (cable filtering is unable to explain this). This is already a first indication that cable filtering is not entirely satisfactory.

(2) A second point is that if one relates the power spectra of intracellular and LFP recordings in vivo, the transfer function cannot be fit using a resistive medium, but is better fit by the prediction from ionic diffusion (Bedard et al., JCNS 2010). In the same paper, we showed that cable filtering only works in a silent neuron (in vitro conditions), but this effect vanishes in the presence of synaptic bombardment (in vivo conditions), so cable filtering is unlikely to provide a satisfactory explanation for these in vivo data either. The fact that ionic diffusion fits better is of course not any kind of proof, but only an indication that the spectral structure of these signals is consistent with it.

(3) A third piece of evidence comes from the scaling exponents of EEG and MEG signals. If the media traversed by the fields are resistive, the electric potential and magnetic field should scale the same, and they do not (Dehghani et al., 2010). This is true for all locations on the scalp, and where the exponents were similar, the SNR was low, so there seems not to be a single location in the brain which behaves as a resistive medium according to that analysis.

      (4) The inhomogeneity of the impedances is also important. In 2004, we made a theoretical study showing that if there are impedance inhomogeneities (like a succession of fluids and membranes), then the electric potential is strongly filtered (Bedard et al., Biophys J. 2004). Note that this model was resistive, but with spatial variations in resistivity (which is actually known to exist in cortex; see conductivity measurements of Herreras lab). Later, a study by Nelson et al. (J Neurosci 2013) showed that the neuropil in cerebral cortex is also inhomogeneous at smaller scales. This does not show anything in favor or disfavor of a resistive medium, but it shows that the inhomogeneous electric structure of the medium necessarily predicts a filtering effect on the extracellular potential.

(5) The Gomes et al. measurements (Biophys J 2016, discussed here) also show filtering consistent with ionic diffusion. However, the present discussion is about whether the same result can also be explained by cable filtering rather than by an effect of the extracellular medium. It remains to be shown (1) whether the match of cable filtering is as good as claimed for biophysically realistic conditions; (2) whether it also works in vivo, where we see the same effect, while cable filtering is supposed to be much attenuated. This is the discussion we are having at the moment with Tony Gardner-Medwin as well, and I hope we can reach an agreement at least on that one.

(6) Finally, we suggested a framework in which all these data can be explained, in a recent review paper (Bedard et al., J Integrative Neurosci, 2017). We showed that all the above data can be explained by a diffusive medium, although we agree that this is a theory: nothing has been demonstrated about diffusion, and there may be something else scaling as 1/√ω. We also suggested that there is a current shunt in the water column that surrounds electrodes, which could explain measurements of resistivity, and we sketched an experiment to test it.

      On the other hand, if we understand correctly, Barbour's "theory" postulates that points 1-6 above are all wrong. Of course this is also a possibility, but frankly, we find it neither satisfying nor constructive, since no experiment is proposed to test it. It looks like an attempt to close the discussion, whereas our approach is the opposite: our paper tries to open the debate.

      Moreover, as theoreticians, we are more satisfied with the diffusive theory because it builds on a much wider set of experimental observations and analyses. Building only on impedance measurements is dangerous, because we do not know their accuracy with certainty (and indeed they are disputed, as this very discussion demonstrates...). In any case, we made efforts to find a framework in which all the data can be explained. Of course this framework may be incomplete and probably needs to be improved, but at least it provides a solid basis for further experiments.

      So in the future, one should examine the respective contributions of cable filtering, impedance inhomogeneity, ionic diffusion (also recently investigated by the group of Einevoll), and possibly other unknown factors, to yield a precise biophysical picture of the genesis of extracellular potentials and magnetic fields. In our mind, it is clear that all these factors contribute, and considering the medium as just a resistance is a dangerous oversimplification.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    10. On 2017 Mar 05, Boris Barbour commented:

      See also this recent paper measuring brain extracellular impedance in humans and finding no evidence for a frequency dependence.

      Ranta R, 2017


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    11. On 2017 Feb 18, Boris Barbour commented:

      Tony Gardner-Medwin shows how even a simple cable model with a low-resistance extracellular space comes remarkably close to generating the somatic impedance spectra reported in Gomes et al. This cable property was described no later than 1973.

      Destexhe calls for more experiments, but both prior and recent work on pyramidal cells has clearly demonstrated that the ~1/sqrt(f) impedance can be fully accounted for by the cellular impedance without any need to invoke a high or reactive extracellular impedance, eliminating the only supporting (and very indirect) argument Destexhe has so far advanced. A prior paper was

      Yaron-Jakoubovitch, Jacobson, Koch, Segev and Yarom (2008) A paradoxical isopotentiality: a spatially uniform noise spectrum in neocortical pyramidal cells Front. Cell. Neurosci., https://doi.org/10.3389/neuro.03.003.2008

      It shows that the somatic impedance (i.e. the impedance measured at the soma) of juvenile rat pyramidal cells exhibits a slightly steeper than 1/sqrt(f) decrease with frequency, just like in Gomes et al (although the curves are shifted somewhat): see the red somatic impedance trace in Fig. 2A of Yaron-Jakoubovitch et al. A very similar impedance spectrum emerges from a simulation with a reconstructed pyramidal cell: see red simulated somatic impedance in Fig. 4A of Yaron-Jakoubovitch et al. Thus, impedance spectra of the form observed by Gomes et al. can be quite precisely accounted for by the electrical morphology of the pyramidal cell, without invoking an implausible reactive and elevated extracellular impedance. This has been known for at least 9 years.

      A recent paper that directly addresses the results of Gomes et al.:

      Miceli, Ness, Einevoll and Schubert (2017) Impedance spectrum in cortical tissue: Implications for propagation of LFP signals on the microscopic level. eNeuro, https://doi.org/10.1523/ENEURO.0291-16.2016

      The authors reproduce a ~1/sqrt(f) impedance in the presence of a low and resistive extracellular space (see their Fig. 4D). Furthermore, that low, resistive impedance was verified directly, yet again. Miceli et al conclude

      "... the overall evidence points to an essentially real (Ohmic) extracellular conductivity with negligible effects from ionic diffusion in the frequency range between 5 and 500Hz."

      Destexhe questions my suggestion that his proposed high extracellular impedance would be associated with gigantic extracellular action potential signals. My argument does contain several implicit approximations. However, because the proposed extracellular impedance is of the same order of magnitude as the membrane impedance, I still believe that membrane and extracellular voltage responses to a given current would be predicted to have quite comparable amplitudes. In other words, those in the extracellular space would indeed be gigantic.

      The shunt-current mechanism proposed by Destexhe in the recent ArXiv was already addressed and ruled out in my preprint (the tissue fraction occupied by the electrode is too small to make a big difference).

      The majority of the points I raised in my preprint regarding the experimental design and analysis strategy of Gomes et al. have not been addressed at all, except by appeal to authority. In particular, the assertion by Destexhe to have measured the extracellular impedance is precisely the point in question. I maintain they have simply measured the intracellular impedance and are unable to draw from it any conclusions regarding the extracellular impedance. This point of view is in full agreement with the results of the Segev/Yarom and Einevoll/Schubert groups, as well as the arguments of Gardner-Medwin drawing on the work of Rall and Rinzel. In summary, Gomes et al. provide no good reason to prefer their conclusions over those of the many researchers who have previously measured the extracellular impedance directly, including Ranck, Nicholson and Logothetis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    12. On 2017 Feb 08, Alain Destexhe commented:

      Dr Barbour casts doubt on our analysis, but makes several confusions. First, he confuses measurements in Fourier frequency space with the temporal domain; the former cannot be extrapolated so simply to the latter (for example, translating mV/Hz into an LFP amplitude in mV is not straightforward). As a consequence, he obtains an aberrant LFP amplitude of 20 mV, which our measurements do not predict at all. Barbour makes a second confusion by performing an analysis that assumes the medium is resistive, while our measurements did not make this assumption, so his analysis predicts aberrant values for this reason as well.

      We have to say that it is very harmful to allow such non-peer-reviewed criticisms, which spread wrong statements and cause harm to published papers, because uninformed readers will believe we were not careful in our analysis. Our papers, measurements and analyses were all peer reviewed, in journals such as Physical Review or The Biophysical Journal, and were thus seen by reviewers (mostly physicists) specialized in electromagnetism theory, electrophysiology or biophysics. We suggest that, instead of spreading non-reviewed and wrong criticisms, it would be more scientific to make biophysical measurements oneself and get the work published in peer-reviewed journals (i.e., not just words), and in journals of the same standard as Physical Review or Biophysical Journal (i.e., not just arXiv).

      Finally, we fully understand that our measurements go against the current belief, and it is of course much easier and more reassuring to try to find arguments against them. In our recent review paper https://arxiv.org/abs/1611.10047 , we suggested an experiment that will determine whether the evidence for a resistive medium was contaminated by shunt currents. So instead of exchanging non-peer-reviewed statements, we suggest performing this experiment, which should definitively settle the issue.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    13. On 2016 Dec 28, Boris Barbour commented:

      In this and previous papers, the authors report that the extracellular impedance is high and non-resistive, compared to many previous measurements that have found it to be much lower and essentially resistive. I argue in this arXiv preprint

      https://arxiv.org/abs/1612.08457

      that the authors' estimate was probably confounded by an inaccurate representation and extraction of the series neuronal impedance in their measurement. In consequence, there is no compelling evidence to abandon the well established consensus that the extracellular impedance is low and essentially resistive in the frequency range of interest for biological signals.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 09, KEVIN BLACK commented:

      If you get to this page, click on "Other versions," above, to see the corrected version 2.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 18, Shin-ichiro Hiraga commented:

      I wonder if the MRE11 SIM2 mutant shows hypersensitivity to camptothecin.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 12, Tom Yates commented:

      Pretorius and colleagues are to be congratulated for undertaking perhaps the largest case control study to date exploring the viral aetiology of severe acute respiratory illness (SARI) (Pretorius MA, 2016). The study is remarkable for two reasons other than its size.

      First, large numbers of HIV positive cases and controls were enrolled. Second, unlike other studies of acute respiratory tract infection or pneumonia undertaken in Sub Saharan Africa (Hammitt LL, 2012, Bénet T, 2015, Breiman RF, 2015), human rhinovirus was isolated more frequently in cases than in controls. The investigators estimated that the population fraction of SARI attributable to rhinovirus (an entity they call ‘adjusted prevalence’) was approximately twenty percent.

      One potential reason for an association between rhinovirus infection and SARI being observed in this study but not in other studies would be if rhinovirus were only a cause of SARI in the setting of HIV-related immunocompromise. It would therefore be very interesting to see these data presented separately by HIV status.

      Tom A. Yates <sup>a</sup> <sup>b</sup> <sup>*</sup>

      Patrick K. Munywoki <sup>a</sup> <sup>c</sup>

      D. James Nokes <sup>a</sup> <sup>d</sup>

      a) Virus Epidemiology and Control Research Group, Epidemiology and Demography Department, KEMRI-Wellcome Trust Research Programme, Kilifi, Kenya

      b) Institute for Global Health, University College London, London, UK

      c) School of Health and Human Sciences, Pwani University, Kilifi, Kenya

      d) School of Life Sciences and WIDER, University of Warwick, Coventry, UK

      * Dr Tom Yates, t.yates@ucl.ac.uk


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 21, Randi Pechacek commented:

      Elisabeth Bik references this paper on a microbe.net blog post, discussing Roman culture.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 28, Clive Bates commented:

      Please note this comment is replicated on the PubMed entry for the Pediatrics version of this paper. See Singh T, 2016.

      There are several weaknesses in the reasoning in this paper.

      1. Singh et al assert: “exposure to e-cigarette advertisements might contribute to increased use of e-cigarettes among youths.” The study is cross-sectional and does not (and cannot) establish that advertising causes e-cigarette use. Even if the authors acknowledge that their study does not establish a causal relationship, that has not stopped them drawing conclusions as if it does.

      2. The finding that higher advertising exposure would be associated with greater e-cigarette use is quite likely, but there are many possible explanations. It could be that e-cigarette users see more advertising because of the way they live and their interests direct them to where such advertising is visible. Alternatively, teenagers already using the products may just be more interested and have better recall of advertising they have seen, even if exposure is no different.

      3. Marijuana is not advertised but its prevalence among high school students is higher (23.4%) than tobacco (22.4%) on the basis generally used by CDC (used at least once in the last 30 days) - see CDC Youth Risk Behavior Surveillance — United States, 2013. It may be that better explanatory variables than advertising exposure are needed to interrogate prevalence of youth risk behaviours.

      4. Even if we allow that e-cigarette advertising may be increasing youth e-cigarette uptake, something that is far from established, the authors need to consider the likelihood that this is displacing cigarette smoking. Given that independent risk factors for teenage vaping and smoking are likely to be similar, it would be surprising if vaping was not adopted as an alternative to smoking, at least in some cases. In this event, the health effect of e-cigarette use is positive, and, therefore, so is any role that advertising plays in it.

      5. The observed sharp decline in teenage cigarette smoking that has coincided with the sharp rise in teenage e-cigarette use does not establish a causal relationship between these trends, but it does suggest that it is a plausible hypothesis that vaping is reducing smoking and therefore that great care should be taken when designing policies that may attenuate this effect. See data: CDC Tobacco Use Among Middle and High School Students — United States, 2011–2014.

      6. The authors jump to a policy conclusion that goes far beyond the limitations of their study: "Multiple approaches are warranted to reduce youth e-cigarette use and exposure to e-cigarette advertisements”.

      7. Before making such policy proposals, the authors should consider several issues and potential unintended consequences beyond the scope of this paper: that young people might smoke instead of using e-cigarettes; that e-cigarette advertising might be an effective form of anti-smoking advertising; that such restrictions may reduce adult switching from smoking to vaping and thus create damage to adult health; that excessive restrictions on advertising protect incumbent industries (cigarettes) and reduce the returns to innovation in the disruptive entrants (e-cigarettes).

      8. The assertive and unqualified nature of the policy proposal raises concerns of an unacknowledged investigator bias. This would not be that surprising. The authors are from CDC, and the media release that accompanied this survey abandoned all caution about causality or unintended consequences. It reflected the CDC’s adversarial prior policy stance on these issues, none of which are supported by the survey itself:

      “The same advertising tactics the tobacco industry used years ago to get kids addicted to nicotine are now being used to entice a new generation of young people to use e-cigarettes,” said CDC Director Tom Frieden, M.D., M.P.H. “I hope all can agree that kids should not use e-cigarettes.”

      Conclusions: CDC should adopt a more dispassionate and careful approach to both reporting and communicating survey findings. In publishing for peer-reviewed journals, CDC staff should disclose relevant CDC policy and advocacy positions as non-financial competing interests.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 18, Gwangseong Kim commented:

      In two recent articles [1, 2], two techniques for removing or inactivating blood-borne pathogens were introduced. The initial experiments were performed in vitro under simplified conditions.

      First, the primary achievement of the PDT work deserves clarification [1]. PDT is a powerful therapeutic modality, but its clinical application has been hampered by the inability of light to penetrate deep layers of tissue, mainly because hemoglobins in the blood readily absorb photons. Utilizing a millimeter-diameter transparent tube for extracorporeal blood circulation allows PDT to function well despite the presence of hemoglobins in blood. Another point that deserves clarification is that the tube capturing device is not a microfluidic device [2]. This technique can be adapted using existing medical tubing without the need for complicated microfluidics and micro-fabrication. The device is a medical tube whose internal surface has been chemically modified, in simple steps, for cell capture.

      We would like to take this opportunity to respond to concerns brought up in [3].

      We start by addressing concern (1), which speculates about the possibility of overheating during the use of near-IR light. Our control data (Fig. 3 and Fig. 4 of [1]) confirmed that controls illuminated without photosensitizer-antibody conjugates did not undergo cell death, whereas those with photosensitizer-antibody conjugates underwent significant cell death under identical conditions. Thus it is clear from our data that temperature did not affect the outcome. It has been shown that 660 nm irradiation is safe and effective [4-6].

      Moving on to concern (2), part (a), which raises the problem of using the CD-44 antigen as a target: limitations of antibody specificity are common knowledge and not unique to CD-44 but apply to all antibodies. To our knowledge, a targeting method that binds exclusively to cancer cells does not yet exist, making the use of such a compound an unreasonable standard for publication. We used the CD-44 antibody to demonstrate feasibility. As targeting methodologies advance and better selectivity to target cells becomes available, this technique will have improved selectivity. Our experiments were designed to avoid non-specific damage to other cells by pre-staining pure cancer cells with the photosensitizer-antibody conjugates and subsequently removing extra free conjugates before spiking into blood (described in detail in [1]). This elimination of the possibility of side effects due to undesired binding to other blood cells and excess free photosensitizer-antibody conjugates precluded the need for a toxicity study, particularly because we were at the proof-of-principle stage.

      Part (b) of concern (2) suggests that we may have caused non-specific damage to non-cancerous cells by convection of ROS in the blood stream. We believe that this is highly unlikely. One of the authors has been conducting research focusing on ROS and PDT for years, in collaboration with other researchers [7-15]. This research demonstrated that PDT is extremely selective to targeted cells [13].

      Part (c) of concern (2) states that we should have used additional cytotoxicity assays, such as Annexin V, TUNEL, and MTT. However, because none of these techniques are cell-type specific, they would be useless for the particular objective for which they were suggested. Once our line of investigation reaches a more mature stage, we plan to undertake more useful studies, such as applying separate fluorescent tags or radiolabels in addition to a cell viability assay, and analyzing cell death with a cell sorting technology such as FACS, MACS, density gradient centrifugation, etc.

      Concern (3) is that the capturing work [2] lacked purity confirmation regarding non-specific capture of blood cells. Though purity confirmation is critical in diagnostic testing, our work was strictly limited to in vitro conditions, using spiked pure PC-3 cells as a model. To visualize and quantify PC-3 cells in the presence of whole blood, PC-3 cells were pre-labeled with a fluorescent tag (Calcein AM) and the extra free dye was subsequently removed before spiking PC-3 cells into blood. Because only PC-3 cells can fluoresce in the blood mixture, and because quantification was based on fluorescing cells, false-positive results from other blood cells can reasonably be excluded. Furthermore, if other blood cells were captured but not identified by our detection method, our data would then indicate that the simple tube captured cancer cells despite being blocked by other blood cells. If our technique were applied to CTC diagnosis, independent isolation procedures could be used to ensure the purity of captured cells. In contrast, if used for removal or killing, the purity of captured cells would not be as critical, provided that CTCs are effectively removed. If, by chance, capture is hampered by accumulation of non-specific binding when filtering the entire blood volume, this issue can be addressed with strategies such as scaling up the tube and carefully determining the tube dimensions, flow rate, frequency of tube replacements, etc.

      Finally, concern (4) points out that the experimental conditions were not translatable to clinical applications. Part (a) regards scaling up the system to show high throughput. The concept of extracorporeal processing of the entire blood volume has been used for years, for example in hemodialysis, and we are already working on optimizing the technique for larger blood volumes. Part (b) of concern (4) describes the static no-flow condition as unrealistic. This issue was raised during the review process, and we provided our results showing data under constant flow conditions driven by a peristaltic pump (to be published in a future publication). The reviewers agreed that the use of a no-flow condition as a conservative approach at a proof-of-concept stage was appropriate.

      Despite its preliminary nature, we believe that our work communicates novel ideas, an important objective of research and publication. Given the number of research articles dealing with diagnostics and microfluidics, perhaps a further point of confusion came about by thinking of our work in those terms. We want to clarify that diagnostics were not the primary objective of our work. Furthermore, as this response makes evident, our experimental design was carefully devised to minimize unnecessary interference. We hope that this response mitigates any confusion and addresses the concerns raised. The entire response appears in the PLOS ONE comment section: http://www.plosone.org/article/comments/info:doi/10.1371/journal.pone.0127219. Feel free to contact us for further clarifications.

      1. Kim G, Gaitas A. PLoS One. 2014;10(5):e0127219.
      2. Gaitas A, Kim G. PLoS One. 2015;10(7):e0133194. doi: 10.1371/journal.pone.0133194.
      3. Marshall JR, King MR. doi: 10.1007/s12195-015-0418-3. 2015; first online.
      4. Ferraresi C, et al. Photonics and Lasers in Medicine. 2012;1(4):267-86.
      5. Avci P, et al. Seminars in Cutaneous Medicine and Surgery; 2013.
      6. Jalian HR, Sakamoto FH. Lasers and Light Source Treatment for the Skin. 2014:43.
      7. Ross B, et al. Biomedical Optics, 2004.
      8. Kim G, et al. Journal of Biomedical Optics. 2007;12(4):044020-8.
      9. Kim G, et al. Analytical Chemistry. 2010;82(6):2165-9.
      10. Hah HJ, et al. Macromolecular Bioscience. 2011;11(1):90-9.
      11. Qin M, et al. Photochemical & Photobiological Sciences. 2011;10(5):832-41.
      12. Wang S, et al. Lasers in Surgery and Medicine. 2011;43(7):686-95.
      13. Avula UMR, et al. Heart Rhythm. 2012;9(9):1504-9.
      14. Kim G, et al. Oxidative Stress and Nanotechnology, 2013. p. 101-14.
      15. Lou X, et al. Lab on a Chip. 2014;14(5):892-901.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 24, Melissa Garrido commented:

      Thank you for the feedback on the paper. I wanted to respond to the comment about exclusion of post-discharge survival time from the propensity score model. Covariates chosen for inclusion in a propensity score should be those thought to influence both the treatment (here, palliative care consultation) and the outcome (hospitalization costs). Because post-discharge survival occurs after both receipt of palliative care and the period in which the outcome was measured (and thus could not lie on any potential causal pathway between palliative care receipt and hospitalization costs), it was not considered as a potential covariate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 22, Cicely Saunders Institute Journal Club commented:

      The Cicely Saunders Institute journal club discussed this paper on Wednesday 2nd March 2016. We agreed that this is an important study that adds to the growing body of evidence on the cost saving effects of palliative care, which has the potential to influence policy and commissioners.

      We enjoyed discussing the prospective cohort design of the study and the use of propensity score matching to address problems of selection bias. We commended the large number of variables that were included in the propensity matching; however we also questioned the impact of omitting some important potential confounders, such as post-discharge survival time. We noted that those with a diagnosis of dementia were excluded from the study. Whilst we acknowledged there may be practical reasons for this, such patients may have higher numbers of comorbidities, higher associated costs and possibly be more likely to benefit from palliative care. We agreed that enabling participation of those who lack capacity by gaining assent via a consultee and obtaining proxy responses would strengthen future studies of this design.

      The study examines the impact of palliative care consultation on hospital costs only without assessing its concurrent impact on patients and carer outcomes, which limits the impact and relevance of the findings. Authors acknowledge this limitation and state that further analyses will assess the effect of palliative care on patient and carer outcomes. We look forward to seeing these findings presented in future publications to strengthen the evidence regarding hospital palliative care consultations.

      Commentary by Nilay Hepgul and Anna Bone


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 24, Ben Goldacre commented:

      This trial has the wrong trial registry ID associated with it on PubMed: both in the XML on PubMed, and in the originating journal article. The ID given is NCT0247317. We believe the correct ID, which we have found by hand searching, is NCT02473172.

      This comment is being posted as part of the OpenTrials.net project<sup>[1]</sup> , an open database threading together all publicly accessible documents and data on each trial, globally. In the course of creating the database, and matching documents and data sources about trials from different locations, we have identified various anomalies in datasets such as PubMed, and in published papers. Alongside documenting the prevalence of problems, we are also attempting to correct these errors and anomalies wherever possible, by feeding back to the originators. We have corrected this data in the OpenTrials.net database; we hope that this trial’s text and metadata can also be corrected at source, in PubMed and in the accompanying paper.

      Many thanks,

      Jessica Fleminger, Ben Goldacre*

      [1] Goldacre, B., Gray, J., 2016. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials 17. doi:10.1186/s13063-016-1290-8 PMID: 27056367

      * Dr Ben Goldacre BA MA MSc MBBS MRCPsych<br> Senior Clinical Research Fellow<br> ben.goldacre@phc.ox.ac.uk<br> www.ebmDataLab.net<br> Centre for Evidence Based Medicine<br> Department of Primary Care Health Sciences<br> University of Oxford<br> Radcliffe Observatory Quarter<br> Woodstock Road<br> Oxford OX2 6GG


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 19, Melissa Rethlefsen commented:

      I applaud the authors' reporting of all of the search strategies behind this Cochrane review. There is, however, one area with an error: the MEDLINE search strategy for this review update is not reproducible, for two reasons: 1) there is no PRISMA-style flow chart indicating the number of references found in all or individual databases, nor in-text reporting of the complete number of references identified by the searches before and after deduplication, and 2) there is a major Boolean error in the MEDLINE search strategy. On the second point, the MEDLINE search strategy is missing one or more lines that would combine the gynecological cancer terms with the obstruction terms. The intended strategy can be partly extrapolated from the more complete search strategies reported for EMBASE and other databases, but because there is no flow chart, it is not clear whether the extrapolated search is indeed what the authors performed. It appears that the correct Boolean addition would be to combine lines 15 and 23, plus the additional date limitation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 24, Lydia Maniatis commented:

      The authors do two things in this study:

      First, they point out that past studies on “constancy” have been hopelessly confounded due to (a) the condition-sensitivity of, and ambiguity in, what is actually perceived and (b) questions that are confusing to observers because they are vague, ambiguous, unintelligible or unanswerable on the basis of the percept, thus forcing respondents to try to guess at the right answer. As a result, the designers of these studies have generated often incoherent data and proffered vague speculations as to the reasons for the randomness of the results.

      Second, as though teaching (what to avoid) by example, they produce yet another study embodying all of the problems they describe. Using arbitrary sets of stimuli, they ask an arbitrary set of questions of varying clarity/answerability-on-the-basis-of-the-percept, and generate the typically heterogeneous, highly variable set of outcomes, accompanied by the usual vague and non-committal discussion. (The conditions and questions are arbitrary in the sense that we could easily produce a very different set of outcomes by (especially) changing the colors of the stimuli or (less importantly) changing the questions asked.) Thus, the only possible value of these experiments would be to show the condition-dependence of the outcomes. But this was an already established fact, and it is, furthermore, a fact that any experimenter in any field should be aware of. It's the reason that planning an experiment requires careful, theory-guided control of conditions.

      The authors make no attempt to hide the fact that some of the questions they ask participants cannot be answered by referring to the percept. For example, they are asked about some physical characteristic of the stimulus, which is, of course, inaccessible to the human visual system and unavailable in the conscious percept. In these cases, we are not studying perception of the color of surfaces, but a different kind of problem-solving. The authors refer to answers “based on reasoning.” If we're interested in studying color perception, then the simple answer would be not to use this type of question. The authors seem to agree: “Although we believe that the question of how subjects can reason from their percepts is interesting in its own right, we think it is a different question from how objects appear. Our view is that instructional effects are telling us about the former, and that to get at the latter neutral instructions are the most likely to succeed...In summary, our results suggest that certain types of instructions cause subjects to employ strategies based on explicit reasoning— which are grounded in their perceptions and developed using the information provided in the instructions and training—to achieve the response they believe is requested by the experimenter.” This was all clearly known on the basis of prior experience, as described in the introduction.

      So, at any rate, the investigators express an interest in what is actually perceived by observers. But what is the question they're interested in answering? This is the real problem. The question, or goal, seems to be, “How do we measure color constancy?” But we don't measure things for measurement's sake. The natural follow-up is “Why do we want to measure color constancy?” What is the theoretical goal, or question we want to answer? This question matters because we can never, ever, arrive at some kind of universal, general, number for this phenomenon, which is totally condition-dependent. But I'm not able to discern, in these authors' work, any indication of their purpose in making these highly unreliable measurements.

      Color constancy refers to the fact that sometimes, a surface “x” will continue to appear the same color even as the kind and intensity of the light it is projecting to the eye changes. On the other hand, it is equally possible for that same surface to appear to change color, even as the kind and intensity of the light it is reflecting to the eye remains the same. In both cases – constancy and inconstancy – the outcome depends on the total light projecting to the eye, and the way the visual system organizes it. In both cases – constancy and inconstancy – the visual principles mediating the outcome are the same.

      The authors, in this and in previous studies, “measure constancy.” Sometimes it's higher, sometimes it's lower. It's condition-dependent. Even if they were actually measuring “constancy” in the sense of testing how an actually stable surface behaves under varying conditions, what would be the value of this data? We already know that constancy is condition-dependent, that it is often good or good enough, and that it can fail under certain well-understood conditions. (That these conditions are fairly well-understood is the reason the authors possess a graphics program for simulating “constancy” effects). How does simply measuring this rise and fall under random conditions (random because not guided by theory, meaning that the results won't help clarify any particular theoretical question) provide any useful information? What is, in short, the point?

      Yet another twist in the plot is that in their experiments, the authors aren't actually measuring constancy. Because we are talking about simulations, in order to exhibit “constancy,” observers often need to actually judge two surfaces with different spectral characteristics as being the same. This criterion is based on assumptions made by the investigators as to what surfaces should look the same under different conditions/spectral properties. But this doesn't make sense. What does it mean, for example, if an observer returns “low constancy” results? It means that the conditions required for two actually spectrally different surfaces to appear the same simply didn't hold, in other words, that the investigators' assumptions as to the conditions that should produce this “constancy” result didn't hold. If the different stimuli were designed to actually test original assumptions about the specific conditions that do or do not produce constancy, fine. But this is not the case. The stimuli are simply and crudely labelled “simplified” and “more realistic.” This means nothing with respect to constancy-inducing conditions. Both of these kinds of stimuli can produce any degree of “constancy” or “inconstancy” you want.

      In short, we know that color perception is condition-sensitive, and that some questions may fail to tap percepts; illustrating this yet again is the most that this experiment can be said to have accomplished.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 10, Shashi Seshia commented:

      I regret there is an error in the formatting of the abstract that may cause some confusion. Readers are requested to look at the abstract in the review. Briefly, the key points are: 1. SUMMARY OF THE APPRAISAL: should be BOLDED as the 3rd main heading, equivalent to results. Subheadings under this would include: (i) Strengths: Clinical pearls...including the examples (which should not be bolded), (ii) Weaknesses, (iii) Notable Omissions, and (iv) Additional issues. Shashi Seshia January 10, 2016


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 09, David Keller commented:

      Without a placebo control group, double-blinding or randomization, the results are difficult to interpret

      Without a randomized, double-blinded, placebo-controlled trial, it is not possible to determine how much of the decrease in motor seizures was due to cannabidiol, nor how cannabidiol affected the frequency of serious adverse events, such as status epilepticus and "convulsions". Cannabidiol is a component of cannabis, the persistent use of which has been shown to be associated with neuropsychological decline in childhood through midlife [1].

      Reference:

      1: Meier MH, et al. Persistent cannabis users show neuropsychological decline from childhood to midlife. Proc Natl Acad Sci U S A. 2012 Oct 2;109(40):E2657-64. PubMed PMID: 22927402


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 15, Felix Scholkmann commented:

      We read with interest the article by Rizzo et al. [1] about the alleged ability to diagnose “cardiovascular disease through mitochondria respiration as depicted through biophotonic emission”. As researchers in the field of ultra-weak photon emission (UPE) from biological systems (e.g. [2-12]) (also termed “biophotonic emission”) we were surprised to notice that some important statements made in the publication of Rizzo et al. are, unfortunately, incorrect or unsubstantiated. In the following we point out these issues in detail.

      (1) The “ClearView System” is neither able to detect UPE, nor is the measurement principle related to UPE.

      The authors claim that the measurement device used, i.e., the ClearView System, “can detect cardiovascular disease through the measurement of mitochondria dysfunction through biophoton emission.” Concerning the measurement principle of the ClearView System, the authors wrote that the “system measures electromagnetic energy at a smaller scale through amplification of biophotons.” Furthermore, it is stated that through “measuring mitochondrial respiration via biophoton detection, the ClearView system has the ability to quantify electrophysiological biophoton activity.”

      Unfortunately, these statements are technically incorrect. As described in section “1.3 ClearView System” of the paper, and also on the company’s website (http://epicdiagnostics.com/clearview), the ClearView System is a corona discharge photography (CDP) device (for a detailed description of the technique see [13, 14]), i.e., a device producing contact print photographs of the corona discharge (of the fingertip) generated by a high-frequency, high-voltage pulse exposure (sinusoidal, 1 kHz, according to the patent for the device [15]). A CCD camera beneath a transparent electrode then captures this discharge pattern. The obtained image arises because the electrical discharge ionizes the air, producing electromagnetic radiation in the optical spectrum when the excited electrons of molecules in the air return to the energetic ground state. The light measured by this device is neither “biophoton emission” nor due to an “amplification of biophotons”. The detection of UPE is only possible using highly sensitive photodetectors (i.e., photomultipliers or specific CCD cameras [16-19]). The optical radiation detected using CDP is a stimulated emission, whereas UPE is a spontaneous emission. Furthermore, the authors state that the “ClearView System is a non-invasive, electrophysiological measurement tool”. And the sentence “[u]nlike other bio-impedance devices, … the ClearView System is ...” is misleading since it links the ClearView system to “bio-impedance devices”; this is erroneous and introduces further confusion.

      (2) The assignment of the measured corona discharge patterns to mitochondrial respiration is unsubstantiated.

      The authors state that the measurement device used (ClearView) is capable of “measuring mitochondrial respiration” indirectly, and “can detect cardiovascular disease through the measurement of mitochondria dysfunction”. Even if we assume that the device could measure UPE (which, as we showed, is not justified), the UPE detected from the skin (i.e., fingertips) results from many different biochemical reactions that are not necessarily linked to mitochondrial function/respiration alone. A detailed description of the UPE sources in biological systems can be found in the recent reviews [20, 21].

      (3) Further questionable statements.

      According to the authors, the “ClearView system taps into the global electromagnetic holographic communication system via the fingertips.” The authors neither explain what they mean by the term “global electromagnetic holographic communication system” nor refer to scientific literature that supports their statement.

      The authors also state that it “has been scientifically proven that every cell in the body emits more than 100,000 light impulses or photons per second.” This statement is inconsistent with the earlier statement that “biophoton emission is described to be less than 1000 photons per second per cm”. Additionally, that statement contains incorrect units (it should be “per cm<sup>2</sup>”). In addition, the authors state that the “biophotons” they are allegedly measuring “have been found to be the steering mechanism behind all biochemical reactions.” While there are indeed theories linking UPE to physiological functions (i.e., delivering activation energy for biochemical reactions and coordinating them), these concepts are based on theoretical work (e.g., [22]) and no scientific consensus exists on this issue at present.

      (4) Flaws in the study's experimental design and statistical analysis.

      Although the results presented by the authors are interesting, in our view they should be regarded with caution for the following reasons:

      (a) A case-control study must ensure that the characteristics of the investigated populations (cases and controls) are similar, i.e. the populations should be age-matched and the number of subjects should be approximately the same [23]. Neither condition appears to be fulfilled in the study of Rizzo et al. According to the authors the “age of cardiovascular subjects was 64.22 (95%CI: 62.44, 65.99) and the mean age of controls was 44.14 (95% CI: 40.73, 47.55).” That age is a confounder was even found by the authors' own statistical analysis (i.e., OR for cardiovascular disease without considering age: 4.03 (2.71, 6.00); OR with age as a confounder: 3.44 (2.13-5.55)). Additionally, the sample sizes were different (n = 195 vs. n = 64).
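      To make the arithmetic behind point (a) concrete, an unadjusted odds ratio and its Wald 95% confidence interval can be computed directly from a 2×2 table. The sketch below uses hypothetical counts chosen purely for illustration (they are not the study's raw data), and `odds_ratio_ci` is our own illustrative helper, not code from the paper:

      ```python
      import math

      def odds_ratio_ci(a, b, c, d, z=1.96):
          """Odds ratio and Wald 95% CI from a 2x2 table:
          a = exposed cases, b = unexposed cases,
          c = exposed controls, d = unexposed controls."""
          or_ = (a * d) / (b * c)
          se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
          lo = math.exp(math.log(or_) - z * se)
          hi = math.exp(math.log(or_) + z * se)
          return or_, lo, hi

      # Hypothetical counts, for illustration only
      print(odds_ratio_ci(60, 30, 20, 40))  # → OR = 4.0, CI ≈ (2.0, 8.0)
      ```

      An adjusted OR (e.g., with age as a covariate, as in the paper) would instead come from a logistic regression model; the point here is only that the unadjusted and adjusted ORs the authors report can legitimately differ when the groups are not age-matched.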

      (b) The authors state the study's aim was to “indicate the presence or absence of cardiovascular disease”. For such an assessment the calculation of odds ratios is not sufficient, since they give only information about the prevalence, whereas the sensitivity and specificity are the important factors in determining whether a biomarker is useful for diagnostic purposes [24]. Such an analysis is classically performed by calculating receiver operating characteristic (ROC) curves and quantifying them. Unfortunately, this kind of analysis is not reported by the authors in the manuscript. However, in the patent application (from which the results of the study were taken), ROC curves were given [25].
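      For readers unfamiliar with the ROC analysis referred to in point (b): sensitivity/specificity pairs across thresholds, and the area under the ROC curve (AUC), can be computed from raw biomarker scores with a few lines of code. The scores below are hypothetical values for illustration only; `roc_points` is our own sketch, not code from the study or the patent:

      ```python
      def roc_points(scores_cases, scores_controls):
          """Sensitivity/specificity pairs over all observed thresholds,
          plus AUC (the probability that a randomly chosen case scores
          higher than a randomly chosen control)."""
          thresholds = sorted(set(scores_cases + scores_controls), reverse=True)
          points = []
          for t in thresholds:
              sens = sum(s >= t for s in scores_cases) / len(scores_cases)
              spec = sum(s < t for s in scores_controls) / len(scores_controls)
              points.append((t, sens, spec))
          # AUC via the Mann-Whitney U statistic (ties count as half)
          wins = sum((c > k) + 0.5 * (c == k)
                     for c in scores_cases for k in scores_controls)
          auc = wins / (len(scores_cases) * len(scores_controls))
          return points, auc

      # Hypothetical biomarker scores, for illustration only
      cases = [0.9, 0.8, 0.7, 0.4]
      controls = [0.6, 0.5, 0.3, 0.2]
      _, auc = roc_points(cases, controls)
      print(round(auc, 3))  # → 0.875
      ```

      An AUC of 0.5 corresponds to a diagnostically useless marker and 1.0 to a perfect one; this is the kind of quantification missing from the manuscript.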

      (c) Another important factor in showing the usefulness of the proposed novel diagnostic approach is to show the reproducibility of the measurement. The authors report that “a second measurement session was done 3-5 minutes after the first one was completed” in order to “assess the reproducibility and variability of the measurements”. However, we cannot find the results of this assessment in the published paper.

      Felix Scholkmann<sup>1,</sup> <sup>2,</sup> Michal Cifra<sup>3</sup>

      <sup>1</sup> Biomedical Optics Research Laboratory, Division of Neonatology, University Hospital Zurich, 8091 Zurich, Switzerland

      <sup>2</sup> Research Office of Complex Physical and Biological Systems (ROCoS), 8038 Zurich, Switzerland

      <sup>3</sup> Institute of Photonics and Electronics, The Czech Academy of Sciences, 18200 Prague, Czech Republic


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 15, Felix Scholkmann commented:

      References

      [1] Rizzo, N.R., N.C. Hank, and J. Zhang, Detecting Presence of Cardiovascular Disease through Mitochondria Respiration as Depicted through Biophotonic Emission. Redox Biology, 2016. 8: p. 11-17.

      [2] Cifra, M. and P. Pospisil, Ultra-weak photon emission from biological samples: definition, mechanisms, properties, detection and applications. J Photochem Photobiol B, 2014. 139: p. 2-10.

      [3] Kucera, O. and M. Cifra, Cell-to-cell signaling through light: just a ghost of chance? Cell Commun Signal, 2013. 11: p. 87.

      [4] Scholkmann, F., D. Fels, and M. Cifra, Non-chemical and non-contact cell-to-cell communication: a short review. Am J Transl Res, 2013. 5(6): p. 586-93.

      [5] Rahnama, M., et al., Emission of mitochondrial biophotons and their effect on electrical activity of membrane via microtubules. J Integr Neurosci, 2011. 10(1): p. 65-88.

      [6] Cifra, M., J.Z. Fields, and A. Farhadi, Electromagnetic cellular interactions. Prog Biophys Mol Biol, 2011. 105(3): p. 223-46.

      [7] Kucera, O., M. Cifra, and J. Pokorny, Technical aspects of measurement of cellular electromagnetic activity. Eur Biophys J, 2010. 39(10): p. 1465-70.

      [8] Scholkmann, F., et al., The effect of venous and arterial occlusion of the arm on changes in tissue hemodynamics, oxygenation, and ultra-weak photon emission. Adv Exp Med Biol, 2013. 765: p. 257-64.

      [9] Fels, D., M. Cifra, and F. Scholkmann, eds. Fields of the cell. 2015, Research Signpost: Trivandrum.

      [10] Scholkmann, F., et al., Using multifractal analysis of ultra-weak photon emission from germinating wheat seedlings to differentiate between two grades of intoxication with potassium dichromate. Journal of Physics: Conference Series, 2011. 329: p. 012020.

      [11] Cifra, M., et al., Spontaneous ultra-weak photon emission from human hands is time dependent. Radioengineering, 2007. 16(2): p. 15-19.

      [12] Cifra, M., et al., Biophotons, coherence and photocount statistics: A critical review. Journal of Luminescence, 2015. 164: p. 38–51.

      [13] Boyers, D.G. and W.A. Tiller, Corona discharge photography. Journal of Applied Physics, 1973. 44(3102).

      [14] Kwark, C. and C.W. Lee, Experimental study of a real-time corona discharge imaging system as a future biomedical imaging device. Med Biol Eng Comput, 1994. 32(3): p. 283-8.

      [15] Rizzo, N.R., Localized physiologic status from luminosity around fingertip or toe, I. Epic Research And Diagnostics, Editor. 2013.

      [16] Van Wijk, R., M. Kobayashi, and E.P. Van Wijk, Anatomic characterization of human ultra-weak photon emission with a moveable photomultiplier and CCD imaging. J Photochem Photobiol B, 2006. 83(1): p. 69-76.

      [17] Kobayashi, M., et al., Two-dimensional photon counting imaging and spatiotemporal characterization of ultraweak photon emission from a rat's brain in vivo. J Neurosci Methods, 1999. 93(2): p. 163-8.

      [18] Kobayashi, M., Highly sensitive imaging for ultra-weak photon emission from living organisms. J Photochem Photobiol B, 2014. 139: p. 34-8.

      [19] Yang, M., et al., Spectral discrimination between healthy people and cold patients using spontaneous photon emission. Biomed Opt Express, 2015. 6(4): p. 1331-9.

      [20] Prasad, A., Pospisil, P., The photon source within the cell, in Field of the cell, D. Fels, M. Cifra, and F. Scholkmann, Editors. 2015, Research Signpost: Trivandrum. p. 113-129.

      [21] Pospisil, P., A. Prasad, and M. Rac, Role of reactive oxygen species in ultra-weak photon emission in biological systems. J Photochem Photobiol B, 2014. 139: p. 11-23.

      [22] Popp, F.A. and J.J. Chang, The Physical Background and the Informational Character of Biophoton Emission, in Biophotons, J.-J. Chang, J. Fisch, and F.-A. Popp, Editors. 1998, Springer Netherlands. p. 239-250.

      [23] Bland, M., An Introduction to Medical Statistics. 4th ed. 2015: Oxford University Press.

      [24] Grund, B. and C. Sabin, Analysis of biomarker data: logs, odds ratios, and receiver operating characteristic curves. Curr Opin HIV AIDS, 2010. 5(6): p. 473-9.

      [25] Rizzo, N.R., Isolated physiological state of brightness by fingertip or toe, I. Epic Research And Diagnostics, Editor. 2014.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 03, Jos Verbeek commented:

      It is interesting to see that the conclusions of this review are at odds with a Cochrane Review on the same topic, which concluded that there was no evidence of an effect for safety-engineered devices. The reason is that the authors of this review included uncontrolled before-after studies. They took the reduction of sharps injuries at one point in time after the introduction as the control condition. Of course, this carries a very high risk of bias because many things change over time and it is difficult to ascribe the change to the intervention. Labelling evidence based on uncontrolled before-after studies as moderate-quality evidence under the GRADE system is therefore misleading. I rather believe the results of the Cochrane Review.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 22, Emily Ferenczi commented:

      Here we expand upon the influence of medial prefrontal cortex (mPFC) stimulation in different hedonic behaviors. As documented in several previous studies, behaviorally relevant sucrose preference changes (for example those induced by stress) are often modulated similarly to the extent we observed (for example: Chaudhury et al., Nature 2013, 493: 532-536; Lim et al., Nature 2012, 487: 183-189; Covington et al., J Neurosci 2010, 30: 16082-16090). Indeed, sucrose has powerful motivational influences (as noted in the earlier commenter’s numerous PubMed comments and postings relevant to his prior findings, e.g. http://www.ncbi.nlm.nih.gov/pubmed/25120076, http://www.ncbi.nlm.nih.gov/pubmed/23542690), and it would be surprising to achieve complete indifference to sucrose with a mild and distant mPFC modulation; nor is anhedonia absolute in the clinical setting. With respect to the influence of increased excitability of mPFC via SSFO stimulation, we did not claim (or use the word) “indifference” to sucrose (which would presumably mean a sucrose preference of 50%) but rather noted a significant reduction in preference. In line with previous literature, we in fact expected this reduction to be modest and characterized it as a “mild but consistent and reversible reduction in sucrose preference only during days when light-stimulation was delivered”. The decrease in sucrose preference that we observed was in fact not only consistent across days and reversible, but also significantly differed from the behavior of YFP controls (Figure 4J) and was accompanied by suppressed engagement in another naturally rewarding activity (social interaction) as well.
      Finally, further support for the neurobiological relevance of this manipulation comes from the observation that it induced differences in fMRI functional connectivity between the medial prefrontal cortex and striatum, which in turn tracked the magnitude of the decrease in sucrose preference at the individual subject level (Fig. 6I). It is interesting to note that recent clinical trials have demonstrated suppression of cocaine use (Terraneo et al., Eur Neuropsychopharmacol 2016, 26: 37-44) and heroin cravings (Shen et al., Biol Psychiatry 2016, doi: 10.1016/j.biopsych.2016.02.006) in patients with addiction following transcranial magnetic stimulation of the prefrontal cortex, perhaps pointing towards a similar principle of cortical regulation of hedonic processing.

      In the dual stimulation place preference experiments, rats were allowed to explore three chambers freely for 30 minutes total, and one chamber was paired with stimulation of ventral tegmental area dopamine (VTA-DA) neurons. The rats were exposed to chamber-paired VTA-DA stimulation for the first 10 minutes of the test and quickly began to show a preference for the VTA-DA stimulation side. When SSFO was switched on for the middle 10 minutes of the test, rats no longer preferred the VTA-DA stimulation side (consistent with approximately 50% preference). Once SSFO was switched off, the rats spent another 10 minutes with chamber-paired VTA-DA stimulation only, again preferring the VTA-DA stimulated side. Crucially, in the second 10 minute section of the test when mPFC was concurrently stimulated, it was in a non-chamber paired fashion. Specifically, SSFO stimulation was constantly present throughout the 10 minutes regardless of which chamber rats occupied (as reported in the paper, “10 min of superimposed mPFC activation by SSFO (single 5-s pulse of blue light in mPFC at the start, 10-s pulse of yellow light at the end)”). Thus, the chamber not paired with VTA-DA stimulation acted as an internal control for concurrent SSFO stimulation, and any possible SSFO-mediated aversion would therefore occur in both chambers. Both before and after mPFC stimulation, rats significantly preferred the VTA-DA stimulation chamber (mean percent time spent on stimulation side = 65.1%, 95% CI = 55.0 to 75.1%, p = 0.007) but did not prefer this chamber during mPFC stimulation (mean = 45.1%, 95% CI = 26.7 to 63.4%, p = 0.54). 
It is possible that a sufficiently aversive condition in which rats are no longer responsive to environmental stimuli could lead to anhedonic behavior, which we would consider to be one of several interesting mechanisms for this effect and would have some clinical relevance, though we note that the mPFC intervention did not alter baseline locomotion, water consumption, initial social interaction, or novel object investigation. While mPFC stimulation reduced the positive hedonic impact of VTA-DA stimulation, we note that the behavioral paradigm can also detect aversive effects, since inhibition of VTA-DA dopamine neurons with eNpHR elicited behavioral avoidance instead (see place preference test in Figure S7).

      In summary, the behavioral effects related to mPFC stimulation are significant and coherent across studies, with changes in elicited ofMRI activity linked to individual-subject differences in targeted hedonic behavioral phenotypes. Although much more experimental ground was covered in the article (as noted by the commenter), no claims were made regarding identification of direct top-down anatomical connectivity -- a topic that remains of great future interest. We would be happy to provide any additional clarifications, so if questions remain please directly contact us for more detailed, extensive, or even hands-on productive exchange.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Apr 05, Serge Ahmed commented:

      There are many different types of data in this ambitious Research Article. I will only comment on the behavioral effects of medial prefrontal (mPFC) activation. Though those manipulations induced large-scale reorganization of both cortical and subcortical brain activities, as measured by fMRI, they had only marginal behavioral repercussions.

      Notably, contrary to what the authors seem to imply, there is little or no evidence for anhedonia during mPFC activation (see Figure 4 of the paper). At best, there is a small reduction in preference for a low concentration of sucrose (i.e., from 90 to about 85%) but rats still continued to largely prefer sucrose over water. The behavioral effects of mPFC activation are simply magnified by setting the origin of the Y-axis at 70% - a value well above indifference. The behavioral significance of this tiny decrease in preference, if any, is largely unclear. In addition, there is no direct evidence that it results from a prefrontal top-down control over dopamine-dependent reward signaling in the striatum, as suggested by the authors. Finally, there is no evidence in Figure 5 that rats seek to spend more time in a place associated with stimulation of midbrain DA neurons. Rats are initially globally indifferent. The small, albeit significant, effect of mPFC activation likely reflects a direct aversive effect that might also be observed alone with no concurrent stimulation of midbrain DA neurons – an important control experiment that the authors apparently failed to conduct. It would have been more relevant to test whether and to what extent mPFC activation directly modulates the self-stimulation behavior of midbrain DA neurons reported in Figure 2.

      Overall, the behavioral effects reported in this Research Article are too marginal and too disparate to offer a clear picture of the role of mPFC activation in regulating dopamine-dependent reward-seeking behavior.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Jan 06, DAVID LUDWIG commented:

      A recent NY Times investigative article raised questions about the interpretation of this study, and possible harms to public health in developing nations from consumption of sugary beverages such as the ones examined here. However, the NY Times article did not provide a detailed critique of the scientific methods.

      The authors of this study conclude, "These findings suggest that malted drinks are a micronutrient-rich beverage which are unlikely to promote excess energy intake and obesity risk, at the consumption pattern in the population assessed." I believe that this conclusion is overstated, for the following reasons:

      First: The study design, a cross-sectional survey, is weak and especially susceptible to confounding.

      Second: The final participation rate was only about 1/5 the total invited sample, potentially leading to major selection bias (thus, undermining intent to obtain a nationally representative sample).

      Third: The validity of the method used to measure diet (e.g., 10- to 12-year-olds responded on their own) was not demonstrated.

      Fourth: Malt beverage consumers were more physically active and watched less screen time: thus, they likely came from families with greater health consciousness. Therefore, the associations with higher intakes of a few micronutrients may simply reflect confounding. That is, they may have gotten the extra nutrients from other components of the diet. Unfortunately, this point was not examined in the study.

      Fifth: Concerningly, the statistical models were not adjusted for physical activity and screen time. Doing so could have unmasked higher body weight among the malt beverage consumers. That is, perhaps the increased physical activity level and lower screen time counteracted the adverse effects of beverage consumption.

      Indeed, the true associations between sugar-sweetened beverages and body weight shown in higher quality prospective studies Schulze MB, 2004 are often not seen in cross-sectional surveys Forshee RA, 2004 – demonstrating the weak nature of this study type.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 07, Luis Querol commented:

      The authors raise an important and intriguing finding, namely the extraordinary response of some diseases to Rituximab. This fact is becoming an interesting topic of research in the neuroimmunology field because several diseases show this type of response (Huijbers MG, 2015). Although the authors do not mention it, this finding may be related to the IgG4 nature of the pathogenic autoantibodies. This type of extraordinary and long-lasting response happens in several other diseases with very different target organs: myasthenia with anti-MuSK antibodies (Díaz-Manera J, 2012), anti-contactin and anti-NF155 chronic inflammatory polyradiculoneuropathy (Querol L, 2015), anti-PLA2R nephropathy (Beck LH Jr, 2011)... Uncovering the differences that IgG4 plasma cells show with respect to the more typical IgG1-3 would be key to understanding the amazing response to rituximab in these diseases.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 09, Melissa Greenwald commented:

      The United States Health Resources and Services Administration (HRSA) is the federal agency that awarded the grant funding for this research proposal. Grant awards were based on ranking after applications were reviewed by an external Technical Review Committee. “Delayed graft function” (DGF) was clearly stated as one of the goals of this research study as noted in the original grant application submitted in March 2011: “The goals of the this intervention are to demonstrate that TH [Therapeutic Hypothermia]: 1) better preserves deceased donor renal function while awaiting organ recovery when compared to normothermia; 2) increases the number of suitable organs for transplantation; and 3) improves recipient renal function after transplantation as measured by a reduction in DGF [Delayed Graft Function] and SGF [Slow Graft Function].” The grant application listed “Initial graft function” as one of four variables to be measured for assessment of the first of two specific objectives of this research study. This is further specified in the Methods section of the grant application as: “The primary outcome measure will be number of patients in each group showing DGF/ SGF.” The parameters for information about research grants that is included and displayed on the ClinicalTrials.gov website are under the oversight of the U.S. National Institutes of Health.

      Melissa Greenwald MD, Acting Director, Division of Transplantation, Health Resources and Services Administration, Rockville, Maryland, USA


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 21, Oliver E Blacque commented:

      You are right about the name. HGNC has now approved KATNIP (katanin interacting protein) as an alternative name, and JBTS26 as an alias. Appreciate your comment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 22, Christopher Southan commented:

      Excellent paper, but why didn't the authors engage with the HGNC to give this a useful name rather than a decades-old Japanese clone ID?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 08, Andrew Leifer commented:

      I am the senior author of this work. There is a small error in the supplement regarding which frame grabbers were used with which camera. In Table S6, the row "Frame Grabbers" should read "Active Silicon Firebird PCIe GEN II 8x" in the left column, and "BitFlow Karbon PCIe x16 10-tap" in the right column. This error has no impact on the results, but may have caused confusion for those seeking to exactly replicate our setup. I apologize for the inconvenience.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 06, Carl L von Baeyer commented:

      Spoofs and hoaxes are sometimes fun to read, but not if they look so much like real studies that many readers would be fooled -- particularly people whose first language is not English.

      This article is amusing but it is a hoax and it should not have been indexed by NCBI (and perhaps should not have been published without a clear disclaimer in the title). It does a disservice to the field of research on pain relief in young children.

      See http://ethicsalarms.com/2016/01/01/whats-more-unethical-than-a-web-hoax-how-about-a-scientific-journal-hoax/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 03, Jaime A. Teixeira da Silva commented:

      There are public concerns at PubPeer about a figure in this paper: https://pubpeer.com/publications/F2D03946483D4C86569CD34751C4C7

A formal request to the senior author, Prof. Suprasanna Penna, to address this has been met with silence.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 21, Servi Stevens commented:

      While consulting the Exome Aggregation Consortium Database (ExAC) I noticed that the identified variant at position chrX:153,173,202 has a frequency of >1% in the East Asian population (see http://exac.broadinstitute.org/variant/X-153173202-G-A). This frequency may be too high to fully explain the clinical phenotype in this family. Furthermore, the variant is referred to as "T491M" in the Abstract, but is "T941M" in the rest of the manuscript.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 04, Avital Rodal commented:

First, we would like to clarify that our FCHSD1 and FCHSD2 constructs, which generate protrusions in cultured cells, are not >500 aa as Dr. Gould suggested in her comment on 12/31. As described in Becalska AN, 2013, we quantitatively determined robust cellular activities for FCHSD1[1-417] (41 amino acids longer than the construct in McDonald NA, 2015) and for FCHSD2[1-414] (only 18 amino acids longer than the construct in McDonald NA, 2015). While we have not yet further characterized these two mammalian proteins in vitro, the same purified fragment of their Drosophila homolog, Nwk, generates ridges and scallops on liposomes and flattens, pinches, or crumples GUVs of the appropriate lipid composition Becalska AN, 2013, Kelley CF, 2015. These in vitro activities occur in the absence of the cytoskeleton or additional cellular factors. Further, when actin polymerization is inhibited in cells, the Nwk F-BAR still generates small buds, analogous to its scalloping activity in vitro Becalska AN, 2013. The cellular activity of Nwk requires both the concave surface and the tips of the canonical F-BAR, suggesting that the short additional C-terminal alpha-helical segment in our constructs is critical for F-BAR-dependent membrane bending activity, similar to SrGAPs Guerrier S, 2009. With this information, readers can make their own assessment about how the lack of activity reported by McDonald NA, 2015 from slightly shorter fragments of FCHSD1 and FCHSD2 could be related to the robust activity we have previously reported.

Second, our interpretation that the mutants generated in McDonald NA, 2015 do not uncouple membrane binding and oligomerization arises from their data showing no biochemical difference in membrane binding affinity between mutants in the basic oligomerization interface (K163E) and mutants in the acidic oligomerization interface (E30K, E152K) (Fig. 5D,E). Their assumption was that the acidic patch mutants would not affect electrostatic membrane binding. However, these mutants impair binding to charged membranes to the same extent as the basic patch mutants, instead of the expected intermediate membrane binding affinity if only oligomerization (and thus avidity) were affected. Since the mutants behave identically, this suggests either that membrane binding affinity and oligomerization are intrinsically coupled, or at least that these specific mutations do not uncouple them.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Dec 31, Kathleen L Gould commented:

      We appreciate the productive comments by Dr. Rodal regarding the important biological function of F-BAR proteins. We completely agree that deciphering the nature of the “non-canonical” function of various F-BAR domain proteins is an important area of future research.

      Dr. Rodal takes issue with two aspects of our study: first with a lack of discussion of srGAP and Nwk literature, and second with the experimental methods used to detect membrane bending.

First, we agree that additional F-BAR proteins have been identified with “non-canonical” membrane remodeling abilities in vitro, including srGAPs and Drosophila Nwk proteins. Due to space constraints we could not extensively discuss this previous work, but cited and summarized it in Table S1. We utilized the same assays that originally revealed the activities of srGAP and Nwk to conclude that Cdc15 and six other human F-BAR domains do not bend membranes. We tested a wide range of additional binding conditions (Figure S1), including diverse lipid compositions (unpublished data), which yielded identical results. Indeed, we believe these examples of non-canonical F-BAR membrane assemblies further support the idea that F-BAR domains utilize diverse modes of oligomerization for functions upon the membrane.

Second, Dr. Rodal indicates her group “published in 2013 that the F-BAR domains of FCHSD1 and FCHSD2 generate extensive membrane protrusions (to which the protein localizes) in both S2 cells and HEK cells, similar to Drosophila Nwk”. Indeed, these experiments and those with Gas7 were performed in vivo, with a full complement of cellular machinery. The results of such an in vivo experiment cannot substantiate the conclusion that these F-BAR domains physically deform the membrane. To directly test for membrane deformation, an in vitro experiment with the isolated domain is required, as we have performed (Figure 3 and S1). Dr. Rodal correctly points out the protrusions observed for FCHSD1/2 and Gas7 in vivo were actin dependent, indicating these proteins are organizing the actin cytoskeleton to generate protrusions, not necessarily directly remodeling the membrane on their own. Furthermore, additional portions of FCHSD1/2 and Gas7 were present in the constructs used in these experiments. F-BAR domains comprise ~300-350 aa that fold into banana-shaped molecules as evidenced by >10 crystal structures. Large constructs were used in Dr. Rodal's experiments and these additional elements may confer unknown interactions and/or activities to the proteins in vivo.

In sum, though our specific interpretations may differ, we appreciate the interest in our work and encourage all researchers as we together pursue this exciting area.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Dec 27, Avital Rodal commented:

      F-BAR domains: diversity in oligomerization and membrane bending activities

      Avital Rodal, Brandeis University

McDonald et al. report that several oligomerizing yeast and mammalian F-BAR proteins do not form membrane tubules in vitro or in heterologous cells. They go on to show that surfaces required for oligomerization in vitro are required for the in vivo functions of several of these proteins. They conclude that oligomerization, but not membrane bending, underlies the in vivo roles of these F-BAR proteins, and that their main function is to recruit and organize other proteins at the membrane. However, their conclusion that these proteins do not bend membranes is only supported by negative results (i.e. the authors did not observe deformation in vitro or in heterologous cells), and in some cases contradicts published results. Further, the authors do not discuss a previous body of literature that shows that several F-BAR proteins, including SRGAPs and FCHSD proteins, generate non-canonical (i.e. non-tubular) membrane deformations that would not have been detected in their assays.

Several groups have shown that the SRGAP family of F-BAR proteins generate negative membrane curvature (i.e. away from the protein-decorated face of the membrane) in multiple contexts: in purified systems in vitro, in heterologous cells upon overexpression, and in vivo during neuronal protrusion formation (Guerrier S, 2009; Carlson BR, 2011; Coutinho-Budd J, 2012). This activity is not consistent with tubular arrays observed for canonical F-BAR proteins (Frost A, 2008). Further, our group found that the FCHSD family of F-BAR proteins, which includes Drosophila Nervous Wreck (Nwk), exhibit an unusual higher order assembly that leads to non-tubule membrane remodeling. Using single particle EM, we showed that Nwk assembles into zig-zags on membranes instead of linear filaments typical of canonical F-BARs, and that this resulted in membrane ridges (Becalska AN, 2013). These deformations led to actin-dependent protrusions in heterologous cells, similar to a previous model for formation of cellular microspikes by the F-BAR protein syndapin (Becalska AN, 2013; Kelley CF, 2015; Shimada A, 2010). This activity does not require a novel membrane binding surface for any of these F-BAR domains. Instead these proteins use a conventional concave membrane binding surface, and appear to oligomerize into non-canonical arrays to deform membranes. This is not likely to be a special case for these specific F-BAR proteins, but rather suggests that different members of this protein family oligomerize into variable types of higher order arrays to generate and/or sense different types of membrane curvatures, which are neither tubules (the "dogma" for F-BAR domains (Traub LM, 2015)) nor flat membranes (as proposed by McDonald et al.).

      McDonald et al. report that the Nwk homologues FCHSD1 and FCHSD2 do not bend membranes in vitro or in cells, and state that theirs is the first study to report their activities. In fact, we published in 2013 that the F-BAR domains of FCHSD1 and FCHSD2 generate extensive membrane protrusions (to which the protein localizes) in both S2 cells and HEK cells, similar to Drosophila Nwk (Becalska AN, 2013). They may have failed to detect membrane remodeling activity for FCHSD1 and FCHSD2 in cells because their constructs omitted part of a C terminal alpha-helical extension to the F-BAR domain that is essential for function in SRGAPs (Guerrier S, 2009). Indeed, another of their non-membrane bending mammalian proteins, Gas7, has been reported to generate cellular protrusions upon full-length protein overexpression (She BR, 2002). It remains to be tested if Cdc15 or the other apparently non-membrane remodeling mammalian F-BAR proteins in their study also show activity in vivo or in vitro when a more extended region of the protein is studied.

In addition to the issue of potentially using inactive protein fragments, the specific in vitro and in vivo assays used by McDonald et al. could easily have missed non-canonical membrane bending activities. Several types of deformations are subtle on giant unilamellar vesicles (e.g. flattening, ridging, or any deformation that occurs on a ~100-200 nm scale rather than the micron scale of tubules), or are not detectable by negative stain (e.g. ridged, negatively curved, or flattened liposomes appear very similar to dried undecorated liposomes), or are unresolvable by light microscopy in cells. Cryo-EM of liposomes or thin sectioning and EM of cells is necessary to detect smaller scale deformation. Indeed, only large scale deformations like tubulation would have been detectable in the assays they used. Further, BAR domain membrane remodeling depends on a large set of parameters (Simunovic M, 2015), many of which were not tested by McDonald et al. For example, we recently showed that membrane binding and membrane deformation are not correlated, and that Nwk only deforms membranes within a limited "sweet spot" of membrane charge. This is likely dependent on F-BAR domain assembly and orientation on the membrane, which favors concave side-down under stringent binding conditions (Kelley CF, 2015). The activities of F-BAR proteins like Nwk/FCHSD1/FCHSD2 are not likely to have been detected by McDonald et al. at 5% PI(4)P, the only lipid composition they tested for GUV and liposome deformation assays. Indeed, two more members of their set of six “non-deforming” F-BAR proteins, Fer and Fes, were previously shown to generate membrane tubules in vitro at 10% PI(4,5)P2 (Tsujita K, 2006; McPherson VA, 2009).

      Thus, though McDonald et al. may be able to make a case against tubulation for a few of these six human F-BAR proteins (as has previously been demonstrated for both SRGAPs and Nwks), they do not test other types of membrane bending or enough parameters to conclude that these proteins do not have membrane remodeling activities. Instead, the most compelling conclusion from our work, the SRGAP work, and McDonald et al. is that F-BAR domains oligomerize on membranes into diverse higher order assemblies, and that tubular scaffolds (for which there is indeed little in vivo evidence) are just one potential way to deploy F-BAR oligomers. A non-membrane-deforming assembly, as they suggest for Cdc15, is a plausible variation on this theme for some subset of F-BAR proteins, but the limited negative data they provide are not convincing enough at this point to rule out other models, nor do they indicate that this is the rule for non-tubulating F-BAR proteins. In addition, we note that since the mutants generated in McDonald et al. do not fully uncouple membrane binding affinity from oligomerization (because the tips are part of the membrane-binding surface), an alternative model that remains consistent with all of their data is that some F-BAR domains, including Cdc15, may function as individual, non-oligomerized dimers on the membrane.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 17, Marcus Munafò commented:

      Price and colleagues rightly celebrate the success of genomewide association studies (GWAS) in identifying genetic loci associated with a range of physical and mental health outcomes [1]. We agree that the GWAS approach continues to prove extraordinarily successful, in stark contrast to the candidate gene approach that preceded it [2]. However, what they do not emphasise is that in many cases these genetic loci can tell us as much about modifiable behavioural risk factors as they can about underlying biology.

      Perhaps the clearest example of this comes from GWAS of lung cancer, which identified an association with a nicotinic receptor gene cluster CHRNA5-A3-B4 on chromosome 15 (at 15q25) [3]. This cluster encodes three nicotinic acetylcholine receptor subunit proteins: alpha-5, alpha-3 and beta-4. The same locus was shown, in the same GWAS, to be associated with peripheral arterial disease, and has also been shown to be associated with chronic obstructive pulmonary disease [4]. What is critical to interpreting these results is the knowledge that the same CHRNA5-A3-B4 locus has been consistently associated with heaviness of cigarette smoking [3,5]. Functional studies have demonstrated that the minor allele at rs16969968 (i.e., the risk variant for heavier smoking) is associated with a decreased maximal response to a nicotine agonist in vitro [6], while animal studies using alpha-5 knock-out mice have clarified the behavioural effects on nicotine self-administration: knock-out mice respond far more vigorously for nicotine infusions at high doses and do not self-titrate nicotine delivery [7].

      There was initially some debate as to whether there is an independent effect of this locus on lung cancer risk (based on evidence of residual association after adjustment for self-reported smoking quantity). However, it is likely this residual association is due to the imprecision of self-report measures of heaviness of smoking, and misclassification of smoking status. For example, the locus accounts for a far greater proportion of variance in nicotine metabolite levels relative to self-report measures of daily tobacco consumption, and this is sufficient to fully account for the association with lung cancer risk [8]. In other words, smoking causes lung cancer, peripheral arterial disease and chronic obstructive pulmonary disease (as well as many other diseases), and this is confirmed by GWAS.

The implication is clear – GWAS can tell us about modifiable behavioural risk factors that contribute to disease [9], and the results of GWAS should therefore be interpreted with this in mind. For example, a recent GWAS of schizophrenia identified the same CHRNA5-A3-B4 locus [10], raising the intriguing possibility that smoking may be a risk factor for schizophrenia [11]. With this in mind, Figure 3 in the commentary by Price and colleagues is striking – one gene associated with a range of different outcomes is ALDH2. The genes in this figure are described as pleiotropic, but here the distinction between biological (or horizontal) and mediated (or vertical) pleiotropy is critical. The former refers to a genetic variant influencing multiple separate biological pathways, while the latter refers to the effects of a genetic variant on multiple outcomes via a single biological pathway. In the case of ALDH2, which encodes the aldehyde dehydrogenase enzyme involved in the metabolism of alcohol and acetaldehyde, it is well established that a variant in this gene influences alcohol consumption [12]. Individuals with one or two copies of the inactive variant experience unpleasant symptoms following alcohol consumption, due to slow metabolism of acetaldehyde and its subsequent transient accumulation [12]. For example, ALDH2 was not identified in GWAS of blood pressure that recruited predominantly European samples [13], where the variant is rare, but was identified in studies that recruited East Asian samples [14,15], where the variant is common. A parsimonious explanation for the associations shown in Figure 3 of Price and colleagues, therefore, is that alcohol consumption causes these outcomes – a result of mediated (rather than biological) pleiotropy.

Behavioural traits such as tobacco and alcohol use can be regarded as intermediate traits, which are under a degree of genetic influence, but which are themselves direct causal agents influencing various health outcomes. This logic also applies to intermediate phenotypes that may lie on the causal pathway, and may be amenable to behavioural or pharmacological intervention, such as LDL cholesterol. While GWAS have been extraordinarily successful in identifying genetic loci associated with disease outcomes, making full use of this knowledge will require an appreciation that GWAS can tell us as much about modifiable – including behavioural – risk factors as they can about underlying biology.

      Marcus Munafò and George Davey Smith

      1. Price, A.L., et al. Progress and promise in understanding the genetic basis of common diseases. Proc Biol Sci, 2015. 282: 20151684.
2. Flint, J. and Munafò, M.R. Candidate and non-candidate genes in behavior genetics. Curr Opin Neurobiol, 2013. 23: p. 57-61.
      3. Thorgeirsson, T.E., et al. A variant associated with nicotine dependence, lung cancer and peripheral arterial disease. Nature, 2008. 452: p. 638-642.
      4. Pillai, S.G., et al. A genome-wide association study in chronic obstructive pulmonary disease (COPD): identification of two major susceptibility loci. PLoS Genet, 2009. 5: e1000421.
5. Ware, J.J., et al. Association of the CHRNA5-A3-B4 gene cluster with heaviness of smoking: a meta-analysis. Nicotine Tob Res, 2011. 13: p. 1167-1175.
      6. Bierut, L.J., et al. Variants in nicotinic receptors and risk for nicotine dependence. Am J Psychiatry, 2008. 165: p. 1163-1171.
      7. Fowler, C.D., et al. Habenular alpha5 nicotinic receptor subunit signalling controls nicotine intake. Nature, 2011. 471: p. 597-601.
      8. Munafò, M.R., et al. Association between genetic variants on chromosome 15q25 locus and objective measures of tobacco exposure. J Natl Cancer Inst, 2012. 104: p. 740-748.
      9. Gage, S.H., et al. G = E: What GWAS Can Tell Us about the Environment. PLoS Genet, 2016. 12: e1005765.
      10. Schizophrenia Working Group of the Psychiatric Genomics Consortium. Biological insights from 108 schizophrenia-associated genetic loci. Nature, 2014. 511: p. 421-427.
      11. Gage, S.H. and Munafò M.R. Smoking as a causal risk factor for schizophrenia. Lancet Psychiatry, 2015. 2: p. 778-779.
      12. Quertemont E. Genetic polymorphism in ethanol metabolism: acetaldehyde contribution to alcohol abuse and alcoholism. Mol Psychiatry, 2004. 9: p. 570-581.
      13. International Consortium for Blood Pressure Genome Wide Association Studies, et al. Genetic variants in novel pathways influence blood pressure and cardiovascular disease risk. Nature, 2011. 478: p. 103-109.
      14. Kato, N., et al. Meta-analysis of genome-wide association studies identifies common variants associated with blood pressure variation in east Asians. Nat Genet, 2011. 43: p. 531-538.
      15. Lu, X., et al. Genome-wide association study in Chinese identifies novel loci for blood pressure and hypertension. Hum Mol Genet, 2015. 24: p. 865-874.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 01, C Lopez-Molina commented:

      Code for this research, as well as for similar projects, can be found in: http://kermitimagetoolkit.com/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 01, C Lopez-Molina commented:

Too bad, you're right, that's a terrible typo. It was formulated differently in [24]; we rewrote it incorrectly in the earlier versions, and it slipped under our radar in every single revision. We hope it's understandable from the text and the previous formulae. Thanks for noticing.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Feb 12, Michael McCann commented:

      It seems that the inequality in P5 is reversed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 21, Anju Anand commented:

      Mohan Sindu, Anand A, Stanbrook M. Neuromuscular electrical stimulation to improve exercise capacity in patients with severe COPD. The Lancet Respiratory Medicine. 2016 Apr;4(4):e14-e16.

This is a summary from our Twitter journal club @respandsleepjc on this article.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 15, Victoria MacBean commented:

      Plain English summary:

      Patients with severe COPD often have weak legs as breathlessness can limit their ability to be active. Normally, to combat this and other symptoms of COPD, exercise classes called Pulmonary Rehabilitation (PR) are carried out. However, more severely affected patients may struggle to do PR.

An alternative therapy, neuromuscular electrical stimulation (NMES), was introduced to COPD patients with more severe symptoms. NMES uses electricity to create muscle contractions, in this case in the thigh muscles. While NMES has been used to strengthen muscles in previous research, this trial is the first to explore the impact on daily activities and the first to investigate the longer-term impact of the treatment.

52 participants with very severe COPD took part in this trial over two years. Participants received 30 minutes of NMES to both sets of thigh muscles daily for 6 weeks; 27 received placebo (‘sham’) stimulation and 25 received active NMES. The aim was to assess the effectiveness of NMES as a therapy to be conducted unsupervised at home, and its ability to aid daily activities. The main measure of effectiveness in this trial was a test of how far participants could walk in 6 minutes.

The results of the walk tests strongly support the use of NMES for severe COPD patients, with the patients who received the active NMES being able to walk substantially further. During interviews active NMES participants expressed a greater ease in everyday tasks (such as climbing the stairs) and stated that they could carry out physical activities for longer. No participants reported any negative views. Unfortunately, the improvement provided by NMES quickly waned after the treatment had stopped. Therefore, all existing evidence suggests that NMES should not be considered a replacement for PR. NMES can be used as an extension to PR, and could be used when patients are unable to take part in PR programmes. In addition, the short duration of effect suggests that longer programmes need to be investigated. Nonetheless, this trial has shown that NMES is a practical home-based therapy, suited to patients with more severe symptoms, and gives suggestions for future research.

This summary was produced by Reef Ronel, Year 12 student from JFS School, London, as part of the investigators' departmental outreach programme.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 08, Morten Oksvold commented:

Please note that this article contains falsified data presentations, as concluded by an ORI investigation:

      http://ori.hhs.gov/content/case-summary-forbes-meredyth-m


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 05, Marko Premzl commented:

The third party data gene set of eutherian growth hormone genes LM644135-LM644234 was deposited in the European Nucleotide Archive under the research project "Comparative genomic analysis of eutherian genes" (https://www.ebi.ac.uk/ena/data/view/LM644135-LM644234). The 100 complete coding sequences were curated using tests of reliability of eutherian public genomic sequences, as part of a eutherian comparative genomic analysis protocol that included gene annotations, phylogenetic analysis and protein molecular evolution analysis (RRID:SCR_014401).

      Project leader: Marko Premzl PhD, ANU Alumni, 4 Kninski trg Sq., Zagreb, Croatia

      E-mail address: Marko.Premzl@alumni.anu.edu.au

      Internet: https://www.ncbi.nlm.nih.gov/myncbi/mpremzl/cv/130205/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 10, Akihiro Umezawa commented:

Thank you so much, Alistair. I also received the same comment from a Chinese scientist. The authors totally agree with your comment. We will investigate phenotypes of the iPSC from the viewpoint of your comment 'Compound heterozygous mutations of the ERCC2 gene'. Some of the phenotypes could be linked to the gene functions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jun 05, Alistair Pagnamenta commented:

      Very interesting study and great that VCFs were freely available to review. Although my primary interest was to look at iPS induced artefacts (incl. UPD9), as the study “failed to find causal mutations in the XP-related genes”, I also searched for variants that might be responsible for the XP phenotype. Amongst the 300+ high-confidence, rare variants that were predicted potentially deleterious (https://variants.ingenuity.com/XP40OS), there were two in ERCC2, a gene which encodes a subunit of the TFIIH core complex helicase and which is linked to xeroderma pigmentosum complementation group D, XPD (OMIM *126340):

• a c.2048G>A; p.(Arg683Gln) in exon 22 which is listed in HGMD and ClinVar as a disease-causing mutation (CM970443; RCV000248679.1). The variant is present in the gnomAD database on 4/246,000 chromosomes.

      • a 23bp deletion (c.2025_2046+1delCCTCATGGTCTTTGCCGACAAGG) which removes the end of the preceding exon. As indels aren’t robustly called from exome data, we are typically wary about reporting such variants without having viewed read alignments and/or having validated them with another method. However in this case, the deletion passes a number of confidence filters and is present in gnomAD on 1/246,108 chromosomes in an E Asian sample.

      As these variants are both heterozygous, it remains to be confirmed that they are found in trans. However, given the loci are 153bp apart, the alleles can likely be phased by using Illumina read-pair information. Assuming compound-heterozygosity can be demonstrated, these variants represent plausible candidates to explain the condition.

The fibroblast cell line (XP40OS) was originally obtained from the JCRB cell bank catalogue which lists it as belonging to complementation group C and not group D. The stock cell line should be retested, either by the original polyethylene glycol-induced cell fusion based method (Sato K, 1982), or by molecular analysis of the above loci. Based on a review of submissions to major cell repositories, it has been estimated that 18-36% of cell lines are misidentified or contaminated (Hughes P, 2007). While in Okamura K, 2015, the mislabelling was relatively minor and doesn’t affect the overall conclusions, in other situations misidentified cell lines can lead to the inability to replicate scientific results and are a drain on scientific funding.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 15, Victoria MacBean commented:

      Plain English summary:

Sickle Cell Disease (SCD) is amongst the most prevalent genetic conditions worldwide. Inherited only when both of one’s parents carry a ‘faulty’ gene in their DNA, SCD affects the haemoglobin molecules that carry oxygen in the blood, changing the shape of the red blood cells into so-called crescent-shaped ‘sickles’. Despite its commonness, with over 300,000 babies born with SCD worldwide every year, a clear and consistent picture of how SCD affects the lungs of affected children had not yet been established. This study aimed to follow the lung function of children affected by the disorder over time, observing how this changed in early and later childhood, and how it was affected by episodes of ACS (Acute Chest Syndrome) in early childhood, when the sickle-shaped red blood cells can block blood vessels and lead to various different injuries.

      Two groups of children were tested. The first, who were slightly younger on average, were measured twice for their lung function over an average of 2 years, while the second group were measured twice over approximately 10 years. A number of methods were used to test each person’s lung function, including ‘spirometry’ in which the quantity of air one can force out the lungs is measured, among other values like lung capacity. These measurements were then compared to a ‘control’ group of healthy children without SCD at a similar age, to give a normal level of lung function to compare against the SCD patients’.

      In both groups of children with SCD, a reduction in lung function over time was seen when compared to the groups of children without SCD. However, the lung function of those in the first, younger, group decreased at a faster rate.

The results suggest that the fastest period of deterioration in lung function takes place in early childhood. Indeed, having an episode of ACS in young childhood was the only factor found that increased the likelihood of worse overall lung function later on. This could explain the faster decline of the younger group, as ACS is more common in younger children. This suggests that a focus should be placed on preventing ACS in young children as a strategy to improve the later lung function of those with SCD.

      This summary was produced by David Launer, Year 12 student from JFS School, London, as part of the investigators' departmental outreach programme.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 03, Pedro Silva commented:

      I am glad that Kosak et al. were able to improve the description of this reaction mechanism, and to find a superior pathway vs. the bimolecular mechanism I proposed with Carla Sousa (http://dx.doi.org/10.1002/ejoc.201300337 , with erratum at http://dx.doi.org/10.1002/ejoc.201301647).

      The following text would be most relevant in an entry for http://dx.doi.org/10.1002/ejoc.201300337, which is unfortunately unavailable in PubMed. It may nonetheless be useful for readers of the present paper.

      Upon reading this paper, I realized that I had not been completely clear regarding the reaction barrier of the bimolecular pathway. Our computations of that barrier were performed as:

      Gº(TS) - Gº(anisole:anisole prereaction complex)

      rather than :

      Gº(TS) - 2* Gº(anisole)

      The barrier we depicted did not therefore include the translational entropy component of ca. 10 kcal.mol-1 . We performed the computation in that way because the computation of entropy changes upon binding in solution (reviewed by Zhou and Gilson, Chem.Rev. 2009,109,4092-4107 DOI: 10.1021/cr800551w) is still somewhat contentious, and can be argued to depend on the relative magnitude of the volume of the complex vs. the expected volume available to a molecule in solution at a standard concentration of 1 mol.L-1 (1661 cubic angstrom). I feared that including the entropy values "as provided" by the program in gas phase would unreasonably inflate that estimate, especially because of the relatively large dimensions of the anisole:anisole complex (vs. the "theoretical cubic cage" of length 11.8 angstrom and 1661 angstrom3 volume).

      In case any readers find it useful for additional investigations, I have placed all my results (ca. 11 GB) in figshare.

      http://figshare.com/articles/Supporting_info_for_DOI_10_1002_ejoc_201300337/1541140


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 14, Michelle Lin commented:

      For more discussion about the study and Team Based Learning, read more in the JGME-ALiEM Hot Topics in Medical Education discussion (medical education virtual journal club) featuring this article. This site also includes a Google Hangout on Air discussion with the authors (Balwan, Fornari), a TBL expert (Jalali), and education scholar (Sherbino).

      http://www.aliem.com/team-based-learning-2016-jgme-aliem-hot-topics-in-medical-education/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 04, Michael Green commented:

      ATP6AP1 and ATP6V1B2 mutations were described about 1 year ago in a paper that the authors did not cite. PMID: 25713363.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 07, Stephen L. Black commented:

      Update: BMJ has now belatedly but effectively responded to my complaint, and a version of my original comment has been posted on the Open Diabetes website as I had initially intended.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 27, Valentine Njike commented:

      The reviewer is right to suggest this point is getting excessive attention. It was a secondary outcome measure, and a significant within-group change only, whereas the primary measures were between-group changes. This is indicated explicitly in the paper...


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Jan 22, Stephen L. Black commented:

      This study claims to demonstrate that eating walnuts lowers cholesterol level, a claim enthusiastically and uncritically repeated in the media. Unfortunately, the claim is spurious, as the published data show no such effect: the results do not show a statistically significant difference in cholesterol level between the experimental group, which ate walnuts, and the control group, which did not. This is evident in Table 2 and is admitted by the authors in their discussion. In a puzzling comment, they attribute this negative outcome to “probably the placebo effect”. They instead focus on a significant decline in cholesterol level from baseline in the experimental group, ignoring the fact that a similar decline was evident in the control group, which did not ingest walnuts. Thus, by the conventional logic of a placebo-controlled experiment, they failed to demonstrate that eating walnuts lowers cholesterol level. I find it inexplicable that a study with such an obviously flawed conclusion should have been allowed to be published.

      As the journal web page invites on-line comments, I submitted one, pointing this out. I was astonished to be told by the editor that my comment would not be published, not because it lacked merit, but because the journal does not publish comments. As a last resort, I protested to the BMJ itself about this pointless practice of inviting on-line comments while refusing to publish them. Alas, the BMJ, which proudly states that it “welcomes complaints” and will acknowledge them “within three working days”, instead ignored me. All in all, my failed attempt to merely correct a scientific error leaves me discouraged concerning the current practices of this once-respected journal.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 02, Katherine Flegal commented:

      Kim et al Kim SH, 2016 provide a valuable review of the complexities surrounding issues of obesity and cardiovascular disease. They describe the AARP 10-year follow-up study of 527,265 respondents (Adams et al NEJM 2006 Adams KF, 2006) as showing that overweight was associated with higher mortality than normal weight. They then contrast these findings with those of the meta-analysis by Flegal et al (JAMA 2013 Flegal KM, 2013), attributing the apparent differences to a difference in the normal weight BMI category used in the two studies. However, the findings of these two studies are not as different as they may appear. The AARP article used the high normal BMI of 23-24.9 as the reference category and divided overweight into low (BMI 25-26.4), medium (BMI 26.5-27.9), and high (BMI 28.0-29.9) overweight. For men, the multivariate-adjusted HRs were 0.95 (95% CI 0.91–0.98) for low overweight, 0.95 (95% CI 0.92–0.98) for medium overweight and 1.00 (95% CI 0.96–1.04) for high overweight. The corresponding values for women were 1.00 (95% CI 0.94–1.07), 1.06 (95% CI 0.99–1.12) and 1.07 (95% CI 1.01–1.14). Thus, even with a narrower reference BMI category, these findings from the full sample in the AARP study do not suggest excess mortality in the full overweight category (BMI 25-<30). Similarly, the ALLHAT study (Shah et al 2014, J Clin Hypertens Shah RV, 2014) cited by Kim et al found an HR of 0.96 (0.76–1.23) for overweight relative to BMI 22-24.9, again suggesting that a narrower reference category is not the explanation for findings of lower risk in overweight than in normal weight. The finding of higher mortality in overweight categories in the AARP study is only seen after the authors switched to a subgroup of 111,181 respondents and used reported past weight at age 50 instead of baseline weight.
This special analysis with reported past weight rather than baseline weight, rather than the use of a narrower reference category, is the source of the apparent disagreement between the AARP study and the Flegal meta-analysis. Results from selective analyses of subgroups should be interpreted with caution.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 15, RAMON VILALLONGA commented:

      PLEASE CHECK THE COMMENT ON THE ARTICLE.

      Comment on “Panniculectomy Combined with Bariatric Surgery by Laparotomy: An Analysis of 325 Cases” R Vilallonga - Surgery Research and Practice, 2016 - hindawi.com

      We would be happy to comment on the study by Dr. Colabianchi et al. on the role of synchronic panniculectomy and bariatric surgery when performed by laparotomy. (...)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 26, Miguel Lopez-Lazaro commented:

      Most cancer risk is unavoidable, but most cancer cases are preventable.

      Tomasetti and Vogelstein recently reported in Science a highly positive correlation between the lifetime number of stem cell divisions in a tissue and the risk of cancer in that tissue. Based on this correlation, they proposed that most cancers are unpreventable ('the bad luck of cancer'), and that early detection may be more effective than prevention to reduce cancer mortality [1]. Fortunately, 'the bad luck hypothesis' does not seem to be correct. It was based on the assumption that the parameters 'stem cell divisions' and 'DNA replication mutations' are interchangeable. These parameters cannot be interchanged, mainly because the mutations arising during DNA replication are random and unavoidable, while the division of stem cells is not a random and unavoidable process (the division of stem cells is highly influenced by external factors and physiological signals that can be controlled). A second important reason is that the parameters 'cancer risk' and 'cancer incidence' cannot be interchanged either [2].

      In this Nature article, Wu et al. use several modeling approaches to propose that most cancer risk is avoidable. They conclude that unavoidable intrinsic factors contribute less than 10-30% of the lifetime cancer risk. However, cancer statistics make this conclusion very difficult to accept. Age (an 'unavoidable' intrinsic factor) is by far the most important risk factor for the development of most cancers. For example, according to the SEER Cancer Statistics Review 1975–2012, the risk of being diagnosed with prostate cancer is over 2800 times higher in men over 60 years old than in men under 30. For lung cancer, the risk is over 600 times higher in people over 60 than in people under 30. Extrinsic factors do not increase cancer risk that much; for example, smoking increases lung cancer risk by approximately 20 times. Therefore, the proposal that extrinsic factors contribute more than 70-90% to the development of these and other common cancers (see e.g. Figure 3b) does not seem to be correct. The second assumption present in the Science article also seems to be present in this article (see e.g. Extended Data Table 2).

      The fact that most cancer risk is unavoidable does not mean that most cancer cases are unpreventable. Preventing a small percentage of cancer risk may be sufficient to prevent a high percentage of cancer cases. For example, although age is by far the most important risk factor for lung cancer, avoiding smoking prevents a high percentage of lung cancer cases. Extrinsic factors can be seen as “the straw that breaks the camel’s back”; they are not the major contributors in most cases, but they can be decisive [2].

      [1] http://www.ncbi.nlm.nih.gov/pubmed/25554788

      [2] http://www.ncbi.nlm.nih.gov/pubmed/26682276


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Dec 23, Song Wu commented:

      We appreciate the thoughtful comment, which raises a very interesting point: the tumor microenvironment may also affect tissue-specific cancer incidences. We agree with that. However, it is important to note that we adopted a very specific definition of intrinsic cancer risk factors, as defined in this article as well as in the previous study in Science. In this formulation, intrinsic risk refers only to the internal mutation rate in those dividing cells (stem or otherwise). As such, this factor is most susceptible to randomness. In our further analyses, including all four of our distinct approaches, we remained agnostic as to the nature of extrinsic factors. These would include not only environmental factors but also factors in the organism that are extrinsic to the tumor, including inflammatory mediators, immune responses, hormones, and the tissue microenvironment. These are all potentially modifiable conditions, and should belong to the domain of extrinsic factors.

      We also agree that external factors may act through avenues other than stem cells, which is the reason that we specifically did not say that external factors (or even internal factors) act exclusively through stem cells. Additionally, in some components of our analyses, such as the evidence from epidemiological data and mutational signatures, the results are independent of whether external factors act through stem cells or not. In this regard, our initial approaches were primarily directed at the question of whether the strong correlation between stem cell division and cancer risk can distinguish the effects of intrinsic from extrinsic factors, and our results show that it does not.

      Overall, our main message is to promote further research into the causes of cancer and how they could be prevented.

      -The Authors


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Dec 20, James Degregori commented:

      The article by Wu et al argues that extrinsic risk factors contribute far more to cancer risk than calculated by Tomasetti and Vogelstein (1). While I share their critique of the deficiencies in the assumptions and conclusions made by Tomasetti and Vogelstein (2), I would argue that they make a major error in solely regarding extrinsic factors as mutagens; i.e. they calculate the added risk of cancer caused by these factors (such as smoking) as solely coming from increases in mutation frequency. In fact, their modeling (see Fig 4) requires very large numbers of lifetime stem cell divisions, in part because they do not consider any selective impact of mutations. The accumulation of multiple mutations in a single cell lineage would be extremely improbable without clonal expansion (to increase the target size) following each mutation. Since Nowell (1976) (3), the cancer field has largely considered these mutational events to be inherently advantageous. Thus, most cancer models have been primarily focused on the occurrence of mutations, assuming that each oncogenic mutation immediately and inevitably leads to clonal proliferation and is thus rate-limiting for cancer progression. But this understanding of the fitness effect of mutations is discrepant with evolutionary theory, whereby the fitness value of a mutation is entirely dependent on context (genetic, environmental, etc.).

      Extrinsic risk factors like smoking, as well as intrinsic risk factors like aging, will do much more than affect mutation load – they will drastically alter tissue landscapes and thus influence the selective value of mutations (4). The major driver of organismal evolution is environmental change, largely by impacting selection and drift. To give just one example, the hominid lineage leading to modern humans has undergone drastic phenotypic change in the last 5+ million years, and yet I doubt that any evolutionary biologist would argue that this was due primarily to mutation accumulation. Instead, changing environments and selective pressures drove human evolution. Our ape and chimp cousins took a different path, due to different environmental pressures, not due to differences in mutation rates. At the organismal level, it is environmental perturbations that lead to evolutionary change as organisms adapt to new environments. So why do cancer biologists so often ignore the role of altered selection driven by environmental (i.e. tissue microenvironment) changes when considering links between cancer incidence and factors such as aging, smoking, obesity, etc.?

      When the dynamic evolutionary concept of fitness is incorporated into our understanding of cancer, then cancer progression, as a type of somatic evolution, can primarily be understood as a microenvironment-dependent process. While the natural selection-driven maintenance of tissues through periods of likely reproduction promotes stabilizing selection in stem and progenitor cell pools (limiting somatic evolution), alterations in tissue landscapes (whether from aging, smoking or other insults) will change adaptive landscapes, promoting selection for mutations that are adaptive to this new microenvironment. Some of these mutations can be oncogenic, and thus contexts that promote tissue change like aging promote selection for adaptive oncogenic mutations. Of course, mutations are still necessary (and thus cell divisions are necessary), but mutations without the other evolutionary forces of selection and drift would be insufficient to account for increased rates of cancer in old age, in smokers, and for other cancer-promoting contexts. Hopefully, the impact of etiologic factors on cancer risk will be more frequently considered in terms of how they impact tissue microenvironments and selection, in addition to how they impact mutation frequency.

      This comment was also posted on the Nature website linked to this same paper.

      James DeGregori Department of Biochemistry and Molecular Genetics University of Colorado School of Medicine james.degregori@ucdenver.edu

      1 Tomasetti, C. & Vogelstein, B. Cancer etiology. Variation in cancer risk among tissues can be explained by the number of stem cell divisions. Science 347, 78-81, doi:10.1126/science.1260825 (2015).

      2 Rozhok, A. I., Wahl, G. M. & DeGregori, J. A Critical Examination of the “Bad Luck” Explanation of Cancer Risk. Cancer Prevention Research 8, 762-764, doi:10.1158/1940-6207.capr-15-0229 (2015).

      3 Nowell, P. C. The clonal evolution of tumor cell populations. Science 194, 23-28 (1976).

      4 Rozhok, A. I. & DeGregori, J. Toward an evolutionary model of cancer: Considering the mechanisms that govern the fate of somatic mutations. Proceedings of the National Academy of Sciences of the United States of America 112, 8914-8921, doi:10.1073/pnas.1501713112 (2015).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 24, Randi Pechacek commented:

      Jonathan Eisen wrote a blog post on microbe.net praising this paper for its careful accuracy.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 11, Todd Lowe commented:

      We thank Isidore Rigoutsos and co-authors for the helpful, detailed comments. We have addressed all the issues in their commentary.

      In brief:

      1) The legacy names have been fixed and updated.

      2) For now, we have put the pseudogene designations back into the database — this will likely change in the future as we develop a more objective, less arbitrary cutoff for what is called a tRNA pseudogene. We will post release notes when we do make these types of changes in the future, and welcome feedback from users.

      3) The name mismatches between the GtRNAdb, NCBI, and HGNC for human tRNAs are due to an incomplete update of HGNC records requested previously, but we are coordinating with them to get these updated quickly.

      4) We have put a disclaimer on the front page of the GtRNAdb letting users know that this database is a reflection of what we believe is the state of the art in tRNA detection and annotation. Our own understanding of tRNAs has been greatly enhanced by the use of the new isotype-specific models and increased sensitivity from tRNAscan-SE 2.0 (manuscript in preparation), so the criteria that we previously applied are, in some cases, clearly outdated and can be improved. A number of manuscripts are in preparation detailing these new insights based on our improved tRNA detection methods.

      5) If users prefer a static, historical view of tRNAscan-SE gene calls, the prior GtRNAdb will still be available for reference at http://gtrnadb2009.ucsc.edu/ for the foreseeable future.

      6) Some endpoints have indeed changed slightly (generally by 3-10 nucleotides total, for just 15 of 600+ total genes) because low-scoring tRNAs are aligned & scored slightly differently by Infernal 1.1 and the new tRNA covariance models, compared to the older software. We have over 60 new isotype-specific covariance models that we are still improving and refining, so small adjustments are expected over the next few months.

      7) Of the "new" tRNAs in the GtRNAdb from the human genome, the vast majority are very low scoring (i.e., they were just below the 20.0 bit covariance model score reporting cutoff used by the old version of tRNAscan-SE; with Infernal and the new models, those scores may shift by 2-5 bits, bringing some above the reporting threshold and dropping some below it). Because these differences only affect very low-scoring tRNAs, they appear to have little to no effect on the high-scoring tRNAs used in translation. Also, a fair number appear to be mitochondrial-derived tRNA genes, which tRNAscan-SE 2.0 is now able to detect with high sensitivity. These nuclear-encoded mitochondrial tRNA genes are now recognized to be commonly found in nuclear genomes, just as many mitochondrial proteins have migrated to the nuclear genome.

      We regret any inconvenience these issues have caused users in the first few weeks since the database went live in December 2015. We encourage users to email us directly if they have questions or issues we can address. We are working hard to make this a useful, powerful resource to support the rapidly growing field of tRNA research.

      Todd M. Lowe, Biomolecular Engineering, University of California, Santa Cruz


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 21, Isidore Rigoutsos commented:

      Comments on the contents of release v2.0 of gtRNAdb

      The gtRNAdb repository is a great resource for researchers studying transfer RNAs (tRNAs). With the increasing interest in the roles tRNAs and tRNA fragments play outside the confines of codon translation into amino acids, databases such as gtRNAdb are expected to provide invaluable reference information.

      We focused on the Homo sapiens portion of gtRNAdb v2.0 and found the following:

      Sweeping, undocumented changes in v2.0. In the above article, the authors state that they populated gtRNAdb v2.0 using a method that underwent major development but has not yet been peer-reviewed (the method is cited as “Chan et al., in preparation”). Specifically, for Homo sapiens the new method resulted in changes affecting 28% of the human tRNA records; the changes comprise

      • the deletion of 74 tRNA records originally in v1.0

      • the “elevation” of 41 pseudo-tRNAs of v1.0 to tRNAs in v2.0

      • the modification of endpoints in v2.0 compared to v1.0 for 20 tRNAs

      • changes in the claimed anticodon for 6 tRNAs

      • the addition of 60 new human tRNAs in v2.0

      Many new/modified human tRNAs have secondary structures that deviate substantially from the tRNA cloverleaf. We inspected manually the secondary structures of those human tRNA entries that are new to or have been corrected in v2.0 and found that

      • at least 9 of the 41 elevated pseudo-tRNAs,

      • at least 15 of the 20 entries whose endpoints changed in v2.0, and

      • at least 18 of the 60 newly added tRNAs

      (i.e. nearly 35% of the new/corrected entries) have abnormal secondary structures that deviate greatly from the tRNA cloverleaf.

      Many v2.0 tRNA records are linked to incorrect legacy identifiers. 66% of the human tRNA records (405 of the 606) contained in gtRNAdb v2.0 have been linked to incorrect legacy identifiers. Specifically:

      • legacy identifiers were given to the 60 new tRNAs of v2.0 even though they did not exist in v1.0

      • 331 tRNAs whose coordinates did not change between v1.0 and v2.0 have been associated with a legacy identifier in v2.0 that does not match the original identifier in v1.0

      • 14 entries that had their endpoints modified in v2.0 have been assigned legacy identifiers that do not match the entries’ original v1.0 identifier

      Many v2.0 tRNA records contain data that are in conflict with their counterpart NCBI Gene and HGNC records. For 116 of the 606 human tRNAs, their gtRNAdb v2.0 records list different chromosome, strand, and endpoint information than the respective NCBI Gene record. These chromosomal location incompatibilities also extend to HGNC (HUGO Gene Nomenclature Committee) records that are linked to directly from within gtRNAdb records.

      Competing interests: The above observations were compiled by Phillipe Loher, Venetia Pliatsika, Aristeidis G. Telonis, Yohei Kirino and Isidore Rigoutsos all of whom are with the Computational Medicine Center of Thomas Jefferson University and are actively involved in tRNA research or have published previously in this area. PL, VP, AGT, YK and IR declare no competing financial interests.

      More information: a more detailed description on the above together with accompanying Figures and Tables can be found here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 28, Atanas G. Atanasov commented:

      While the known or predicted usefulness of synthesized molecules has lately been a major driving factor dictating whether chemical synthesis projects are financed by funding bodies, it is still important to consider that the generation of new chemical molecules may also lead to the subsequent discovery of physical, chemical, or biological properties that cannot be predicted in advance. Therefore, new structure synthesis can also be seen as a tool for “exploring the unknown”, which may later lead to unpredictable but very beneficial expansion of existing knowledge. In the context of drug discovery, for example, screening approaches can lead to the discovery of new pharmacological activities of chemical structures that were previously generated without knowledge of possible effects on the biological target molecules that are later found to be affected (e.g., a recent example from this journal: Huang et al. Allosteric ligands for the pharmacologically dark receptors GPR68 and GPR65. Nature. 2015 Nov 26;527(7579):477-83. doi: 10.1038/nature15699).

      Atanas G. Atanasov http://pharmakognosie.univie.ac.at/people/atanasov-atanas-g/ http://homepage.univie.ac.at/atanas.atanasov/ http://about.me/Atanas_At


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 25, Egon Willighagen commented:

      I was wondering earlier today whether a possible cause of this could be a change in research funding, which has become more project-based and often more focused on societal impact (or so is my perception) [0]. Chris Evelo argued [1] that if that were true, one would expect words like important, valuable, and beneficial. Have you considered including words that could hint at societal impact? Do you have data on such words too? Does that show different patterns than the positive words you measured?

      0. https://twitter.com/egonwillighagen/status/713341316244107265
      1. https://twitter.com/Chris_Evelo/status/713361254941908992


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 11, thomas samaras commented:

      The paper provides excellent information on the body fat differences in different ethnic groups. Heymsfield et al. also provide convincing evidence that contemporary cohorts usually follow the height-squared law [Quetelet Index (QI)] instead of the height-cubed law [Ponderal Index (PI)]. However, there are significant exceptions, such as data from the HES and HANES findings (Advancedata Nov. 1976, p. 8). Based on about 6,500 individuals, these findings show that increasing weight follows or exceeds the PI.

      Men 18-74 years: actual weight 172 lb vs. predicted height-cubed weight (PI) 171.2 lb

      Women 18-74 years: actual weight 143 lb vs. PI 140 lb

      While there was a 10-year difference between the two populations, the QI did not apply. The increase in weight with height followed the PI (wt/ht3).

      Another study (Heymsfield et al., 2007) showed that taller and shorter contemporary cohorts of men followed the PI, as shown below:

      Height 177 cm vs. 174 cm: actual weight of taller males = 80.6 kg vs. predicted height-cubed weight (PI) = 80.8 kg

      Data from Cameron and Demerath (2002) showed that the PI applied to contemporary 9-year-old children as well. See below:

      Heights 135.8 cm vs. 132.1 cm: actual weight of taller cohort = 32.4 kg vs. 31.1 kg predicted by the PI.

      Data from Heude et al. (2003) also found that 10-11 year old French children experienced a weight increase that was slightly higher than the PI between 1992 and 2000.

      Over 80 populations worldwide follow or exceed the PI when different generations are compared. A few are reported in Medical Hypotheses 2002, vol. 58 (Table 1), where the actual weight for the taller group is compared to the predicted weight, based on the assumption that weight increases as the cube of height (the PI) rather than as the square of height (as in the Quetelet Index):

      Harvard entrants (males) in 1930s vs. 1958-59: actual wt 73.7 kg vs. PI prediction 71.5 kg

      Wellesley entrants (females) in 1930s vs. 1958-59: actual wt 57.9 kg vs. PI prediction 58.2 kg

      Male school children in 1934-35 vs. 1958-59: actual wt 51.0 kg vs. PI prediction 51.3 kg

      Swedish males in 1971 vs. 1995 (n = 488,732): actual wt 72.1 kg vs. PI prediction 68.0 kg

      The reasons for taller people having the same or lower BMI/QI compared to contemporary shorter people may be that food portions are standardized so that taller people consume fewer calories per day and shorter people get relatively more. For example, independent of height, most people drink or eat one glass of milk, one hamburger, and the same size meal when dining out. Another factor is socioeconomic status (SES). Taller people are more often from higher SES compared to shorter people, and we know that in the West, poorer people tend to be more overweight or obese than higher SES people. The most likely explanation for this condition is that higher SES people follow healthier eating habits and are more attentive to weight gain. However, the BMI of taller people appears to be increasing. For example, Cohen and Sturm (2008) reported that shorter Americans had significantly higher BMIs than taller people in the past. However, in recent years, they found that taller people have been gaining in BMI at a faster rate than shorter individuals.

      In conclusion, as the world population increases in height, weight increases at a rate that matches the Ponderal Index. That’s why the average US male in 1900 had a BMI of 21-23 compared to 26-28 today. Height since the 1900s has increased by 3 to 4 inches. Of course, much of weight increase is related to fat mass.
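      The two scaling laws compared above can be sketched in a few lines of code (a minimal illustration; the 70 kg / 170 cm reference cohort and 175 cm comparison height below are hypothetical, not taken from any of the studies cited):

      ```python
      # Quetelet (height-squared) vs. Ponderal (height-cubed) weight predictions.
      # Given a reference cohort's mean weight and height, predict the weight of
      # a taller cohort under each scaling law.

      def qi_predict(ref_weight_kg, ref_height_cm, new_height_cm):
          """Predicted weight if weight scales as height squared (Quetelet Index)."""
          return ref_weight_kg * (new_height_cm / ref_height_cm) ** 2

      def pi_predict(ref_weight_kg, ref_height_cm, new_height_cm):
          """Predicted weight if weight scales as height cubed (Ponderal Index)."""
          return ref_weight_kg * (new_height_cm / ref_height_cm) ** 3

      # Hypothetical example: a 70 kg cohort at 170 cm compared with one at 175 cm.
      print(round(qi_predict(70, 170, 175), 1))  # 74.2 (height-squared prediction)
      print(round(pi_predict(70, 170, 175), 1))  # 76.4 (height-cubed prediction)
      ```

      Whether a population's actual weight gain with height tracks the first or the second prediction is exactly the question at issue in the comparisons above.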


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 25, Andrea Messori commented:

      Application of the budget-threshold pricing model to PCSK9 inhibitors: detailed description of the pharmacoeconomic calculations

      By Andrea Messori, HTA Unit, Regional Health Service, 50100 Firenze (Italy)


      Table 1 of this Comment describes in detail the calculation steps that, in the budget-threshold pricing model developed by the Institute for Clinical and Economic Review (ICER), lead to the estimation of the annual treatment cost of $2,177 per patient.



      Table 1. Price estimation method proposed by ICER for PCSK9 inhibitors: steps involved in the estimation of the annual treatment cost of $2,177 per patient



              Parameter                                                             Estimate  
      

      A) Total drug spending …………………. : $410 billion (estimated as 13.3% of $3.08 trillion, which is the total health care spending).


      B) Annual threshold for net health care

      cost growth for ALL new drugs.............…: $15.4 billion (3.75% of $410 billion, where 3.75% is the 2015-2016 growth in US GDP plus 1%).


      C) Annual threshold for

      average cost growth per

      individual new molecular

      entity....................................................... : $452 million (estimated as $15.4 billion/34, where 34 is the average annual number of FDA entity approvals, 2013-2014).
      

      D) Annual threshold for estimated potential

      budget impact for each individual new

      molecular entity…………………….....……: $904 million (estimated as 2 x $452 million).*


      E) Five-year budget impact

      threshold per new molecular entity............… : $4.52 billion (calculated as 5 x $904 million).


      F) Total five-year health-care savings for each PCSK9 inhibitor……....................…… : $1.22 billion (estimated by ICER).


      G) Five-year budget impact threshold

      per PCSK9 inhibitor corrected according

      to health-care savings…..............………….… : $5.74 billion [sum of (E)+(F)]


      H) Annual treatment cost per patient for

      each PCSK9 inhibitor………………………… : $2,177 (estimated as $5.74 billion / 2,636,179, assuming that 2,636,179 patients are treated).


      *This step is not perfectly clear. It seems that the maximum absolute budget allowed for a new molecular entity is double the budget expected on average for a new molecular entity.
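      As a check on the arithmetic, the calculation chain can be reproduced in a few lines (a minimal sketch; variable names are mine, amounts are in billions of USD, and intermediate values differ slightly from the table's rounded figures — note that, dimensionally, steps E-H work out in billions, and the chain does reproduce the $2,177 annual cost per patient):

      ```python
      # Sketch of the ICER budget-threshold arithmetic described in Table 1.
      total_health_spending = 3080.0                 # $3.08 trillion
      drug_spending = 0.133 * total_health_spending  # (A) ~$410 billion
      growth_threshold = 0.0375 * drug_spending      # (B) ~$15.4 billion
      per_entity = growth_threshold / 34             # (C) ~$452 million
      budget_impact = 2 * per_entity                 # (D) ~$904 million
      five_year = 5 * budget_impact                  # (E) ~$4.52 billion
      icer_savings = 1.22                            # (F) $1.22 billion, ICER estimate
      corrected = five_year + icer_savings           # (G) ~$5.74 billion
      patients = 2_636_179
      annual_cost = corrected * 1e9 / patients       # (H) annual cost per patient
      print(round(annual_cost))                      # → 2177
      ```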


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 25, Andrea Messori commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Jan 25, Andrea Messori commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Jan 25, Andrea Messori commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Jan 25, Andrea Messori commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2016 Jan 25, Andrea Messori commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 28, Stuart RAY commented:

      Ten genomic sequences reported in this publication (which should be linked to this PubMed entry) are: KT427414.1, KT427413.1, KT427412.1, KT427411.1, KT427410.1, KT427409.1, KT427408.1, KT427407.1, KU159665.1, KU159664.1. Note that NC_027998.2, the reference genome for HPgV-2, is a duplicate of KT427414.1, on which it is based.

      Metagenomic reads are in the NCBI Sequence read archive with accession SRP066211


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 28, Michael Bunce commented:

      Thank you for your comments from the 5th June 2016 we welcome the chance to respond.

      The main point to be taken from this paper is that there is an urgent need for accurate testing methods, including DNA, toxicological and heavy-metal screening, of complementary medicines. It is clear from the results of this paper that some products that are available for over-the-counter sale to the general public, whether or not they have been correctly listed with the TGA, could pose a health risk to consumers or contain ingredients that are not declared on the packaging. The authors agree with the comment that the ‘actions of a few have the potential to tarnish the reputation of the majority whose behaviour is professional’, and this is precisely why we believe that stronger regulation and pre- and post-market auditing of products should occur. Consumers of TCM and other complementary medicines should be made aware that some products do not conform to TGA regulations, and consumers are encouraged to contact the TGA for more information about particular products if they have a concern.

      A few more replies to your specific queries:

      1) These 26 TCM were purchased across a number of retailers as a consumer would purchase them (they were not sold as ‘food’). The fact that they are not randomised does not compromise the study as we simply report on the results of the 26 TCM tested here – noting the results warrant concern. We welcome future efforts to expand sampling to determine a more holistic overview of compliance or lack thereof. The salient point here is perhaps auditing should occur before market?

      2) We fully concede (and refer to it in the paper) that salicylic acid can be naturally derived from many plant species.

      3) The blue/red colour coding in Figure 1 refers to TCM that are listed and unlisted – as such the data is clearly portrayed.

      4) With regard to ‘measurement error’ and ‘false positives’ – we document fully the clean-room precautions, replication and controls implemented in the DNA workflows. Our lab specialises in trace DNA analyses on a variety of biological substrates. As noted in the paper, assignments based on DNA metabarcoding data (publicly available – see link in the paper) are conservative.

      5) In response to your query regarding Supplemental Table 1, the allocation of TCM8 and TCM11 to the TGA unlisted category of products was an error in the Supplemental Table and we can confirm that these two products do have AUSTL numbers and are formally listed with the TGA, as correctly reported in the published manuscript (Tables 1, 2 and 3). We will correct this minor error in the supplemental table with the publishers – many thanks for making us aware of it.

      6) Finally, we would have been happy to write a reply to the ‘Journal of Chinese Medicine and Science’ but the journal has no content online and is not formally listed as a journal, the link at www.fcma.org.au (as of July 2016) lists no content. Can you please provide a formal link to the Journal so readers at PubMed can assess the legitimacy of the publication.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jun 05, Sherman Gu commented:

      We are writing to express our concerns about some reporting errors and therefore conclusions reached in this paper.

      Firstly, there is a discrepancy in the reporting of which group, the Therapeutic Goods Administration (TGA) unlisted or listed products, a few of the products belong to. For example, according to the Supplementary Table (S1) of the report, TCM8 and TCM11 belong to the TGA unlisted product group, yet the results for these products appear in the TGA listed group in Tables 1, 2 and 3. It would thus appear from Table S1 that there are nine listed TCM products and 17 unlisted products, not 12 listed products as is stated in the Discussion Section of the report.

      Secondly, with respect to Table 2 which reports adulterants or undeclared products, trace amounts of salicylic acid were found in five listed TCMs including TCM22. TCM22 contains Panax ginseng, and one of the components of this herb is known to be salicylic acid. This might explain the trace amounts found in this TCM sample. Nonetheless, it is of concern that three (excluding TCM8 and TCM11 which are apparently unlisted TCMs according to Table S1) of the nine listed TCM products were adulterated with either pseudo/ephedrine, methylephedrine (TCM2, TCM7) or sildenafil (TCM23).

      Thirdly, it should also be noted that five of the TCM samples were herbal tea bags (TCM11, TCM13, TCM24, TCM25, TCM26) and one was ‘flakes’ (TCM10), all of which are unlisted, according to Table S1. In our opinion, these would not be considered conventional TCM herbal medicines that a registered Chinese medicine practitioner would prescribe. Without any information as to where these six products were purchased, it is difficult to form any conclusions as to whether these are potentially prescribed by any practitioner or whether they were sold as food in a shop. These findings potentially skew the summary set out in Figure 1 (i.e. if TCM8 and TCM11 do not belong in the listed group, if the salicylic acid found in TCM22 were to be consistent with that expected in a TCM containing Panax ginseng, and if the six herbal tea bag and flake samples were deleted). The percentages of contaminated/adulterated products may well be different between the unlisted and listed products.

      Fourth, by reporting the results of all the products sampled together (92%) in the Abstract, rather than separating them out into percentages relating to the listed group of products and the unlisted group of products, important information and an important distinction is missed. Those TCMs that are considered safe for use are those that are listed (or registered) under the TGA. Those that have not been listed (or registered) on the Australian Register of Therapeutic Goods have not been assessed by the TGA for safety, quality and, in the case of registered products, efficacy. The use of unlisted (or unregistered) proprietary TCMs by a registered Chinese medicine practitioner is against the Chinese Medicine Board of Australia (CMBA)’s Code of conduct and Guidelines for safe Chinese herbal medicine practice.

      Fifth, the sample collection method is somewhat vague and there is no evidence of any randomisation of the process of selecting the products. There is no information about whether the Chinese medicine practitioners were registered with the CMBA, how many practices the products were purchased from, nor how many nor what kind of retail stores were targeted for the selection of products. These factors are important to consider in any discussion of the ramifications of findings, in particular in relation to the unlisted products.

      Finally, most measurement instruments have a level of error. The possibility of false positives with respect to, for example, the DNA testing has not been mentioned. These details are important in the analysis of findings and also constitute some of the limitations of the study. Unfortunately, limitations of the study do not appear to have been discussed at all, as would be expected of a scientific study. Whilst there may be valid reasons for not divulging the names of the TCM products (i.e. the potential for litigation), non-disclosure makes this investigation impossible to replicate by other researchers and therefore verify the results. The results will be of concern to responsible Chinese medicine practitioners who need to know which products should be avoided.

      These criticisms of the paper are not meant to detract from some of the findings of the paper. The fact that unlisted products are available, albeit possibly from only a small number of sources, is unfortunate since the actions of a few have the potential to tarnish the reputation of the majority whose behaviour is professional. It is commendable that the authors of the above-named paper have conducted an independent analysis of several proprietary TCMs. However, greater care and tighter reporting of results would have gone a long way to improving the quality of this report. The popular press will always pick up what makes for a sensational headline. The consequence of reporting the combined result of 92%, the figure itself open to debate, has been to cause alarm. The general public is unlikely to read the research paper nor be able to interpret the research findings and may be left feeling concerned. We look forward to reading the results of future research which investigates such an important issue, conducted in a large and more representative sample of TCM products.

      Re-printed with permission from “Gu, S., O' Brien, K., & Cheung, T. (2016). Reporting Errors in the Coghlan et al report 2015. Australian Journal of Chinese Medicine and Science, 3(1), 66-67.”


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 05, ROBERT HURST commented:

      Unfortunately, KU-7 is not bladder cancer but is, instead, HeLa cells. PMC3805942


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 25, Kristina Hanspers commented:

      Figures EV6b and EV7j are available as pathways in the WikiPathways Open Access Collection: http://www.wikipathways.org/index.php/Pathway:WP3596 and http://www.wikipathways.org/index.php/Pathway:WP3595. These pathways can be downloaded in GPML format and used for analysis and visualization in applications like PathVisio and Cytoscape.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 17, Kirk O'Reilly commented:

      McIntyre et al. have published a series of papers evaluating the use of soil bioretention systems to reduce the toxicity of stormwater and highway runoff (McIntyre et al. 2014, 2016, 2016b). Their recent paper (McIntyre et al. 2016) investigates the use of soil bioretention to treat artificial runoff from a test plot treated with a refined tar sealant (RTS). What readers may not know is that there is an on-going controversy concerning the environmental implications of RTS use (LeHuray 2015; USGS 2016). According to McIntyre’s acknowledgement, U.S. Geological Survey personnel responsible for the agency’s effort to ban RTS assisted in project planning. The goal of this comment is not to criticize McIntyre’s research on the use of soil bioretention systems but to discuss the results in the context of other studies so that the paper is not misused by those advocating for RTS product bans. The technical basis for these comments are summarized in Exponent (2016).

      Key Points:

      The title overstates the toxicity of RTS runoff.

      The title suggests that sealant runoff causes “severe” toxicity. But severe is neither defined nor used in the text of the article. Severe toxicity is not a commonly used term in aquatic toxicology. Use of “acute toxicity” or just “toxicity” in the title would be less inflammatory and more consistent with the study results.

      As noted by the authors in a prior publication (McIntyre et al. 2014), “Developing fish embryos are particularly vulnerable to the harmful effects of chemical contaminants and have long been a focus for toxicity screening. More recently, model species such as the zebrafish have provided an increasingly sophisticated experimental context for evaluating the developmental toxicity of individual chemical constituents in stormwater.” McIntyre et al. (2016b) says that their tests measure subtle biological effects. A positive bioassay result with a particularly vulnerable fish embryo test system is insufficient to support the suggestion of severe toxicity.

      While the survival of another test species, juvenile Coho salmon, was less in runoff from a sealant test plot than the control, only the runoff collected within a few hours of sealant application killed all the organisms. Mortality decreased substantially for subsequent samples and remained significantly different from controls with runoff collected two weeks but not three weeks after application.

      The toxicity of sealant runoff is consistent with the toxicity of runoff from unsealed surfaces.

      McIntyre et al. (2014, 2016b) describe tests conducted on highway runoff collected during six storm events. Two of the six stormwater samples killed all Zebrafish embryos, and a third sample resulted in a significant reduction in survival. All six of the stormwater samples caused sublethal effects similar to those discussed in the RTS paper.

      Greenstein et al (2004) tested the toxicity of artificial runoff from an operating asphalt parking lot in Southern California. There was no evidence that RTS was ever applied on the lot. Using a sensitive marine aquatic bioassay, sea urchin egg fertilization, toxicity was noted in all runoff samples.

      Soil Bioretention treatment of RTS and highway runoff can eliminate toxicity and significantly reduced the response of sensitive molecular indicators.

      Soil bioretention is a sustainable approach in which runoff is filtered by soil. It mimics natural processes that can reduce the toxicity of runoff from urban surfaces including sealed parking lots.

      Acknowledgment

      The author of this comment has conducted research funded by the Pavement Coating Technology Council.

      References

      McIntyre JK, Edmunds RC, Anulacion BF, Davis JW, Incardona JP, Stark JD, Scholz NL. 2016. Severe coal tar sealcoat runoff toxicity to fish and reversal by bioretention filtration. Environmental Science & Technology 50(3): 1570–1578.

      LeHuray, A. 2015. In response to Bales (2014). Integrated Environmental Assessment and Management, 11(2), 185–187.

      USGS. 2016. Information Quality - Information Correction Request. At: https://www2.usgs.gov/info_qual/archives/coal_tar_sealants.html

      Exponent 2016. Summary of McIntyre et al. 2016. “Severe Coal Tar Sealcoat Runoff Toxicity to Fish Is Prevented by Bioretention Filtration” Environ. Sci. Technol. 50:1570−1578. Pavement Coatings Technology Council. 2016-08-14. URL:http://www.pavementcouncil.org/wp-content/uploads/2016/08/Tech-Memo-McIntyre-2016-review-Final.pdf. Accessed: 2016-08-14. (Archived by WebCite® at http://www.webcitation.org/6jlOpIS8t)

      McIntyre JK, Davis J, Incardona J, Stark J, Anulacion B, Scholz N. 2014. Zebrafish and clean water technology: Assessing soil bioretention as a protective treatment for toxic urban runoff. Science of the Total Environment 500:173–178. Open Access: http://www.sciencedirect.com/science/article/pii/S0048969714012455

      McIntyre JK, Edmunds RC, Redig MG, Mudrock EM, Davis JW, Incardona JP, Stark JD, Scholz NL. 2016b. Confirmation of stormwater bioretention treatment effectiveness using molecular indicators of cardiovascular toxicity in developing fish. Environmental Science & Technology 50(3): 1561–1569.

      Greenstein D, Tiefenthaler L, and Bay S. 2004. Toxicity of parking lot runoff after simulated rainfall Arch Environ Contam Toxicol. 47:199–206.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 21, EDUARDO FRANCO commented:

      PLease note that this seems to be a duplicate of PMID: 28417802. I posted a similar note in the latter.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 22, Christopher Southan commented:

      The GtoPdb release used to compile this series is described in the 2016 NAR Database issue http://www.ncbi.nlm.nih.gov/pubmed/26464438


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 07, Ruth Gabizon commented:

      Definitely agree. Indeed, the utility of EAE as an MS model has always been questioned. Many treatment strategies that were useful in EAE had no effect in humans, as is the case for many agents in animal models of disease. However, it is important to acknowledge that all MS treatments used today were first shown to be active in EAE models.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Sep 30, Alessandro Rasman commented:

      Very interesting study. It is important now to test it in humans, because there are doubts about the utility of the EAE model in MS (1). References: 1. Behan, Peter O., and Abhijit Chaudhuri. "EAE is not a useful model for demyelinating disease." Multiple Sclerosis and Related Disorders 3.5 (2014): 565-574.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 19, Stephanie Sivell commented:

      Dear H Benalia and N Novell,

      Thank you for the comments from your team. Like you and your colleagues in the Cicely Saunders Institute, we also felt this to be an important and underrepresented area of the literature. A key remit of the Marie Curie Palliative Care Research Centre (MCPCRC), Cardiff is to advance the research methodology within the field of palliative and end of life care. Our discussion, and the resulting paper, was an opportunity to explore the issues our colleagues were regularly facing and reflecting upon. As methodologists within each of our varying disciplines in the MCPCRC (also co-authors of our paper), we thought that it was timely to come together as a team to discuss these issues and document them. We hoped that our paper would resonate with our colleagues in the wider academic community, both within the world of palliative and supportive care and more generally in health and social care. We were therefore delighted that you and your team not only took the time to read and critique our paper in your journal club, but also shared your thoughts to widen the debate.

      By ‘consensus’ we were referring to the definition of a general agreement; we were not aiming to undertake a ‘rigorous’ piece of work, collecting primary data etc., and/or undertaking some kind of adapted Delphi or similar. This was simply beyond the parameters of this discussion. Rather, and as our paper describes, this was a discussion which we felt was something important for our team to reflect upon and potentially benefit from. We also felt that if we were having these thoughts and concerns within the MCPCRC, then the wider academic community are likely to encounter similar issues and conversations. Ethical approval therefore, was not required but we sought consent for the recording of the discussion; furthermore, and as required by the governance of the journal, we documented the roles of all co-authors who were also part of the discussion which stimulated the resulting paper. We were also interested in your thoughts on comparing interview settings. Again, this was not possible within the parameters of our discussions and our paper, but we would be interested in any papers which specifically focus on such issues in their work.

      As is often the case, we were limited by the house style of the journal we submitted our manuscript to, not least the word limits. That being said, we did attempt to summarise the key issues to at least give the reader a flavour of each aspect we have discussed in the paper, and to address all issues raised by the peer reviewers, including practical and safety issues. We did offer quite a detailed account of such issues, within the boundaries of the journal’s requirements; that being said, we also did not feel that reviewers and readers would be interested in the minutiae of how to develop a protocol per se. However, we would be more than happy to extend this discussion should there be any specific queries concerning health and safety issues and practice amongst our researchers.

      We look forward to extending the discussion and debate; we welcome and encourage our colleagues undertaking qualitative palliative and end of life care, and beyond, to discuss and reflect upon their own experiences of undertaking qualitative research interviews.

      Regards,

      Dr Stephanie Sivell, Dr Emily J Harrop and Dr Annmarie Nelson on behalf of the Marie Curie Palliative Care Research Centre, Division of Population Medicine, Cardiff University.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 03, Cicely Saunders Institute Journal Club commented:

      The Cicely Saunders Institute journal club discussed this paper on 03/08/2016

      Considerations around conducting qualitative interviews with palliative and end of life care patients, particularly in the home setting, is an important and underrepresented area of the literature.

      We enjoyed discussing this paper and felt it raised some important questions. We were particularly interested in the methods, and felt that more information would have been valuable. For example: how were participants for the consensus meeting chosen; what topics were selected for discussion, and how were these selected/shortlisted; what were the levels of agreement and consensus, and the areas of discordance during the discussion; what were the ethical approval and consent processes prior to the meeting; and at what point was consensus reached (during or after the meeting)? A section detailing limitations of this research would be helpful when interpreting the results.

      We had an interesting discussion around potential conflicts of interest that might arise when all participants of a consensus group are authors of a subsequent publication, and we are not aware of any evidence around this. During our journal club we discussed many of the benefits of conducting interviews in participants’ home settings, which might have been interesting to emphasise more in the paper. Practical issues and the importance of safety are areas that require consideration when planning and conducting research in home settings, and we would have been interested to see more discussion of this. We thought it might also be helpful to compare interview settings (hospital, hospice, home, etc.) and highlight key differences.

      Commentary by H Benalia & N Lovell


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 05, Marko Premzl commented:

      The third-party gene data set of eutherian tumor necrosis factor ligand genes LN874312-LN874522 was deposited in the European Nucleotide Archive under the research project "Comparative genomic analysis of eutherian genes" (https://www.ebi.ac.uk/ena/data/view/LN874312-LN874522). The 211 complete coding sequences were curated using tests of reliability of eutherian public genomic sequences included in the eutherian comparative genomic analysis protocol, including gene annotations, phylogenetic analysis and protein molecular evolution analysis (RRID:SCR_014401).

      Project leader: Marko Premzl PhD, ANU Alumni, 4 Kninski trg Sq., Zagreb, Croatia

      E-mail address: Marko.Premzl@alumni.anu.edu.au

      Internet: https://www.ncbi.nlm.nih.gov/myncbi/mpremzl/cv/130205/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 07, Angelo Calderone commented:

      The paper by Tamura and colleagues has provided the interesting observation that a subpopulation of cardiac resident neural crest-derived stem cells differentiated to an adrenergic phenotype following heterotopic transplantation of the mouse heart. Previous work from this lab (JBC 2005) reported that cardiac resident neural crest-derived stem cells grew as spheres in the presence of EGF and FGF2, expressed nestin and musashi-1, and differentiated to a neuronal phenotype in vitro. Work from our lab has likewise revealed the presence of nestin-expressing neural crest-derived stem cells in the rodent heart (El-Helou et al, 2008, J Molecular Cellular Cardiology). Furthermore, nestin(+)-cells isolated from the scar of the infarcted heart grew as spheres when cultured in EGF/FGF2 and differentiated to a neuronal phenotype in vitro (El-Helou et al, 2008, J Molecular Cellular Cardiology). In addition, work from our lab has reported that a subpopulation of nestin(+)-cells in the infarct region expressed neurofilament-M and contributed in part to the reported innervation of the scar (Beguin et al, 2011, J Cellular Physiology; Chabot et al, 2013, Cardiovascular Diabetology). Moreover, we further demonstrated that following isogenic heterotopic transplantation of the rat heart, the superimposition of an ischemic injury to the transplanted heart led to de novo cardiac innervation mediated by the expression of neurofilament-M by nestin(+)-cells (Beguin et al, 2011, J Cellular Physiology). However, we did not further assess whether these nestin(+)-cells that expressed neurofilament-M in the ischemically damaged transplanted heart acquired an adrenergic phenotype. Thus, in the true spirit of scientific research, the paper by Tamura and colleagues should have cited previous work from our lab that recapitulated in part the findings provided in their recent paper published in Cardiovascular Research.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 03, Peter Gøtzsche commented:

      Antidepressants are addictive and increase the risk of relapse

      In their systematic review, Amick et al. write that reasons for preferring psychotherapy over drugs for depression include concerns about side effects and ‘perceived “addictiveness”’ of drugs (1). This addictiveness is not hypothetical; it is very real (2,3) and affects about half of those treated with antidepressants (3,4).

      The authors do not discuss what might be their most important finding, that psychotherapy leads to fewer relapses than drug therapy, which was expected, as it is related to the drugs’ addictiveness. It is tricky that withdrawal symptoms and disease symptoms can be the same, but there are clear differences. Withdrawal-induced, depression-like symptoms usually come rather quickly and disappear within hours when the full dose is resumed, whereas it takes weeks before the patients get better if they have a true depression (3).

      A large trial of patients with remitted depression illustrates this (5). After the patients had become well, they continued with open maintenance drug therapy for 4-24 months. They then suddenly had their therapy changed to a double-blind placebo for 5-8 days at a time that was unknown to the patients and clinicians. Forty of 122 patients (33 %) on sertraline or paroxetine had an increase in their Hamilton depression score of at least eight, which is a clinically relevant increase. This study illustrates why most doctors get it wrong when they think the disease has come back upon lowering or stopping the dose. In a group of 122 patients whose depression has been in remission for 4-24 months, likely only one or none would get a true relapse of the depression during 5-8 random days.

      Antidepressants trap people into what often becomes life-long treatment. Of 260,322 persons in Finland who were on such a drug in 2008, 45 % were on an antidepressant drug five years later (3).

      1 Amick HR, Gartlehner G, Gaynes BN, et al. Comparative benefits and harms of second generation antidepressants and cognitive behavioral therapies in initial treatment of major depressive disorder: systematic review and meta-analysis. BMJ 2015;351:h6019.

      2 Nielsen M, Hansen EH, Gøtzsche PC. What is the difference between dependence and withdrawal reactions? A comparison of benzodiazepines and selective serotonin re-uptake inhibitors. Addiction 2012;107:900–8.

      3 Gøtzsche PC. Deadly psychiatry and organised denial. Copenhagen: People’s Press; 2015.

      4 Kessing L, Hansen HV, Demyttenaere K, et al. Depressive and bipolar disorders: patients’ attitudes and beliefs towards depression and antidepressants. Psychological Medicine 2005;35:1205-13.

      5 Rosenbaum JF, Fava M, Hoog SL, et al. Selective serotonin reuptake inhibitor discontinuation syndrome: a randomised clinical trial. Biol Psychiatry 1998;44:77-87.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 09, Karsten Suhre commented:

      Very interesting paper - The TXNIP cg19693031 association with Type 2 Diabetes clearly replicates everywhere. The authors write "While finalizing this manuscript, three studies were published and observed an association of DNA methylation at cg19693031 in TXNIP, and type 2 diabetes (36–38)." Actually, there is a fourth paper that came out in Jan 2016: "Epigenetic associations of type 2 diabetes and BMI in an Arab population" by Al Muftah et al. (Clin Epigenetics. 2016 Jan 28;8:13. doi: 10.1186/s13148-016-0177-6, PMID: 26823690). You may be interested to take a look at Table 4 of the Al Muftah paper - it puts the recent Petersen et al. metabolomics findings (ref 35 in the present Soriano-Tárraga et al. paper) into the context of the TXNIP association: The associated metabolic phenotype (metabotype) of the TXNIP CpG association is characteristic of a diabetes state. What is more, the obesity (BMI) associated CpG loci cg06500161 (ABCG1) and cg00574958 (CPT1A) also display the same diabetes metabotype. This suggests that the methylation of all of these sites could be a regulatory response to obesity induced deregulated metabolism. It would be interesting to test whether the TXNIP CpG association holds in a cohort of non-obese diabetics.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 22, Anthony Jorm commented:

      The Royal Australian and New Zealand College of Psychiatrists (RANZCP) has recently published clinical practice guidelines on eating disorders Hay P, 2014, mood disorders Malhi GS, 2015 and schizophrenia Galletly C, 2016. These guidelines contain a mixture of evidence-based recommendations where there were relevant intervention studies, and consensus-based recommendations where relevant studies did not exist. The consensus-based recommendations comprised a substantial proportion of the guidelines for mood disorders (59%) and schizophrenia (46%), but less so for eating disorders (10%), indicating that expert consensus is an important source of guidance on best practice in psychiatry.

      Given the substantial contribution of expert consensus to these guidelines, it is important that the methods for establishing this consensus are adequate. The Australian National Health and Medical Research Council (NHMRC) has published requirements for development of clinical practice guidelines, but these do not give much guidance on how this should be done, simply mandating that “The method used to arrive at consensus-based recommendations or points (e.g. voting or formal methods, such as Delphi) is documented”. (National Health and Medical Research Council. Procedures and requirements for meeting the 2011 NHMRC standard for clinical practice guidelines. Melbourne: National Health and Medical Research Council; 2011.)

      Another potential source of criteria for evaluating the quality of methods for developing consensus-based recommendations comes from research on ‘wisdom of crowds’ Lorenz J, 2011 Kattan MW, 2016 Baumeister RF, 2016. Based on such research, Surowiecki has proposed four conditions necessary for a group to make good decisions (Surowiecki J. The wisdom of crowds: why the many are smarter than the few. London: Abacus; 2004.):

      1. Diversity of expertise. A heterogeneous group of experts will produce better quality decisions than a homogeneous one. For guidelines developers, this may mean that the experts should come from a range of relevant disciplines, including consumer experts where appropriate.

      2. Independence. The experts must be able to make their decisions independently, so that they are not influenced by others. For guidelines developers, this means that voting on consensus-based recommendations is carried out privately so that strong individuals cannot dominate the group.

      3. Decentralization. Expertise is held by autonomous individuals working in a decentralized way. For guidelines developers, it is important to specify what sources of information the experts had available to them.

      4. Aggregation. There is a mechanism for coordinating and aggregating the group’s expertise. For guideline developers, this could involve an independent person who runs the voting and gives feedback to the group.

      If we take these four conditions as appropriate for judging the quality of methods for developing consensus-based recommendations, how well do the RANZCP guidelines meet them?

      An indication of diversity of expertise is the disciplinary composition of the guideline working groups. There was limited diversity in all working groups: non-psychiatrists comprised 3 of the 8 members for eating disorders, 4 of the 12 members for mood disorders and 2 of the 10 members for schizophrenia. There were no consumer or carer members of any of the working groups. While the mood disorder and schizophrenia guidelines included consensus-based recommendations for indigenous peoples, it is not stated whether any of the working groups included indigenous members.

      The mood disorders and schizophrenia working groups did not appear to involve independent decision making. Both groups had discussions until consensus was reached. The eating disorders guidelines did not give relevant information about whether there was independence.

      All three guidelines state that consensus-based recommendations were based on collective clinical and research knowledge and experience. The eating disorder guidelines additionally state that level IV articles were considered where higher-level evidence was lacking and this informed the consensus-based recommendations.

      After drafting, all guidelines had input from a broader group of expert advisers with a wide diversity of expertise. However, it is not clear whether these advisers had the potential to persuade working group members to change consensus-based recommendations.

      Where the guidelines included consensus-based recommendations relevant to indigenous peoples, it is not clear what sources of cultural expertise these were based on.

      None of the guidelines state how judgements were aggregated to determine consensus. The mood disorders guidelines state that agreement on consensus-based recommendations was “in most cases unanimous but allowed one committee member to abstain”. The other guidelines did not define what constituted consensus.

      In conclusion, there are major weaknesses in the procedures used to determine consensus-based recommendations for all three guidelines. These are a lack of independence in decision making by experts, a lack of a formal mechanism for aggregating judgments, and a lack of diversity of expertise, particularly in areas where consumers and carers could contribute and where cultural expertise is relevant.

      While NHMRC gives quite detailed guidance on how to develop evidence-based recommendations, there is little guidance on best practice for developing consensus-based recommendations. While many of these weaknesses would be overcome by using formal consensus methods such as the Delphi process, there is a need for NHMRC and similar agencies to produce more rigorous quality standards for development of consensus-based recommendations.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 21, Christopher Southan commented:

      Please could the authors open the Dryad link (still closed 21 Dec) to what I assume includes specification of the 50 structures - now open 28 Dec, thanks. See molecular mapping efforts at http://cdsouthan.blogspot.se/2015/12/resolving-tb-actives-shouldnt-be-this.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 05, Lydia Maniatis commented:

      Some additional examples of the casual approach to theory and method that's exhibited here and seems to have become the norm in the vision literature:

      1. A tolerance for extremely ad hoc suggestions: "Meese (personal communication) has suggested an alternative explanation for why β declines with signal area that does not preclude the possibility that the maximum, i.e., whole-area signal condition is detected by a global linear integrator. He suggests that the visual system might employ linear filters matched in shape to the various signal-shape conditions. Thus for the single pie-wedge and windmill conditions these would be pie-wedge and windmill-shaped filters matched to the signal area, culminating in full-circle global linear integrators for the 100% signal area conditions."

      There is no rationale for the suggestion that there are special mechanisms for "wedge and windmill -shaped" areas; they just happen to be the shapes of the stimuli the authors chose (also without a rationale). If they had used square or heart-shaped stimuli, the existence of the corresponding "filters" would apparently have been conceivable.

      2. In their introduction the authors indicate they are studying basic visual perception. However, their definition of "external noise" is completely contingent, not on the spontaneous appearance of stimuli, but on what the observers are instructed to attempt to locate in the spontaneously-arising percept. They are instructed to detect a particular "texture" in a surface that contains more than one such, and in which the different textures tend to blend perceptually. The non-target texture is labelled "external noise" for the purpose of creating the noise terms demanded by the "model." If the task had been to estimate the presence or proportion of vertical bars in the entire stimulus, the noise term would presumably have been all the non-vertical bars. The definition of the term is completely arbitrary, designated without consulting the visual system, so to speak, as to functionally relevant concepts. In a recent article, Solomon, May and Tyler (2016) defined "external noise" in terms of the standard deviations from which they draw their stimuli; this standard deviation is given two different values simply because the model they choose to fit calls for two "external noise" terms.

      3. The title itself (as well as the text) indicates that the vague conclusions are to be applied to the particular stimuli used, stimuli contained in a round envelope (with wedge or windmill-shaped targets areas).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 27, Lydia Maniatis commented:

      A little way into their introduction, the authors of this article make the following clear and unequivocal assertion:

      “These findings underscore the idea that the encoding of objects and shapes is accomplished by a hierarchical feedforward process along the ventral pathway (Cadieu et al., 2007; Serre, Kouh, Cadieu, Knoblich, & Kreiman, 2005; Serre, Oliva, & Poggio, 2007; Van Essen, Anderson, & Felleman, 1992). The question that arises is how local information detected in the early visual areas is integrated to encode more complex stimuli at subsequent stages.”

      As all vision scientists are aware, the processes involved at every level of vision are both hierarchical and parallel, feedforward and feedback. These processes do not consist of summing “local information” to produce more complex percepts; a small local stimulus change can reconfigure the entire percept, even if the remaining “local information” is unchanged. This has been solidly established, on a perceptual and neural level, on the basis of experiment and logical argument, for many decades. (The authors' use of the term “complex stimuli,” rather than “complex percepts” is also misjudged, as all stimuli are simple in the sense that they stimulate individual, retinal photoreceptors in the same, simple way. Complexity arises as a result of processing - it is not a feature of the retinal (i.e. proximal) stimulus).

      The inaccurate description of the visual process aligns with the authors' attempt to frame the problem of vision as a “summation” problem (using assumptions of signal detection theory), which, again, it decidedly is not. If the theoretical relevance of this study hinges on this inaccurate description, then it has no relevance. Even on its own terms, methodological problems render it without merit.

      In order to apply their paradigm, the authors have constructed an unnatural task, highly challenging because of unnatural conditions - very brief exposures resulting in high levels of uncertainty by design, resulting in many errors, and employing unnaturally ambiguous stimuli. The task demands cut across detection, form perception, attention, and cognition (at the limit, where the subjects are instructed to guess, it is purely cognitive). (Such procedures may be common and old (“popular” according to the authors), but this on its own doesn't lend them theoretical merit).

      On this basis, the investigators generate a dataset reflecting declining performance in the ever more difficult task. The prediction of their particular model seems to be generic: In terms of the type of models the authors are comparing, the probability of success appears to be 50/50; either a particular exponent (“beta”) in their psychometric function will decline, or it will be flat. (In a personal communication, one of the authors notes that no alternative model would predict a rising beta). The fitting is highly motivated and the criteria for success permissive. Half of the conditions produced non-significant results. Muscular and theory-neutral attempts to fit the data couldn't discover a value of “Q” to fit the model, so the authors “have chosen different values for each experiment,” ranging from 75 to 1,500. The data of one of five subjects were “extreme.” In addition, the results were “approximately half as strong as some previous reports,” but “It ... remains somewhat of a mystery as to why the threshold versus signal area slopes found here are shallower than in previous studies, and why there is no difference in our study between the thresholds for Glass patterns and Gabor textures.” In other words, it is not known whether such results are replicable, and what mysterious forces are responsible for this lack of replicability.

      It is not clear (to me) how a rough fit to a particular dataset, generated from an unnaturally challenging task implicating multiple, complex, methodologically/theoretically undifferentiated visual processes, of a model that makes such general, low-risk predictions (such as can be virtually assured by a-theoretical methodological choices) can elucidate questions of physiology or principle of the visual, or any, system.

      Finally, although the authors state as their goal to decide whether their model “could be rejected as a model of signal integration in Glass pattern and Glass-pattern-like textures” (does this mean they think there are special mechanisms for such patterns?), they do not claim to reject the only alternative that they compare (“linear summation”), only that “probability and not linear summation is the most likely basis for the detection of circular, orientation-defined textures.”

      It is not clear what the “most likely” term means here. Most likely that their hypothesis about the visual system is true (what is the hypothesis)? Most likely to have fit their data better than the alternative? (If we take their analysis at face value, then this is 100% true). Is there a critical experiment that could allow us to reject one or the other? If no alternatives can be rejected, then what is the point of such exercises? If some can be, what would be the theoretical implications? Is there a value in simply knowing that a particular method can produce datasets that can be fit (more or less) to a particular algorithm?

      The "summation" approach seen here is typical of an active and productive (in a manner of speaking) subdiscipline (e.g. Kingdom, F. A. A., Baldwin, A. S., & Schmidtmann, G. (2015). Modeling probability and additive summation for detection across multiple mecha- nisms under the assumptions of signal detection theory. Journal of Vision, 15(5):1, 1–16; Meese, T. S., & Summers, R. J. (2012). Theory and data for area summation of contrast with and without uncertainty: Evidence for a noisy energy model. Journal of Vision, 12(11):9, 1–28; Tyler, C. W., & Chen, C.-C. (2000). Signal detection theory in the 2AFC paradigm: Attention, channel uncertainty and probability summation. Vision Research, 40, 3121–3144.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 13, Judith Finlay commented:

      This article is not free even with an ASH account set up. It appears membership/subscription is required.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 30, Christopher Southan commented:

      Reference 31 for the 2011 version of IUPHAR-DB is superseded by the 2014 and 2016 publications for GtoPdb (http://www.ncbi.nlm.nih.gov/pubmed/26464438). This also provides the substrate for "The Concise Guide to PHARMACOLOGY 2015/16: Nuclear hormone receptors" (http://www.ncbi.nlm.nih.gov/pubmed/26650443)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 03, Seth Bordenstein commented:

      Response / Preprint at bioRxiv entitled Getting the hologenome concept right: An eco-evolutionary framework for hosts and their microbiomes

      http://biorxiv.org/content/early/2016/02/02/038596

      Given the recently appreciated complexity of symbioses among hosts and their microbes, significant rethinking in biology is occurring today. Scientists and philosophers are asking questions at new biological levels of hierarchical organization – What is a holobiont and hologenome? When should this vocabulary and associated concepts apply? Are these points of view a null hypothesis for host-microbe systems or limited to a certain spectrum of symbiotic interactions such as host-microbial coevolution? Legitimate questions, advancements and revisions are warranted at this nascent stage of the field. However, a productive and meaningful discourse can only commence when skeptics and proponents alike use the same definitions and constructs. For instance, critiquing the hologenome concept is not synonymous with critiquing coevolution, and arguing that an entity is not necessarily the primary unit of selection is not synonymous with arguing that it is not a unit of selection in general. Here, we succinctly deconstruct and clarify these recent misconceptions. Holobionts (hosts and their microbes) and hologenomes (all genomes of the holobiont) are multipartite entities that result from ecological, evolutionary and genetic processes. They are not restricted to one special process but constitute a wider vocabulary and framework for host biology in light of the microbiome. We invite the community to consider these new perspectives in biology.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Dec 10, Seth Bordenstein commented:

      Response / Blog Post on "Getting the Hologenome Concept Right". This critique will serve as a framework for a formal response by several of the authors mentioned in the paper.

      http://symbionticism.blogspot.com/2015/12/getting-hologenome-concept-right.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 09, Lydia Maniatis commented:

      The first sentence of the abstract reads: "Auditory noise is a sound, a random variation in air pressure." A sound is not random variation in air pressure. It's probably more appropriate to define noise in terms of the abilities of the system that is analysing the stimulation, rather than as a characteristic of the stimulus.

      The same may be said when we turn to vision, where we're told that: "“Noise” in perception experiments generally [generally?] means unpredictable variation in some aspect of the stimulus." Can we really make a distinction between "noise" and "signal" on the basis of predictability? When I turn on the TV and I don't know what I'll see, does that make what I see "noise"? There really needs to be a clarification.

      Having read more of this literature since my last comment, it's become obvious that basic terms and concepts have been fudged for too long, producing a literature without substance (but full of complexity and contradiction).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Dec 10, Lydia Maniatis commented:

      This seems rather an odd choice of theme for a "Research Topic" in that it inspired a collection of papers with no conceptual coherence, out of many others that could have been selected on the basis of a common technique. It is somewhat like saying we'll put together an issue of physics papers that all used spectrometers. It's not really a "topic" unless you're specifically focussing on, e.g., pros and cons of the method.

      The editors' summary expresses the situation well: "In sum, this Research Topic issue shows several ways to use diverse kinds of noise to probe visual processing." As discussed in their exposition, noise has historically been used in multifarious ways for multifarious purposes.

      I think the emphasis on technique rather than on theoretical problems is symptomatic of the conceptual impoverishment of the field. The use of the term "probe" has become common in this field, at least, indicating that a study is an exercise in a-theoretical data collection, rather than a methodic attempt to answer a question.

      I would also add that noise as a technique to probe normal perception in normal conditions should be employed with caution, since it does not characterise normal scenes, but rather places unusual stress on the system which may respond in unusual ways.

      Predictably, the results of the articles described seem undigested and of unclear value: E.g. "Hall et al. (2014) find that adding white noise increased the center spatial frequency of the letter-identification channel for large but not small letters;" (so...? how large is large...?) "Gold (2014) use pixel noise to investigate the visual information used by the observer during a size-contrast illusion. By correlating the observers' classification decision with each pixel of the noise stimuli, they find that the spatial region used to estimate the size of the target is influenced by the size of surrounding irrelevant elements" (or your theoretical definition of "irrelevant" needs adjustment).

      If the goal of this issue was to show that you can make noise and get published, then it's a big success.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 15, J Stone commented:

      In an earlier disclosure from 2015 which I cannot find in this article it is stated:

      "HJL serves on the Merck Vaccines Strategic Advisory Board and is a consultant to GSK."

      http://www.thelancet.com/journals/langlo/article/PIIS2214-109X(15)70139-7/fulltext

      Although not mentioned in this article Merck and GSK are the manufacturers of the two brands of HPV vaccine, Gardasil and Cervarix, so this may be considered a serious omission.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 23, NephJC - Nephrology Journal Club commented:

      This trial was discussed on Dec 23rd, 2015 in the open online nephrology journal club, #NephJC, on twitter. Introductory comments are available at the NephJC website. The discussion was quite detailed, with about 24 participants, including nephrologists, fellows, residents and patients. A transcript of the tweetchat is available from the NephJC website. The highlights of the tweetchat were:

      • The authors should be commended for designing and conducting, and the German Federal Ministry of Education and Research for funding this trial.

      • The choice of population was very astute, in whom there is genuine clinical equipoise on the value of added immunosuppression to optimal conservative management. Given the results, there was some discussion about whether, in the future, use of biomarkers or other risk scoring systems would allow selection of patients at higher risk of the outcomes. This was speculative in nature, and likely impractical considering that only 162 could be randomized for the most common primary glomerular disease.

      • There were some minor quibbles (open label study design, use of 0.75 gram/day rather than 1 gram/day threshold for proteinuria as an inclusion, use of dual renin-angiotensin system blockade in some patients) that were brought up, but no major weaknesses. This trial does definitively establish the lack of efficacy of immunosuppression in preventing kidney failure in this population as compared to optimal medical management alone.

      Interested individuals can track and join in the conversation by following @NephJC, #NephJC, signing up for the mailing list, or visiting the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 05, Siddharudha Shivalli commented:

      Siddharudha Shivalli (2015-12-04 11:25) Yenepoya Medical College, Yenepoya University, Mangaluru, Karnataka, India

      I read the article by Salomão CA et al. with great interest. The authors' efforts are commendable. Based on a large-scale hospital-based survey, the authors demonstrate poor adherence to the new guidelines for malaria treatment among health care workers in Mozambique and a higher rate of ACT prescription to malaria-negative patients.

      In the methods section, the authors state that only districts accessible by road were selected. The authors should have mentioned the approximate proportion of districts in the 11 provinces of Mozambique that are not accessible by road. If it amounts to a substantial proportion, then the study findings may not be applicable to these districts. In addition, the method of selection of each health centre from each stratum is not clear (i.e. random or purposive).

      It appears that both healthcare workers and clinicians can prescribe anti-malarial drugs in the study setting. If so, overuse of ACT in malaria-negative patients according to health cadre (clinician vs. healthcare worker) would have been more interesting and informative. The authors should have briefly explained the existing healthcare delivery system and strategies in the study setting with reference to malaria.

      In Table 1 and throughout the article, the authors have reported the p value as ‘0.000’. Statistical software, by default setting, displays a p value as zero if it extends beyond 3 decimal points (i.e. p=0.0000001 would be displayed as p=0.000). Practically, the value of p cannot be zero and hence, I would suggest reporting it as p<0.0001.
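      The display behaviour described above can be shown with a minimal sketch (the formatting calls here are generic illustrations, not the output of any particular statistical package):

```python
# Fixed-decimal display truncates very small p values to an apparent zero,
# even though p can never actually equal zero.
p = 0.0000001

rounded = f"{p:.3f}"   # 3-decimal display, as in default software output
print(rounded)          # prints "0.000" -- looks like an impossible zero

# The suggested reporting convention instead:
report = f"p = {p:.3f}" if p >= 0.0001 else "p < 0.0001"
print(report)           # prints "p < 0.0001"
```

The second form preserves the information that p is small but nonzero.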

      Nonetheless, I must congratulate the authors for exploring an important public health problem in the study area.

      Competing interests: The author declares that there is no conflict of interest about this publication.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 17, Misha Koksharov commented:

      This is an interesting work on the use of Gaussia luciferase as a recombinant label for immunoassays. Hopefully it will encourage further use and development of new GLuc-based enzymatic tools.

      In that case it will be interesting to try the Gaussia luciferase mutant GlucM23 (Park SY, 2017; Lindberg et al, Chem. Sci., 2013, 4, 4395-4400; Lindberg E, 2013) which appears to be the brightest and the smallest luciferase of all.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 24, Judy Slome Cohain commented:

      I was just sent a paper to peer review which cited this paper to prove that higher cesarean rates (over 20%) are not associated with higher maternal mortality. This paper does not show that. This paper definitely shows a direct relationship between maternal mortality and cesarean rates. It also appears that maternal mortality increases with low CS rates because the poorest countries happen to have both low cesarean rates and high rates of maternal mortality for all kinds of reasons unrelated to cesarean rates. All research shows the death rate from major abdominal surgery due to hemorrhage or anesthesia is still a steady 1 in 10,000. Do more cesareans, more women die.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 15, Ehtibar Dzhafarov commented:

      In response to Bernard Baars: one should discuss well-defined concepts rather than mere words. The opening lines of our paper will reveal that we use the term "contextuality" in a rigorously defined sense derived from its use in quantum mechanics. In psychology, linguistics, and other areas the word "contextuality," when a definite meaning thereof can be extracted at all, usually means what we call inconsistent connectedness. The latter is indeed "absolutely routine." In the last sentence of the abstract it is referred to as "ubiquitous dependence of response distributions on the elements of contexts other than the ones to which the response is presumably or normatively directed."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Dec 10, Bernard Baars commented:

      I'm sorry --- I'm sure the physics on this is correct. But in psychobiology and social psychology contextuality is absolutely routine. It is usually called context-dependence, or ambiguity, or even sensory uncertainty, and is covered in hundreds of PubMed abstracts.

      Here is one example from the Journal of Vision, called "What is White?"

      That problem was partly solved by Isaac Newton, but not entirely: J Vis. 2015 Dec 1;15(16):5. doi: 10.1167/15.16.5. What is white? Bosten JM, Beer RD, MacLeod DI.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 24, Randi Pechacek commented:

      Ashley Ross, first author of this paper, wrote a blog post about this paper on microBEnet describing some of the background.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2018 Feb 02, Morten Oksvold commented:

      This article has been found to be affected by research misconduct and after an investigation at Karolinska Institutet it was concluded that it should be retracted.

      This study should therefore not be cited.

      Link from Karolinska Institutet: https://ki.se/en/news/researchers-found-guilty-of-scientific-misconduct


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 06, Ardeshir Rastinehad commented:

      One of the aims of the paper was to determine the clinical impact of the new PI-RADS 2 on the detection of clinically significant prostate cancer, and to reinforce the practice of collecting sequence-specific data so as to be able to analyze and adapt to any future changes in PI-RADS.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 28, Randi Pechacek commented:

      Katherine Dahlhausen wrote a brief blog about the implications of this paper on microBEnet.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 09, Gwinyai Masukume commented:

      More than a year prior to this article, similar observations were made. This illustrates how national surveys, surveillance and other information sources can provide an early robust picture that can help inform public health policy and practice.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 28, John Tucker commented:

As is so often the case in Cochrane Reviews, the authors find the data supporting the efficacy of the medical intervention in question wanting. While much of the criticism found herein is valid, one is struck by the authors' proposal that the only way to accurately assess the utility of methylphenidate in ADHD is a "nocebo trial", in which the test article would be compared to a control substance having no efficacy but an identical side effect profile.

      What would such a substance be? How would one determine that its side effect profile was identical? Or that it had no intrinsic efficacy in ADHD of its own? Perhaps by running one or more RCTs? But since the nocebo is designed to have side effects, wouldn't these trials be unblinded too? Given this unblinding, could a Cochrane Review confidently reach the conclusion that the nocebo had no intrinsic efficacy? What would be the ethical issues involved in performing safety trials of an agent designed to have side effects and no benefits?

While the contributions of the Evidence-Based Medicine movement are incontrovertible, at some point its advocates need to take a deep breath and ask themselves if they have followed their logic into a world that exists only in theory. We need to apply the best standards that can reasonably be applied in evaluating any medical intervention. But holding interventions to standards of evidence that for all practical purposes are unachievable risks drifting into therapeutic nihilism.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 29, Sujai Kumar commented:

      The peer-reviewed version of the biorxiv paper showing that the extensive HGT was an artefact of bacterial contamination is now available at

      The supplementary info section of that paper systematically refutes each of the lines of evidence for HGT presented in this paper (PCR, PacBio sequencing, coverage, and phylogenetic trees).

      Since then, three more peer-reviewed papers have independently shown extensive contamination rather than extensive HGT:

      See also #tardigate and Mark Blaxter's blog post Eight things I learned from tardigate


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 12, Sujai Kumar commented:

      Disclaimer: I'm an author on the biorxiv preprint below that conclusively (in our opinion) demonstrates that the "extensive HGT" is almost entirely bacterial contamination that was not identified by Boothby et al.

      We'd be happy to answer any questions about our rebuttal here or on http://twitter.com/sujaik #tardigate


      The genome of the tardigrade Hypsibius dujardini

      Abstract

      Background: Tardigrades are meiofaunal ecdysozoans that may be key to understanding the origins of Arthropoda. Many species of Tardigrada can survive extreme conditions through adoption of a cryptobiotic state. A recent high profile paper suggested that the genome of a model tardigrade, Hypsibius dujardini, has been shaped by unprecedented levels of horizontal gene transfer (HGT) encompassing 17% of protein coding genes, and speculated that this was likely formative in the evolution of stress resistance. We tested these findings using an independently sequenced and assembled genome of H. dujardini, derived from the same original culture isolate.

      Results: Whole-organism sampling of meiofaunal species will perforce include gut and surface microbiotal contamination, and our raw data contained bacterial and algal sequences. Careful filtering generated a cleaned H. dujardini genome assembly, validated and annotated with GSSs, ESTs and RNA-Seq data, with superior assembly metrics compared to the published, HGT-rich assembly. A small amount of additional microbial contamination likely remains in our 135 Mb assembly. Our assembly length fits well with multiple empirical measurements of H. dujardini genome size, and is 120 Mb shorter than the HGT-rich version. Among 23,021 protein coding gene predictions we found 216 genes (0.9%) with similarity to prokaryotes, 196 of which were expressed, suggestive of HGT. We also identified ~400 genes (<2%) that could be HGT from other non-metazoan eukaryotes. Cross-comparison of the assemblies, using raw read and RNA-Seq data, confirmed that the overwhelming majority of the putative HGT candidates in the previous genome were predicted from scaffolds at very low coverage and were not transcribed. Crucially much of the natural contamination in both projects was non-overlapping, confirming it as foreign to the shared target animal genome.

      Conclusions: We find no support for massive horizontal gene transfer into the genome of H. dujardini. Many of the bacterial sequences in the previously published genome were not present in our raw reads. In construction of our assembly we removed most, but still not all, contamination with approaches derived from metagenomics, which we show are very appropriate for meiofaunal species. We conclude that HGT into H. dujardini accounts for 1-2% of genes and that the proposal that 17% of tardigrade genes originate from HGT events is an artefact of undetected contamination.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 26, Deepa Bhartiya commented:

This group’s inability to detect VSELs in mouse bone marrow is indeed surprising. They could not detect VSELs in bone marrow (0.002% of events in the size range of 2-4 microns, which appeared to be debris). Bone-derived VSELs were greater than 6 microns in size and did not express pluripotent markers. A careful look at their protocols shows that all processing was done at 1500 rpm. We know from our experience that VSELs pellet down at 3000 rpm, and this could be one of the reasons for their negative results. We have detected 0.022±0.002% of cells as VSELs in mouse BM, which express pluripotent markers by immuno-localization and confocal microscopy as well as by qRT-PCR. We have earlier reported that when cord blood is subjected to density gradient centrifugation, VSELs are invariably discarded along with the red blood cells. Apparently this group needs to revise their protocols to isolate VSELs. We have discussed this point in detail in our recent paper as well (http://www.ncbi.nlm.nih.gov/pubmed/25976079). Furthermore, the presence of VSELs in small numbers should not be an issue of concern for regenerative medicine. In contrast to pluripotent ES/iPS cells, which exist only in a Petri dish, VSELs are endogenous pluripotent stem cells present in adult tissues. We need to understand and learn how to manipulate them in the body. VSELs will self-renew and give rise to committed cells, which in turn are expected to divide rapidly and undergo clonal expansion into large numbers of tissue-committed progenitors.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 29, James C Coyne commented:

This study makes some dubious claims that should be subject to independent scrutiny and re-evaluation. It was published in an APA journal, which requires sharing of data upon request. However, as I detail and document below, the author responded to a request for just a few variables with an invoice for $450 and a demand that an independent researcher sign a contract not to depart from some arbitrary limits on reanalysis. This sort of behavior threatens routine data sharing. It is deplorable that the American Psychological Association does not support its members in exercising their right to the data. See the blog post below for documentation.

      https://jcoynester.wordpress.com/2016/11/29/a-quixotic-quest-to-obtain-a-dataset-on-media-violence-with-an-unexpected-price-tag/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 13, Randi Pechacek commented:

      Christopher Mason, co-author of this paper, wrote a blog post on microBEnet providing extensive background for the paper.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Dec 16, Laura Williams commented:

      Our online journal club discussed this paper on February 24, 2015. https://www.youtube.com/watch?v=TDWQpRBtbAU


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 14, Víctor Bustamante commented:

Figure 1 shows the effect of the presence or absence of glucose on the activity of the Csr system and, as a consequence, the expected effect on some phenotypes controlled by Csr. Based on these findings, the presence of glucose would favor motility through the glucose-specific PTS and the Csr system, which is consistent with many reports indicating that glucose is a major attractant for motility (Deepika et al., 2015; Kim et al., 2011; Kim and Kim, 2010; Lai et al., 1997). Certainly, there are some reports indicating that the presence of glucose prevents synthesis of flagella. Since the expression of flagella is a process highly regulated by multiple mechanisms acting independently in response to the presence or absence of different environmental cues, discrepancies between these studies could be due to the conditions tested.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Dec 11, Martine Crasnier-Mednansky commented:

      The contention 'motility occurs in the presence of glucose', as depicted in Figure 1, is erroneous. Presence of glucose prevents synthesis of flagella in E. coli (Adler J, 1967), and cAMP is 'absolutely' required for flagella formation (Yokota T, 1970); see also Fahrner KA, 2015.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 09, Marcus Munafò commented:

      Jones and colleagues [1] tested for small study bias (which may be caused by publication bias) in the literature on inhibitory control training for appetitive behavior change, using funnel plots and the Egger test. However, these methods are limited when the studies included in the analysis include only a narrow range of sample sizes, as is the case here. Other methods may be more sensitive to publication bias. We used the significance test, developed by Ioannidis and Trikalinos [2], to test this.

      The excess of significance test uses the best estimate of any true underlying effect size (e.g., the estimate from the largest single study, or the fixed effects meta-analysis) to estimate the statistical power of each individual study in a literature to detect that effect. The sum of these values provides the number of studies that can be expected to be statistically significant in that literature. This can be compared to the observed number of significant studies using a binomial test. Using the pooled effect size estimate under a fixed effects model for alcohol (d = 0.43) and food (d = 0.28), the expected number of significant studies is 4.2 and the observed number is 13 (P < 0.001), indicating an excess of significance in this literature.

      Another way of characterizing this is to describe the average statistical power of studies within this literature, which is 24%. This is consistent with evidence from other fields [3], and suggests that most studies are underpowered. In order to achieve 80% power using a 5% alpha, studies on alcohol would require at least 172 participants and studies on food at least 404 participants, based on the effect size estimates indicated by the meta-analysis.
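The power arithmetic described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: it uses the normal approximation to the power of a two-sided, two-sample t-test (the exact noncentral-t calculation behind the "at least 172" and "at least 404" totals quoted above gives slightly larger numbers), and the function names are mine.

```python
from math import ceil, sqrt
from statistics import NormalDist  # stdlib normal distribution (Python 3.8+)

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test (normal approximation)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * sqrt(n_per_group / 2)
    return 1 - NormalDist().cdf(z_crit - noncentrality)

def n_per_group_for_power(d, power=0.80, alpha=0.05):
    """Smallest per-group n giving the requested power, by the same approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

# Pooled fixed-effects estimates reported in the comment above
for label, d in [("alcohol", 0.43), ("food", 0.28)]:
    total_n = 2 * n_per_group_for_power(d)
    print(label, total_n)  # close to the 172 and 404 totals quoted above
```

In the excess-significance test, the expected number of significant studies is then simply the sum of `power_two_sample(d, n_i)` over the individual studies, which is compared with the observed count of significant results using a binomial test.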

      Marcus R. Munafò, Andrew Jones and Matt Field

      1. Jones, A., et al., Inhibitory control training for appetitive behavior change: a meta-analytic investigation of mechanisms of action and moderators of effectiveness. Appetite, 2016. 97, p. 16-28.

      2. Ioannidis, J.P.A. and Trikalinos, T.A., An exploratory test for an excess of significant findings. Clinical Trials, 2007. 4, p. 245-53.

      3. Button, K.S. et al., Power failure: why sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 2013. 14, p. 365-76.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 07, David Keller commented:

      What chemical forms of aluminum and types of exposure raised dementia risk most?

The U.S. Centers for Disease Control and Prevention (CDC) hosts a Toxic Substances Portal that provides a 117-page full toxicological profile for aluminum, including a discussion of the effects of aluminum ingestion on dementia, at the following URL (accessed on 8/15/2016):

      http://www.atsdr.cdc.gov/toxprofiles/tp22-c3.pdf

      I cannot summarize the discussion better than the following quote taken directly from the CDC website:

      "The contrast between the results of the drinking water studies, many of which found a weak association between living in areas with high aluminum levels in drinking water and Alzheimer’s disease, and the tea and antacid studies may be due to the difference in aluminum bioavailability. The presence of tannins and other organic constitutes found in tea may significantly reduce aluminum absorption; the aluminum hydroxide found in antacids is poorly absorbed. Although the aluminum speciation was not provided in most drinking water studies, in a study by Gauthier et al. (2000), organic monomeric aluminum was the only aluminum species significantly associated with Alzheimer’s disease. The bioavailability of organic aluminum compounds such as aluminum citrate, aluminum lactate, and aluminum maltolate is much greater than for inorganic aluminum compounds (Froment et al. 1989a; Yokel and McNamara 1988). In conclusion, the available data suggest that aluminum is not likely the causative agent in the development of Alzheimer’s disease. However, aluminum may play a role in the disease development by acting as a cofactor in the chain of pathological events resulting in Alzheimer’s disease (Flaten 2001)."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 18, ZHONGMING ZHAO commented:

      My lab recently moved to the University of Texas Health Science Center at Houston. The database is now available at https://bioinfo.uth.edu/TSGene/.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 28, Randi Pechacek commented:

      David Coil, first author of this paper, wrote a blog on microBEnet providing some background and the process involved in describing this new species.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 27, Siim Pauklin commented:

Additional information: The work, and the data and information represented in Figure 1 of this paper, come from the lab of Dr. Svend Petersen-Mahrt (now DNA Editing in Immunity and Epigenetics, IFOM-Fondazione Istituto FIRC di Oncologia Molecolare, Via Adamello 16, 20139 Milano IT, svend.petersen-mahrt@ifom.eu). It was generated with the help of Dr. Heather Coker and belongs jointly to Cancer Research UK. Data are used with permission. The work was supported by CRUK and a CRUK/LRI Group Leader Grant to Dr. Svend Petersen-Mahrt.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 29, David Keller commented:

      Possible explanations why treated subjects rated as responders thought they received sham treatments

      This comment will focus on the actively treated subjects who were rated as "responders" to therapy, yet, when surveyed at the end of the study, answered that they thought they had received sham therapy. A responder to therapy is a subject who reports perceived benefits from therapy, which is inconsistent with the belief that he received sham therapy. A subject who believes he was treated with sham therapy must not have perceived any benefit from therapy, or he would not think it was sham. Since tinnitus is a purely subjective phenomenon, a lack of perceived benefit is inconsistent with response to therapy.

      Each of the "responders" who nevertheless believed they had received sham therapy must fall into one of the following categories:

      1) The subject perceived benefit from therapy, but did not understand that, by definition, sham therapy does not provide benefit.

      2) The subject perceived no benefit from therapy, but replied erroneously to questions in the Tinnitus Functional Index (TFI), causing it to mis-categorize him as a responder to therapy.

      3) The TFI is a faulty metric for the assessment of tinnitus, mis-categorizing subjects as "responders" to therapy even though these subjects perceived no benefit from therapy.

Categories 1 and 2 above represent experimental errors resulting from failure to properly instruct the subjects of the trial. In category 1, the subjects must be taught, and understand, the defining distinction between active and sham therapy before being asked which they think they received. In category 2, the subjects must be instructed how to properly reply to the questions in the TFI. With improved instruction and education of the experimental subjects, the contradictions noted in the results of this trial could be reduced or could disappear in future trials.

      Category 3 represents experimental error resulting from erroneous measurement of the effects of therapy, which would require fundamental redesign of the Tinnitus Functional Index (TFI), the metric employed to assess and report the results of this trial.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 26, David Keller commented:

      If treated subjects thought they received sham therapy, how could their tinnitus scores improve?

      The main outcome of this trial was based on improvements in the Tinnitus Functional Index (TFI). In his reply to my letter, Dr. Folmer indicated that there were subjects who received active treatment in his study, who exhibited significant improvements on their TFI score, and yet these subjects believed that they received sham treatments.

      Experimental subjects should be informed that sham treatments are, by definition and design, not capable of causing any true benefit. Thus, if properly informed, subjects should only guess that they received sham treatment when they truly cannot perceive any benefit from treatment. If the TFI scores of such subjects nevertheless improved significantly, then the reported TFI scores are not measuring tinnitus in a way that is clinically meaningful. That is, the TFI seems to be reporting clinical benefits which are not perceived by the subjects. This calls into question the results of the whole study.

Tinnitus is a subjective problem. When a metric like the TFI measures significant benefits in a subject who thinks he received sham treatment, the metric is measuring something that must not be relevant to the subject's condition. Folmer's paper informs us that the American Academy of Otolaryngology (AAO) recommends against using repetitive transcranial magnetic stimulation (rTMS) to treat tinnitus. Folmer attributes the failure of rTMS to ameliorate tinnitus in past studies in part to older tinnitus rating scales not being sensitive enough to detect the benefits. The TFI seems to address the lack of sensitivity of older scales to small improvements in tinnitus, but was this achieved by making it so sensitive that it detects improvements that are too small for subjects to perceive? If so, then the only purpose it serves is to convert failed studies into ones that can report statistically significant improvements in tinnitus.

      The small, perhaps imperceptible, benefits detected by the TFI may have been artifacts of unblinding and expectation effects, which were ascertained by asking the blinding question once at the end of the study, when these effects confound each other. If the blinding question had been tracked throughout the study, we would have unconfounded data from the beginning of the study, and could see how expectation effects, treatment effects and unblinding evolved throughout the study.

      Park's editorial warned that asking the blinding question before the end of the study could cause patients to drop out, by reminding them that they may have been randomized to sham treatment. However, any patients who forgot that they might be randomized to sham treatment are not in a state of fully informed consent, and they must be reminded. Further, Park's advice to only ask the blinding question at the end of the study seemed to be conjecture based on anecdotal experience, and did not present supportive randomized data.

      The purpose of PubMed Commons is to discuss study results in greater depth, answer open questions, rebut criticisms and debate controversies. A one-line dismissive reply is as unhelpful as no reply at all.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Nov 26, Robert L Folmer commented:

      These issues were already discussed in correspondence published by JAMA Otolaryngology-Head & Neck Surgery 2015;141(11):1031-1032.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2015 Nov 20, David Keller commented:

      Why the blinding of experimental subjects should be tracked during a study, from start to finish

I wish to address the points raised by Folmer and Theodoroff in their reply [1] to my letter to the editor of JAMA Otolaryngology [2] concerning issues they encountered with unblinding of subjects in their trial of repetitive transcranial magnetic stimulation (rTMS) for tinnitus. These points are important to discuss, in order to help future investigators optimize the design of future studies of therapies for tinnitus, which are highly subject to the placebo, nocebo, Pygmalion and other expectation effects.

      First, Folmer and Theodoroff object to my suggestion of asking the experimental subjects after each and every therapy session whether they think they have received active or sham placebo therapy in the trial so far (the "blinding question"). They quote an editorial by Park et al [3] which states that such frequent repetition of the blinding question might increase "non-compliance and dropout" by subjects. Park's statement is made without any supportive data, and appears to be based on pure conjecture, as is his recommendation that subjects be asked the blinding question only at the end of a clinical trial. I offer the following equally plausible conjecture: if you ask a subject the blinding question after each session, it will soon become a familiar part of the experimental routine, and will have no more effect on the subject's behavior than did his informed consent to be randomized to active treatment or placebo in the first place. Moreover, the experimenters will obtain valuable information about the evolution of the subjects' state of mind as the study progresses. We have no such data for the present study, which impairs our ability to interpret the subjects' answers to the blinding question, when it is asked only once at the end of the study.

      Second, Folmer and Theodoroff state that I "misinterpreted" their explanation of why so many of their subjects guessed they had received placebo, even if they had experienced "significant improvement" in their tinnitus score. They object to my characterization of this phenomenon as due to the "smallness of the therapeutic benefit" of their intervention, but my wording summarizes their lengthier explanation, that their subjects had a prior expectation of much greater benefit, so subjects incorrectly guessed they had been randomized to sham therapy even if they exhibited a small but significant benefit from the active treatment. In other words, the "benefit" these subjects experienced was imperceptible to them, truly a distinction without a difference.

      A therapeutic trial hopes for the opposite form of unblinding of subjects, which is when the treatment is so dramatically effective that the subjects who were randomized to active therapy are able to answer the blinding question with 100% accuracy.

      Folmer and Theodoroff state that, in their experience, even if subjects with tinnitus "improve in several ways" due to treatment, some will still be disappointed if their tinnitus is not cured. Do these subjects then answer the blinding question by guessing they received placebo because their benefit was disappointing to them, imperceptible to them, as revenge against the trial itself, or for some other reason? Regardless, if you want to know how well they were blinded, independent of treatment effects and of treatment expectation effects, then you must ask them early in the trial, before treatment expectations have time to take hold. Ask the blinding question early and often. Clinical trials should not be afraid to collect data. Data are good; more data are better.

      References:

      1: Folmer RL, Theodoroff SM. Assessment of Blinding in a Tinnitus Treatment Trial-Reply. JAMA Otolaryngol Head Neck Surg. 2015 Nov 1;141(11):1031-1032. doi: 10.1001/jamaoto.2015.2422. PubMed PMID: 26583514.

      2: Keller DL. Assessment of Blinding in a Tinnitus Treatment Trial. JAMA Otolaryngol Head Neck Surg. 2015 Nov 1;141(11):1031. doi: 10.1001/jamaoto.2015.2425. PubMed PMID: 26583513.

      3: Park J, Bang H, Cañette I. Blinding in clinical trials, time to do it better. Complement Ther Med. 2008 Jun;16(3):121-3. doi: 10.1016/j.ctim.2008.05.001. Epub 2008 May 29. PubMed PMID: 18534323.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 02, Gregory Francis commented:

      The journal Perspectives on Psychological Science used to have an on-line commenting system. They seem to have discontinued it and removed all past comments. In January 2016, I published a comment on this article. I reproduce it below.

      Clarifying the role of data detectives

      Since my work was presented as an example of the "gotcha gang" and "witch hunt" activity (p. 892), I feel it is necessary to post a comment that more accurately describes the work in Francis (2012) and adds some relevant information.

Francis (2012) did not simply critique Galak and Meyvis (2011) for having "too many replications"; rather, the critique was that Galak and Meyvis (2011) had too many replications relative to the estimated power of the studies. Power is the probability of a randomly drawn sample producing a significant outcome for a given effect size (in this case, estimated from the studies reported by Galak and Meyvis (2011)). In Galak and Meyvis (2011), a high rate of success coincided with only modest estimated power, and that pattern suggests that there were missing studies or that the reported studies were run and analyzed improperly.

      As noted in Spellman's article, my analysis was essentially validated by the response from Galak and Meyvis (2012), who reported that there was a file drawer of unsuccessful experiments. The title of their response was "You could have just asked", meaning that it was not necessary to perform the power analysis to detect the existence of missing studies because they would have told anyone who asked about those unpublished studies. I was stunned by Galak and Meyvis' (2012) response because it suggests that the standard process of reading a scientific article involves thinking to yourself, "That's really interesting. I wonder if it is true? I will ask the authors." I thought the ludicrousness of this suggestion was too obvious to need clarification, but Spellman's implication that it was a "cool reply" indicates such a need.

It is true, as Spellman notes, that "not everything can be, or should be, shared in papers," but a scientific article is supposed to present the facts that are relevant to the conclusions. Selective reporting (or improper data collection or analysis) withholds relevant facts and thereby calls into doubt the conclusions. Moreover, the impracticality of "just asking" the authors about missing experiments becomes clear when we think about the (inevitable) death of a scientist; are we supposed to discount a scientist's lifetime of work when they die? Since it was brought up in Spellman's manuscript, I feel obligated to mention that after reading the Galak and Meyvis (2012) response, I formally asked them for the details of their file drawer. Although we had a nice phone conversation discussing power and replication, they never provided the requested data (neither raw data nor summary statistics).

When expressing her concerns about a "witch hunt" and "reviling people", Spellman confuses Galak and Meyvis (2011), which refers to the experimental findings and conclusions in a manuscript, with Jeff Galak and Tom Meyvis, who I suspect are nice guys trying to do good science. The observation that Galak and Meyvis (2011) is not as good science as it first appeared actually benefits Jeff Galak and Tom Meyvis, and other scientists, who might want to build on those studies. Spellman calls my analysis an "unnecessary attack", but attacking ideas is a necessary part of larger scientific practice. I would hope other critics would have written a similar comment if they had identified flaws in the experimental design, realized that the questions were poorly phrased, or noticed that the statistics were mis-calculated. Scientists should expect (and even hope) that their work will be critiqued, so that future studies and theories will be better.

Although it is not central to the discussion about the "gotcha gang", I thought I would also mention that I object to Spellman's characterization of these discussions as part of a "war" or "revolution." This framing implies antagonism and animosity that, I hope, is largely absent. I believe that (nearly) everyone in the field wants to do good science and that psychological science addresses important topics that deserve the best science. I think a better analogy is one that is very familiar for most of us: education. Regardless of how much we already know, and regardless of our current standing in the field (from graduate students to editors of prominent journals), the recent debates about replication and data analysis indicate that we all have quite a bit to learn about both good scientific practice and the various ways that scientific investigations can be compromised. We are not on opposite "sides" and there are no teachers to tell us the right way to do things; we have to help each other learn. Sometimes that learning process involves criticism, other times it involves kudos; we cannot develop a healthy scientific field with just one approach or the other.

      Conflict of Interest: None declared


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 06, Dan Chitwood commented:

      Erratum: In the Results of this publication, the Vitis hybrids analyzed in this work are incorrectly referred to as "V. vinifera hybrids" when in fact they are hybrids either spontaneously occurring from wild North American Vitis species or of various parentages other than V. vinifera.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 23, Ben Goldacre commented:

      This trial has the wrong trial registry ID associated with it on PubMed: both in the XML on PubMed, and in the originating journal article. The ID given is NCT00265926. We believe the correct ID, which we have found by hand searching, is NCT02265926.

      This comment is being posted as part of the OpenTrials.net project<sup>[1]</sup> , an open database threading together all publicly accessible documents and data on each trial, globally. In the course of creating the database, and matching documents and data sources about trials from different locations, we have identified various anomalies in datasets such as PubMed, and in published papers. Alongside documenting the prevalence of problems, we are also attempting to correct these errors and anomalies wherever possible, by feeding back to the originators. We have corrected this data in the OpenTrials.net database; we hope that this trial’s text and metadata can also be corrected at source, in PubMed and in the accompanying paper.

      Many thanks,

      Jessica Fleminger, Ben Goldacre*

      [1] Goldacre, B., Gray, J., 2016. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials 17. doi:10.1186/s13063-016-1290-8 PMID: 27056367

      * Dr Ben Goldacre BA MA MSc MBBS MRCPsych<br> Senior Clinical Research Fellow<br> ben.goldacre@phc.ox.ac.uk<br> www.ebmDataLab.net<br> Centre for Evidence Based Medicine<br> Department of Primary Care Health Sciences<br> University of Oxford<br> Radcliffe Observatory Quarter<br> Woodstock Road<br> Oxford OX2 6GG


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 09, Chao Liu commented:

      Dr. Chen:

      Your work is quite interesting. I agree that dexamethasone can increase aquaporin-2 protein expression in ex vivo inner medullary collecting duct suspensions. However, I have a different opinion about the effect of the glucocorticoid dexamethasone on renal water and sodium excretion. Instead of causing renal water and sodium retention, emerging evidence shows that glucocorticoids can promote renal water and sodium excretion (Glucocorticoids and renal Na+ transport: implications for hypertension and salt sensitivity. J Physiol. 2014 Apr 15;592(Pt8):1731-44.). “ Despite the widespread assumption that cortisol raises blood pressure as a consequence of renal sodium retention, there are few data consistent with the notion. Although it has a plethora of actions on brain, heart and blood vessels, kidney, and body fluid compartments, precisely how cortisol elevates blood pressure is unclear. Candidate mechanisms currently being examined include inhibition of the vasodilator nitric oxide system and increases in vasoconstrictor erythropoietin concentration.” (Cushing, cortisol, and cardiovascular disease. Hypertension. 2000 Nov;36(5):912-6. Review.) Moreover, a good body of evidence, from both animal and human studies, shows that glucocorticoids can promote renal water and sodium excretion.

      See the following citations:

      1. Landínez RAS, Romero MFL, Maurice EH (2014). Efectos de la prednisona sobre la función renal a corto plazo en pacientes con Insuficiencia Cardíaca Descompensada. Venezolana de Medicina Interna 30(3): 176-192.

      2. Liu C, Chen Y, Kang Y, Ni Z, Xiu H, Guan J, et al. (2011). Glucocorticoids Improve Renal Responsiveness to Atrial Natriuretic Peptide by Up-Regulating Natriuretic Peptide Receptor-A Expression in the Renal Inner Medullary Collecting Duct in Decompensated Heart Failure. J Pharmacol Exp Ther 339(1): 203-209.

      3. Liu C, Liu G, Zhou C, Ji Z, Zhen Y, Liu K (2007). Potent diuretic effects of prednisone in heart failure patients with refractory diuretic resistance. Can J Cardiol 23(11): 865-868.

      4. Liu C, Liu K (2014a). Effects of glucocorticoids in potentiating diuresis in heart failure patients with diuretic resistance. Journal of cardiac failure 20(9): 625-629.

      5. Liu C, Liu K (2014b). Reply to Day et al.--hypouricemic effect of prednisone in heart failure: possible mechanisms. Can J Cardiol 30(3): 376 e373.

      6. Liu C, Zhao Q, Zhen Y, Gao Y, Tian L, Wang L, et al. (2013). Prednisone in Uric Acid lowering in Symptomatic Heart Failure Patients With Hyperuricemia (PUSH-PATH) study. Can J Cardiol 29(9): 1048-1054.

      7. Liu C, Zhao Q, Zhen Y, Zhai J, Liu G, Zheng M, et al. (2015). Effect of Corticosteroid on Renal Water and Sodium Excretion in Symptomatic Heart Failure: Prednisone for Renal Function Improvement Evaluation Study. J Cardiovasc Pharmacol 66(3): 316-322.

      8. Meng H, Liu G, Zhai J, Zhen Y, Zhao Q, Zheng M, et al. (2015). Prednisone in Uric Acid Lowering in Symptomatic Heart Failure Patients with Hyperuricemia - The PUSH-PATH3 Study. The Journal of rheumatology 42(5): 866-869.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 11, David Reardon commented:

      First, it should be noted that this study of a self-selected sample of women is not representative of the general population of women having abortions. 63% of women approached to participate declined, 15% dropped out before even the baseline interview, and only 27% of the eligible women were still in the study at the three-year follow-up.

      Regarding the self-assessment of physical complications, this study is far less reliable than a prior study, which combined self-assessment with assessments by the women's general practitioners and reached the opposite conclusion (Ney PG, 1994). Indeed, record linkage studies which have actually examined women's medical records after abortion and childbirth have consistently shown an increase in demand for medical care (i.e., a decline in health) following abortion (Berkeley D, 1984; Østbye T, 2001).

      Regarding the alleged evaluation of mortality, the authors limited their investigation to deaths within 42 days of the birth or abortion, ignoring the fact that the CDC definition for abortion-related deaths has no time limit . . . a recognition of the fact that abortion-related deaths may occur well after 42 days. Specifically, the CDC defines an abortion-related death as any death due to "1) a direct complication of an abortion, 2) an indirect complication caused by the chain of events initiated by the abortion, or 3) an aggravation of a preexisting condition by the physiologic or psychologic effects of the abortion, regardless of the amount of time between the abortion and the death" (Bartlett LA, 2004). This definition includes deaths due to suicide and risk-taking behavior that has been aggravated by abortion-associated psychological stress.

      Clearly, the authors should have, at least, examined the data for any evidence of suicides after birth or abortion over the entire period of time since the women were first contacted. As the authors well know, record linkage studies have shown that the age-adjusted risk of suicide increases threefold in the year after abortion compared to non-pregnant women and is over six times greater compared to women who give birth (Gissler M, 1996). In addition, a record linkage study of 173,279 women in California showed that the elevated risk of death following abortion persists for several years (Reardon DC, 2002), while a more recent record linkage study from Denmark has shown there is also a dose effect, with each abortion increasing the risk of death (Coleman PK, 2013).

      In short, the bold assertion in this paper that the authors' statistically insignificant, non-representative sample of women supports the view that abortion has no physical health risks is spurious and clearly driven by ideological aspirations rather than a careful review of the evidence.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 07, Lydia Maniatis commented:

      Readers should read the last paragraphs of this article first. It indicates that the current results contradict the authors' previous results, and that they have no idea why that is. Nevertheless, they assume that one of the two must be right, and use a crude rule of thumb (the supposedly "simpler" explanation) to make their choice. I would take the third option.

      "We see two possible explanations for the inconsistency between our previous work and that here:

      The correct conclusions about the extent of contrast integration are drawn in our current work, with previous work being compromised by the loss of sensitivity with retinal eccentricity. For example, Baker and Meese (2011) built witch's hat compensation into their modeling, but not their stimuli (in which they manipulated carrier and modulator spatial frequencies, not diameter). A loss of experimental effect in the results (such as that in Figures 2a and 3a here) limits what the analysis can be expected to reveal. Indeed, Baker and Meese (2011) found it difficult to put a precise figure on the range of contrast integration, and aspects of their analysis hinted at a range of >20 cycles for two of their three observers. Baker and Meese (2014) made no allowance for eccentricity effects in their reverse correlation study. The contrast jitter applied to their target elements ensured they were above threshold, and so the effects of contrast constancy should come into play (Georgeson, 1991); however, we cannot rule out the possibilities that either (a) the contrast constancy process was incomplete or (b) internal noise effects not evident at detection threshold (e.g., signal dependent noise) compromised the conclusions.

      The correct conclusions about the extent of contrast integration come from our previous work. Our current work points to lawful fourth-root summation, but not necessarily signal integration across the full range. On this account, signal integration takes place up to a diameter of about 12 cycles and a different fourth-root summation processes take place beyond that point. For example, from our results here we cannot rule out the following possibility: Beyond an eccentricity of ∼1.5° the transducer becomes linear and overall sensitivity improves by probability summation (Tyler & Chen, 2000), but uncertainty (Pelli, 1985; Meese & Summers, 2012) for more peripheral targets causes the slope of the psychometric function to remain steeper than β = 1.3 (May & Solomon, 2013).

      We think Occam's razor would favor the first account over the second."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 02, Martine Crasnier-Mednansky commented:

      More needs to be said to fully understand the implication of the pioneering work by Novick and Weiner Novick A, 1957. The first action of an inducer is to induce a specific permeability, thereby allowing the inducer concentration inside the cells to reach a higher level than the one in the medium. Such concentration dictates to the cell the production of the necessary enzymes for catabolism, and is generally sufficient to insure the same production in the cell’s descendants. Novick and Weiner reported that, at low concentration of an artificial inducer (TMG), descendants of a bacterial cell which had not yet been induced were not induced, which resulted in two populations of bacteria, i.e., 'fully induced' and 'not induced', the all-or-none 'enzyme induction phenomenon'. The culture maintains its previous state of induction (so to speak) because induced cells grow more slowly than non-induced cells. Therefore, a minimum concentration of inducer is insufficient to maintain a population of induced cells. Novick and Weiner then concluded: "Some differences which arise in a clone of organisms may be the result of changes in cellular systems other than the primary genetic endowment of the cell". Even though Novick and Weiner used a non-metabolized inducer, their work is of consequence to contemporary work.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 16, Md. Shahidul Islam commented:

      In the supplementary figure 1 the authors show dramatic (almost 100%) inhibition of glucose- and GLP-1 induced electrical activity and insulin secretion by the TRPM4 inhibitor 9-phenanthrol which they used at a concentration of 50 microM. This could possibly be due to the toxic effect of 9-phenanthrol on the beta cells. It is unclear why the authors needed to use 9-phenanthrol at a concentration of 50 microM (as little as 20 microM would have been enough).

      It may be noted that in human beta cells TRPM4 is heavily expressed whereas TRPM5 is almost absent. Pancreas. 2017 Jan;46(1):97-101. Expression of Transient Receptor Potential Channels in the Purified Human Pancreatic β-Cells. Marabita F1, Islam MS. PMID: 27464700 DOI: 10.1097/MPA.0000000000000685


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 30, Benjamin Haibe-Kains commented:

      Our letter describing the fundamental differences between our analysis (Haibe-Kains et al, Nature 2013; http://www.ncbi.nlm.nih.gov/pubmed/24284626) and the GDSC/CCLE reanalysis has been published in F1000Research: http://f1000research.com/articles/5-825/v1


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Apr 18, John Quackenbush commented:

      The authors’ new analyses violate many basic principles of an unbiased assessment of the reproducibility of drug sensitivity predictors across independent data sets. One cannot pick and choose different measures between experiments that measure many different things and claim agreement when the same parameters, measured by both studies, are not well correlated. Nor can one arbitrarily replace a subset of data in one study with data from a second study and then compare the two studies. Anyone should realize the fallacy of conclusions derived from such inappropriate analyses.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Apr 18, Benjamin Haibe-Kains commented:

      The GDSC and CCLE investigators recently reported that their respective studies exhibit reasonable agreement and yield similar molecular predictors of drug response, seemingly contradicting our previous findings (Haibe-Kains B, 2013). Reanalyzing the authors' published methods and results, we found that their analysis failed to account for variability in the genomic data and importantly, compared different drug sensitivity measures from each study, which substantially deviate from our more stringent consistency assessment. The authors’ new analyses violate basic principles of an unbiased assessment of the reproducibility of drug sensitivity predictors across independent datasets.

      We submitted a brief response to the GDSC/CCLE paper to Nature, pointing out the errors. Although reviewers agreed that the inappropriate analytical designs should be brought to the attention of the community, the Nature Editors declined to publish our letter. The reason the Editors gave was that this was a specialized discussion subject to personal interpretation. In our opinion, the fundamentals of good analytical design are neither specialized nor subject to interpretation.

      Our rejected letter is available on bioRxiv http://biorxiv.org/content/early/2016/04/13/048470. We encourage the GDSC and CCLE investigators to publish their response and we hope that others share their insights on this important issue.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 09, Leigh Jackson commented:

      This systematic review is a textbook example of when not to undertake a meta-analysis. (Fletcher, 2007)<sup>1</sup> (Usichenko et al., 2008)<sup>2</sup> To conflate RCTs of many different forms of acupuncture treatment (with the notable exception of laser acupuncture) with many different study methodologies, for many different clinical conditions, must lead to an inordinate degree of heterogeneity of data. The table of heterogeneity in this review is an instructive tool to show the inability of meta-analysis to make sense of such data.

      <sup>1</sup> Fletcher J., What is heterogeneity and why is it important? BMJ 2007;334:94-6

      <sup>2</sup> Usichenko et al., Auricular acupuncture for postoperative pain control: a systematic review of randomised clinical trials. Anaesthesia. 2008 Dec;63(12):1343-8.
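      For readers unfamiliar with how heterogeneity is quantified in a meta-analysis, the standard measures are Cochran's Q and the derived I² statistic, which expresses the proportion of variability across studies attributable to heterogeneity rather than chance. The sketch below is purely illustrative (the effect sizes and variances are made up, not taken from the review under discussion):

```python
def cochran_q_and_i2(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic under
    fixed-effect, inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted sum of squared deviations from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: fraction of Q exceeding its expectation under homogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical effect sizes (e.g., log odds ratios) and variances from 5 trials
effects = [0.10, 0.52, -0.08, 0.95, 0.30]
variances = [0.04, 0.09, 0.06, 0.25, 0.12]
q, i2 = cochran_q_and_i2(effects, variances)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

      By convention, I² values above roughly 75% indicate the high heterogeneity that this comment argues makes pooling uninformative.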


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 07, Daniel Haft commented:

      This article describes sequencing and analysis of the two type I-E CRISPR system repeat arrays in E. coli K-12. The arrays are found on Escherichia coli K-12 substr. MG1655 reference sequence U00096.3 at positions 2877884-2878463 and 2904013-2904408, between the iap and ygbF(cas2). A part of the first array had been noted previously by these authors two years earlier (PMID:3316184), and discussed in the final paragraph of that paper.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 28, Friedrich Thinnes commented:

      Arguments for plasmalemmal VDAC-1 to form the channel part of VRAC

      The inclusion of VDAC-1 = voltage dependent anion channel of isotype-1 into the plasma membrane of mammalian cells was first demonstrated in 1989, by its immuno-topochemical flagging on human B lymphocytes; those data have since been corroborated by several laboratories worldwide using manifold approaches (1-4).

      Concerning the function of plasmalemmal VDAC-1 (5-9) it has been shown that the channel is involved in cell volume regulation. Monoclonal mouse anti-human type-1 porin antibodies applied to the cell exterior blocked the regulatory volume decrease (RVD) of HeLa cells, proving that VDAC-1 is involved in the process. HeLa cells pre-incubated with the antibodies dramatically increased their volume within about 1 min after a stimulus by hypotonic Ringer solution, but did not move backward towards their starting volume, thus indicating abolished RVD.

      Of note, corresponding blocking effects were induced by the established anion channel inhibitor DIDS or BH4BClXL peptides, respectively. Video camera monitoring of cell size over time was used in this direct and noninvasive approach (9; www.futhin.de Supplement 1). Corroboration of these data came from the laboratory of Dr. R. Boucher (10) using VDAC knock-out mice, with this study furthermore pointing to the channel as an ATP pathway.

      First data concerning the involvement of plasmalemmal VDAC-1 in the apoptotic process came from Dr. F. Elinder´s laboratory, demonstrating that opening of plasma membrane voltage-dependent anion channels (VDAC) precedes caspase activation in neuronal apoptosis induced by toxic stimuli (11). In line with this, the laboratory of Dr. Raquel Marin demonstrated that the voltage dependent anion channel (VDAC-1) participates in amyloid Aß-induced toxicity and also interacts with the plasma membrane estrogen receptor alpha (mERa) in septal and hippocampal neurons (12). Noteworthy: Alzheimer Disease disproportionately affects women.

      Of note, plasmalemmal VDAC and amyloid Aß, too, carry GxxxG peptide interaction motifs (2-4).

      Concerning VDAC-1 agonists, there are many data on low molecular weight agonists working on VDAC in varying settings, which may be helpful in studies on VRAC: DIDS, cholesterol, ATP, König's polyanion, dextran sulfate, Ga3+, Al3+, Zn2+, polyamines, compound 48/80, ruthenium red, fluoxetine, cisplatin, curcumin. Further studies looked for corresponding effects of peptides, e.g. BH4-BClXL peptides, peptides including the free N-terminal part of VDAC-1, and amyloid Aβ peptides (4,9-16).

      There is increasing evidence on interactions of VDAC-1 and proteinaceous modulators: e.g. α-synuclein shows high-affinity interaction with the voltage dependent anion channel, suggesting mechanisms of regulation and toxicity in Parkinson Disease (17). It has, furthermore, been shown that interaction of human plasminogen kringle 5 and plasmalemmal VDAC-1 links the channel to the extrinsic apoptotic pathway (18). Finally, an early study pointed to cancer cell cycle modulation by functional coupling between sigma-1 receptors and Cl- channels, with GxxxG motifs putatively playing a role here (19,20).

      Noteworthy, a SwissProt alignment of the LRC8A-D sequences shows two GxxxG motifs in a critical loop of LRC8E (Thinnes, unpublished).

      Conclusion

      While the expression of VDAC-1 in the plasma membrane is beyond reasonable doubt (1-4), its function in this compartment is still in debate (5-20, 21-23).

      VDAC-1 shows ubiquitous multi-topological expression, standing in outer mitochondrial membranes, the endoplasmic reticulum, as well as in the plasmalemma. To fulfill putatively varying functions in differing compartments, from the beginning on, my laboratory postulated proteinaceous channel modulators, which in varying heteromer complexes may adjust membrane-standing VDAC-1 to local needs.

      Meanwhile, several of those come to the fore. VRAC/VSOAC candidates appear to be amongst them.

      Finally, concerning medical relevance VDAC-1 complexes are involved in the pathogenesis of e.g. Cystic Fibrosis (13), Alzheimer Disease (3,4,12) and cancer (4).

      References

      1) De Pinto V, Messina A, Lane DJ, Lawen A. FEBS Lett. 2010 May 3;584(9):1793-9. doi: 10.1016/j.febslet.2010.02.049. Epub 2010 Feb 23. Review. PMID: 20184885 Free Article

      2) Thinnes FP. Biochim Biophys Acta. 2015 Jun;1848(6):1410-6. doi: 10.1016/j.bbamem.2015.02.031. Epub 2015 Mar 11. Review. PMID: 25771449

      3) Thinnes FP. Front Aging Neurosci. 2015 Sep 30;7:188. doi: 10.3389/fnagi.2015.00188. eCollection 2015. No abstract available. PMID: 26483684 Free PMC Article

      4) Smilansky A, Dangoor L, Nakdimon I, Ben-Hail D, Mizrachi D, Shoshan-Barmatz V. J Biol Chem. 2015 Nov 5. pii: jbc.M115.691493. [Epub ahead of print] PMID: 26542804 Free Article

      5) Morris AP, Frizzell RA. Am J Physiol. 1993 Apr;264(4 Pt 1):C977-85. PMID: 7682780

      6) Blatz AL, Magleby KL. Biophys J. 1983 Aug;43(2):237-41. PMID: 6311302 Free PMC Article

      7) Dermietzel R, Hwang TK, Buettner R, Hofer A, Dotzler E, Kremer M, Deutzmann R, Thinnes FP, Fishman GI, Spray DC, et al. Proc Natl Acad Sci U S A. 1994 Jan 18;91(2):499-503. PMID: 7507248 Free PMC Article

      8) Schwiebert EM, Egan ME, Hwang TH, Fulmer SB, Allen SS, Cutting GR, Guggino WB. Cell. 1995 Jun 30;81(7):1063-73. PMID: 7541313 Free Article

      9) Thinnes FP, Hellmann KP, Hellmann T, Merker R, Brockhaus-Pruchniewicz U, Schwarzer C, Walter G, Götz H, Hilschmann N. Mol Genet Metab. 2000 Apr;69(4):331-7. PMID: 10870851

      10) Okada SF, O'Neal WK, Huang P, Nicholas RA, Ostrowski LE, Craigen WJ, Lazarowski ER, Boucher RC. J Gen Physiol. 2004 Nov;124(5):513-26. Epub 2004 Oct 11. PMID: 15477379 Free PMC Article

      11a) Elinder F, Akanda N, Tofighi R, Shimizu S, Tsujimoto Y, Orrenius S, Ceccatelli S. Cell Death Differ. 2005 Aug;12(8):1134-40. PMID: 15861186 Free Article

      11b) Akanda N, Tofighi R, Brask J, Tamm C, Elinder F, Ceccatelli S. Cell Cycle. 2008 Oct; 7(20):3225-34. Epub 2008 Oct 20. PMID: 18927501

      12a) Marin R, Ramírez CM, González M, González-Muñoz E, Zorzano A, Camps M, Alonso R, Díaz M. Mol Membr Biol. 2007 Mar-Apr;24(2):148-60. PMID: 17453421

      12b) Herrera JL, Diaz M, Hernández-Fernaud JR, Salido E, Alonso R, Fernández C, Morales A, Marin R. J Neurochem. 2011 Mar;116(5):820-7. doi: 10.1111/j.1471-4159.2010.06987.x. Epub 2011 Jan 7. Review. PMID: 21214547 Free Article

      13) Thinnes FP. Mol Genet Metab. 2014 Apr;111(4):439-44. doi: 10.1016/j.ymgme.2014.02.001. Epub 2014 Feb 13. Review. PMID: 24613483

      14 Thinnes FP. PMID: 15781203 [PubMed - indexed for MEDLINE] Mol Genet Metab. 2005 Apr;84(4):378.

      15) Thinnes FP. Mol Genet Metab. 2009 Jun;97(2):163. doi: 10.1016/j.ymgme.2009.01.014. Epub 2009 Feb 3. No abstract available. PMID: 19251445

      16) Thinnes FP. Am J Physiol Cell Physiol. 2010 May;298(5):C1276. doi: 10.1152/ajpcell.00032.2010. No abstract available. PMID: 20413797 Free Article

      17) Rostovtseva TK, Gurnev PA, Protchenko O, Hoogerheide DP, Yap TL, Philpott CC, Lee JC, Bezrukov SM. J Biol Chem. 2015 Jul 24;290(30):18467-77. doi: 10.1074/jbc.M115.641746. Epub 2015 Jun 8. PMID: 26055708

      18) Li L, Yao YC, Gu XQ, Che D, Ma CQ, Dai ZY, Li C, Zhou T, Cai WB, Yang ZH, Yang X, Gao GQ. J Biol Chem. 2014 Nov 21;289(47):32628-38. doi: 10.1074/jbc.M114.567792. Epub 2014 Oct 8. PMID: 25296756 Free PMC Article

      19) Renaudo A, L'Hoste S, Guizouarn H, Borgèse F, Soriani O. J Biol Chem. 2007 Jan 26;282(4):2259-67. Epub 2006 Nov 22. PMID: 17121836 Free Article

      20) Chu U, Ruoho AE. Mol Pharmacol. 2015 Nov 11. pii: mol.115.101170. [Epub ahead of print] PMID: 26560551 Free Article

      21) Liu HT, Tashmukhamedov BA, Inoue H, Okada Y, Sabirov RZ. Glia. 2006 Oct;54(5):343-57. Erratum in: Glia. 2006 Dec;54(8):891.

      22) Sabirov RZ, Merzlyak PG. Biochim Biophys Acta. 2012 Jun;1818(6):1570-80. doi: 10.1016/j.bbamem.2011.09.024. Epub 2011 Oct 1. Review. PMID: 21986486 Free Article

      23) Pedersen SF, Klausen TK, Nilius B. Acta Physiol (Oxf). 2015 Apr;213(4):868-81. doi: 10.1111/apha.12450. Epub 2015 Jan 28.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 29, Lydia Maniatis commented:

      Here's the thing. First, describing an image as a case of "shape-from-shading" is jumping the gun. Before interpretation, the image merely contains luminance variations, which may or may not be interpreted as changes in illumination. This should be obvious when we are using pictorial rather than real-world stimuli. So the question is: when are the changes in luminance seen as changes in illumination? The answer is that we (our visual system, in effect) judge the possible shapes that will arise under various reflectance/illumination options.

      If we take a clear "shape-from-shading" figure and we make the edges of the shadows hard, make the shadowed areas solid black, or remove the luminance changes but simply trace them out with a hard line, we will in most cases still see the same 3D shapes; it will be a kind of cartoon version of the shape-from-shading impression. We'll have, in effect, contour lines. This will happen because treating the lines in any other way (as delineating shapes in themselves) will produce worse shapes, specifically shapes with an implicitly smaller volume/area ratio. So "shape-from-shading" should be called "shading-from-shape."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 25, Lydia Maniatis commented:

      When Todd et al say that understanding their demonstrations requires “a broader theoretical analysis of shape from shading that is more firmly grounded in ecological optics,” do they mean that there are things about the physics of how light interacts with surfaces that we don't understand? What kind of empirical investigations are they suggesting need to be performed? What kind of information do they think is missing, optically-speaking?

      The fundamental issues are formal (having to do with form) not the details of optics and probabilities of illumination structure - as these authors have shown.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 25, S A Ostroumov commented:

      I think that the importance of microbial pollution will increase in future. Therefore, I consider this paper useful and relevant.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 09, Hilda Bastian commented:

      This is an excellent trial on an important subject, but the authors go beyond what the data in this study can support here: "The overall conclusion is that supported computerised cognitive behaviour therapy confers modest or no benefit over usual GP care..."

      As others have pointed out in rapid responses at the BMJ, this study primarily shows that particularly low adherence to online CBT had little, if any, impact. The study was powered only to detect a difference at the effect sizes for supported computer-based/online CBT, while the type of support provided in this trial was minimal (and not clinician or content-related). The participants were more severely depressed than the groups for whom online CBT was offered in other trials (and in recommendations for its use), other care in each arm often included antidepressants, and the extent of use of CBT (online or otherwise) in the GP group is not known. The results are very relevant to policy on offering online CBT. But I don't think there is enough certainty from this one trial to support a blanket statement about the efficacy of the intervention rather the potential impact of a policy of offering it.

      The size of this study, while large, is smaller than the other studies combined, and without a systematic review it is not clear that this study would shift the current weight of evidence. An important difference between this trial and studies in this field generally is that personal access to the internet was not required. I couldn't locate any data on this in the report. It would be helpful if the authors could provide information here on the level of personal, private access to the internet people had in each arm of the trial, so that it's possible to take this potential confounder into account in interpreting the results.

      Free online CBT is also an option for those who cannot (or will not) get in-person therapeutic care. Many people with mild or moderate depression do not get professional care for it, and it doesn't seem reasonable on the basis of this to discourage people from trying free online CBT out. Yet, the press release for this study was headlined, "Computer assisted cognitive behavioural therapy provides little or no benefits for depression" (PDF), setting off media reports with that message. That far exceeds what the data from this one trial can support.

      Disclosure: I have not been involved in the development of any online, or in-person, therapy for depression. I was co-author of a 2003 systematic review on the impact of the internet, which concluded that CBT-based websites for mental health issues at that time had mixed results (PDF), and I have since written favorably about online CBT.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 12, Wayne Butler commented:

      The title and keywords have no connection to the abstract. I thought it was a PubMed error, but the manuscript on the BJU website is the source of the error. It appears the title and keywords were mistakenly pasted onto the manuscript from some other paper. The manuscript itself is an interesting review of urethral recurrence after radical cystectomy, but the reviewers at BJU should have noticed the disconnect between title and subject.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 30, Chris Mebane commented:

      Kudos to the authors for researching and publishing this article. One quibble. In the Table 2 criteria for Data Reusability, authors were marked down if they archived data "in a format that is designed to be machine-readable with proprietary software (e.g., Excel)."

      That is not a correct criticism. "Excel" is proprietary software, but the "xlsx" Office Open XML format is not proprietary and is a standardized format. Between 2006 and 2009, the Office Open XML format was standardized by the international standards organizations Ecma and ISO [1]. The U.S. Library of Congress has a wealth of information on the sustainability of digital formats in archiving, including xlsx [2]. Regarding staff preference, the Library of Congress notes that "For works acquired for its collections, the list of Library of Congress Recommended Formats Statement for Datasets/Databases, as of June 2016, includes XLSX (.xlsx) as a preferred format for datasets." [2]

      The misnomer in Table 2 is repeated in the Table 3 "Key recommendations to improve public data archiving": "...Use standard formats: Use file formats that are compatible with many different kinds of software (e.g., csv rather than excel files)." Office Open XML files are not "Excel" files; rather, the format is simply the default used by Excel. As the Library of Congress staff noted, direct editing of XLSX files can be done, for example, in Google Sheets without conversion.

      If the expectation is that reuse would be via Big Data automated data mining without the need for a human to read the associated paper, then by all means csv flat files and metadata in standard data dictionaries are the way to go. However, for smaller datasets or studies in which the context of the data matters, in which reuse would entail another researcher looking at the article and study, and in which the semantic structure of formulas and their relationship to cells with values matters, the Office Open XML format or the similar Open Document Format (odf) is fully appropriate.

      It may be prudent for authors to hedge their bets and publish their data in more than one format. I did that, for example, in a Dryad data release [3], and it only took a few minutes longer. All the work was in the curation, structuring, and labeling (that is, beyond the work of generating the data in the first place). The fundamental point of this comment is not to recommend a specific format, other than that the format be fit for purpose, and to note that since the 2006-2009 approval of the xlsx Office Open XML format by international standardization organizations, "xlsx" is in fact a standard, non-proprietary format.
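      To make the dual-format suggestion above concrete, here is a minimal sketch in Python. The dataset, column names, and file names are illustrative assumptions, not anything from the papers discussed; the csv copy uses only the standard library, and the optional xlsx copy (shown as a comment) assumes pandas and openpyxl are available.

```python
import csv

# A small example dataset; column names and values are illustrative only.
rows = [
    {"site": "A", "conc_ug_L": 1.2},
    {"site": "B", "conc_ug_L": 3.4},
]

# 1) Archive a plain-text CSV copy, readable by virtually any tool.
with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["site", "conc_ug_L"])
    writer.writeheader()
    writer.writerows(rows)

# 2) Optionally archive an Office Open XML (.xlsx) copy as well, e.g. with
#    pandas + openpyxl (assumed to be installed):
# import pandas as pd
# pd.DataFrame(rows).to_excel("dataset.xlsx", index=False)
```

      Writing the second copy adds only a line or two to an analysis script, which is consistent with the "few minutes longer" experience described above.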

      [1] https://en.wikipedia.org/wiki/Office_Open_XML

      [2] https://www.loc.gov/preservation/digital/formats/fdd/fdd000398.shtml

      [3] http://datadryad.org/resource/doi:10.5061/dryad.67n20


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 02, Christopher Southan commented:

      Download availability, at least of protein IDs, should have been a condition of publication


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 14, Arnaud Chiolero MD PhD commented:

      A very elegant study showing the importance of having multiple blood pressure measurements to better evaluate the predictive value of elevated blood pressure in childhood for the occurrence of diseases.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 24, James Yeh commented:

      Editor's Comment

      Blood-Pressure Control — Polling Results

      James S. Yeh, M.D., M.P.H., Edward W. Campion, M.D., John A. Jarcho, M.D., and Jonathan N. Adler, M.D.

      The goal of blood-pressure control is to reduce mortality and morbidity from macrovascular and microvascular causes. The NIH-funded SPRINT study published in the November 26 issue of the Journal suggests that targeting a systolic blood-pressure goal of less than 120 mm Hg, which is lower than current guideline recommendations, is associated with reduced mortality.(1) The study showed that the rate of the primary composite outcome of myocardial infarction, acute coronary syndrome, stroke, acute decompensated heart failure, or death from cardiovascular causes was lower by 0.54 percentage points per year among those who were randomly assigned to a target systolic blood pressure of 120 mm Hg than among those assigned to a target systolic blood pressure of 140 mm Hg (1.65% per yr vs. 2.19% per yr). The difference in the rates was driven mostly by a lower rate of acute heart failure and death from cardiovascular causes among those with the lower systolic blood-pressure target (hazard ratio, 0.62 and 0.57, respectively).
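      The absolute figures in the preceding paragraph can be checked with a line or two of arithmetic; the following minimal sketch only restates the event rates quoted above (the derived number needed to treat is an illustration, not a figure from the trial report).

```python
# Event rates for the primary composite outcome, as quoted above (% per year).
rate_standard = 2.19   # target systolic BP of 140 mm Hg
rate_intensive = 1.65  # target systolic BP of 120 mm Hg

# Absolute risk reduction, in percentage points per year.
arr = rate_standard - rate_intensive

# Implied number of patients treated for one year to prevent one primary event.
nnt_per_year = 100 / arr

print(round(arr, 2))        # 0.54
print(round(nnt_per_year))  # 185
```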

      In November, we presented the case of Ms. Weymouth, a 75-year-old woman with a blood pressure of 136/72 mm Hg. Readers were invited to vote on whether her current antihypertensive regimen should be maintained or should be modified to lower the systolic blood pressure further. This patient had a history of well-controlled hypertension, as well as peripheral vascular disease and atrial fibrillation. She was being treated with metoprolol succinate, chlorthalidone, apixaban, aspirin, and atorvastatin.(2) She was a nonsmoker who walked regularly for exercise. Laboratory studies included a total cholesterol level of 174 mg per deciliter (4.5 mmol per liter), a low-density lipoprotein cholesterol level of 87 mg per deciliter (2.2 mmol per liter), a high-density lipoprotein cholesterol level of 65 mg per deciliter (1.7 mmol per liter), a serum creatinine level of 0.9 mg per deciliter (80 μmol per liter), and an estimated glomerular filtration rate of 65 ml per minute per 1.73 meters squared of body-surface area.

      A total of 1379 readers from 93 countries responded to the poll. The largest group of respondents, representing one third of the votes, was from the United States and Canada. A vast majority of the readers (81%) voted to maintain the current antihypertensive regimen. This result suggests that the findings of the SPRINT trial did not suddenly change physicians’ approach to treatment, at least for a patient such as the one described in the case vignette.

      A substantial proportion of the 94 Journal readers who submitted comments emphasized caution for older patients, given concerns about side effects such as hypotension, which could cause injurious falls. Many commented on the “small benefit” seen with blood-pressure reduction. Some readers argued in favor of further reduction of systolic blood pressure, observing that this benefit is not inconsequential, given the mortality outcome over the short period of time that patients were followed. Similarly, some readers noted that physicians currently recommend treatments for other disease conditions that provide similar or less benefit. Readers who advocated further adjustment of her blood-pressure regimen generally recommended doing so judiciously, using low doses of medications first, then increasing the doses, with close monitoring for medication side effects.

      There were several related recurring themes among the comments submitted. Commenters emphasized the need for individualized risk assessment and the importance of shared decision making regarding the benefit and risk of further blood-pressure reduction. A number of commenters mentioned the importance of recommending further lifestyle modification before changing medication. Readers also emphasized the need to balance the quality of life in the present versus the additional future gains in mortality outcome. Several readers also commented on the variability of blood-pressure measurements depending on the context and the time of measurement. The readers emphasized the importance of obtaining ambulatory blood-pressure measurements to help guide clinical decisions.

      REFERENCES [1] SPRINT Research Group. A randomized trial of intensive versus standard blood-pressure control. N Engl J Med 2015;373:2103-2116. [2] Yeh JS, Bakris GL, Taler SJ. Blood-pressure control. N Engl J Med 2015;373:2180-2182.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 25, CREBP Journal Club commented:

      After discussion in our journal club, we concluded that this is a high quality randomised controlled trial. The results show the intervention results in a statistically significant decrease in death from any cause and major cardiovascular events (myocardial infarction, acute coronary syndrome, stroke, heart failure), but at the risk of an increase in severe adverse events (principally hypotension, syncope, electrolyte abnormalities and kidney injury). For treatment benefit, 16 deaths or major cardiovascular events were prevented per 1000 patients treated for 3 ¼ years, whereas 23 patients per 1000 had a serious adverse event. We also had the following concerns about the trial:

      a) The treatment benefit is possibly an overestimate due to the early stopping of the trial;

      b) The standard-treatment arm of the trial reduced the dose of blood-pressure-lowering medication if the patient went below the specified target.

      c) Even in this selected and carefully monitored population less than half of the patients achieved the target blood pressure.

      d) The more stringent method of blood-pressure measurement used in the trial is not used in routine clinical practice.

      e) There was greater utilisation of blood pressure lowering medication in the intensive arm of the trial, and this could have led to the observed difference rather than the achievement of the blood pressure target.

      Whether the interventions are beneficial for an individual patient appears to be dependent on the individual clinical circumstances and the preferences of the patient. We would strongly recommend the development of methods for improving shared decision making with patients on this topic before recommending this intervention be part of routine practice.
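      The benefit/harm figures quoted at the start of this comment can be re-expressed as numbers needed to treat and harm; a minimal sketch (the per-1000 counts are taken from the text above, and the NNT/NNH conversion is an illustration, not from the trial report):

```python
# Per 1000 patients treated for ~3.25 years, as stated above:
prevented_per_1000 = 16  # deaths or major cardiovascular events prevented
harmed_per_1000 = 23     # serious adverse events

# Corresponding NNT and NNH over the trial period.
nnt = 1000 / prevented_per_1000  # 62.5 patients treated to prevent one event
nnh = 1000 / harmed_per_1000     # ~43.5 patients treated per serious adverse event
```

      On these figures, one serious adverse event occurs for roughly every benefit gained, which is why the balance depends so heavily on individual circumstances and preferences.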

      See CREBP Journal Club for more information.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Apr 12, NephJC - Nephrology Journal Club commented:

      This trial was discussed on Nov 10th and 11th 2015 in the open online nephrology journal club, #NephJC, on twitter. Introductory comments are available at the NephJC website. The discussion was quite dynamic, with more than 100 participants, including nephrologists, cardiologists, geriatricians, emergency medicine physicians, hypertension specialists, clinical pharmacists, fellows, residents and patients. The transcript of the entire tweetchat, along with a round up of the many commentaries that were published, is available on the NephJC website. Some of the highlights of the tweetchat were:

      • The team of investigators should be commended for designing and conducting this trial, and the NIH institutes – NHLBI, NIDDK and the NIA for funding this important trial.

      • Some of the crucial elements of the trial were the method of blood pressure measurement (5 minutes resting followed by three readings without patient-provider interaction), the specific medications used (often long acting and potent antihypertensive agents such as chlorthalidone, use of combinations) and the frequent evaluations to titrate therapy, all of which should be considered when applying these findings.

      • Overall the results are quite robust across different subgroups, with an impressive NNT of 90 for mortality; at the same time, the NNH (number needed to harm) for acute kidney injury (56), syncope (167) and electrolyte disturbances (125) should also be considered when applying this in clinical practice, as should the important exclusion criteria (e.g. frail patients), to whom these data may not be applicable. Subsequent analyses, such as the effects on cognition, quality of life and ambulatory blood pressure data, are also eagerly awaited.

      Interested individuals can track and join in the conversation by following @NephJC on twitter, liking #NephJC on facebook, signing up for the mailing list, or visiting the webpage at NephJC.com.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Feb 09, Geriatric Medicine Journal Club commented:

      This article was critically appraised at the January 2016 Geriatric Medicine Journal Club (follow #GeriMedJC on Twitter). The SPRINT Research Group conducted a study to address the question of what the optimal systolic blood pressure target ought to be in patients at high risk for cardiovascular outcomes. Over a quarter of the patients in the study were 75 years or older, making this study somewhat relevant to geriatricians. However, looking at the extensive exclusion criteria, certainly there were not too many frail folks there. Concerns raised included the following: 1) The increased risk of syncope raises some eyebrows, 2) There was a preference for the use of chlorthalidone in the treatment algorithm; one must exercise caution in using this drug in the elderly due to the risk of hypokalemia, 3) It’s too bad the trial was stopped early as we know that truncated randomized trials are usually associated with greater effect sizes than if not stopped early. It will be interesting to see how this trial may change practice guidelines, and the outcomes on cognitive impairment are expected to be reported in a separate publication.

      Did you miss the #GeriMedJC tweetchat? Check out the transcript in our Archives section: http://gerimedjc.utorontoeit.com/index.php/2015/12/01/missed-gerimedjc-all-archived-here/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2015 Nov 17, Johan van Schalkwyk commented:

      SPRINT strikes me as a work of pure genius. Conceive the following scenario: Take a carefully selected mixture of high-risk patients with a variety of blood pressures and risk factors, making sure that the low threshold for selection into the study is below the current systolic blood pressure 'standard' of 140 mmHg. Apply two protocols, one that will aggressively reduce blood pressure in the intermediate term, another that's far more conservative. It's not beyond the bounds of possibility that a group of smart statisticians using current simulation methods and access to large hypertension databases might even predict the intermediate-term outcomes with a fair degree of confidence.

      The fact that this trial is exactly what every manufacturer of anti-hypertensives needs at this point, that it was stopped very early, and that it seems to contradict the prior evidence that has informed treatment guidelines to date should make us pause and think. Particularly as the results from a highly selected group, treated for a few years, may well be extrapolated to lifetime treatment of many or even most people with a systolic blood pressure of over 130 mmHg. Let's see how this is marketed.

      You may well choose to ignore the fact that one of the principal authors has received "personal fees" and/or grants from Bayer, Boehringer Ingelheim, GSK, Merck, AstraZeneca, Novartis, Arbor, Amgen, Medtronic and Forest. Your choice. Everyone has to make a living.

      Of greater concern might be the near tripling of the rate of acute kidney injury or acute renal failure in the intensive-treatment group, as these conditions are not cheap to manage. We might also be a bit puzzled that almost half of the "extra deaths" in the standard therapy group were NOT from cardiovascular causes. How on earth does this work?

      But anyone who understands a bit about MCMC methods should be in awe of the way this study was put together.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2015 Nov 14, Arnaud Chiolero MD PhD commented:

      This study will probably change practice. Nevertheless, the absolute CVD risk reduction remains to be considered when these blood pressure levels are targeted.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 10, Paul Brookes commented:

      Although the title uses the term "mitoflash", it is clear from the abstract and the article text that the authors believe these flashes equate to superoxide. This is demonstrably incorrect.

      Since the original report on these flash events (Wang W, 2008), there have been a number of papers questioning their attribution to superoxide, because this simply cannot be explained in terms of the fundamental chemistry of superoxide and the sensor cpYFP.

      The first questions were raised in 2009 (Muller FL, 2009), and since that time several papers (Schwarzländer M, 2011, Schwarzländer M, 2014, Schwarzländer M, 2012) have provided experimental evidence firmly demonstrating that cpYFP is NOT A SUPEROXIDE INDICATOR. The authors of the original paper have so far not provided a satisfactory rebuttal to these data.

      The contradictory papers are high profile (Nature) and are known to virtually anyone with a passing interest in this field, and yet they are not cited in this new Cell Metabolism paper. This is a huge omission, and at the very least is suggestive of inadequacies during the peer review process. At the worst, publication of a paper with such demonstrably false claims is suggestive of irresponsibility at the editorial level.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 06, Stuart Buck commented:

      This article has been critiqued by Shariff et al, "What is the association between religious affiliation and children's altruism?," Current Biology, available at http://www.cell.com/current-biology/fulltext/S0960-9822(16)30670-4. Among other things, they point to a clear and seemingly indisputable example of miscoding with regard to countries. For example, "United States" was coded as "1" while Canada was coded as "2," and so forth, as if "country-ness" was a variable in which Canada was twice as much of a country as the US. They also find other statistical errors that make this article look unreliable.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 03, Siddharudha Shivalli commented:

      I read the article by El Berbri I et al. with great interest. The authors’ efforts are commendable. Based on a large-scale community-based survey, the authors highlight the burden and the disproportionate socio-demographic distribution of cystic echinococcosis (CE) in Sidi Kacem Province, Morocco. In addition, they emphasize the higher infection rate in slaughter animals and the main drivers of CE transmission in the study area.

      The authors state that 543 community members were surveyed in 39 douars (villages) across the 27 communes studied, using multistage random sampling. The authors should have justified the adequacy and representativeness of the sample size, although it appears adequate; this is essential to ensure the internal validity of the study findings. In my opinion, the sample size should have been calculated based on the total population, the anticipated community knowledge of CE, the design effect (owing to multistage random sampling), the power of the study, confidence levels and the anticipated non-response. The same should have been done for the abattoir study component. In addition, the authors should have explicitly stated the inclusion and exclusion criteria for the study participants.

      It is recommended to follow the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) checklist while reporting cross-sectional studies; it is endorsed by many biomedical journals. In Table 4 of the article, the authors have reported the p value as ‘0.000’. Statistical software, with default settings, displays the p value as zero if it extends beyond three decimal places (e.g., p=0.0000001 would be displayed as p=0.000). In practice, the value of p cannot be zero, and hence I would suggest reporting it as p<0.0001. In addition, the median values for the various continuous variables in Table 4 should have been reported with interquartile ranges.
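      The display problem described above is easy to guard against in analysis code; a minimal sketch of a hypothetical formatting helper (the function name and the 0.0001 floor follow the suggestion in this comment, not any standard):

```python
def format_p(p: float, floor: float = 0.0001) -> str:
    """Report very small p-values as '<floor' rather than a misleading 0.000."""
    if p < floor:
        return f"p<{floor}"
    return f"p={p:.3f}"

print(format_p(0.0000001))  # p<0.0001
print(format_p(0.042))      # p=0.042
```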

      Nonetheless, I must congratulate the authors for exploring an important public health problem in the study area.

      Competing interests: The author declares that there is no conflict of interest about this publication.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 17, Miguel Lopez-Lazaro commented:

      New ways to prevent cancer

      This article provides an excellent assessment of the status of cancer chemoprevention research and gives useful recommendations for moving the field of cancer prevention research toward meaningful practice guidelines. However, there are very recent developments in the field that, in my opinion, will change the way we prevent cancer.

      Knowing neither the cellular origin of cancer nor the main biological cause of the disease has been the most important obstacle to cancer prevention. It is very difficult to prevent a process without knowing where it starts and how it develops. Recent evidence strongly suggests that cancer arises from normal stem cells, and that the main reason we have cancer is that our stem cells divide. The division of stem cells is necessary to form, maintain and repair our tissues. But when they divide, their DNA gets damaged and our risk of cancer increases. Recent data have revealed that the more stem cell divisions a tissue accumulates over a lifetime, the higher the risk of cancer in that tissue. This explains why cancer is diagnosed even millions of times more often in some tissues than in others, and why cancer incidence increases so dramatically with age (1, 2).

      This fresh knowledge about the etiology of cancer opens new ways to prevent the disease. Controlling the division rates of stem cells will be a key strategy to prevent cancer, just as controlling hypercholesterolemia and hypertension is a key strategy to prevent cardiovascular disease. This can be achieved by identifying and controlling environmental factors that promote the division of stem cells, and by identifying and controlling internal signals that regulate their division rates. Importantly, some of these factors and signals are already known and can be controlled (2).

      Some stem cells will acquire DNA alterations no matter what we do. Stem cells have to divide, and some errors arising during cell division are unavoidable. Cancer prevention will partially protect stem cells from getting damaged and will lead to a cancer-free life in many cases. In other cases, primary prevention efforts will not stop some stem cells from becoming malignant. But prevention is still possible in these cases (secondary prevention). The accumulation of DNA alterations in stem cells can make them vulnerable to specific pharmacological and non-pharmacological interventions. An important challenge will be to find ways to selectively kill mutated stem cells before they give rise to cancer. Premalignant stem cells might be eliminated, for example, by selective restriction of specific amino acids (2).

      (1) Cancer arises from stem cells: opportunities for anticancer drug discovery. Drug Discov Today. 2015 20(11):1285-7. (https://www.researchgate.net/profile/Miguel_Lopez-Lazaro)

      (2) Understanding why aspirin prevents cancer and why consuming very hot beverages and foods increases esophageal cancer risk. Controlling the division rates of stem cells is an important strategy to prevent cancer. Oncoscience, 2015, 2(10), 849-856. http://www.impactjournals.com/oncoscience/files/papers/1/257/257.pdf


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 25, David Keller commented:

      There is no safe way to expose the skin to solar radiation. Obtain vitamin D from food or pills, not sunbathing

      This paper concludes: "The best way to obtain a given dose of vitamin D with minimal carcinogenic risk is through a non-burning exposure in the middle of the day, rather than in the afternoon or morning." This message is misleading, and potentially very dangerous from a public health point of view. Dermatologists and other physicians are trying to convince patients that there is no safe time of day to sunbathe, and no safe amount of solar radiation to the skin. To send patients out to obtain "non-burning exposure in the middle of the day" is courting disaster. They should be informed that the only recommended way to meet their vitamin D requirement is through diet or supplementation [1]. This paper could pose a setback to anti-sunbathing efforts and thereby contribute to the rising death rate from melanoma [2].

      References:

      1: "American Academy of Dermatology recommends that an adequate amount of vitamin D should be obtained from a healthy diet that includes foods naturally rich in vitamin D, foods/beverages fortified with vitamin D, and/or vitamin D supplements; it should not be obtained from unprotected exposure to ultraviolet (UV) radiation. Unprotected UV exposure to the sun or indoor tanning devices is a known risk factor for cancer."

      2: National Cancer Institute website, SEER (Surveillance, Epidemiology, and End Results Program. accessed on 1/7/2016. http://seer.cancer.gov/statfacts/html/melan.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 23, Clive Bates commented:

      The following conclusion drawn by the authors has no foundation in the study:

      In conclusion, our study strongly suggests that electronic cigarettes are not as safe as their marketing makes them appear to the public.

      There are three main concerns about how this conclusion is drawn:

      Firstly, the paper makes no assessment of electronic cigarette marketing claims. I am not aware of any marketing claims for complete safety and the authors do not provide any evidence that this is a common or even exceptional claim made by those marketing e-cigarettes.

      Secondly, a cell study of this nature can provide no useful information on the magnitude of the human health risks or whether such risks are even material at all. It is impossible, therefore, to say if any marketing claims are reasonable or misleading on the basis of this study.

      Thirdly, the study does provide evidence - though this is buried and barely referred to - that e-cigarette vapour has a much lower impact than tobacco smoke, at least when measured using this methodology. Cells were still alive in the e-cigarette vapour medium after 8 weeks, but all were dead in the tobacco smoke extract within 24 hours. It tends to support, therefore, the more common argument that e-cigarettes are much less hazardous than smoking, but may not be 100% safe.

      This has most recently been expressed by the Royal College of Physicians (London) in its major assessment of e-cigarette science, Nicotine without smoke: tobacco harm reduction

      Although it is not possible to precisely quantify the long-term health risks associated with e-cigarettes, the available data suggest that they are unlikely to exceed 5% of those associated with smoked tobacco products, and may well be substantially lower than this figure. (5.5)

      A more detailed and critical discussion of this paper and the related irresponsible media handling was published in May 2016 and should be read by anyone reading or citing Yu V, 2016

      Holliday R, Kist R, Bauld L. Commentary, Evidence-Based Dentistry (2016) 17, 2–3. doi:10.1038/sj.ebd.6401143

      Media handling

      In a quote in the media release one of the authors, Dr Wang-Rodriguez, asserted that evidence so far available shows no difference in risk between cigarette smoking and e-cigarette use.

      The overarching question is whether the battery-operated products are really any safer than the conventional tobacco cigarettes they are designed to replace. Wang-Rodriquez doesn't think they are. "Based on the evidence to date," she says, "I believe they are no better than smoking regular cigarettes."

      This extraordinary and unqualified assertion created worldwide news headlines (for example, E-cigarettes are no safer than smoking tobacco, scientists warn - The Telegraph).

      However, this assertion has no foundation whatsoever in the study. To the extent that the study shows anything about health risks, it shows this assertion to be false. Looking beyond this study, there is no science that supports such a claim anywhere and plenty that bluntly refutes it.

      As a result, the interpretation of the findings in this study and the subsequent media handling have attracted severe criticism, for example:


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 19, Clive Bates commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 16, Arnaud Chiolero MD PhD commented:

      As in breast cancer screening, being aware of the risk of overdiagnosis does not deter people from being screened.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 28, Clive Bates commented:

      The advocacy campaign criticised in this paper, Not Blowing Smoke, has responded to these criticisms in a letter to Dr. Jackler.

      It has also provided a public response, E-Cig Advocates Respond to Misleading Stanford Research: Paper published in British Medical Journal contains significant factual errors.

      Dr Michael Siegel of Boston University School of Public Health has scrutinised the authors' claims here and finds them to be unsubstantiated:

      in the case of the CDC ad parodies, the campaign is actually correcting misinformation being disseminated by the CDC and the California Department of Health Services. The CDC ads imply that the attempt to switch to vaping was the cause of the pneumothorax experienced by the smoker. This is misleading because it wasn't the vaping that caused the problem; it was the smoking. Had the smoker actually switched to vaping, it is highly unlikely that she would have experienced the lung collapse. It was her failure to switch to vaping that resulted in an adverse health consequence. Since the CDC is apparently urging smokers not to quit using e-cigarettes, we can expect many more adverse health consequences to occur as a result of this campaign, as thousands of smokers who might otherwise have quit by switching to vaping will no longer do so.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 12, Erick H Turner commented:

      To say "no publication bias" overstates the findings. Nonpublication is one aspect of publication bias (also known as reporting bias). Another important type is outcome reporting bias--also known as P-hacking, HARKing (hypothesizing after the results are known), statistical alchemy, or simply spin--but that aspect was unexamined.

      Even if nonpublication were the only type of publication bias, it would overstate the findings to suggest that this is a non-issue for industry-sponsored studies. With well under half of the studies unpublished, nonpublication is a significant problem both for studies that were industry-sponsored and for studies without such funding. Granted, there is no difference between the two categories, but that's different from suggesting, as the title does, that virtually all of the trials were published.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 21, Jacob H. Hanna commented:

      In this elegant review, Wu and Belmonte discuss in detail (Table 1 and page 515) experiments conducted by Theunissen et al. Cell Stem Cell 2014 Theunissen TW, 2014 claiming complete failure to detect human naïve PSC-derived cell integration in chimeric mouse embryos obtained following microinjection into mouse blastocysts, as was reported for the first time by our group (Gafni et al. Nature 2013). However, in this review the authors failed to discuss that, among other caveats, the imaging and cell detection methods applied by Theunissen et al. Cell Stem Cell 2014 were (and still are) not on par with those applied by Gafni et al. Nature 2013.

      Regardless, we find it important to alert the readers that Theunissen and Jaenisch have now revised (de facto, retracted) their previous negative results, and are able to detect naïve human PSC-derived cells in more than 0.5-2% of mouse embryos obtained (Theunissen et al. Cell Stem Cell 2016 - Figure 7) Theunissen TW, 2016 < http://www.cell.com/cell-stem-cell/fulltext/S1934-5909(16)30161-8 >. They now also apply GFP and RFP fluorescence detection and genomic PCR-based assays, which were applied by the same group to elegantly claim contribution of human neural crest cells to mouse embryos, albeit at low efficiency (Cohen et al. PNAS 2016, Cohen MA, 2016).

      While the authors of the latter recent paper avoided conducting advanced imaging and/or histology sectioning on such obtained embryos, we also note that the 0.5-2% reported efficiency is remarkable considering that the 5i/LA (or 4i/LA) naïve human cells used lack epigenetic imprinting (due to aberrant loss of DNMT1 protein that is not seen in mouse naive ESCs!! http://imgur.com/M6FeaTs ) and are chromosomally abnormal. The latter features are well known inhibitors for chimera formation even when attempting to conduct same species chimera assay with mouse naïve PSCs.

      Jacob (Yaqub) Hanna M.D. Ph.D.

      Department of Molecular Genetics (Mayer Bldg. Rm.005)

      Weizmann Institute of Science | 234 Herzl St, Rehovot 7610001, Israel

      Email: jacob.hanna at weizmann.ac.il

      Lab website: http://hannalabweb.weizmann.ac.il/

      Twitter: @Jacob_Hanna


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jul 14, Jacob H. Hanna commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 28, Steven Finney commented:

      Schultz and van Vugt (2015) (henceforth SVV) compare their Tap Arduino device against a PC-based system using the FTAP software package (Finney SA, 2001a), and claim that FTAP has high latency and variability (M = 14.6 ms, SD = 2.8). This claim is incorrect: as SVV show later in their paper, the single specific percussion pad that they used in their FTAP condition is responsible for the bulk of their measured latency, not FTAP. SVV also contains a number of additional false and misleading claims.

      FTAP is a software package that runs on a Linux PC; it must be attached to a MIDI input device (a keyboard or percussion pad) for input, and to a MIDI output device (a tone generator) for output. The end-to-end latency of an experimental system using FTAP (as measured by SVV) will be the (linear) sum of the input device latency, the output device latency, and the latency of the FTAP-MIDI system itself. As reported in Finney SA, 2001a and Finney SA, 2001b, an FTAP-MIDI system running on a standard 200 MHz Linux PC processed MIDI data with millisecond accuracy and precision, and this is easily replicated on current hardware (SVV themselves report such a result on p 5 of their article, and further data is provided on the FTAP web page at http://www.sfinney.com/ftap). Since FTAP adds no latency that is relevant at the millisecond level, the "FTAP" latency reported by SVV must be due to their input and/or output device.

      SVV thus err in confounding the MIDI I/O devices with the FTAP software. To compound the error, in their FTAP condition SVV used a single input device (a Roland HPD-15 percussion pad) that they knew to be defective (they report that it both missed responses and generated superfluous responses). They measure the audio latency of the percussion pad to be 9 milliseconds, and then repeatedly present the "FTAP" and "percussion pad" measurements as if they were independent, when the percussion pad latency is actually the major component of their "FTAP" latency. (This lack of independence in their conditions arguably invalidates their statistical measures). Their latency data thus demonstrate that their specific percussion pad was faulty but say nothing about FTAP.

      SVV also repeatedly and mistakenly assert that USB and MIDI processing will add significant latency in a PC system (e.g., p 2, "delays can be introduced by the [USB] polling speed"; p 7, "temporal noise...probably due to the MIDI-USB and USB-MIDI conversions"). This claim is also disproved by their own data. The FTAP distribution provides a loop test that rigorously tests MIDI and USB input and output (both software and hardware), along with FTAP itself. SVV report (p 5) that their FTAP configuration had a loop test result of 1.01 ms; this shows that neither USB nor MIDI processing added latency that was significant at the millisecond level. (See Finney SA, 2001a, Finney SA, 2001b, and the FTAP web page for further discussion of the configuration and interpretation of the loop test.)

      Finally, SVV suggest that Tap Arduino is an inexpensive replacement for FTAP; this is also incorrect. Tap Arduino is simply a button box that can produce auditory feedback; it cannot run any experiments without being attached to a PC. SVV provide no data that such a complete Tap Arduino system has millisecond accuracy. In fact, they explicitly note that using Tap Arduino in a synchronization experiment has issues with the "asynchrony between the Arduino response and the onset of computer-generated audio" (p 11) that may require "expensive options" to solve.

      SVV are to be commended for highlighting the importance of end-to-end measurement of systems such as FTAP and MAX, and for making a valiant, if flawed, first attempt at such measurements. However, the only useful thing that their article demonstrates is that the Roland HPD-15 percussion pad (with one particular set of configuration parameters) is not a suitable input device for millisecond data collection.

      A validated Arduino-based MIDI input device that could be used with FTAP or MAX would be a valuable contribution; unfortunately, Tap Arduino is not that device.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 10, C Baum commented:

      NB to readers: The article itself was not retracted; the journal merely removed its electronic reproduction that had been created in error. See publisher's message below.

      This article has been removed: please see Elsevier Policy on Article Withdrawal (http://www.elsevier.com/locate/withdrawalpolicy). Please note that a special edition of Canadian Journal of Diabetes (Volume 39 Supplement 4) was published electronically in error and has since been removed. This edition was planned as a print-only reproduction of a selection of earlier published articles from Canadian Journal of Diabetes: electronic publication of the edition created duplicate items, which were therefore removed. The original articles remain and are unaffected. Andrew Miller, Executive Publisher.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 19, David Keller commented:

      Higher physician spending on medical tests is associated with lower risk of malpractice lawsuit

      The observed association of higher physician spending on medical procedures and tests with lower risk of malpractice lawsuits does not seem surprising. In general, the need for a diagnostic test is assessed based on the pretest likelihood of disease, which cannot be determined with perfect accuracy. Playing the percentages in this way works well most of the time, except when it doesn't. All it takes is one missed diagnosis to ruin your whole decade. Even a completely useless test will force the physician to think about the patient, if only briefly, when the result is reported. That additional attention may be enough to trigger thoughts that lead to the correct diagnosis. In addition, there is the possibility that "shotgun" testing may return an informative result due to serendipity. Many effective pharmaceuticals are discovered by indiscriminate screening, and while this method cannot be justified on a cost-benefit-harm basis in clinical medicine, it is sometimes helpful when the approved diagnostic algorithm is stuck in the mud. You can't win the lottery if you don't buy a ticket.

      The above comment was published online by British Medical Journal as a "rapid response"; it is included here for the record, for convenience, and for the opportunity to engage readers in the U.S.

      Reference:

      BMJ 2015;351:h5516 URL: http://www.bmj.com/content/351/bmj.h5516/rr-5


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 25, Michael Peabody commented:

      We have received questions from a few researchers about our paper. Below are some answers in case they are of broader interest.


      Q: Did you evaluate how well methods performed without clade exclusion (i.e. no species were removed from the database being compared to)?

      A: Yes we did; the results are shown in Figure S1 in Additional File 2.

      Q: Why are there so few species in your test datasets for the evaluations?

      A: A smaller number of species in the test sets makes the system easier to comprehend, which enables us to better understand the performance of the different methods. The MetaSimHC dataset was used since it was previously published and proposed as an evaluation dataset, plus it contains diverse taxa. For the in vitro dataset, making a mock community containing many species is non-trivial. Note we chose species relevant to a larger study we were performing and encourage researchers to customize any test dataset to suit their analysis needs. We also purposefully included some more closely related organisms, to evaluate how well methods differentiated more closely related taxa.

      Q: If I'm just starting out, which method should I try?

      A: MEGAN5 has some great features for performing all sorts of analyses and visualizations - good for exploring your data and potentially a good first method for starting out. We did find in this analysis (and subsequent analyses) that MEGAN4 and MEGAN5 with default settings tend to overpredict reads (e.g. assign a read to a similar species in the same genus if the actual species was removed from the database), however they otherwise perform quite well with BLASTX-like programs (BLASTX, RAPSearch2, Diamond). If the goal is trying to analyze community composition rather than classify all of the reads, a marker based method (which we would expect to have high precision and run relatively quick) such as MetaPhyler or MetaPhlAn should work well. MetaPhlAn2 was recently released and should be worth checking out (http://www.nature.com/nmeth/journal/v12/n10/full/nmeth.3589.html). Kraken is an incredibly quick method to run and should identify species if they are in the database used. CARMA3 and DiScRIBinATE were methods we found to be more conservative/less prone to overpredict reads relative to other methods with similar sensitivity, so are worth checking out if you are concerned about reads being overpredicted (and thus predicting species that aren't actually in the sample). See the paper discussion for more information.

      Q: Why do certain methods give false species predictions?

      A: This is method dependent. For example, MEGAN4 and MG-RAST rely on relatively simple LCA approaches. If a read makes a single hit to one species and it reaches the bit-score threshold, the read will be assigned directly to the species level, regardless of how good the hit/alignment is in terms of other metrics such as % identity (although depending on parameter choices such as the minimum support parameter the read may be reassigned by MEGAN4).

      Q: Why is MG-RAST not in the clade exclusion analysis? I see it only in the analysis without clade exclusion.

      A: Many methods, including MG-RAST, could not be evaluated with clade exclusion because we didn't have access to manipulate their underlying database. Others were not evaluated simply due to time constraints, as there are a very large number of methods to perform metagenomics taxonomic sequence classification (and more keep coming out). Hopefully more methods will be evaluated in this way and we have evaluated more methods already since this publication (see below).


      We also have a few additional comments:

      Strain level classification can be quite difficult, and complicated by notable within-strain variability that can occur, so we only looked at classifications down to the species level.

      Pseudomonas fluorescens Pf-5 is now known as Pseudomonas protegens Pf-5.

      Due to the way we evaluated sensitivity and precision, more conservative methods which tend to assign reads to higher taxonomic levels if the species is not in the database showed higher levels of sensitivity and precision. However, if you are only interested in more specific taxonomic ranks like species and genus level classifications, these methods may not be as useful if they end up classifying few reads to these taxonomic levels. Less conservative methods make the trade-off of assigning reads to more specific taxonomic levels, with higher rates of overprediction.

      We have evaluated MEGAN5 on BLASTX, RAPSearch2, RAPSearch2 fast mode, and Diamond for the MetaSimHC dataset with 250bp simulated reads and overpredictions considered incorrect. As we move from BLASTX to Diamond in this list of heuristic search methods, we generally see a tradeoff of slightly decreased sensitivity, for a substantially improved (i.e. shortened) running time - see http://www.slideshare.net/Mpeabody/comparison-of-megan5-with-different-similarity-search-methods. However, precision stays about the same. We have compared MEGAN4 vs MEGAN5 using default parameters for each, and find MEGAN5 has slightly increased sensitivity and similar precision relative to MEGAN4.

      Hope this is helpful.

      -Michael Peabody


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 10, Peter Karp commented:

      Although I like the notion that access statistics could be used to direct curation efforts for a given database, note there is a converse view of this problem: if I'm going to write programs that compute across an entire database, I want the entire database to be as accurate as possible, otherwise my large-scale compute will not return accurate results. For example, if I'm going to build a metabolic model from a database, I want the whole metabolic network within the database to be well curated, regardless of how often researchers happen to query a particular enzyme. If I'm going to search the whole human genome for a gene that has certain properties, I want the whole genome to be annotated/curated as accurately as possible, regardless of how often each gene is visited by the research community. Thus, taking shortcuts in curation could backfire when we want to query across an entire biological system.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 12, Yuri Lazebnik commented:

      A role for parasites and cell fusion in the emergence of transmissible cancers?

      Yuri Lazebnik & George Parris

      The finding of cancerous tapeworm cells in the human body calls for a comprehensive mechanistic explanation to understand the significance of this discovery. One clue may be the prominent feature of the cells – aberrant cell fusion. Normally, cell fusion shapes some of the tapeworm organs. The observed promiscuous fusion, however, would be expected to generate abnormal cells Duelli D, 2003 and promote their clonal evolution by causing chromosomal instability Duelli DM, 2007, Duelli D, 2007, Lazebnik Y, 2014, by changing gene expression or causing dedifferentiation Bulla GA, 2010, Koulakov AA, 2012, and by serving as an equivalent of sexual reproduction, a powerful force in the evolution of species Parris G, 2006. Fusion with human cells, followed by unilateral genome reduction common to interspecies hybrids, or acquiring human DNA through other forms of gene transfer (both possibilities can be tested by searching for stretches of DNA containing both human and worm sequences) would "humanize" the cells, thus helping them bypass host defenses. Such a breeding ground could produce evolving and transmissible cancer cells that are a species of their own. If so, the study by Muehlenbachs et al. may be a glimpse into how transmissible cancers, including those in humans Lazebnik Y, 2015, can originate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 24, Clive Bates commented:

      This article has received an extensive critique: Smears or science? The BMJ attack on Public Health England and its e-cigarettes evidence review November 2015. [Disclosure: I am the author].

      The investigative article was published in the BMJ following PHE's publication of its E-cigarette evidence review update. The highly publicised finding of PHE's expert reviewers was that e-cigarettes are likely to be at least 95% lower risk than smoking - including an allowance for uncertainty.

      The BMJ's article attempted to undermine that assessment and link it, erroneously, to the tobacco industry. The critique linked above makes ten distinct criticisms of the BMJ's investigative journalism and its unwarranted hostility to PHE's work.

      1. Playing the man: the descent into personal attacks at the expense of substance
      2. Exploiting the ambiguity of graphics: creating misleading connections between people
      3. Failure to examine the underlying science: is the PHE 95% relative risk estimate actually reasonable?
      4. Failure to acknowledge the problem PHE is tackling: widespread misperception of e-cigarette risks compared to smoking
      5. Inappropriate dismissal of quantified estimates: these are useful to help people anchor risk perceptions
      6. Hypocritical and abusive use of conflict of interest disclosure: it is for transparency, not disparagement
      7. Bias and imbalance: selective quoting and inadequate scrutiny of PHE's critics
      8. Unaccountable sources: reliance on anonymous hostile briefing by public officials
      9. Activism rather than objectivity: are BMJ and Lancet becoming protagonists and losing their neutrality?
      10. A new 'scream test': why has PHE's claim created such consternation?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 01, GT Hayman commented:

      The abstract above is not the correct one for the title. It is actually the abstract for the first paper under 'Similar articles'.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 24, Andrea Messori commented:

      Meta-analysis on thrombectomy in acute ischemic stroke: is the degree of heterogeneity influenced by the type of device?

      In the overall analysis of 8 randomized controlled trials (RCTs), the meta-analytic results reported by Badhiwala et al (1) for the end-point of functional independence at 90 days contained a significant degree of heterogeneity (I<sup>2</sup> = 75.4%, p-value for heterogeneity <0.01 according to the results shown in Figure 1 published on page 1839 by these authors; data from 8 RCTs). Since the presence of heterogeneity is considered a negative factor in terms of reliability of the meta-analytic results (2-5), this finding represents an important limitation of the study by Badhiwala et al.; in their article, these authors have correctly pointed out this limitation. Likewise, the meta-analysis recently published by Elgendy et al. (6) on the same topic shared the same limitation and in fact contained a significant degree of heterogeneity (I<sup>2</sup> = 54.0%, p-value for heterogeneity = 0.021 according to the results shown in Figure 1 published on page 2501 by these authors; data from 9 RCTs).

      In their papers, Badhiwala et al. (1) and Elgendy et al. (6) have discussed which factors could be responsible for this significant heterogeneity. For example, the time window for the thrombectomy procedure, the type of device, and the use of functional perfusion imaging before randomization were thought to be implicated. In this context, the type of device is likely to play an important role, as shown in previous studies (7).

      For this reason, given that the Solitaire device was the one most commonly employed in this clinical material, we identified which trials among those examined by Badhiwala et al. and by Elgendy et al. were based on the comparison between a group receiving the Solitaire device (treatment group) and the control group. The following four trials were found to be based on this comparison: ESCAPE, EXTEND-IA, SWIFT-PRIME, and REVASCAT. In these four trials, the crude rates for the end-point of functional independence at 90 days were the following (comparison: Solitaire group vs control group): ESCAPE, 43/147 vs. 87/164; EXTEND-IA, 14/35 vs. 25/35; SWIFT-PRIME, 33/83 vs. 59/98; REVASCAT, 29/103 vs. 45/103 (data shown in Figure 1 published on page 1839 by Badhiwala et al).

      We have analyzed these data by applying the random-effects model (DerSimonian and Laird method, as implemented in the Open Meta-analysis software) and we have obtained the following meta-analytic results: odds-ratio = 2.47 (95% confidence interval [CI], 1.84 to 3.33; I<sup>2</sup> = 0%, p-value for heterogeneity = 0.689); relative risk = 1.66 (95%CI, 1.40 to 1.97; I<sup>2</sup> = 0%, p-value for heterogeneity = 0.821). This re-analysis is of some interest because, after restricting the analysis to a single device, the degree of heterogeneity was markedly reduced, changing from a statistically significant level in the overall analysis to 0%. In conclusion, our re-analysis indicates that the type of device can play an important role in influencing the results of these two meta-analyses.
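      The pooled odds ratio can be checked directly from the crude counts above. The following is a minimal sketch of the DerSimonian and Laird calculation, not the authors' actual software run; note that the arm ordering is my assumption, taking the second figure of each pair as the Solitaire arm, since that assignment reproduces the reported estimate.

```python
import math

# Crude counts (events / total) from the four Solitaire trials listed above.
# Arm ordering is an assumption: second pair in each listing = Solitaire arm.
trials = {
    "ESCAPE":      ((87, 164), (43, 147)),
    "EXTEND-IA":   ((25, 35),  (14, 35)),
    "SWIFT-PRIME": ((59, 98),  (33, 83)),
    "REVASCAT":    ((45, 103), (29, 103)),
}

log_or, var = [], []
for (a, n1), (c, n2) in trials.values():
    b, d = n1 - a, n2 - c
    log_or.append(math.log((a * d) / (b * c)))     # per-trial log odds ratio
    var.append(1 / a + 1 / b + 1 / c + 1 / d)      # its approximate variance

# Fixed-effect weights, pooled estimate, and Cochran's Q
w = [1 / v for v in var]
pooled_fe = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, log_or))
df = len(trials) - 1

# DerSimonian-Laird between-trial variance; here Q < df, so tau2 = 0 and I2 = 0%
c_factor = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_factor)
i2 = max(0.0, (q - df) / q) * 100

# Random-effects pooling (reduces to fixed-effect when tau2 = 0)
w_re = [1 / (v + tau2) for v in var]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
or_pooled = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
```

      Under that arm assignment, the sketch yields an odds ratio of 2.47 (95% CI, 1.84 to 3.33) with I<sup>2</sup> = 0%, matching the figures reported above.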

      References

      1) Badhiwala JH, Nassiri F, Alhazzani W, Selim MH, Farrokhyar F, Spears J, Kulkarni AV, Singh S, Alqahtani A, Rochwerg B, Alshahrani M, Murty NK, Alhazzani A, Yarascavitch B, Reddy K, Zaidat OO, Almenawer SA. Endovascular Thrombectomy for Acute Ischemic Stroke: A Meta-analysis. JAMA. 2015 Nov 3;314(17):1832-43.

      2) Gagnier JJ, Morgenstern H, Altman DG, Berlin J, Chang S, McCulloch P, Sun X, Moher D; Ann Arbor Clinical Heterogeneity Consensus Group. Consensus-based recommendations for investigating clinical heterogeneity in systematic reviews. BMC Med Res Methodol. 2013 Aug 30;13:106.

      3) Pigott T, Shepperd S. Identifying, documenting, and examining heterogeneity in systematic reviews of complex interventions. J Clin Epidemiol. 2013 Nov;66(11):1244-50.

      4) Laurin D, Carmichael PH. Combining or not combining published results in the presence of heterogeneity? Am J Clin Nutr. 2010 Sep;92(3):669-70,

      5) Bollen CW, Uiterwaal CS, van Vught AJ. Pooling of trials is not appropriate in the case of heterogeneity. Arch Dis Child Fetal Neonatal Ed. 2006 May;91(3):F233-4.

      6) Elgendy IY, Kumbhani DJ, Mahmoud A, Bhatt DL, Bavry AA. Mechanical Thrombectomy for Acute Ischemic Stroke: A Meta-Analysis of Randomized Trials. J Am Coll Cardiol. 2015 Dec 8;66(22):2498-505.

      7) Messori A, Fadda V, Maratea D, Trippoli S. New endovascular devices for acute ischemic stroke: summarizing evidence by multiple treatment comparison meta-analysis. Ann Vasc Surg. 2013 Apr;27(3):395-6.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 19, Kenneth Witwer commented:

      Shu et al have conducted a fascinating analysis of miRNA sequences. It is unclear, however, how this analysis relates to dietary, circulating, and "transportable" categories of RNA that are hypothesized to be absorbed from the diet in functional form. Although the stated intention was "to heavily rely on experimental data to identify features that can differentiate secreted miRNAs from the rest," the data in question are either unreliable (the circulating miRNA data is questionable, based on only one biological and technical measurement), incompletely described (assignment of "exosomal" status based on the two cited databases is unclear) or missing (the pivotal milk uptake experiment--unless I'm missing something). Thus, the practical validity of the sequence analysis cannot be assessed.

      Endogenous miRNAs are classified by Shu et al as circulating or not based on a list from Weber JA, 2010. This preliminary publication reported only one qPCR threshold cycle measurement for each of several hundred miRNAs using only one sample of pooled plasma. Other issues, such as a lack of correlation with results of other studies and a failure to detect abundant plasma miRNAs, such as miR-16 and miR-223, were previously noted in Watson AK, 2012. Thus, the "circulating" classification made by the authors is not supported by reliable data. Perhaps the authors might wish to revisit their study with a more comprehensive ranking of plasma miRNAs supported by reliable public sequencing and microarray data.

      Which miRNAs are packaged into extracellular vesicles is an even more complicated question than simple presence in circulation. Since the majority of miRNAs in circulation appear to be in free protein complexes, not EVs, contaminants of EV preparations have strong potential to skew experimental results. It would be helpful if the authors could clarify how they used the EVpedia and ExoCarta databases to identify EV-packaged miRNAs and how this information (presumably including abundance ranks?) was used in the study.

      Also unclear was where to obtain the sequencing data from the described milk feeding experiments. Although all data were said to be found on a university website, I could not find the sequencing data there or elsewhere. A public link to these data and further clarification of how they were used to validate the findings would be very helpful, as well as consistent with journal guidelines. Perhaps I missed this link?

      I would note that the evidence in support of the dietary miRNA transfer hypothesis described as "unambiguous" consists of a study by the authors. The results of this study have not been confirmed. Alternative hypotheses (Witwer KW, 2014) were omitted, as well as published evidence that contradicts the hypothesis, most strikingly a recent study (Title AC, 2015) in which no miRNA uptake was observed from milk in miR-200c and miR-375 knockout mouse pups.

      In conclusion, the sequencing analysis looks quite interesting, but the underlying assumptions are debatable at best.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 08, UFRJ Neurobiology and Reproducibility Journal Club commented:

      The article by Schreiber et al. touches on an interesting subject: namely, the relation between actin remodeling and synaptic plasticity. However, some of the experiments presented raise concerns related to the analysis of nested data, which violates the assumption of independence between experimental units required by conventional statistical methods – a rather common problem in neuroscience papers (Aarts E, 2014).

      On figure 3 (K-O) the legend states that the sample size refers to clusters (i.e. GFP-positive PURA-containing mRNP particles) observed in hippocampal cell cultures derived from transgenic and wild-type animals. However, there is no mention of how many different animals were used. If more than one cluster measured came from the same animal, they do not constitute independent samples; thus, statistics based on clusters should not be used to evaluate hypotheses relating to the effect of genotype. However, it is unclear from the text whether this was the case. On figure 7 (F) it is stated that the 15/16 slices used were obtained from 9/10 animals per genotype; therefore, some of the animals contributed more than one slice, leading to the same problem of non-independence between units. Still on figure 7 (A-B) and on figure 8 (E), the sample size is given in number of cells, and again it is unclear whether they came from different animals or not. The presence of nested data in an experiment tends to make experimental values obtained from the same animal/cell more similar than if they were obtained from different animals. This causes differences between units within experimental groups to be smaller than those between groups, and thus increases the type 1 error rate of statistical tests that assume independent observations (such as t tests and ANOVA), leading to more false-positive results. A simple correction for this problem would be to calculate a mean value for each animal, and then use animal-level statistics for each comparison. Alternatively, multi-level analysis can be used to try to separate variances at the different levels (e.g. slice-level vs. animal-level variation) (Aarts E, 2014).
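      As a toy illustration of the animal-level correction suggested above (all numbers are hypothetical, not taken from the article), nested per-cluster measurements can be collapsed to one mean per animal before any between-genotype test, so the test runs on independent units:

```python
import statistics

# Hypothetical cluster measurements nested within animals:
# three animals per genotype, four clusters measured per animal.
wild_type = {
    "wt_animal_1": [1.10, 1.25, 1.05, 1.20],
    "wt_animal_2": [0.95, 1.00, 1.05, 0.90],
    "wt_animal_3": [1.30, 1.35, 1.25, 1.40],
}
transgenic = {
    "tg_animal_1": [1.50, 1.55, 1.45, 1.60],
    "tg_animal_2": [1.20, 1.15, 1.25, 1.10],
    "tg_animal_3": [1.65, 1.70, 1.60, 1.75],
}

def animal_means(group):
    """Collapse nested observations to one mean per animal (the independent unit)."""
    return [statistics.mean(values) for values in group.values()]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

wt = animal_means(wild_type)   # 3 independent values per group, not 12
tg = animal_means(transgenic)
t = welch_t(tg, wt)            # genotype comparison at the animal level
```

      The effective sample size per group drops from 12 clusters to 3 animals, which is the honest n for any claim about genotype.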

      Another concern in this article is the analysis of subgroups. On figure 3 (F-O), the clusters were divided into 2 groups based on their mobility, and each of these groups was evaluated in 3 parameters: total distance moved, maximal velocity and maximal distance to origin. This amounts to 6 analyses in total, 2 of which had a statistically significant result. It is not clear from the methods whether dividing the clusters was an a priori decision, or whether the authors perceived the whole group as heterogeneous and therefore decided to divide it. Such a posteriori analysis of subgroups will inevitably increase the number of comparisons performed, and thus the chance of obtaining false-positive results by chance (Lagakos SW, 2006). In this case, statistical significance thresholds could have been corrected by the authors to account for the number of comparisons generated by subgroup analysis.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.