    1. On 2016 Mar 06, Sanjay Srivastava commented:

I have written elsewhere about the analyses in the Gilbert et al. Technical Comment. Some key points:

(1) The comment proposes to define a "successful" replication as one where the replication effect is contained within the original study's confidence interval. However, it interprets this based on an incorrect definition of a confidence interval. Even more seriously, in my view, the comment does not adequately address how using confidence intervals to gauge replication success will be affected by the power of the original studies (an illustrative simulation sketch of this point appears below).

(2) The comment claims that high-powered replications have a high success rate, and bases this claim on Many Labs 1 (Klein et al., 2014), stating that ML1 had a "heartening" 85% success rate. However, that is incorrect. Using the same replication metric Gilbert et al. define at the start of their comment and use everywhere else in their Technical Comment, Many Labs 1 had only a 40% success rate, which is similar to the Reproducibility Project.

      (3) The analysis of replication "fidelity" is based on original authors' judgments of how well replication protocols matched original protocols. However, the analysis by Gilbert et al. combines 18 nonresponses by original authors with 11 objections, labeling the combined group "unendorsed." We do not know whether all 18 nonresponders would have lodged objections; it seems implausible to assume that they would have.

      In my view these and other issues seriously undermine the conclusions presented in the Gilbert et al. technical comment. Interested readers can see more here: Evaluating a New Critique of the Reproducibility Project
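[Editorial illustration, not part of the original comment.] The following is a minimal simulation sketch of the power issue raised in point (1). It assumes a single true standardized effect shared by every original study and every replication, two-group designs, and an approximate standard error of sqrt(2/n) for the effect estimate; the effect size and sample sizes are illustrative choices, not values from Gilbert et al., Many Labs 1, or the Reproducibility Project.

```python
# Sketch: how the "replication estimate falls inside the original 95% CI"
# criterion depends on the original study's precision, even when every
# replication estimates exactly the same true effect.
# Assumed values (true_d, n_rep, the n_orig grid) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_d = 0.3      # assumed true standardized effect, identical in all studies
n_rep = 100       # per-group sample size of every replication (held constant)
n_sims = 20_000

def ci_containment_rate(n_orig):
    """Fraction of replications whose estimate lands inside the original 95% CI."""
    se_orig = np.sqrt(2.0 / n_orig)   # approx. SE of a standardized mean difference
    se_rep = np.sqrt(2.0 / n_rep)
    d_orig = rng.normal(true_d, se_orig, n_sims)
    d_rep = rng.normal(true_d, se_rep, n_sims)
    lo, hi = d_orig - 1.96 * se_orig, d_orig + 1.96 * se_orig
    return np.mean((d_rep >= lo) & (d_rep <= hi))

for n_orig in (20, 50, 100, 400):
    print(f"original n per group = {n_orig:4d}: 'success' rate = {ci_containment_rate(n_orig):.2f}")
```

Under these assumptions, smaller (lower-powered) original studies have wider confidence intervals and therefore show a higher apparent "success" rate, even though nothing about the replications has changed; this is the sense in which the criterion is confounded with original-study power.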


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 05, Dorothy V M Bishop commented:

      My reading of this comment is that it maintains we should not expect high reproducibility for psychological studies because many are looking at effects that are small and/or fragile - in the sense that the result is found only in specific contexts. If that is so, then there is an urgent need to address these issues by doing adequately powered studies that can reliably detect small effects, and, once this is done, establishing the necessary and sufficient conditions for the effect to be observed. Unless we do that, it is very hard to distinguish false positives from effects that are genuine, but small in size and/or fragile - especially when we know that there are two important influences on the false positive rate, namely publication bias and p-hacking. I discuss these issues further on my blog here: http://deevybee.blogspot.co.uk/2016/03/there-is-reproducibility-crisis-in.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 11, Eran Elhaik commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 09, Eran Elhaik commented:

A response to the criticism, "Responding to an enquiry concerning the geographic population structure (GPS) approach and the origin of Ashkenazic Jews - a reply to Flegontov et al." by Das et al. (2016), is here: https://arxiv.org/abs/1608.02038


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Aug 04, Debbie Kennett commented:

      Two critiques of this paper, from both a linguistics and a genetics perspective, have now been published:

      1) Aptroot M, 2016 “Yiddish language and Ashkenazic Jews: a perspective from culture, language, and literature”. Genome Biol Evol. 2016 Jul 2;8(6):1948-9.

      2) Flegontov P, 2016 “Pitfalls of the geographic population structure (GPS) approach applied to human genetic history: A case study of Ashkenazi Jews”. Genome Biol Evol. 2016 Jul 7. pii: evw162. [Epub ahead of print].


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 01, David Reardon commented:

Once again, these researchers have failed to provide a breakdown of how a history of prior pregnancy loss (miscarriage or termination of pregnancy) affects mortality rates. This is a serious oversight, since other studies have shown that a history of pregnancy loss is a significant risk factor for elevated mortality rates.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 14, Eduardo Eyraa commented:

This article did not cite a previous work, Sebestyén E, 2015, in which isoform switches in cancer were already described between tumor and normal samples using TCGA data for 9 different cancer types, as well as between subtypes for breast tumors, lung squamous carcinoma, and colon tumors from TCGA. In that previous article, a specific switch in CTNND1 was already described for basal breast tumors. If you are considering citing this publication, please consider whether you should also cite Sebestyén E, 2015.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 24, Ben Goldacre commented:

      This trial has the wrong trial registry ID associated with it on PubMed: both in the XML on PubMed, and in the originating journal article. The ID given is NCT0239396. We believe the correct ID, which we have found by hand searching, is NCT02393976.

This comment is being posted as part of the OpenTrials.net project [1], an open database threading together all publicly accessible documents and data on each trial, globally. In the course of creating the database, and matching documents and data sources about trials from different locations, we have identified various anomalies in datasets such as PubMed, and in published papers. Alongside documenting the prevalence of problems, we are also attempting to correct these errors and anomalies wherever possible, by feeding back to the originators. We have corrected this data in the OpenTrials.net database; we hope that this trial’s text and metadata can also be corrected at source, in PubMed and in the accompanying paper.

      Many thanks,

      Jessica Fleminger, Ben Goldacre*

      [1] Goldacre, B., Gray, J., 2016. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials 17. doi:10.1186/s13063-016-1290-8 PMID: 27056367

* Dr Ben Goldacre BA MA MSc MBBS MRCPsych, Senior Clinical Research Fellow, ben.goldacre@phc.ox.ac.uk, www.ebmDataLab.net, Centre for Evidence Based Medicine, Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 20, Amanda Capes-Davis commented:

      STR loci used in today's STR profiling kits come from multiple chromosomes, and the technique can generate a full STR profile even in the presence of microsatellite instability. So I find the absence of STR loci in these profiles from CABA I puzzling.

      Some additional testing is needed to further explore these findings.

1) It is important to perform separate species testing to confirm that CABA I is of human origin. STR profiling is typically considered species-specific; however, it has been clearly documented that related species can be detected, for example by Almeida et al (http://www.ncbi.nlm.nih.gov/pubmed/22059503), Ren et al (http://www.ncbi.nlm.nih.gov/pubmed/22206866), and others. STR profiles generated from non-human species can produce patterns similar to those seen here.

      2) Human cell line STR profiles have clearly defined quality criteria, including some requirements that are unique to cell lines (see ANSI/ATCC ASN-0002-2011 Authentication of Human Cell Lines: Standardization of STR Profiling). To my eye, the electropherograms seen here do not meet all quality criteria. It would be helpful to see other cell lines used as positive controls alongside CABA I data, to demonstrate adherence to quality criteria, in addition to the "typical male" and "typical female" results shown here.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 16, David Keller commented:

      Your helpful explanations clarify the study protocol very well. I have deleted my erroneous comments which were based, as you correctly noted, on a misunderstanding. Thank you, Dr. Buse.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Apr 05, John B Buse commented:

I believe that you have misunderstood the protocol. At randomization, those assigned to the combination of insulin degludec and liraglutide (IDegLira) stopped their glargine and started 16 dose steps of IDegLira (16 units of degludec and 0.6 mg of liraglutide). They then titrated IDegLira twice a week, based on their average fasting plasma glucose, by -2, 0, or +2 dose steps of IDegLira, aiming for a fasting plasma glucose of 72-90 mg/dl (see the sketch after this comment). The maximum dose of IDegLira was 50 dose steps (50 units of insulin degludec and 1.8 mg of liraglutide). The IDegLira patients did not continue the glargine. So, our conclusion is that for patients inadequately controlled on glargine 20-50 units, switching glargine to IDegLira is superior to continued titration of glargine. There is a remaining question as to what to do with a patient inadequately controlled on the maximum dose of IDegLira. That has not been studied as yet.
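[Editorial illustration, not study code and not part of Dr. Buse's comment.] The sketch below restates the titration rule described above, assuming the -2 step applies when the mean fasting plasma glucose is below the 72-90 mg/dl target, 0 within it, and +2 above it, with the dose capped at 50 dose steps; the lower floor of 0 dose steps is an added assumption for completeness, not stated in the comment.

```python
# Illustrative paraphrase of the IDegLira titration rule described above.
def titrate_ideglira(current_dose_steps: int, mean_fpg_mg_dl: float) -> int:
    """Return the new IDegLira dose (in dose steps) after one twice-weekly titration."""
    if mean_fpg_mg_dl < 72:        # below the 72-90 mg/dl target: reduce by 2 dose steps
        adjustment = -2
    elif mean_fpg_mg_dl <= 90:     # within target range: no change
        adjustment = 0
    else:                          # above target: increase by 2 dose steps
        adjustment = +2
    # Cap at the 50-dose-step maximum; the floor of 0 is an assumption.
    return max(0, min(50, current_dose_steps + adjustment))

# Example: a patient at the 16-dose-step starting dose with mean FPG of 130 mg/dl
print(titrate_ideglira(16, 130))   # -> 18
```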


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Mar 10, David Keller commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Mar 09, John B Buse commented:

      As explained in the paper, the comparison was made to glargine to examine the common clinical scenario of inadequately controlled diabetes treated with basal insulin. Glargine is the most commonly prescribed insulin formulation in the world. There are prior comparisons of IDegLira versus degludec in DUAL-1 (Gough, et al. Lancet Diabetes Endocrinol. 2014 PMID: 25190523) and in DUAL-2 (Buse, et al. Diabetes Care 2014. PMID: 25114296). There are also studies that have compared glargine to degludec head to head, e.g., Rodbard Diabet Med. 2013 PMID: 23952326. Thank you for your comment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Mar 09, David Keller commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 21, Jean-Michel Claverie commented:

An alternative interpretation of these results has been proposed in: Claverie JM, Abergel C. CRISPR-Cas-like system in giant viruses: why MIMIVIRE is not likely to be an adaptive immune system. Virol Sin. 2016 Jun 13. [Epub ahead of print] PubMed PMID: 27315813. See also: https://pubpeer.com/publications/3480B9DE6C9330B0747034C330BA6A


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 03, Lydia Maniatis commented:

I don't see why this discussion is still going on. Anyone who has read PubPeer's blog posts will understand that (a) they have solid arguments based on the public interest, and thus (b) they have no reason to back down on their publishing model, which (c) is very popular with users, and (d) no one can make them change it against their will. End of story. MB's claims, on the other hand, turn a blind eye to important facts.

      Given MB's general hostility to anonymity in the context of scientific discourse, I'm having trouble imagining how he rationalises the anonymity of reviewers of submissions for publication. Why isn't it a problem that the potential critic of the submission is cravenly hiding (as he might put it) behind anonymity? What's the danger of being up front?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Apr 02, Boris Barbour commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Apr 02, Boris Barbour commented:

      The important issue here is the reluctance/refusal of Michael Blatt to engage in a substantive analysis of the pros as well as the cons of anonymous commenting, a recurring theme in this thread.

      The questions about the negotiations to publish a reply to Michael's original editorial in Plant Physiology represent a distraction from the more fundamental issues. However, because he is creating the impression that we have been untruthful and have something to hide, I reluctantly respond again on this point.

      Michael, as I said, we felt your initial suggestions were unfair. They did improve when we pushed back, as I have been happy to confirm. However, as I did not spread misinformation, I'm not apologising for it. Specifically, that you attempted to impose constraints that (at least we felt) were unfair is true, so I'm not apologising for having said that either.

      You requested permission to publish our email exchange. We do so below.

      Some context will be helpful in understanding why we did not reach agreement. Michael had just published a 3-page editorial in which he deployed a combination of insinuation and plausible deniability to associate us with notions such as voyeurism (peeping at published articles...), going through dirty laundry and money grabbing. A completely neutral editor-in-chief covering a controversial issue might have considered allowing us a reply of the same length, published at the same time (we must have missed the invitation to do so...) or as soon as possible afterwards. But Michael was in the conflicted position of also being chief prosecutor, having turned Plant Physiology into his personal propaganda vehicle (three editorials attacking PubPeer so far). Although there was a degree of mistrust and we were skeptical that he would be able to dissociate his conflicted roles, we decided to explore the possibility of replying to the same audience. As Michael was well aware, speed was of the essence, with any delay affording him the comfort of monopolising the "news cycle". From our point of view, truth was struggling to get her boots on, and any delay would reduce the effectiveness of our response.

I have edited email addresses, boilerplate (signatures and embedded emails) and some whitespace in the chain below (posted in one or more comments below because of PMC size limits). Emphasis and text in square brackets have been added by me.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Apr 02, Michael R Blatt commented:

      Boris

I shall take this as your apology for propagating misinformation and suggesting that I “tried to impose various unfair constraints” on any response from you and your PubPeer colleagues.

      Thank you for your (eventual) candour.

      Mike


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2016 Mar 31, Boris Barbour commented:

      Dear Michael,

      There is no real contradiction on the format negotiations. After some to-and-fro, your final offers were indeed relatively generous given "journal constraints". But by that time we had come to realise that we didn't need to satisfy ourselves with the "halfway" we were working towards. As anybody who has tried to correspond with a journal knows, the process can feel extremely restrictive compared to the freedom and immediacy of a blog post.

Anyway, the point of the above comment was to rapidly correct three possible implications ambiguously left open (and predictably seized upon by a Twitter denizen): i) that you'd offered to give us equal airtime, ii) that you'd done so spontaneously, and iii) that we hadn't felt able to counter your arguments. That's why I gave a bit more background about the process.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2016 Mar 31, Michael R Blatt commented:

      Boris

Again, I think you do me a disservice. Given the constraints of publishing in a scientific journal, I did my utmost to meet you halfway and not limit your effectiveness (for example, engineering a way around the time lag between submission, acceptance, and final publication so that your response might be published instantly). The email thread I refer to above bears this out. Once more, I am happy to share it here with your approval (yes, vetoes can work both ways).

As for any mis-reading of the latest editorial (or any of the others, for that matter), I can only say that it is always possible to take a statement out of context and twist it into something altogether different, no matter how precise the text. The context in this case was that of an offer to respond in Plant Physiology, nothing more or less. If this was misconstrued to imply that you declined to make any response whatsoever (which, clearly, is not the case as you've noted), I can only say that this was not my intention.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2016 Mar 31, Boris Barbour commented:

      Dear Michael,

      You made several suggestions ("tried to impose constraints") that would, coincidentally of course, have limited the effectiveness of our reply to your editorial: shorter, later, hobbled, elsewhere. I didn't invent the list of issues I gave (and the problem was of course your veto not ours). Sure, the restrictions weren't untypical of journal correspondence and the power that editors are accustomed to wielding. And, yes, we might have been able to work something out. But we decided it was just not worth the struggle when we could post instantly in our desired format. In one way we acknowledge that was a mistake, because it has proven exceptionally difficult to engage you in any discussion of specifics.

      We didn't say that the statement about us declining to publish in Plant Physiology was wrong, we just felt that it might mislead people (just as it misled Leonid Schneider) into believing that we had avoided the debate. Those tempted by that interpretation are invited to read this thread, and, of course, our replies to your editorials.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    8. On 2016 Mar 31, Michael R Blatt commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    9. On 2016 Mar 31, Boris Barbour commented:

      Michael Blatt has published yet another editorial attacking PubPeer, containing an incomplete and potentially misleading statement:

      "An offer to respond had been made to Brandon Stell of PubPeer, who ultimately declined."

      We at PubPeer requested the opportunity to put our case to the readers of Michael's original editorial. He agreed in principle but tried to impose various unfair constraints ("no more than 3 points", "limit on text", interleaved rebuttals, publication veto, etc). In addition, as the timing of the new piece shows, we might have had to wait 5 months and Michael's decision for our reply to appear. The process reminded us why journal correspondence sucks so much and indeed why PubPeer was created in the first place. So we decided to publish our response immediately as a blog post.

Readers of this thread can follow our largely unsuccessful attempt to draw Michael into a joint, open and even-handed evaluation of the pros and cons of anonymous post-publication peer review. Given his preference for preaching (several times) to a captive and passive audience, we shall just have to wait and see how scientists in general, and the plant community in particular, which is by no means united on this matter, vote with their feet.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    10. On 2016 Mar 23, Jaime A. Teixeira da Silva commented:

From my personal experience at PubPeer, what I have observed is that there are all kinds of anonymous commenters: those with a desire to hold an academic discussion, as if in a journal club; those with valid, succinct claims; those with wild, but plausible claims; those with wild, and sometimes unsubstantiated, claims; those with simple observations or concerns; those who have come to troll; and those who have come to abuse, make libelous comments, or harass.

      Comments by the last group tend to be flagged and removed by the moderator(s), who are likely Boris Barbour and the other two PubPeer management figures. But all others remain, which is what makes PubPeer so conflictual, because it has attractive and highly unattractive aspects.

One will never know the identity of all these types of anonymous commentators, and except for the use of extremely bad language, slang, or downright libelous name-calling (e.g. calling someone a fraud), we need this type of platform to allow a free level of discussion that is never possible with any journal's comment platform. Most scientists will know how to differentiate the wheat from the chaff, and can discern valid criticisms or concerns from noise, evasion or deflection. The most important thing is whether what is written, either as a bounce from PubMed Commons or directly here at PubPeer, has any value, and to whom.

In my opinion, PubPeer serves for me as a platform to begin to show how sad the state of affairs is in plant science. Comments might not always be perfect, or tone-perfect, and that will ultimately create enemies or irritate those who oppose you or your ideas. But this is a risk that comes with using an anonymous tool. Those who use PubPeer should know that these risks exist.

      I think the anonymous vs named argument is a dead horse. It is quite obvious that there are three groups: those who understand, and appreciate, anonymity; those who will always be skeptical and critical of it, and ultimately shun it; and those who see some benefit, and also some risk, but who would likely never venture to use it, either because they are of a traditional class of scientists/editors, or because they fear.

I think that, ultimately, what is lacking is the respect and recognition of one of these groups by the other two. And because there is a lack of recognition and/or respect, there will always be frustration and passionate defense of home-turf opinions. That is so evident in the responses by select members of the public or scientific community to Prof. Blatt's two editorials.

      I can personally see where Prof. Blatt's fears and concerns are coming from, and I respect his opinion and point of view, because that's all the editorials represent. I might not necessarily agree with his views in their entirety, but I understand that we need to respect his position, or at worst, respect his position in a civil way. Ultimately, one has to ask: has Blatt been a valuable asset to the plant science community, even if within his own restricted niche at Plant Physiology, and has something positive come from these two editorials?

      The answers to these two questions are more than evident.

I thus suggest a new trajectory, at least for plant science. PubPeer has shown, in already hundreds of cases, that there are problems with the plant science literature. Problems that leaders like Blatt, Kamoun, or Zipfel neither knew of nor detected. But problems that ultimately drew them into the conflict that is, broadly speaking, a literature that is problematic, even in the top-level plant science journals.

      We only need two things to make this recipe of correcting the literature work: a) the recognition that there are problems and that they need to be corrected; b) action, i.e., getting editors and publishers to recognize these errors formally, and correcting the ills of the traditional peer review process.

      Unless a) and b) take place, this whole discussion surrounding the anonymous voice is meaningless.

      In closing, I should add that not all anonymous commentators are the same, and that not all necessarily agree with the position, or choice of words, employed by Boris Barbour or PubPeer.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    11. On 2016 Mar 28, Jens Sommer commented:

      Dear Michael,

I know about the complications of double-blind review and disclosure of reviewers. As the reviewers see the reference list, and self-reference is part of scientific writing, it is almost impossible to hide the authors' identity, so most authors don't care.

About disclosure of the reviewers: it is not essential to insist on disclosure. Let the reviewers decide. In addition, allow the authors to rate the quality of the reviews (anonymously?).

      As long as we want to improve our knowledge (and scientific progress) we will need skilled reviewers, not just many reviewers. But again, this has been discussed before.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    12. On 2016 Mar 22, Michael R Blatt commented:

      Dear Jens,

      Forgive me for not responding to your last paragraph. These issues have been addressed time and again (e.g. in my editorial and in the discussion below with Boris Barbour).

As for your two numbered points, you may be aware that several journals have tried and/or do offer a double-blind review process, including several of the Nature journals. Only a very tiny percentage of authors ever take up this option, however, and realistically it is often difficult to hide the authors' identities (see the editorial from Chris Surridge in Nature Plants last September for more information).

      Complete disclosure, as you propose in your second point (and if I understand you correctly), is also problematic. I think most editors would argue that, were they to insist on such disclosure, then it would be very difficult indeed to secure reviewers. Of course, editors are generally acknowledged; all journals publish the list of their editors on the journal masthead and some journals include the names of the handling editor with each published article (e.g. PNAS). As for the social contract involved in considering a manuscript for publication, I have commented on this in my editorial of last October.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    13. On 2016 Mar 21, Jens Sommer commented:

Thank you for the editorials, and thanks for all the comments. This gives hope for the future of the scientific community and scientific progress.

      Maybe it is just the point in time, when we need or accept anonymous comments.

      Is it important to have

      1) a double blind review process (authors, reviewers) and 2) a complete disclosure (authors, reviewers and editors) after rejection or acceptance?

      While the first is essential to get an unbiased review, I expect the second to improve the quality of reviews and thus the quality of articles. At least my idea of reviewing an article is to improve it, and the communication with the authors is more like an anonymous discussion.

      As the review process includes more than one reviewer it is interesting to see that the process sometimes fails completely (false acceptance). So if it takes time to review an article properly, why shouldn't we see the names of the reviewers, who supported the authors in getting their work published?

      Finally, when the article is published and a discussion starts any reasonable comment will be welcomed - anonymous or named. Do we really need regulation if there is more than one free platform with high-quality comments and good usability? Why don't we let people figure out what suits their needs best?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    14. On 2016 Mar 23, Daniel Corcos commented:

I would be grateful if you could show me potentially toxic comments. As for the toxicity of anonymous reviewers and bad editor choices, I know too many examples, but I only have to mention the case of the discovery of the role of CRISPR by Francisco Mojica, which was rejected by many high-impact journals over 2 years (http://www.cell.com/cell/pdf/S0092-8674(15)01705-5.pdf). You may say that a two-year delay for a basic paper does not do much harm, but when it comes to medicine, it can be terribly harmful.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    15. On 2016 Mar 18, Michael R Blatt commented:

      Daniel, I guess we will have to disagree on the toxicity potential of anonymous comments.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    16. On 2016 Mar 18, Daniel Corcos commented:

Mike, I agree that PubPeer comments can be wrong and misleading, but their advantage is that they can be seen by everybody. I prefer anonymous comments that I can read to hidden criticism. Rejecting a paper for spurious motives certainly does more harm than comments on PubPeer. I must say that I am not in favor of anonymity, and I hope that this debate will lead to openness of review. With time, this would allow a full evaluation of the harm done by some renowned scientists to the progress of knowledge.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    17. On 2016 Mar 18, Michael R Blatt commented:

      Daniel, I agree that overtly offensive comments are usually obvious as such to the reader. What is much more worrying about anonymous commenting is its potential to spread untruths and to do so without accountability. So I cannot agree with you that anonymous commenting is always bland and harmless. Subtle rumours can have just as serious consequences for an individual as an undue rejection.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    18. On 2016 Mar 17, Daniel Corcos commented:

Mike, we are often aware that major breakthrough papers were initially rejected by many journals, especially when the authors were not renowned. It would be interesting to know who the experts were and whether the editor stopped asking them to review papers after concluding that the rejection was undue; but for ordinary people like us, reviewers remain anonymous. On the other hand, offensive anonymous comments have no great consequences, because readers can judge for themselves, whereas undue rejection has far greater consequences for the authors and for science.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    19. On 2016 Mar 17, Michael R Blatt commented:

      Daniel, it is a common mistake to equate blind (confidential) peer review with anonymity. There is a world of difference here. In assessing the potential of a manuscript for publication, an editor will often turn to one or more known experts in the field for their opinions. The editor will know the identity and expertise of these individuals, so their advice is most certainly not anonymous. Please have a read of my editorial from October 2015.

      Of course, we might discuss the pros and cons of open (non-confidential) review; however, this is not the same discussion as that of anonymity. Mike


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    20. On 2016 Mar 14, Daniel Corcos commented:

      If anonymity "has no place in scientific critique" as Blatt argues, then it should have no place in peer review. For this reason, Blatt's position is untenable.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    21. On 2016 Mar 22, Michael R Blatt commented:

      As you will have noted, I too am in favour of open (non-anonymous) debate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    22. On 2016 Mar 19, Lydia Maniatis commented:

      I'm against artificial and unnecessary obstruction of open scientific debate, either via selection by a self-proclaimed "meritocracy" or any other means. Open debate is messy but the alternatives, arguably, are messier.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    23. On 2016 Mar 17, Michael R Blatt commented:

      Hello Lydia. Do I understand you correctly, then, that you are in favour of meritocracy in science?

      Mike


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    24. On 2016 Mar 17, Michael R Blatt commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    25. On 2016 Mar 08, Lydia Maniatis commented:

      I would like for the moment to single out the following argument/counterargument from the article, because it argues that science is not/should not be democratic.

      View attributed to those in favor of anonymity: “Anonymity is essential to protect fundamental rights and free speech in a global democratic society.”

      Blatt's rebuttal: “Yes, science is a “massively cooperative undertaking,” to quote one of my PubPeer commenters,1 but that does not mean it is democratic. Science requires substantial training; its foundations are logic and reasoning; it builds on the merits of knowledge and expertise; it is not a ‘one man, one vote’ endeavor with universal enfranchisement. To argue otherwise is manifestly absurd. “

First, arguments appealing to “manifest absurdity” are not worthy of scientific debate, whether signed or anonymous. Second, logic and reasoning are the property of every human being, and their use is very often and very demonstrably absent from the scientific literature (which is the reason editors so fiercely protect the published literature from dangerous “Letters to the Editor”). Third, there is no degree that can confer infallibility on any individual; likewise, no individual should be denied the right to have their arguments evaluated ON THEIR MERITS (something very different from the misleading "one man, one vote" argument).

      People who don't feel comfortable with the responsibility to defend their positions (scientific and otherwise) by argument and not on the basis of membership in a closed society of initiates do not understand how science progresses, and how it stalls.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    26. On 2016 Mar 08, Lydia Maniatis commented:

      I think that the “warped worldview” arguments made by Dr. Blatt and others in opposition to anonymous critiques could more appropriately be levelled at them. Their main concern seems to be that the material welfare and (relatedly) personal reputation of individuals within the scientific community will be threatened by anonymous trolls whose only aim is to sully reputations by suggestive but ill-founded attacks. (The critics of anonymity seem less concerned about the benefits to the public interest that PubPeer's editors have demonstrated and documented to have followed from the enabling of anonymous posting).

Let's assume that such trolls exist and are even in the majority (though I don't believe this to be the case). How, in the context of a healthy, intellectually sharp, critically-minded scientific community, will their efforts have an influence? Why would the targets' astute colleagues, grant reviewers, academic employers, etc. allow specious, unmerited criticism to influence their views or choices? If, on the other hand, decision-makers in the community are not equipped to separate the wheat from the chaff (whether we are referring to criticism of scientists or the scientists' academic productions), then this is indeed a warped world, and in such a world the documented public-interest value of anonymous criticism surely outweighs any nuisance value to individuals. Relatedly, Dr. Blatt states early on in his editorial that he is “Putting aside the issues of policing for fraud and whistleblowing for the moment....” I would be interested to see him come back to this issue, and particularly to see his plan for creating a system where anonymity could be automatically enabled for comments falling in the category of “policing for fraud and whistleblowing” while denying it for silly comments. Who would decide, a priori, which commenters/comments get the privilege?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    27. On 2016 Mar 08, Lydia Maniatis commented:

      Blatt says: "Ultimately, it is a warped worldview, indeed, in which scientists are so fearful of engaging that they never challenge others’ research and ideas openly, whether online or in publication."

      Barbour notes that: "Plant Physiology has no functional feedback mechanism..."

      Perhaps Dr. Blatt should consider helping to unwarp the world by enabling signed, open publication of criticism in his journal...


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    28. On 2016 Mar 28, Michael R Blatt commented:

      Dear Boris:-

      Thank you, but no apologies are necessary.

However, I think we are now going around in circles. You continue to use the ‘volume’ argument, which I am not prepared to accept and, if you think about it, I suspect neither are you. As devil’s advocate, I could point out that the volume of good research still overwhelmingly outweighs the bad (see the references in my first editorial and further discussion in the editorial to be published next week), just as you argue in favour of the “overwhelming majority” of PubPeer comments that you claim are useful compared to the “tiny minority” that are antisocial, ethically unsound and/or defamatory.

      You will see my point, I hope. So let’s not beat this one to death. We are not going to resolve the problem by defending corners or looking for the lowest common denominator. I am convinced that to find a solution it will be necessary to look outside the box, so to speak.

      I’m happy to continue our discussion, but I don’t think anyone is particularly interested in following this thread much further. So I suggest we now do so by email. If we do come to a solution, then of course we will want to share this with the community, either through PubMed Commons or some other way.

      Mike


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    29. On 2016 Mar 27, Boris Barbour commented:

      Dear Michael,

      I apologise if I have misconstrued your position, which I thought was a good deal more negative.

      Anyway, let's work on the common ground a little.

In favour of anonymous comments: some disseminate useful information that 1) reduces research wasted building on unreliable results, 2) diminishes errors in clinical trials and medical guidelines arising from unreliable publications, and 3) therefore saves careers, taxpayers' money and lives.

Against anonymous comments ("abuse, misrepresentation, sock-puppetry, and other antisocial or ethically unsound behaviours"): 4) unjustified denigration of reputations, 5) no declaration of conflicts of interest, 6) no information about the commenter's status, and 7) no discussion of equals.

      Let's weigh the "costs" and "benefits" of the anonymous comments on PubPeer as they are; we'll worry about how to influence their nature later. So do 4-7 outweigh 1-3, taking into account their relative frequencies?

      I would say that the most extreme negative outcome is damage to somebody's reputation. But how much damage can be done without convincing ammunition? Remember also that researchers can always defend themselves by explaining, showing data etc, in the case of a truly unfortunate misunderstanding. So, even if reputations probably can be damaged slightly by the ill-intentioned, I would contend that it is difficult to cause severe unjustified damage to somebody's reputation on PubPeer.

In contrast, it is highly likely that rapid dissemination of information can save a PhD student or post-doc from wasting 6 months to a year trying to build on some exciting but unreproducible result. In today's competitive environment that unproductive time may spell the end of a young career, and taxpayers' money will have been poorly spent. There are no doubt clinical trials in progress based upon flawed research (deeply unethical), and there are, hopefully rare, cases of flawed research causing erroneous medical decisions. In these cases, rapid dissemination of information could save lives. Examples where one wishes information had been made public (and acted upon) earlier include the Poldermans case mentioned in our blog and the Wakefield MMR vaccination scandal.

      So in terms of extreme outcomes, do you agree that saving lives, taxpayers' money and research careers outweighs slight damage to researchers' reputations?

      What about frequencies of the different types of comments? From having read nearly all of the ~50000 comments that have appeared on PubPeer over 3.5 years, I am happy to report that the overwhelming majority report valid signs of low-quality research or misconduct - the sort of comment that could lead to benefits 1-3. Only a tiny minority might be suspected of trying to run down the reputations of other researchers unfairly.

      Based upon the importance of disseminating information to readers and the observed low frequency of comments appearing to abuse the system, we have concluded that the anonymous comments appearing on PubPeer are very clearly beneficial on average. Therefore they should be encouraged. Do you agree?

      A question that I would like to keep separate and analyse next is what can be done to tilt the balance further towards beneficial comments (including your desire to convert anonymous to nonanonymous comments).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    30. On 2016 Mar 25, Michael R Blatt commented:

      Dear Boris,

      There’s nothing grand in these statements nor are they exempt from a cost-benefit analysis. It just happens that my measures of cost and benefit are (obviously) different from yours. Of course, it may be that we can still find common ground, and I would hope this is the case.

      As to your question “Is it a good thing to alert readers … to possible problems?”, clearly the answer is yes. I have said so repeatedly in my editorials and here on PubMed Commons. However, in my opinion, this needs to be done in a way that does not open the door to abuse, misrepresentation, sock-puppetry, and other antisocial or ethically unsound behaviours. I don’t think this is a particularly difficult concept, even if its solution is more complex in practice.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    31. On 2016 Mar 22, Boris Barbour commented:

      Dear Michael,

      You appeal to "foundational arguments" and "principles", but sounding grand doesn't exempt them from a cost-benefit analysis.

      I'll ask you just one question, the one you have avoided answering over 2 editorials and all the discussion here: is it a good thing to alert readers of publications to possible problems?

      We could call it the foundational principle of PubPeer...

      COI statement: see my original post.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    32. On 2016 Mar 22, Michael R Blatt commented:

      Dear Boris,

      I really do not think that we are so far apart in our views. We both are dismayed by some of what we see in scientific publishing and communication today, and we both want the same for the scientific community as a whole. Where we differ is only in some details of the means to this end.

The point you raise is of measures, ‘averages’, and quantity, rather than of principle. I do not doubt that there are many comments on PubPeer that are thoughtful and constructive. I certainly never suggested that all comments on PubPeer “abuse the system” (nor did I ever suggest coercion, so let us not confuse the issue here). The point on which we differ is whether the quantitative argument for anonymity that you pose outweighs the foundational arguments I have set out against it. I think not.

      You raise the analogy to the utility of cars and whether these should be banned. Of course analogies are poor vehicles (pun intended) for ideas, but let’s follow it for a moment. It would be virtually impossible to ban anonymous commenting from social media, just as it is impossible to ban reckless driving (I recall you had this discussion with Philip Moriarty previously). However, this is not to say that either should be actively encouraged. There are norms for interpersonal interaction that we generally follow and that protect civil society (e.g. accountability), just as there are rules of the road and legal requirements (e.g. the need for a driving license) that are there to protect us when we are on the road.

      I think it is always important to look for other ways to a solution. Answers sometimes come from taking an entirely different perspective rather than looking for the common denominator. So, to follow your analogy one step further, rather than banning cars (and anonymous commenting), would it not be better to make them less attractive as a whole while making the use of public transport (and of open, accountable commenting) more attractive? Are we not both in a position to influence the process of PPPR?

      I alluded, at the end of my March 2016 editorial, to what I hope will be an approach to such a ‘third solution.’ It comes straight out of discussions with Leonid Schneider who, I think you will recall, was originally one of my fiercest critics last October. I am convinced this alternative is worth a try and, at this point, have a number of my opposite numbers from other publishers on board. You may be convinced as well in due course. Again, I hope that I will have much more to say on this matter later this year.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    33. On 2016 Mar 19, Boris Barbour commented:

      The key issue is whether or not anonymous comments are beneficial ON AVERAGE. If they are or can be made so (PubPeer implements guidelines to favour useful comments), then such comments should be encouraged. Your arguments focus purely on the negatives and you systematically avoid consideration of the benefits of comments that happen to be anonymous. We can agree that it is possible to abuse anonymous commenting. And anonymity does enable commenters to forego in-depth discussion with meritorious professors. However, a balance needs to be struck and you have still made little attempt to do that. Thus, even if abuse is possible, that doesn't mean that all comments do abuse the system. In fact, the great majority of anonymous comments on PubPeer are perfectly factual and some highlight matters of genuine importance to readers, disseminating that information without delay. At PubPeer we have weighed both the advantages and the disadvantages of the anonymous comments we publish. We are convinced that their overall effect is overwhelmingly beneficial, despite a small number of awkward cases. So we shall continue to enable anonymous commenting.

      Having been around the houses of this argument a few times without making much progress, maybe an analogy will be helpful. Would you ban cars because sometimes people get run over? Or would you take into consideration the fact that they are a useful means of transportation? I'd like to see you take into consideration the potential benefits of the content of anonymous comments.

      We agree that we should all strive to create a system in which researchers feel able to comment freely and transparently. But we don't have a magic wand to create that environment. You at least are in a position of power to implement some changes, but that will require supportive and constructive action, not coercion. The coercive approach has failed in the past: our direct experience on PubPeer has shown that many useful comments will only be made if anonymity is available. In other words, there is no way to make all useful commenting non-anonymous, you can only suppress the majority of comments, including many useful ones, by (hypothetically) forbidding anonymity.

      You continue to confuse research that contains known flaws (including overinterpretation) when produced with that which doesn't. Although all research is indeed potentially, eventually falsifiable, the use of small sample sizes, inappropriate statistics, unverified cancer cell lines etc, etc (the list is long) is known today to generate unreliable research. You can't expect researchers to predict the future, but it's not unreasonable to ask them to avoid known mistakes (respecting the "state of the art"). Moreover, isn't it precisely your job as a journal editor to draw this line? Do you really not recognise this distinction? In any case, PubPeer simply allows comments and questions; the site makes no judgement.

      Regarding the arsenic life paper, I'll leave you, as a practising editor-in-chief, to interpret the (admittedly inconsistent) COPE guidelines on the matter. Here are a couple of key quotes:

      "Journal editors should consider retracting a publication if ... they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)."

      "Retraction should usually be reserved for publications that are so seriously flawed (for whatever reason) that their findings or conclusions should not be relied upon."

      COPE guidelines

      Finally, we obviously agree that "science does not end with publication". Nobody at PubPeer has ever said otherwise. Indeed, the whole raison d'être of the site is to enable science to continue after publication, something that traditional journals have not always embraced.

      COI statement: see my original post.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    34. On 2016 Mar 18, Michael R Blatt commented:

      I’ll reply to both your comments here, Boris. I did address all of the standard, conceptual arguments around anonymity in my second editorial, and I will be discussing some of these and other aspects of anonymity again next month with Jaime Teixeira da Silva.

      I believe you wish to point out, as your central argument, that the traffic on PubPeer is far greater than on PubMed Commons, for example, and you ascribe this to encouraging anonymous comments. Your numbers may be correct – I am not in a position to comment one way or the other – but I do dispute your underlying assumption that traffic volume equates with scientific value. I raised this point in my October 2015 editorial, as did Philip Moriarty both in the PubPeer threads that followed and in his discussion with you in the Times Higher Education in December 2015.

      I maintain, furthermore, that anonymous commenting encourages grubby comments and nefarious behaviours that undermine the very scientific community you want to build. So, in my opinion, encouraging anonymous commenting is counterproductive and, ultimately, self-defeating. Again, you will find all of the arguments in my editorials, so I’ll not revisit them here.

      As for my misinterpreting your definition of ‘ultimately unreliable’ “in the sense of ‘unreproducible’, ‘low-quality’, ‘known to be wrong’, [and] ‘overinterpreted’”, I agree there is such a thing as ‘bad science.’ Ben Goldacre has much to say about this. However, I think you need to be more cautious in calling for sweeping retractions on the basis of your definitions. Can you issue a blanket statement of unreproducibility without first seeking to reproduce each set of data and explaining why it is unreproducible, for example? Where do you draw the line between interpretation and overinterpretation? And when does overinterpretation become grounds for vilification?

      Again as a specific example, I agree that the Science paper on arsenic-based life was overinterpreted and included experimental methods that were insufficient to meet the exacting standards expected for such a claim. On this basis alone there is a strong argument to say that it should never have found its way into the journal. However, my understanding is that the results were not low-quality per se; they were demonstrably reproducible; and they did lead (ultimately!) to detailed knowledge of a transporter with a remarkable selectivity for phosphate over the structurally similar arsenate anion. As I noted before, “science does not end with publication. Publication is only the beginning of scientific debate. Progress often arises from what, in hindsight, is ‘ultimately unreliable’, and its cornerstone is open debate.”


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    35. On 2016 Mar 07, Boris Barbour commented:

      The words 'ultimately unreliable' in the title of this editorial are a quote from the text of our blog Vigilant scientists. By accident or by design, Blatt deforms their meaning completely. We meant "unreliable" in the sense of "unreproducible", "low-quality", "known to be wrong", "overinterpreted", while "ultimately" simply added emphasis (meaning something like "most importantly"). In contrast, Blatt interprets the word pair to include the meaning "eventually improved upon". In other words, we were discussing research that is of low-quality or wrong according to the state of the art at the time of publication, while he lumps such poor work with outstanding research containing no known defects at publication but upon which even greater discoveries are subsequently built. Thus, he gives the example of Hodgkin and Huxley's explanation of the action potential building upon Cole and Curtis' measurements of axonal impedance, characterising the latter authors' work as "ultimately unreliable". Nothing could be further from our intended meaning, which should have been abundantly clear from the context of the blog. In particular, we used the terms "unreliable" and "unreproducible" interchangeably, gave numerous examples and references relating to unreproducible and low-quality research, and gave no examples of the sort Blatt mentions.

      So there is a clear criterion of reliability that Blatt did not consider: does a paper contain known problems according to the state of the art at the time of publication? Papers that fail this test are "unreliable" and the relevant information should be disseminated to the readers. Applied to examples in the editorial, this test would classify the arsenic life paper as unreliable and Cole and Curtis as reliable, while Blatt considers both to be "ultimately unreliable". Coming from the editor-in-chief of a high-quality journal, this seems to be questionable relativism.

      COI: see previous post.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    36. On 2016 Mar 06, Boris Barbour commented:

      This editorial by Michael Blatt, editor-in-chief of Plant Physiology, follows up a previous one, Vigilante Science; both attack the anonymous commenting enabled by PubPeer (see "COI" below). PubPeer has responded to both editorials at Vigilant scientists.

      In his follow-up, Blatt completely avoids addressing our central argument in favour of anonymity, which is that our priority as a community should be to disseminate information about publications to readers and users as rapidly and as widely as possible, a process encouraged by anonymity. As Plant Physiology has no functional feedback mechanism and because Blatt has refused to join any discussions on PubPeer, maybe he would like to respond here, at least to address our principal argument in favour of anonymous commenting?

      Potential conflicts of interest: I am a co-organiser of PubPeer and wrote most of their two blogs on this subject. These views are expressed in a personal capacity, not as an official PubPeer position.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 27, Eduardo ANGLES-CANO commented:

      Comment on "We hypothesized that low factor XII reduces kallikrein formation and consequently the release of bradykinin..." I suggest an alternative explanation, to the oedema hypothesis; a low bradykinin may results in insufficient stimulation to release tPA by the endothelium leading to inefficient thrombus lysis. This is particularly pertinent if we consider that thrombus persistence is finally due to an insufficient fibrinolytic response.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 15, Wichor Bramer commented:

      Well, one can imagine that, with 120 reviews, some limited their dataset to English-language articles only, while others translated foreign-language articles. Likewise, some reviews I performed the searches for included unpublished studies from trial registries, where these do make an important difference compared with reviews that only included published articles (such as Jaspers L, 2016, though that review was not used for this research).

      Some reviews excluded conference papers (especially when the number of hits was high in the reviewers' eyes, we resort to that to reduce the number of hits), others included them. I must say that I don't see why these would not be found in Embase/MEDLINE: conference abstracts are mainly an issue when searching Embase, since MEDLINE hardly includes detailed conference proceedings.

      In this research we only looked at the included references that had been published in a journal, and we considered conference proceedings published as supplements to journals to fall into that category.

      Regarding searching Cochrane CENTRAL, these results will be shown in upcoming articles from partially overlapping data. I must say that so far, for the 2,500 included references of the 60+ published reviews, Cochrane CENTRAL has not identified one single included reference that was not also retrieved by another database.

      In my opinion, when doing a systematic review, the authors should aim to find all relevant articles that can answer the research question. If that is not the goal, then it should not be called a systematic review; they can combine three MeSH terms in PubMed, extract some conclusions and automatically generate a rapid review.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 15, Hilda Bastian commented:

      Many thanks, Wichor and Dean - that's really helpful. Still not clear on whether there was a language restriction or not. I looked at a couple of the reviews you link to (thanks!), but couldn't see an answer in those either.

      On the question of implications for reviews: being included is a critical measure of the value of the search results, but with such major resource implications, it's not enough. One of the reasons that more detail about the spread of topics and the nature of what was not found is important is to explain the difference between these results and those of other studies (for example, Waffenschmidt S, 2015, Halladay CW, 2015, Golder S, 2014, Lorenzetti DL, 2014).

      Even if studies like this don't go as far as exploring what it might mean for the conclusions of reviews, there are several aspects - like language - that matter. For example, the Cochrane trials register and other sources were searched as well. If studies were included from these sources based only on abstracts from conference proceedings, for example, then it's clear why they may not be found in EMBASE/MEDLINE. Methodological issues such as language restriction, or whether or not to include non-journal sources, are important questions for a range of reasons.

      One way that the potential impact of studies can be considered is the quality/risk of bias assessment of the studies that would not have been found. As Halladay CW, 2015 found, the impact of studies on systematic reviews can be modest (if they have an impact at all).

      Disclosure: I am the lead editor of PubMed Health, a clinical effectiveness resource and project that adds non-MEDLINE systematic reviews to PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Mar 15, Wichor Bramer commented:

      Dear Hilda,

      Thank you for your insightful comments, much appreciated. I have left comments via PubMed Commons before, but have never received any from other researchers. I will respond to your comments point by point:

      1) As we described in the last line of the second to last paragraph of the methods section of our paper, we searched all three databases post-hoc for included references.

      2) We searched the largest Ovid MEDLINE file, comprising Ovid MEDLINE® In-Process & Other Non-Indexed Citations. For clarity for end users at Erasmus MC this is the only MEDLINE database shown, and it is referred to as Medline, though it includes non-MEDLINE PMC records. Articles retrieved from PubMed, the subset as supplied by publishers, were not classified as resulting from Medline Ovid searches, but rather as unique results from the PubMed publisher subset (a classification not used in this article, but that will be used in other articles from partially overlapping datasets).

      3) As you pointed out, Bramer WM, 2015 is not a systematic review. After article acceptance, I realized it would have been wise to limit our study to medical research questions only (this being the only non-medical topic). Not all 120 searches have resulted in published systematic reviews. In some cases the process is still ongoing, and in others the results were used to create other end products, such as clinical practice guidelines, grant proposals and chapters for theses. In 47 of the searches used in this research, the resulting articles have been published in PubMed. That selection can be viewed via http://bit.ly/bramer-srs-gs.

      4) Criteria for searches to be included in this research were that

      a) researchers had requested librarian-mediated searches because they intended to write a systematic review (in that view, the title should be read as 120 systematic review requests)

      b) titles and abstracts of the results from all databases had been reviewed

      c) the full texts of the relevant articles had been critically read, and

      d) the resulting relevant references had been reported to us or were extractable from the resulting publication.

      Whether the searches resulted in finished, published systematic reviews is independent of the search process. In retrospect, it would have been wise to include a paragraph on this in the article.

      5) One of the peer reviewers also mentioned the expected difference between certain topics, and advised us to investigate that relation. However, it would be very complicated to group 120 unique and diverse topics systematically, and even within broad subjects such as surgery or pediatrics one can expect variation between research questions. For very distinct topics such as nursing or psychology one can expect differences, because of the need to search CINAHL or PsycINFO, respectively, but these research topics were scarce in our set. We do not believe large differences in the performance of GS would occur between topics, as the overall performance remains too low. We did observe that GS performed better for uncomplicated questions than for search strategies with many synonyms.

      6) We chose not to investigate in detail what the missed studies would have meant for the conclusions of the reviews, partly because of the vast number of topics, but also because we feel this does not add value to our conclusions about coverage, precision and recall. If searches in GS are likely to find fewer than 40% of all relevant references, or searches in Embase are likely to retrieve fewer than 80%, the expected recall is too low for a systematic review, no matter the quality of the retrieved results. In follow-up research comparing the best database combinations (in that case for published medical systematic reviews, so only partially overlapping with this set) we plan to investigate in detail why certain references were found by GS but not by traditional databases. One of the reasons could be that articles are retrieved from lower-quality journals, as GS lacks quality requirements for inclusion; however, there can be other reasons.
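
      For reference, recall and precision are meant here in the standard information-retrieval sense (coverage, by contrast, roughly concerns whether an included reference is present in the database at all, independent of any particular search):

      \[ \text{recall} = \frac{\text{relevant references retrieved}}{\text{all relevant references}}, \qquad \text{precision} = \frac{\text{relevant references retrieved}}{\text{all references retrieved}} \]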

      Kind regards,

      Wichor Bramer


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Mar 12, Hilda Bastian commented:

      An interesting and very useful study of Google Scholar (GS). I am unclear, though, about the methods used to compare it with other databases. The abstract includes this step after the systematic review authors had a final list of included studies: "All three databases were then searched post hoc for included references not found in the original search results". That step is clearly described in the article for GS.

      However, for the other 2 databases (EMBASE and MEDLINE Ovid), the article describes the step this way: "We searched for all included references one-by-one in the original files in Endnote". "Overall coverage" is reported only for GS. Could you clarify whether the databases were searched post hoc for all 3 databases?

      I am also unclear about the MEDLINE Ovid search. It is stated that there was also a search of "a subset of PubMed to find recent articles". Were articles retrieved in this way classified as from the MEDLINE Ovid search? And if recent articles from PubMed were searched, does that mean that the MEDLINE Ovid search was restricted to MEDLINE content only, and not additional PubMed records (such as those via PMC)?

      There is little description of the 120 systematic reviews and citations are only provided for 5. One of those (Bramer WM, 2015) is arguably not a systematic review. What kind of primary literature was being sought is not reported, nor whether studies in languages other than English were included. And with only 5 topics given, it is not clear what role the subject matter played here. As Hoffmann T, 2012 showed, research scatter can vary greatly according to the subject. It would be helpful to provide the list of 120 systematic reviews.

      No data or description is provided about the studies missed with each strategy. Firstly, that makes it difficult to ascertain to what extent this reflects the quality of the retrieval rather than the contents of the databases. And secondly, with numbers alone and no information about the quality of the studies missed, the critical issue of the value of the missing studies is a blank space.

      Disclosure: I am the lead editor of PubMed Health, a clinical effectiveness resource and project that adds non-MEDLINE systematic reviews to PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 28, GRAHAM COLDITZ commented:

      Important data here directing us to further study of in utero exposures and prostate cancer risk. Stronger results for measured than for self-reported birthweight point to the underlying mechanisms likely being more important than previously thought.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 07, Martine Crasnier-Mednansky commented:

      Escherichia coli cells, when 'pre-induced' in the presence of the artificial inducer TMG, synthesize β-galactosidase in the presence of glucose. COHN M, 1959 stated: "The effect of pre-induction is to restore in the presence of 10<sup>-3</sup> M glucose about 50 per cent of the maximal differential rate obtainable on succinate". The observation that the maximal rate was not reached in the presence of glucose led the authors to argue, indeed incorrectly, that glucose was a preferential metabolic source for yielding high internal levels of repressor. This observation, however, was later explained by the discovery of the 'cAMP effect' on β-galactosidase synthesis, in agreement with the finding by COHN M, 1959 that carbon sources now known to elicit higher cAMP levels (particularly succinate, lactate and glycerol, see Epstein W, 1975) were found to be non-inhibitory (i.e. allowing the maximal differential rate). Anke Becker’s final statement, that inhibition of lactose permease by unphosphorylated Enzyme IIA<sup>Glc</sup> (leading to inducer exclusion) is primarily responsible for CCR of the lac operon, is therefore inappropriate, as cAMP via its receptor protein (simultaneously designated as CRP Emmer M, 1970 and CAP Zubay G, 1970) also plays a role in CCR of the lac operon. Furthermore, Jacques Monod (1942) reported that diauxie was attenuated - but not eliminated - when the cells were pre-induced (adapted to the less preferred 'B' sugar). Diauxie was, however, eliminated by addition of exogenous cAMP Ullmann A, 1968. Therefore, inducer exclusion and the level of cAMP both contribute to CCR of the lac operon.

      Lastly, unphosphorylated EIIA<sup>Glc</sup> does not inhibit adenylate cyclase. The current model of regulation postulates that dephosphorylation of Enzyme IIA<sup>Glc</sup> during glucose transport interferes with the activation of adenylate cyclase by phosphorylated Enzyme IIA<sup>Glc</sup>.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 06, Robert J Maier commented:

      Drs. McNichol and Sievert make some good points about the interpretation of our results. While we observed H2-augmented growth and CO2 uptake into cell-associated material, we did not show that CO2 is the main source of carbon. Therefore, the terms mixotrophy or chemolithoheterotrophy would seem to be accurate descriptions of our data, rather than the term we used, chemolithoautotrophy. From our results, we cannot conclude that Helicobacter is an autotroph.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Feb 16, Jesse McNichol commented:

      Kuhns et al (2016) provide evidence that the gastric pathogen Helicobacter pylori can use molecular hydrogen as an energy source. Increased growth yields and inorganic carbon incorporation both support the ability of H. pylori to gain metabolically useful energy from hydrogen. However, the use of the term chemolithoautotrophic to describe these findings is not correct.

      The term chemolithoautotrophy is accurately defined as a metabolic mode that derives energy from chemical compounds (chemo-; as opposed to light or photo-), electrons from inorganic sources (-litho-) and carries out net fixation of inorganic carbon (-autotrophy; (2)). While the -litho- portion of this term has been used to describe heterotrophic organisms that oxidize inorganic compounds to supplement their metabolism (3), the -autotroph portion of this term can only be applied where carbon dioxide can serve as the predominant source of carbon for biosynthesis.

      Since abundant organic carbon was present in the growth medium in this study, it is unclear if CO2 accounted for the main source of carbon for H. pylori. In addition, although the authors do observe CO2 uptake into biomass this does not prove that autotrophic carbon fixation occurred. Anaplerotic carbon fixation occurs as a series of carboxylation reactions that replenish intermediates in the citric acid cycle (4) or during fatty acid synthesis (5). As a normal process during heterotrophic growth, it explains the observed incorporation of CO2 in the absence of hydrogen. While such carboxylating enzymes do indeed incorporate inorganic carbon into biomass, the growth mode of an organism can only be considered autotrophic if they have complete pathways for using inorganic carbon as the main source for cellular biosynthesis (6).

      This point is illustrated by considering the importance of the higher activity and abundance of the acetyl-CoA carboxylase enzyme in the presence of hydrogen observed by Kuhns et al (2016). While this enzyme is indeed responsible for the carboxylation of acetyl-CoA to malonyl-CoA, the CO2 thus incorporated is lost during the condensation of malonyl-CoA subunits during lipid synthesis (5). Its higher activity may therefore simply be the result of higher levels of lipid synthesis associated with increased growth in the presence of hydrogen.

      It should be simple to clarify whether autotrophic carbon fixation likely occurred during these experiments. The authors could estimate how much carbon was needed to support the observed increase in cell density, and compare this estimate with the amount of inorganic carbon incorporated into biomass. Unless the amount of inorganic carbon fixed represents a dominant fraction of H. pylori's cell carbon, chemolithoheterotrophic would be a more accurate term for the results observed by Kuhns et al (2016). Indeed, such chemolithoheterotrophic growth with hydrogen has been previously observed in other organisms (7).
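
      As a sketch of that back-of-envelope comparison, with every number below a hypothetical placeholder rather than a value from Kuhns et al (2016):

      # All values are hypothetical placeholders, chosen only to illustrate the comparison
      cells_gained_per_ml = 1e8            # extra cells attributed to growth on H2
      carbon_per_cell_g = 1e-13            # assumed carbon content of a single cell (g)
      inorganic_c_fixed_g_per_ml = 1e-8    # CO2-derived carbon measured in biomass (g)

      carbon_needed_g_per_ml = cells_gained_per_ml * carbon_per_cell_g
      fraction_from_co2 = inorganic_c_fixed_g_per_ml / carbon_needed_g_per_ml

      # Autotrophy would require this fraction to approach 1; a small value instead
      # points to anaplerotic fixation during ordinary heterotrophic growth
      print(f"Fraction of new biomass carbon from CO2: {fraction_from_co2:.2%}")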

      A final point is worth mentioning. True autotrophs are well-known among the Epsilonproteobacteria (6,8), which employ the reverse tricarboxylic acid (rTCA) cycle for carbon fixation (9). Therefore, the absence of RuBisCO reported by Kuhns et al (2016) is not surprising given that autotrophic Epsilonproteobacteria do not use this enzyme for carbon fixation. The key enzyme that allows the rTCA cycle to run in a reductive direction is ATP-citrate lyase (6); however, the genes encoding this enzyme are absent in H. pylori strain 26695 (10). Since it lacks this enzyme and is thought to have a complete (albeit non-canonical) oxidative citric acid cycle (11), the current genomic evidence also argues against the possibility of autotrophic carbon fixation in H. pylori.

      Jesse McNichol, Postdoctoral Scholar, Chinese University of Hong Kong; Simon F. S. Li Marine Science Laboratory, Shatin, Hong Kong; mcnichol at alum dot mit dot edu

      Stefan Sievert, Biology Department, Woods Hole Oceanographic Institution; Woods Hole, MA, 02543, USA; ssievert at whoi dot edu

      References:

      1) Kuhns LG, Benoit SL, Bayyareddy K, Johnson D, Orlando R, Evans AL, Waldrop GL, Maier RJ. 2016. Carbon Fixation Driven by Molecular Hydrogen Results in Chemolithoautotrophically Enhanced Growth of Helicobacter pylori. Journal of Bacteriology 198:1423–1428.

      2) Canfield DE, Erik Kristensen, Bo Thamdrup. 2005. Thermodynamics and Microbial Metabolism, p. 65–94. In Donald E. Canfield, EK and BT (ed.), Advances in Marine Biology. Academic Press.

      3) Muyzer DG, Kuenen PJG, Robertson DLA. 2013. Colorless Sulfur Bacteria, p. 555–588. In Rosenberg, E, DeLong, EF, Lory, S, Stackebrandt, E, Thompson, F (eds.), The Prokaryotes. Springer Berlin Heidelberg.

      4) Kornberg HL. 1965. Anaplerotic Sequences in Microbial Metabolism. Angew Chem Int Ed Engl 4:558–565.

      5) Voet D, Voet JG. 2010. Biochemistry 4th edition. Wiley, Hoboken, NJ.

      6) Hügler M, Sievert SM. 2011. Beyond the Calvin Cycle: Autotrophic Carbon Fixation in the Ocean. Annu Rev Marine Sci 3:261–289.

      7) Kiessling M, Meyer O. 1982. Profitable oxidation of carbon monoxide or hydrogen during heterotrophic growth of Pseudomonas carboxydoflava. FEMS Microbiology Letters 13:333–338.

      8) Campbell BJ, Engel AS, Porter ML, Takai K. 2006. The versatile ε-proteobacteria: key players in sulphidic habitats. Nature Reviews Microbiology 4:458–468.

      9) Hügler M, Wirsen CO, Fuchs G, Taylor CD, Sievert SM. 2005. Evidence for Autotrophic CO2 Fixation via the Reductive Tricarboxylic Acid Cycle by Members of the ε Subdivision of Proteobacteria. J Bacteriol 187:3020–3027.

      10) Tomb J-F, et al. 1997. The complete genome sequence of the gastric pathogen Helicobacter pylori. Nature 388:539–547.

      11) Kather B, Stingl K, van der Rest ME, Altendorf K, Molenaar D. 2000. Another Unusual Type of Citric Acid Cycle Enzyme in Helicobacter pylori: the Malate:Quinone Oxidoreductase. J Bacteriol 182:3204–3209.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 31, Damien Chaussabel commented:

      I read your paper with great interest; this is excellent work with clear translational potential, so first of all congratulations on getting it published!

      We are currently establishing a science education program that builds on the availability of large amounts of data in public repositories.

      In this context we will encourage students/trainees to examine new noteworthy publications and to identify and share with the authors observations that may extend or build upon their original findings.

      This is one of our first attempts! We hope that the exercise may also prove helpful to you:

      A first observation is that in blood stimulated in vitro with a wide range of immune agonists, including pathogen-associated molecular patterns, heat-killed bacteria and cytokines, the patterns of induction of PKM2, IL6 and IL1B at the transcriptional level are rather distinct:

      The explanation might be trivial (at least to you!), but I found such a disconnect puzzling, especially with regard to differences in levels of induction by HK E. coli.

      Also, would you assume that the induction of PKM2 transcription correlates with dimerisation and nuclear translocation?

      If that were the case, would this phenomenon be driven by ROS released as a result of an innate response to bacteria (from, for instance, neutrophils), independent of a change in cellular glucose metabolism?

      Thanks!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 15, Arnaud Chiolero MD PhD commented:

      A brilliant paper for understanding the distance between the hype of clinical genomics and the reality.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 29, Suresh Panneerselvam commented:

      Sir/Madam, This is an interesting article. Thank you for the article. I like the title very much.

      The mention of TLR2 as intracellular instead of extracellular looks odd in the third paragraph: "The intracellular TLRs consist of TLRs 2, 3, 7, 8, 9 and 10". It is, however, shown as extracellular in the Figure.

      In addition, it seems to me that TLR10 is also extracellular; for example, this paper http://www.pnas.org/content/111/42/E4478.full.pdf (Figure 5E) mentions it.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 03, Christopher Southan commented:

      The sterol-linked endosomally targeted β-secretase inhibitor structure is neither disclosed here nor in the primary reference, rendering that part of the study irreproducible.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 01, Mick Watson commented:

      Unfortunately the authors used the incorrect function within poRe for the comparison, and more appropriate ways to use the software have been in place for some time, e.g.:

      http://www.opiniomics.org/extracting-minion-fastq-on-the-command-line-using-pore/
      http://www.opiniomics.org/how-to-extract-fastq-from-the-new-minion-fast5-format-using-pore/

      In fact, recent work shows that poRe is incredibly fast for FASTQ extraction:

      http://www.opiniomics.org/fast-parallel-access-to-data-within-minion-fast5-files/

      It is a shame the authors did not make use of this functionality.

      Watson M, Thomson M, Risse J, Talbot R, Santoyo-Lopez J, Gharbi K, Blaxter M. poRe: an R package for the visualization and analysis of nanopore sequencing data. Bioinformatics. 2015 31(1):114-5.
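
      As an aside for readers who do not use R: extracting FASTQ from fast5 boils down to reading one dataset from an HDF5 container. A minimal Python sketch follows (it is not poRe, and the internal HDF5 path is an assumption that varies with the basecalling workflow):

      import h5py  # fast5 files are HDF5 containers

      # Assumed path for old-style 2D-basecalled reads; adjust to the actual workflow
      FASTQ_PATH = "Analyses/Basecall_2D_000/BaseCalled_2D/Fastq"

      def extract_fastq(fast5_file, path=FASTQ_PATH):
          # Open the container read-only and return the stored FASTQ record as text
          with h5py.File(fast5_file, "r") as f5:
              data = f5[path][()]
              return data.decode() if isinstance(data, bytes) else data

      # Hypothetical file name for illustration
      print(extract_fastq("example_read.fast5"), end="")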


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 01, Christopher Southan commented:

      This work is irreproducible without the disclosure of the GNX-4975 structure.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 28, Bernard Carroll commented:

      The ketamine effect is not specific to depression. For instance, it is rapidly effective in obsessive-compulsive disorder, too, and probably in any neuropsychiatric condition mediated by overactive circuitry.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 01, Marco Lotti commented:

      Dear Dr. Di Saverio,

      thank you for your comment.

      The advantages of totally laparoscopic right colectomy with intracorporeal anastomosis over LRC with extracorporeal anastomosis are still under investigation. Preliminary data from a randomized trial show an earlier recovery of bowel function and a lower incidence of postoperative ileus. No differences were observed with respect to length of stay and complication rate [Vignali Andrea et al. Extracorporeal vs. Intracorporeal Ileocolic Stapled Anastomoses in Laparoscopic Right Colectomy: An Interim Analysis of a Randomized Clinical Trial. Journal of Laparoendoscopic & Advanced Surgical Techniques. February 2016, ahead of print. doi:10.1089/lap.2015.0547].

      We described a technique which is both minimally invasive for patients and an opportunity for low-volume surgeons to embrace laparoscopy as a tool to perform right colectomy with optimal oncological outcomes and a low complication rate. This is all about the importance of surgical education; the novel technique is just the complement.

      A definition is literally “a statement that explains the meaning of a word”. I think that the definition of “laparoscopic” is simply “by means of laparoscopy”. But we can also mean “minimally invasive by means of laparoscopy” or “more precise by means of laparoscopy”. Then, we should incorporate the meaning of “right colectomy” with respect to proper resection, acceptable complication rate and optimal oncological outcomes. Finally, we can speculate about the meaning of your term “non-laparoscopic surgeons”.

      We called our technique “Laparoscopic Right Colectomy” since it is derived from the original technique of “Laparoscopic Right Colectomy” described by Young Fadok and Nelson [Young-Fadok TM, Nelson H. Laparoscopic right colectomy: five-step procedure. Dis Colon Rectum. 2000 Feb;43(2):267-71]. It is compliant with the SAGES Guidelines for Laparoscopic Resection of Curable Colon and Rectal Cancer http://www.sages.org/publications/guidelines/guidelines-for-laparoscopic-resection-of-curable-colon-and-rectal-cancer/. Moreover, extracorporeal anastomosis is mentioned in the ASCRS Global Assessment for Laparoscopic Right Hemicolectomy http://www.apdcrs.org/GlobalAssessmentLapRightHemicolectomy.pdf.

      If you are still concerned about the definition, please also post your comment to Dr. Tonia Young-Fadok and Dr. Heidi Nelson. I think it would be very interesting to know their opinion.

      Best regards


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Feb 28, Salomone Di Saverio commented:

      Interesting technique, but let me say that if vascular ligation, resection and anastomosis are performed extracorporeally, this cannot be defined as a true laparoscopic right colectomy; it is rather a half laparoscopic-assisted and half open/hand-assisted right colectomy. It is nevertheless good and easily reproducible by non-laparoscopic and low-volume surgeons who have no experience with, or do not feel confident in, performing intracorporeal mesocolic vascular clipping, ileal and colonic resection, and intracorporeal anastomosis, or as an initial step during the laparoscopic learning curve. Nonetheless it can NOT really be defined as a "laparoscopic right colectomy" at all; this is just a simple laparoscopic mobilization of the right colon and nothing more.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 22, Lydia Maniatis commented:

      It would be great if vision articles stopped using the straw man of "border contrast" or lateral inhibition to frame cosmetic debates. Here, for example, we learn in the abstract that "The competing accounts for perceptual constancy of surface lightness fall into two classes of model: One derives lightness estimates from border contrasts, and another explicitly infers [meaning?] surface reflectance."

      The former "model" of lightness perception hasn't been credible for almost one hundred years. The reason it hasn't been viable is that it has been falsified. The reason that these "debates" still persist is that in the current culture, ad hoc accounts are given a free pass while falsifications merely indicate need for "more research." Oikonnen et al (2016) know (or should know) that half of the argument is a straw man:

      "Although this framework is attractive in its simplicity, it fails to explain some well-known lightness phenomena, such as the effect of spatial configuration on perceived lightness (e.g., Adelson, 1993; Anderson & Winawer, 2008; Bloj & Hurlbert, 2002; Gilchrist, 1977; Hillis & Brainard, 2007b; Knill & Kersten, 1991; Purves, Shimpi, & Lotto, 1999; Schirillo, Reeves, & Arend, 1990)."

      Thus, Oikonen et al (2016) propose to "adjudicate" between two "frameworks," one of which has already failed. What is gained by beating a dead horse? Until and unless the proponents of the failed models resolve the difficulties by redeeming the failures on a theoretical basis, their account is not in the game.

      Short version: Ad hoc "successes" don't outweigh falsifications, so there's no need to keep falsifying over and over. It's just redundant.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 03, Wichor Bramer commented:

      Contrary to what the authors describe here, it is not so much the status of the publication (e-pub ahead of print: pubstatusaheadofprint) that causes records to be missed in Medline as the status of the record in the database (as supplied by publisher: publisher[sb]). All articles that are e-pub ahead of print are part of the subset as supplied by publisher (a search for publisher[sb] OR pubstatusaheadofprint generates exactly the same number of hits as publisher[sb] alone).

      Apart from that, I wonder why the authors chose to exclude several specific sets (NOT pubstatusnihms NOT pubstatuspmcsd NOT pmcbook) yet searched Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, where the Non-Indexed Citations contain the articles in pubstatusnihms, pubstatuspmcsd and pmcbook. And I wonder what the use is of searching in-process citations in PubMed (inprocess[sb]) when their Ovid MEDLINE search already contains the In-Process articles.

      Therefore there is no need to search with the complicated structure presented here; a searcher can obtain equal results by adding publisher[sb]. This is common practice for librarians and medical information specialists.
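
      As a quick way to check the count equivalence claimed above, here is a minimal sketch (assuming Biopython, which is not part of the original comment; the query strings are taken from the text):

      from Bio import Entrez  # Biopython is an assumption, not named in the comment above

      Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a contact address

      def pubmed_count(query):
          # retmax=0: we only need the total hit count, not the record IDs
          handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
          record = Entrez.read(handle)
          handle.close()
          return int(record["Count"])

      # Per the claim above, these two counts should be identical
      print(pubmed_count("publisher[sb]"))
      print(pubmed_count("publisher[sb] OR pubstatusaheadofprint"))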


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 29, Damian Scarf commented:

      Great review on an emerging and rapidly developing area of research. In addition to our Ecological Momentary Assessment (EMA) work, which you reference, we have also run studies that combine EMAs and Ecological Momentary Interventions (EMIs) with some success (reference below).

      Riordan, B.C., Conner, T.S., Flett, J.A.M., Scarf, D., 2015. A brief orientation week ecological momentary intervention to reduce university student alcohol consumption. Journal of Studies on Alcohol and Drugs 76, 525-529.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 15, Amanda Capes-Davis commented:

      More than 600 articles have been published from 2000 to 2015 that refer to the KB cell line as "oral" or "epidermoid" (squamous cell carcinoma), when it is actually HeLa and thus derived from cervical adenocarcinoma. This is the first time I have seen a retraction or correction published in response. I would like to acknowledge the authors, editor Sergio Schenkman and publisher John Wiley & Sons Ltd for their integrity in correcting the scientific record.

      Two points just for clarity.

      1) The comment that KB is a "Human Oral Epidermal-like Cancer cell line" may cause confusion. More than 50 years of testing, starting with the work of Stanley Gartler in the 1960s, shows that KB is derived from HeLa. HeLa has been extensively described and we can be confident that it is cervical carcinoma. In the early stages of cross-contamination a mixed culture can occur, in which the original cells are present alongside the contaminating cells, resulting in a mixed phenotype. Typically this only lasts for a few passages; the faster growing culture will rapidly overgrow and replace the other population (Nims et al, 1998, PMID 9542633). Where HeLa is the contaminant it will typically outcompete other cell types, due to its higher rate of proliferation and resilience at low density. There is no evidence that KB is currently a mixed culture, or that it retains any characteristics from the original culture that were present before cross-contamination occurred.

      2) There is ongoing confusion in the literature regarding expression of tissue-specific markers in misidentified cell lines. Phenotype can appear to support the idea that original material is still present. This has been debated in the literature since at least the 1970s - for an example, see the discussion between R.S. Chang and Walter Nelson-Rees regarding the Chang liver cell line (PMID 622561). "Chang liver" is actually HeLa despite the fact that it has been documented as expressing liver-specific markers. For this reason, it is essential to use genotype-based methods when looking at cell line origin. Cytogenetic analysis, short tandem repeat (STR) profiling and single nucleotide polymorphism (SNP) analysis are all important testing methods for cell line authenticity; STR profiling provides a consensus method for laboratories to compare results. Phenotype can provide helpful supporting evidence, but should not be the deciding factor when determining the origin of a cell line.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 29, Gary Goldman commented:

      Marin et al report a vaccine effectiveness (VE) of 81% (95% C.I.: 78-84) for the one-dose varicella vaccination protocol. [1] This figure is biased high and declines rapidly as the vaccine is widely used and exogenous boosting becomes rare. Many of the clinical trials and studies that reported VE were conducted within the first few years of the start of varicella vaccination—during a time period when vaccinees were additionally boosted by exogenous exposures to those shedding wild-type (or natural) varicella-zoster virus during annual outbreaks. Annual VE, derived from secondary family attack rate (SFAR) data among contacts aged <20 years reporting to the Antelope Valley Varicella Active Surveillance Project (VASP), demonstrated an annual increase from 87% in 1997 to 96% in 1999 (the last year that varicella displayed its characteristic seasonality), then declined to 85% and 74% in 2000 and 2001, respectively, when exogenous boosting substantially decreased. [2]
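
      For reference, VE derived from secondary attack rates follows the standard formulation (the VASP analyses may apply further adjustments):

      \[ \text{VE} = 1 - \frac{\text{SFAR}_{\text{vaccinated}}}{\text{SFAR}_{\text{unvaccinated}}} \]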

      The conclusion given in the Meta-analysis by Marin et al states that several studies reported a lower risk for herpes zoster among varicella-vaccinated children “and a decline in herpes zoster incidence among cohorts targeted for varicella vaccination.” [1] The latter part of that statement is patently false. There are two confounders in the cited studies that contribute to this erroneous conclusion, and a consideration of these confounders helps to explain why the VASP study [3] (referenced in the Meta-analysis [1]) reports that (a) the 2000-2006 HZ incidence increased by 63% among 10- to 19-year-olds and (b) HZ incidence decreased by 32% from 98.3/100,000 person-years (p-y) in 2006 to 66.7/100,000 p-y in 2010, with “substantial fluctuation in annual HZ rates.” [3]

      The authors of both the Meta-analysis [1] and supporting reference [3] have erroneously assumed that the HZ cases reported to the VASP represent 100% reporting completeness. However, using capture-recapture with two ascertainment sources (schools and health care providers), it was demonstrated that varicella cases among 2- to 18-year-olds were under-reported by approximately 45%. [4] Likewise, it can be shown that VASP also experienced approximately 50% under-reporting of HZ cases [2], leading to the Marin et al study [3] reporting incidence rates that are one-half the actual rates. HZ incidence rates that have not been ascertainment-corrected simply reflect the incidence of HZ cases reported to the VASP and not the HZ incidence rate in the community. It is invalid to compare the uncorrected VASP-reported HZ rates to those reported by other studies that possess much higher case ascertainment. [5]
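
      The two-source capture-recapture correction referred to here is, in its simplest Lincoln-Petersen form:

      \[ \hat{N} = \frac{n_1 \, n_2}{m} \]

      where n_1 and n_2 are the numbers of cases ascertained by each source and m the number ascertained by both; the completeness of a single source is then estimated as n_1 divided by the corrected total. Hook and Regal [5] discuss adjustments when the sources are not independent.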

      Additionally, the 10- to 19-year-old age category consists of three different cohorts with widely differing HZ incidence rates. Marin et al [3] only consider the mean HZ incidence rate for each age category instead of stratifying by (1) those still susceptible to varicella and never vaccinated (0 cases/100,000 p-y); (2) those with a prior history of wild-type varicella, who exhibit HZ incidence rates increasing from approximately 120 cases/100,000 p-y to 500 cases/100,000 p-y (in the absence of exogenous boosting); and (3) those vaccinated, who exhibit an HZ incidence rate of less than 120 cases/100,000 p-y.

      In summary, unless HZ incidence rates are ascertainment-corrected [5], such rates will erroneously be reported as “lower” than those of other studies. [1] Also, reporting the mean HZ incidence of a bimodal distribution masks the widely differing incidence rates among those vaccinated and those with a prior history of varicella. Further, this invalid mean masks the significant effects of exogenous boosting. [7] Varicella vaccination inoculates children with the Oka-strain VZV. When these children are exposed to natural varicella or herpes zoster in adults, they may additionally harbor the natural VZV strain. Both strains are subject to reactivation as HZ. This is another confounder in the reporting of HZ incidence rates. Health officials initially believed that only a single dose of varicella vaccine would provide long-term protection and have negligible impact on the incidence of HZ. These assumptions are incorrect and have led to a continual cycle of treatment and disease. The shingles (herpes zoster) vaccine now provides the boosting to postpone or suppress the reactivation of HZ in adults aged 60 years and older—a substitute for the exogenous boosting that was available at no cost in the pre-varicella vaccination era. [6]

      References:

      [1] Marin M, Marti M, Kambhampati A, Jeram SM, Seward JF. Global varicella vaccine effectiveness: A meta-analysis. Pediatrics Feb. 16, 2016; DOI: 10.1542/peds.2015-3741. Marin M, 2016

      [2] Goldman GS. Universal Varicella Vaccination: Efficacy Trends and Effect on Herpes Zoster. Int J Toxicol 2006 Sep-Oct; 25(5):313-317. Goldman GS, 2005

      [3] Marin M. Civen R, Zhang J, et al. Update on incidence of herpes zoster among children and adolescents following implementation of varicella vaccination, Antelope Valley, CA. 2000-2010. Presented at IDweek 2015, October 7-11, 2015; San Diego, CA.

      [4] Seward JF, Watson BM, Peterson CL, Mascola L, Pelosi JW, Zhang JX, et al. Varicella disease after introduction of varicella vaccine in the United States, 1995-2000. JAMA 2002; 287(5):606-611. Seward JF, 2002

      [5] Hook EB, Regal RR. The value of capture-recapture methods even for apparent exhaustive surveys: the need for adjustment for source of ascertainment intersection in attempted complete prevalence studies. Am J Epidemiol 1992; 135:1060-1067. Hook EB, 1992

      [6] Goldman GS, King PG. Review of the United States universal varicella vaccination program: Herpes-zoster incidence rates, cost effectiveness, and vaccine efficacy based primarily on the Antelope Valley Varicella Active Surveillance Project data. Vaccine 2013; 31(13):1680-1694. Goldman GS, 2013

      [7] Guzzetta G, Poletti P, Del Vava E, et al. Hope-Simpson’s progressive immunity hypothesis as a possible explanation for herpes –zoster incidence data. Am J Epidemiol 2013; 77(10):1134-1142. Guzzetta G, 2013


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 29, Bhaskar Chandra Mohan Ramisetty commented:

      One major issue with this work would be the choice of strain. Although the strain is supposedly relA+, it was shown that the relA1 mutation in this particular strain is not cured (Tsilibaris et al., 2007; see the materials and methods section). In our own experiments, we found the same strains to be SMG-negative, meaning that they are deficient in the production of ppGpp. It may be noted that the details of the strain construction were not given accurately in Engelberg-Kulka et al., 1998. We would appreciate it if these strains were verified independently for relA mutations. Since the work revolves around stress physiology, it might be imperative to validate these observations in the E. coli MG1655 strain.

      Our work on the above-stated problem has recently been published: http://www.ncbi.nlm.nih.gov/pubmed/27259116. Thank you.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Feb 29, Bhaskar Chandra Mohan Ramisetty commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 20, Linda Z Holland commented:

      In my review, I did not intend to criticize the ability of ascidian development to say something about the role of gene subnetworks in developing systems in vivo—it is a fruitful approach worthy of vigorous pursuit. Ascidians are highly tractable for experimental embryology and have scaled-down genomes and morphologies (at least with respect to vertebrates). As a result, noteworthy progress is being made in elucidating the gene networks involved in ascidian notochord development (José-Edwards et al. 2013, Development 140: 2422-2433) and heart development (Kaplan et al. (2015. Cur Opin Gen Dev 32: 119-128). It is currently a useful working hypothesis to make close comparisons between gene subnetworks in ascidians and other animals (Ferrier 2011. BMC Biol 9: 3). At present, however, the genotype-to-phenotype relationship is an unsolved problem in the context of a single species, and to consider the problem across major groups of animals is to venture deep into terra incognita. Much more work on the development in the broadest range of major animal taxa will be required to determine how (or even if) genotypes can predict phenotypes in vivo in embryos and later life stages. Studies of this complex subject, which are likely to require a combination of experimental data and computational biology (Karr et al, 2012. Cell 150: 389-401) are still in their infancy. That said, when I consider the developmental biology of animals in general, I think it is very likely that the highly determinate embryogenesis and genomic simplifications of ascidians are evolutionarily derived states. It is possible that this ancestor may have been more vertebrate-like than tunicate-like. For example, it might have had definitive neural crest, and the situation in modern ascidian larvae, which apparently have part of the gene network for migratory neural crest, may represent a simplification from a more complex ancestor. In the absence of fossils that could represent the common ancestor of tunicates and vertebrates, we cannot reconstruct a reasonable facsimile of this ancestor. Given that tunicates are probably derived, it is not very likely that any amount of research on modern chordates will solve this problem.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 15, Lionel Christiaen commented:

      In this article, the author presents an extensive account of the extreme diversity of adult anatomies and life histories encountered across the thousands of tunicate species that roam the oceans worldwide, and occupy multitudes of ecological niches. The author then emphasizes that tunicate genomes are markedly more compact and evolve faster than the genomes of their chordate relatives, the cephalochordates and vertebrates. Several recent studies support this notion, and the argument that rapid genome diversification may have fostered tunicate evolution is reasonable. Since the early development of tunicates, in particular ascidians, has been considerably simplified and streamlined in a manner analogous to what is observed in nematodes, the author argues that tunicates must have lost most ancestral genomic, developmental and anatomical features that could inform reconstruction of the evolutionary history of vertebrate traits. We wish to provide alternative interpretations and propose a more inclusive approach to the problems posed by tunicates in building models for the evolution of vertebrates. First, the argument about faster evolutionary rates implies that every part of the genome evolves at similarly faster rates; yet, phylogenomic analyses of concatenated coding sequences unequivocally revealed that tunicates and vertebrates form a monophyletic group referred to as olfactores [1, 2]. Moreover, conserved anatomical features including the notochord, the dorsal neural tube and the pharyngeal gill slits depend upon ancestral regulatory inputs from conserved transcription factors, as noted by the author. These simple examples argue against a complete relaxation of evolutionary constraints on ancestral features in tunicates, especially in ascidians. In other words, high average rates of sequence evolution and profound morphological changes are not incompatible with deep conservation of cellular and molecular mechanisms for embryonic patterning and cell fate specification. Instead, the apparent incompatibility between high rates of genome divergence and the maintenance of ancestral olfactores features over long evolutionary distances hints at the notion of developmental system drift (DSD), whereby mechanistically connected developmental features may be conserved between distantly related species exhibiting extensive divergence of the intervening processes [3]. Ascidians provide an attractive test-bed to study DSD since their early embryos have barely changed in almost half a billion years, despite considerable genomic divergence [4]. This is a lively area of research as illustrated by the 11 tunicate genomes recently made openly available to the worldwide research community [4-6]. We argue that comparative developmental studies are poised to identify additional features conserved between tunicates and vertebrates, such as those recently reported for the neural crest, the cranial placodes and the cardiopharyngeal mesoderm [7-10]. These "islands of conservation" will continue to shed light on the mechanisms of tunicate diversification and the deep evolutionary origins of the vertebrate body plan.

      REFERENCES

      1. Delsuc, F., Brinkmann, H., Chourrout, D., and Philippe, H. (2006). Tunicates and not cephalochordates are the closest living relatives of vertebrates. Nature 439, 965-968.

      2. Putnam, N.H., Butts, T., Ferrier, D.E., Furlong, R.F., Hellsten, U., Kawashima, T., Robinson-Rechavi, M., Shoguchi, E., Terry, A., Yu, J.K., et al. (2008). The amphioxus genome and the evolution of the chordate karyotype. Nature 453, 1064-1071.

      3. True, J.R., and Haag, E.S. (2001). Developmental system drift and flexibility in evolutionary trajectories. Evolution & development 3, 109-119.

      4. Stolfi, A., Lowe, E.K., Racioppi, C., Ristoratore, F., Brown, C.T., Swalla, B.J., and Christiaen, L. (2014). Divergent mechanisms regulate conserved cardiopharyngeal development and gene expression in distantly related ascidians. eLife 3, e03728.

      5. Voskoboynik, A., Neff, N.F., Sahoo, D., Newman, A.M., Pushkarev, D., Koh, W., Passarelli, B., Fan, H.C., Mantalas, G.L., Palmeri, K.J., et al. (2013). The genome sequence of the colonial chordate, Botryllus schlosseri. eLife 2, e00569.

      6. Brozovic, M., Martin, C., Dantec, C., Dauga, D., Mendez, M., Simion, P., Percher, M., Laporte, B., Scornavacca, C., Di Gregorio, A., et al. (2016). ANISEED 2015: a digital framework for the comparative developmental biology of ascidians. Nucleic acids research 44, D808-818.

      7. Abitua, P.B., Gainous, T.B., Kaczmarczyk, A.N., Winchell, C.J., Hudson, C., Kamata, K., Nakagawa, M., Tsuda, M., Kusakabe, T.G., and Levine, M. (2015). The pre-vertebrate origins of neurogenic placodes. Nature 524, 462-465.

      8. Abitua, P.B., Wagner, E., Navarrete, I.A., and Levine, M. (2012). Identification of a rudimentary neural crest in a non-vertebrate chordate. Nature 492, 104-107.

      9. Diogo, R., Kelly, R.G., Christiaen, L., Levine, M., Ziermann, J.M., Molnar, J.L., Noden, D.M., and Tzahor, E. (2015). A new heart for a new head in vertebrate cardiopharyngeal evolution. Nature 520, 466-473.

      10. Stolfi, A., Ryan, K., Meinertzhagen, I.A., and Christiaen, L. (2015). Migratory neuronal progenitors arise from the neural plate borders in tunicates. Nature 527, 371-374.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 27, Randi Pechacek commented:

      Katherine Dahlhausen wrote a blog post about this paper on microBEnet. She explains a little about bacteriocins and whether they are too good to be true.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 04, R Andrew Moore commented:

      Do topical nonsteroidal anti-inflammatory drugs for acute musculoskeletal pain work?

      It is good to see Peter taking an interest in another of our publications.

      The evidence shows that topical NSAIDs can be effective for acute musculoskeletal pain. That was the result we found in our first review of the topic (BMJ 1998;316:333–8), and it has not changed in subsequent updates. Even 20 years ago we actively examined the issue of study size and concluded that small studies tended to overestimate treatment effects, as well as testing other issues of quality and of the particular topical NSAID tested. In the current update we were able to add formulation to the list of topics that might affect study results.

      Peter dismisses the results for a number of reasons, we think incorrectly. For example, the only risk of bias measure where there was potentially high risk of bias was small study size, a measure included in our risk of bias assessments rather than ignored, as is so often the case. On the issue of industry funding affecting the results of randomised trials, not only is there no evidence of any such effect, but there is positive evidence that there is no effect. We showed the lack of any effect in analgesic trials a decade ago (Pain 2006;121:207-18), and no effect was found for statins (BMJ 2014;349:g5741). That does not mean that industry has clean hands, of course, and we have pointed out, for example, the biases that might arise from multiple publication (BMJ 1997;315:635-40) or inappropriate imputation methods (Pain. 2012;153:265-8).

      The review did include a number of older studies of relatively poorer quality, involving nine NSAIDs (other than diclofenac and ketoprofen) but we make clear in the review that there were insufficient data of adequate quality to draw any conclusions about these. Studies were underpowered for adverse events, which were inconsistently reported, but we have again drawn attention to these limitations, which are common in many clinical trials.

      The review is not sui generis, but an ongoing dynamic of updated reports over the years. For example we wrote to 88 pharmaceutical companies for our 2004 update (BMC Family Practice 2004;5:10), with only one providing otherwise unpublished data. Topical NSAIDs (like most drugs in pain) are usually generic, with no requirement to perform clinical testing, or older, when it is much more difficult to obtain CTRs, as we have frequently been able to do previously. Experience suggested that trying to get hold of CTRs from hundreds of companies worldwide, many of which had done no trials on their product, was probably a lost cause. In this update we were able to identify that trial data from almost 6,000 patients was not available; a known unknown rather than an unknown unknown.

      Of course there was heterogeneity for all formulations together, because different formulations produced different levels of efficacy (and probably with different doses as well, though that was difficult to assess). But within formulations like Flector plaster or Emulgel the I2 was 0; Figures 5 and 6 in the Cochrane review make the point. There are arguments to have about the use of heterogeneity tests, but this is not the place.
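
      For reference, I2 estimates the proportion of variability across trials that is due to heterogeneity rather than chance; the standard Higgins-Thompson definition is:

      \[ I^2 = \max\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\% \]

      where Q is Cochran's heterogeneity statistic and k is the number of trials, so I2 = 0 means the observed variation is no greater than expected by chance.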

      In terms of declarations, the issues here are of time, what it is that journals want, and relevance. Both RB and Menarini have topical NSAID products. RAM has worked with both those companies, and while that did not involve topical products it is a relevant disclosure. Futura Pharma dropped out of the three year period for declarations. None of the others were relevant, and for the most part involved investigator-initiated research using patient-level data to elucidate issues around evidence in analgesic trials, for example the importance of formulation (Pain 2014;155:14-21; Eur J Pain 2015;19:187-92) or how best to conduct multiple dose studies in acute pain (Br J Anaesth. 2016;116:269-76; BMC Anesthesiol 2016;16:9).

      There is much that could not be done without constructive involvement with pharmaceutical companies, as they produce the trial data on which we base our evidence. Understanding that evidence thoroughly is what it is all about. But we think there is confusion over declarations of interest and what is required, and are actively working on just that topic.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 29, Peter Gøtzsche commented:

      Do topical nonsteroidal anti-inflammatory drugs for acute musculoskeletal pain work?

      Referring to their Cochrane review, the authors asserted that topical NSAIDs are effective for acute musculoskeletal pain (1). However, there are major problems with the trials they reviewed and also with the Cochrane review itself (2). The authors included 8,781 patients in their review but found that the results were missing from another 5,900 patients. Moreover, the trials were industry funded, of relatively poor quality, and the authors analysed published data, not data from clinical study reports, and did not try to obtain all the missing trials and data from the manufacturers.

      There was extreme heterogeneity in their meta-analyses, e.g. I2 was 92% for the diclofenac trials, which were the most common ones, and there was extreme funnel plot asymmetry, with the largest trials showing the smallest effects (the authors didn’t show funnel plots but I constructed one for diclofenac).

      The authors cautioned that the large amounts of unpublished data “could influence results in updates of this review” (2). They certainly could and I believe it is plain wrong to perform meta-analyses on the authors’ data. When I most recently reviewed this area for the BMJ in 2010, I concluded that we don't know whether topical NSAIDs are beneficial (3).

      One of the authors, Andrew Moore, “reported receiving a grant and personal fees from Reckitt Benckiser and personal fees from Menarini. No other disclosures were reported” (1). However, in 2015, Moore published another systematic review of NSAIDs where his competing interests, in addition to those declared in JAMA, were: “personal fees from Novartis, grants and personal fees from Grunenthal, personal fees from Orion Pharma, personal fees from Futura Pharma, personal fees from Astellas, personal fees from Eli Lilly, personal fees from Pfizer and personal fees from Menarini” (4).

      1 Derry S, Wiffen P, Moore A. Topical nonsteroidal anti-inflammatory drugs for acute musculoskeletal pain. JAMA 2016;315:813-4.

      2 Derry S, Moore RA, Gaskell H, McIntyre M, Wiffen PJ. Topical NSAIDs for acute musculoskeletal pain in adults. Cochrane Database Syst Rev. 2015;6:CD007402.

      3 Gøtzsche PC. NSAIDs. Clin Evid (Online). 2010 Jun 28.

      4 Moore RA, Derry S, Wiffen PJ, Straube S. Effects of food on pharmacokinetics of immediate release oral formulations of aspirin, dipyrone, paracetamol and NSAIDs - a systematic review. Br J Clin Pharmacol 2015;80:381-8.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 08, Nicholas Malmquist commented:

      Thank you Dr. Soldati-Favre for your comments and highlighting the work of Dr. Ke Hu. In Chen PB, 2016 we openly discuss the possibility that PfSET7 is not necessarily a histone methyltransferase. We also provide examples from the literature of histone methyltransferase enzymes that are present in the cytosol in other organisms. After reporting PfSET7 as an active methyltransferase enzyme using histones as protein substrates, similar to the TgAKMT activity assays performed in Heaslip AT, 2011 and Sivagurunathan S, 2013, we invite our colleagues in the community, including those with an interest in parasite motility, to join us in the further exploration of the cellular function of PfSET7. Indeed, based solely on the phylogenetic analysis in Sivagurunathan S, 2013, investigating the role of PfSET7 in motility might prove fruitful. For additional information, the nomenclature "PfSET7" comes from Cui L, 2008 and is apparently unrelated to the SET7 family in Figure 1A of Sivagurunathan S, 2013.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 06, Dominique Soldati-Favre commented:

      This well-conducted study reports the enzymatic characterization of a Plasmodium falciparum protein methyltransferase (PF3D71115200, referred to as PfSET7). This work is embedded in a line of research aiming at a better understanding of how histone post-translational modifications orchestrate gene expression, notably of genes involved in virulence and antigenic variation. However, the unexpected punctate cytoplasmic localization of PF3D71115200 in parasites from the asexual blood, sporozoite and liver stages is not easily compatible with a function as a histone methyltransferase. Indeed, published evidence of the phylogenetic and functional characterization of a Toxoplasma gondii ortholog of PF3D71115200 points in another direction. Heaslip AT, 2011 reported the functional characterization of an apical protein lysine methyltransferase (TGME49216080, AKMT). The authors elegantly showed that TgAKMT is involved in activation of T. gondii motility. Additionally, Sivagurunathan S, 2013 reported a detailed dissection of TgAKMT, along with a robust phylogenetic analysis showing that various apicomplexan AKMT orthologs form a clade distinct from other KMTs. In the phylogenetic tree presented in Figure 1A, PF3D71115200 is described as an ortholog of TgAKMT, whereas not a single apicomplexan protein appears to fall within the SET7 cluster of histone methyltransferases. The work from Dr. Ke Hu is thus of considerable value for revisiting the interpretation of the data presented here, and for shedding light on the potential role of PF3D71115200 in regulation of motility in P. falciparum.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 08, Randi Pechacek commented:

      Katherine Dahlhausen, a biophysics Ph.D. candidate, wrote an enthusiastic blog post on microBE.net about how this new technique works.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 04, Ruopeng An commented:

      Dr. Brown, we sincerely appreciate your comments on our paper. We agree with you regarding the limitations of self-reported 24-hour dietary recall data in the NHANES. However, despite these data limitations, the NHANES 24-hour dietary recall data have been a primary source to study dietary behavior at the population level. In the paper, we noted: “Dietary intakes in NHANES were self-reported and subject to measurement error and social desirability bias …. Dietary recall method in estimating plain water consumption is likely to result in underestimation because water intake occasions are often forgotten.” Due to these study limitations, we call for further research that adopts a randomized study design and an objective measure of water intake: “… this work is observational in nature, and the findings warrants confirmation through controlled interventions.”


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 23, Andrew Brown commented:

      This article used self-reported energy intake and self-reported water intake to try to estimate the association between water consumption and actual energy intake. The use of self-reported energy intake as an estimate for actual energy intake has been widely demonstrated to be invalid over decades of research [BEAUDOIN R, 1953, Schoeller DA, 1990] and by expert consensus [Schoeller DA, 2013, Dhurandhar NV, 2015, Subar AF, 2015]. Considering that much of the article and its conclusions focus on energy intake, much of it is also invalid, particularly undermining the use of these results to support the conclusion that “promoting plain water intake could be a useful public health strategy for reducing energy...”


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 27, Jim Woodgett commented:

      Interested in testing this further in either conditional or global GSK-3 KOs (would also get at possible isoform issue)? We found this approach effective in comparing a different GSK-3 small molecule inhibitor with a GSK-3alpha KO in another model of schizophrenia (DISC1 L100P: http://www.ncbi.nlm.nih.gov/pubmed/20687111). Also rescued behavioural deficits.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 24, DAVID ALLISON commented:

      We write to correct a misstatement in the above referenced article. The article stated “On gestation d 4, rats were randomized into 2 groups: 18 entered the unfiltered chamber, and the remaining 12 entered the filtered chamber”. One of us (DBA) contacted the others (JZ and YW) to inquire how and why the 3:2 allocation was achieved in the randomization and upon dialogue it became clear that allocation was not via randomization. The sentence should be corrected as “On gestation d 4, rats were assigned into 2 groups: 18 entered the unfiltered chamber, and the remaining 12 entered the filtered chamber. The assignment was done by consideration of baseline body weight in such a way that the baseline weights were not significantly different between the two groups.”

      Sincerely,

      Jim Zhang, PhD, Professor of Global and Environmental Health

      Yongjie Wei, PhD, Associate Professor, Chinese Academy of Environmental Sciences

      David B. Allison, PhD, Distinguished Professor, University of Alabama at Birmingham


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 25, Cicely Saunders Institute Journal Club commented:

      The Cicely Saunders Institute journal club discussed this paper on Wednesday 6 April 2016.

      We enjoyed discussing this paper and felt that the authors used routinely available data innovatively to examine important health effects on a large number of bereaved informal caregivers, particularly exploring differences between those living with patients of different diagnoses. We were pleased by the inclusion of dementia and COPD, which are often neglected compared with bereaved caregivers of those with cancer. An interesting part of our discussion focused on those excluded from the study. A suggestion was that it would have been interesting to have divided the numbers in the first exclusion box of Figure 1, to identify the proportion of those without a cohabitee separate from those in non-eligible households (and the reasons for their ineligibility), particularly as these two groups comprised more than 75% of the data. We also wondered why the study focussed solely on spousal/partner caregivers rather than on other family members and friends. We discussed how the political landscape might impact on informal caregivers, for example the Care Act 2014. Given the future structural changes in funding caregiving, we found it interesting to reflect on how legislative changes may affect outcomes for caregivers. This interesting study recognises the important role that informal caregivers, whose burden is currently under-measured, play in allowing people to die at home and ensuring that their preferences can be met. In future it would be useful to explore further the identification of informal caregivers on GP registers.

      Commentary by Clare Pearson and Mendwas Dzingina


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 19, David Keller commented:

      What about safety?

      This study confirms that, in symptomatic male seniors, raising testosterone concentrations from moderately low to the normal midlife range has moderate benefits. Prior studies have yielded similar benefits in similar groups of men. What we are still missing is evidence of safety for this intervention. Specifically, we need to rule out the possibility that exogenous testosterone will increase deaths from atherosclerotic arterial disease, especially heart attacks, or neoplasms, especially prostate cancer.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 19, Stuart RAY commented:

      In response to a query regarding the method used to measure insulin concentration (to calculate HOMA-IR), Dr. Kernan referred me to Viscoli CM, 2014, which states: "The Linco (St. Charles, MO) human insulin-specific radioimmunoassay (RIA) was used at the laboratories in North America and Australia to measure circulating insulin concentrations. Because this assay was not available at the laboratories in Europe and Israel, the Linco animal serum-free enzyme-linked immunosorbent assay was used and results converted to RIA values by means of an internal LINCO correlation equation (insulin RIA [μU/mL] = 1.1056 × (insulin enzyme-linked immunosorbent assay [ulU/mL]) + 2.1494)."
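      Taken at face value, the quoted correlation is a simple linear rescaling of the ELISA reading; a minimal sketch, applied to an arbitrary example value rather than any study measurement:

          # The quoted Linco correlation applied to an illustrative ELISA value.
          def elisa_to_ria(insulin_elisa):
              """Convert an ELISA insulin reading (uIU/mL) to an RIA-equivalent value (uU/mL)."""
              return 1.1056 * insulin_elisa + 2.1494

          print(elisa_to_ria(10.0))  # 10 uIU/mL by ELISA -> about 13.2 uU/mL RIA-equivalent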


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 07, David Keller commented:

      Dementia caused by elevated aluminum levels in dialysis is not Alzheimer's disease: a distinction without a difference

      Professor Haenisch gave two references to substantiate her statement that "the involvement of aluminum in the etiology of dementia seems to be a matter of debate". Immediately below, I summarize what I have learned from her first reference [1]:

      Lidsky points out that the clinical presentation of dementia caused by elevated aluminum levels in dialysis patients is clearly distinct from that of true Alzheimer-type dementia. He also debunks the rumor that elevated aluminum levels cause the neurofibrillary tangles in the human brain which are pathognomonic for Alzheimer disease, noting that the neurofibrillary tangles caused by aluminum exhibit a distinctly different pattern when examined carefully under immunofluorescence, proving once and for all that the form of brain damage caused by aluminum is definitely not Alzheimer disease.

      These findings are noted, but are of little comfort if they merely imply that aluminum ingestion causes brain damage and dementia which cannot be classified as Alzheimer type. As a primary-care physician who must answer patients' questions about the risks of dietary aluminum, I find that this distinction truly makes no difference to patients or to me.

      Reference

      [1] Lidsky TI. Is the aluminum hypothesis dead? J Occup Environ Med. 2014;56(5)(suppl): S73-S79.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Aug 05, David Keller commented:

      Advice to avoid PPI medication is premature until major confounding by the use of aluminum antacids is eliminated

      "The avoidance of PPI medication may prevent the development of dementia." So begins the widely-quoted conclusion of this study by Gomm, Haenisch and colleagues. The evidence I presented in my letter to JAMA-Neurology [1] along with its effect on their study was "addressed", as Haenisch claims, but in a manner I would characterize as completely dismissive and devoid of information.

      To recapitulate, my letter pointed out that Haenisch's group had not corrected their results for chronic aluminum exposure. A recent meta-analysis of 8 case-control and cohort studies of over 10,500 subjects showed a significant 71% increase in risk of Alzheimer dementia for subjects with chronic aluminum exposure. [2] Are these findings by Wang and colleagues not applicable for some reason?

      Aluminum salts are the active ingredients of most immediately-effective over-the-counter antacids (calcium-based antacids are less effective, and antihistamines are slowly effective). Haenisch found a 44% increase in dementia for subjects taking proton pump inhibitors (PPIs), compared with subjects not taking them. However, patients taking a PPI for upper GI acid symptoms are more likely than are controls to have taken aluminum-based antacids (such as Maalox, Mylanta, Rolaids and many others). In the USA, physicians routinely inquire about the use of such antacids by patients suspected of having upper GI acid disorders.

      Subjects taking PPIs logically must therefore be more likely to have a history of chronic aluminum exposure due to OTC antacid use than controls, and the 71% higher risk of dementia from their aluminum exposure is greater than the 44% increase found by Haenisch in association with PPI use. Attribution of increased risk for dementia to PPI use therefore requires correction of her dataset for exposure to aluminum antacids.

      Haenisch stated "we were not able to include aluminum-containing antacids as these drugs are often not covered by the statutory health insurance in Germany". That was not how I learned epidemiology should be practiced, from a German epidemiology professor who goes out in work boots to collect the data herself if it is missing.

      Therefore, the recommendation by Haenisch to avoid PPI use seems premature and should be withdrawn, as a public safety measure, until someone can truly address the following question: how much of the risk associated with PPI use is likely to be attributable to the use of aluminum-based antacids? Otherwise, we may witness an upsurge in peptic ulcer disease as patients and physicians prematurely embrace Haenisch's conclusions.

      References

      1: Keller DL. Proton Pump Inhibitors and Dementia Incidence. JAMA Neurol. 2016 Jun 20. doi: 10.1001/jamaneurol.2016.1488. [Epub ahead of print] PubMed PMID:27323287.

      2: Wang Z, Wei X, Yang J, Suo J, Chen J, Liu X, Zhao X. Chronic exposure to aluminum and risk of Alzheimer's disease: A meta-analysis. Neurosci Lett. 2016 Jan 1;610:200-6. doi:10.1016/j.neulet.2015.11.014. Epub 2015 Nov 27. PubMed PMID: 26592479.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Aug 03, Britta Haenisch commented:

      Aluminium exposure and dementia

      Keller raised the issue of aluminium ingestion and dementia risk in a letter to JAMA Neurology [1]. We would like to refer to our reply in JAMA Neurology, where we addressed the comment [2]. While the involvement of aluminum in the etiology of dementia seems to be a matter of debate [3,4], this aspect would be interesting to examine in further studies.

      References

      [1] Keller DL. Proton Pump Inhibitors and Dementia Incidence. JAMA Neurol. 2016 Jun 20. doi: 10.1001/jamaneurol.2016.1488.

      [2] Gomm W, Haenisch B. Proton Pump Inhibitors and Dementia Incidence-Reply. JAMA Neurol. 2016 Jun 20. doi: 10.1001/jamaneurol.2016.1494.

      [3] Wang Z, Wei X, Yang J, et al. Chronic exposure to aluminum and risk of Alzheimer’s disease: a meta-analysis. Neurosci Lett. 2016;610: 200-206.

      [4] Lidsky TI. Is the aluminum hypothesis dead? J Occup Environ Med. 2014;56(5)(suppl): S73-S79.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Jul 21, David Keller commented:

      Aluminum exposure raised dementia risk by 71% in meta-analysis; PPI dementia risk confounded by aluminum-containing antacids

      Dietary aluminum ingestion is theorized to be neurotoxic and play a causative role in the onset and progression of dementia. [1-3] A recent meta-analysis showed that individuals chronically exposed to aluminum were 71% more likely to develop Alzheimer disease (odds ratio, 1.71; 95% CI, 1.35-2.18).[4] Many strong antacids contain aluminum hydroxide and are often taken for years by patients with peptic ulcer disease or gastroesophageal reflux before they are prescribed proton pump inhibitors, and concurrent with their use. Gomm and colleagues [5] did not correct for the use of aluminum-containing antacids when calculating the association of dementia with proton pump inhibitor use. How much of their observed association of proton pump inhibitor use with dementia is actually due to long-term ingestion of aluminum antacids, either currently or in the past?

      References

      1: Bhattacharjee S, Zhao Y, Hill JM, Percy ME, Lukiw WJ. Aluminum and its potential contribution to Alzheimer's disease (AD). Front Aging Neurosci. 2014 Apr 8;6:62. doi: 10.3389/fnagi.2014.00062. eCollection 2014. PubMed PMID:24782759; PubMed Central PMCID: PMC3986683.

      2: Rodella LF, Ricci F, Borsani E, Stacchiotti A, Foglio E, Favero G, Rezzani R, Mariani C, Bianchi R. Aluminium exposure induces Alzheimer's disease-like histopathological alterations in mouse brain. Histol Histopathol. 2008 Apr;23(4):433-9. PubMed PMID: 18228200.

      3: Exley C. What is the risk of aluminium as a neurotoxin? Expert Rev Neurother. 2014 Jun;14(6):589-91. doi: 10.1586/14737175.2014.915745. Epub 2014 Apr 30. PubMed PMID: 24779346.

      4: Wang Z, Wei X, Yang J, Suo J, Chen J, Liu X, Zhao X. Chronic exposure to aluminum and risk of Alzheimer's disease: A meta-analysis. Neurosci Lett. 2016 Jan 1;610:200-6. doi:10.1016/j.neulet.2015.11.014. Epub 2015 Nov 27. PubMed PMID: 26592479.

      5: Gomm W, von Holt K, Thomé F, et al. Association of proton pump inhibitors with risk of dementia: a pharmacoepidemiological claims data analysis [published online February 15, 2016]. JAMA Neurol. doi:10.1001/jamaneurol.2015.4791.

      The above letter was published in JAMA-Neurology [6], but the reply by Gomm and colleagues failed to provide the necessary correction for aluminum antacid use.

      6: Keller DL. Proton Pump Inhibitors and Dementia Incidence. JAMA Neurol. 2016 Jun 20. doi: 10.1001/jamaneurol.2016.1488. [Epub ahead of print] PubMed PMID:27323287.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 18, Roman Stilling commented:

      Please also kindly note a previous study from our lab, where we show NF-kappaB interacts with the Kat2a/Gcn5 histone acetyltransferase to regulate stimulus-induced gene expression of plasticity-associated genes in the CA1 region of the hippocampus in mice: http://www.ncbi.nlm.nih.gov/pubmed/25024434


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 31, Robert Speth commented:

      The primary reason this study is scientifically invalid is that there is no evidence that a protein the size of apoaequorin, administered in pill form, can enter the body via absorption from the digestive tract. Indeed, there is abundant evidence that proteins are metabolized in the digestive tract into component amino acids. To make a simple comparison, this is the reason insulin is injected into the body rather than taken orally. Additionally, even if apoaequorin was administered parenterally, it would not be able to cross the blood-brain barrier and enter into the brain to exert its therapeutic effects in brain neurons that mediate cognitive functions. It is noteworthy that in the manuscript the authors cite to show the putative benefits of apoaequorin to protect hippocampal neurons from stroke damage [1], the apoaequorin was administered intracranially since this was the only way that the drug could gain access into the brain. Additionally, the protective effects described for apoaequorin in that study related to hypoxic/glucose deprivation conditions associated with stroke leading to massive increases in intracellular calcium, as opposed to normoxic/glucose available conditions. Notably missing from the article is any mention of a mechanism by which orally administered apoaequorin could gain access to neurons in the brain to mediate the improvement in cognitive function.

      Additionally, if by some remarkable circumstance apoaequorin was able to enter the body from the GI tract, e.g., through an open sore in the mouth, accidental inhalation of the pill, or a damaged esophagus, it would pose a serious health hazard to the person taking this pill because foreign proteins are immunogenic and could produce a harmful immunological response. Worse yet, it might impair the calcium homeostasis of every cell in the body in which it gained entry.

      Another reason that this article is scientifically invalid is that there was no statistical comparison made between the apoaequorin treated group and the placebo control group. The primary reason for including a placebo control group in clinical trials is to determine if the treated group improves significantly more than the placebo group. The omission of this information invalidates any claims the authors might try to make regarding the efficacy of orally administered apoaequorin.

      Additional statistical inadequacies that make the article scientifically invalid are: there is no representation of the error variance for the data presented in Figures 1-6 of the manuscript, nor is such error variance reported in the text of the manuscript. The Statistical analysis section describes the use of paired and independent t tests, a repeated-measures analysis of covariance (ANCOVA) test, Mann-Whitney U test and Wilcoxon signed-ranks test to examine group differences. However, the only tests presented in the results are paired t tests with degrees of freedom that differ from the degrees of freedom expected based upon the group sizes that were reported in Table 1. Only 2 subjects are indicated as not having completed the testing, yet the number of subjects per group, which should be the number of degrees of freedom plus 1, suggests that the values for 51 subjects are missing from the statistical analyses.
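      The arithmetic behind this check is straightforward: for a paired t test, the degrees of freedom equal the number of analyzed pairs minus 1, so a reported df implies how many subjects were actually analyzed. A minimal sketch with made-up numbers (the real counts are those reported in Table 1 of the article, and the function name is hypothetical):

          # Sketch of the degrees-of-freedom check described above (made-up numbers).
          def unaccounted_subjects(n_group, n_dropouts, reported_df):
              # For a paired t test, df = analyzed pairs - 1, so analyzed = df + 1;
              # anything beyond the known dropouts is unaccounted for in the analysis.
              analyzed = reported_df + 1
              return (n_group - n_dropouts) - analyzed

          print(unaccounted_subjects(n_group=60, n_dropouts=1, reported_df=49))  # -> 9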

      The description of the tests of cognition is vague and does not allow the reader to know how many different tests were administered to the participants in “The Madison Memory Study”. There is no indication of the number of computerized tasks from the CogState Research Battery that were administered to the study participants. The lack of such information makes it impossible to determine if the preponderance of the tasks included in the CogState Research Battery showed no difference between the apoaequorin group and the control group. It also makes it impossible to determine the error rate that should be applied to t tests to account for multiple comparison mediated increases in the Type I error rate. Furthermore, there are much better and more meaningful ways to describe an effect than Cohen’s d test, for example the 95% confidence interval around the measured effect.

      In view of the statistical anomalies in this article, one must consider the possibility of subjective bias in the conduct of this research. Since all of the authors of the study are associated with the company that is marketing apoaequorin as a memory enhancer, and there is no acknowledgment of participation by any other persons, let alone an independent third party, there is a financial conflict of interest in the outcome of the study that should disqualify the authors from being able to claim that this study was objectively double-blinded. It is troublesome that the control group, from which the values for 13% of the participants are missing, experienced a dramatic reduction in their level of performance enhancement between days 60 and 90, for which no explanation is provided, while the apoaequorin group, for which the results for 2 subjects are missing, showed a dramatic increase in performance from 60 to 90 days. Looking only at the 90-day performance enhancement rather than the enhancements at shorter time intervals, which seems to have been part of the original study design by ANCOVA, also creates a selection bias that compromises the validity of the study.

      Another potential conflict of interest is the occurrence of a paid advertisement for Prevagen® brand of apoaequorin prior to the table of contents page of the journal issue. The possibility that publication of this article in Advances in Mind-Body Medicine was associated with financial compensation to the journal for placement of this advertisement is at the very least an apparent financial conflict of interest.

      Finally, there is an inaccuracy in the characterization of apoaequorin “… having an amino acid sequence similar to human calcium binding proteins.” This uncited statement on page 5 is misleading. A BLAST sequence analysis of apoaequorin run using blastp revealed a small sequence homology with a single isoform of the human calcium binding protein plastin-3. By no means can the inference be made that apoaequorin has a high homology with human calcium binding proteins.

      1. Detert JA, Adams EL, Lescher JD, Lyons JA, Moyer JR, Jr. Pretreatment with apoaequorin protects hippocampal CA1 neurons from oxygen-glucose deprivation. PloS one. 2013;8(11):e79002. Epub 2013/11/19. doi: 10.1371/journal.pone.0079002. PubMed PMID: 24244400; PubMed Central PMCID: PMCPMC3823939.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 22, Judy Slome Cohain commented:

      One year is very short term, not long term, when it comes to uterine prolapse. Uterine prolapse is very common, starting in midlife and continuing until death, so lasting on average about 30 years. A woman who uses a pessary for one year may very well stop after that and then, left with no alternatives, use it on and off. It would be very interesting indeed to do a long-term study of what solutions women use LONG TERM for uterine prolapse.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 18, Jennifer A Lyon commented:

      I noticed a couple of very minor text editing issues in this article. I'm only commenting because the first one accidentally results in a rather amusing read.

      In the results, across pages 4-5, note the sentence "Patients with confirmed bacteremia had a more severe respiratory affection than those with no bacteria identified in blood." I suspect the authors meant either 'infection' or 'affliction' rather than 'affection.' I doubt the patients' respiratory systems were more affectionate in response to the presence of bacteremia. Nothing vital here, but it did give me a quick laugh.

      In the next paragraph on page 5 there's a simple typo: simultaneously is misspelled as "simltaneously."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 15, Christopher Tench commented:

      The version of GingerALE used (2.3.2) is known to produce false-positive results due to a bug that was fixed in version 2.3.6, so the results cannot be considered valid. Furthermore, there is no control of the type 1 error rate when comparing HC and patient groups, which is inappropriate given the roughly 10^5 statistical tests performed.
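      To see the scale of the problem, a back-of-the-envelope sketch (the test count is only the order of magnitude cited above, and 0.05 is an assumed nominal alpha):

          # Expected false positives under the global null, and a simple Bonferroni threshold.
          m, alpha = 100_000, 0.05
          print(m * alpha)   # about 5000 false positives expected with no correction
          print(alpha / m)   # Bonferroni-corrected per-test threshold: 5e-07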


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 31, Leigh Jackson commented:

      The Nobel Prize for Medicine was thoroughly deserved for the discovery of artemisinin via a clue in the traditional literature of Chinese herbal medicine.

      Should it be confirmed that the Ayurveda tradition is supported by genetics, it is not clear how that might result in the enormous kind of medical benefit provided by artemisinin. However, it would certainly be an interesting discovery.

      The study suggesting genetic support for Ayurveda needs independent verification and a lot more supporting evidence.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 03, Jason Doctor commented:

      Dear Dr. Del Mar and Colleagues,

      Thank you for your interest in our paper. We would like to respond to your questions and comments.

      You ask, “Does this effect spill over to the other three quarters of ARIs? Or also to the other conditions for which antibiotics might be prescribed, including skin and urinary infections?” The implementation (or “triggering”) of our interventions was restricted to acute respiratory infections. We did not apply the study interventions in cases where co-morbid diagnoses for skin and urinary infections were present at the visit. We were unable to evaluate spill-over to these other diagnoses, but such spillover would be an interesting area for future research.

      You also comment that our intervention could not evaluate total antibiotics dispensed and because of this may have underestimated the effects of the intervention for cases where ‘delayed prescribing’ was practiced. We actively attempted to discourage delayed prescribing with each of the three interventions by focusing on changing ordering behavior. Delayed prescribing is not a good treatment strategy because it sends conflicting messages to patients, forces patients to make a clinical decision, may result in patients consuming antibiotics unnecessarily, and may discourage follow-up visits for more serious medical conditions deserving careful evaluation (e.g., pneumonia).

      You note that we found that diagnosis shifting was not evident, referring to our presentation of eTable 6. However, transforming the coefficient estimates in eTables 3, 4A and 4B into odds ratios, your group reports finding a significant effect on the trajectory of ‘antibiotic appropriate’ and ‘antibiotic inappropriate’ diagnoses over time. We note that eTables 3, 4A and 4B do not include antibiotic appropriate diagnoses of any kind, so evaluation of data from these eTables cannot measure diagnosis shifting. Only eTable 6, which reports our analysis of the proportion of all acute respiratory infections coded as antibiotic appropriate diagnoses over time, contains antibiotic appropriate diagnoses. As noted, we found no evidence of diagnosis shifting in the analyses reported in eTable 6.

      As a final question, you ask why the control group’s prescribing rate was decreasing pre-randomization. You correctly point out that this cannot be due to the Hawthorne effect because it occurred prior to enrollment. You conjecture that this may be explained by diagnosis shifting by electronic health record coders. To address this, we make the following clarifying observations. First, the data presented in our graphs are from the statistical model and are not unadjusted raw rates over that time period. Unadjusted data were more variable during that period, and while they showed an overall reduction, they did not show a strictly decreasing reduction month-to-month. Second, during the period of time before the intervention, there were numerous state and local efforts to reduce inappropriate prescribing. It is possible that the noisy downward trend was due to a greater awareness that brought about changing practice patterns over time. Third, as indicated in eTable 6, time was not a significant predictor of the proportion of all acute respiratory infections coded as antibiotic appropriate diagnoses. This means that the trend is unlikely to be due to any shifting of diagnoses specifically over time. Whatever the reason for this trend, randomization and our primary analysis method ensure that pre-intervention trajectories of any sort do not threaten the study’s statistical conclusions.

      Authors: Daniella Meeker, PhD; Jeffrey A. Linder, MD, MPH; Craig R. Fox, PhD; Mark W. Friedberg, MD, MPP; Stephen D. Persell, MD, MPH; Noah J. Goldstein, PhD; Tara K. Knight, PhD; Joel W. Hay, PhD; Jason N. Doctor, PhD

      Author Affiliations: Schaeffer Center for Health Policy and Economics, University of Southern California, Los Angeles (Meeker, Knight, Hay, Doctor); RAND Corporation, Santa Monica, California (Meeker); Division of General Internal Medicine and Primary Care, Brigham and Women’s Hospital, Boston, Massachusetts (Linder, Friedberg); Anderson School of Management, University of California, Los Angeles (Fox, Goldstein); Department of Psychology, David Geffen School of Medicine at UCLA, Los Angeles (Fox, Goldstein); RAND Corporation, Boston, Massachusetts (Friedberg); Northwestern University, Chicago, Illinois (Persell).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 28, Chris Del Mar commented:

      Reducing inappropriate antibiotics for acute respiratory infections in primary care

      We congratulate Meeker and colleagues on a very ambitious factorial trial to reduce antibiotics in primary care [1]. The three interventions investigated appear to have had a small but important effect on the approximately one quarter of acute respiratory infections (ARIs) presenting to primary care clinicians for which antibiotics were judged inappropriate.

      Does this effect spill over to the other three quarters of ARIs? Or also to the other conditions for which antibiotics might be prescribed, including skin and urinary infections? Parsimony for the indications studied might spill over into other clinical areas, which would be important.

      Sadly, this analysis could not measure changes in total antibiotics dispensed (rather than the surrogate outcome of those prescribed). This would be important because one important reduction strategy, ‘delayed prescribing’ (in which an antibiotic is prescribed but the patient is advised to keep it ‘in case’ and not routinely have it dispensed [2]), might have been employed by some clinicians independently of the interventions being trialled, which might mean the true effect is greater.

      However there were two concerns raised at our Journal Club.

      Might the observed effect be explained by Diagnosis Shifting (in which high antibiotic prescribers disproportionately label a greater proportion of ARI diagnoses as antibiotic-justifiable [3])? The authors declare diagnosis shifting was not evident, referring to eTable 6. However, transforming the coefficient estimates in eTables 3, 4A and 4B into odds ratios, we found a significant effect on the trajectory of ‘antibiotic appropriate’ and ‘antibiotic inappropriate’ diagnoses over time.
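      For readers who want to reproduce this kind of check, transforming logistic-regression coefficient estimates into odds ratios usually just means exponentiating the estimates and their confidence limits; the sketch below uses placeholder values for the coefficient and standard error, not the numbers in the eTables.

          # Converting a logistic-regression coefficient to an odds ratio with a 95% CI
          # (beta and se are placeholders, not values from the eTables).
          import math

          beta, se = 0.15, 0.05
          odds_ratio = math.exp(beta)
          ci_low, ci_high = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
          print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))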

      What explains the control group’s dramatic reduction of ‘antibiotic inappropriate’ prescriptions in the 18 months before the interventions commenced, which continued through the study randomization period? This was in the order of 10% in the pre-randomization period, and a further 10% in the control group during the randomization period, greater than for any of the interventions themselves (taken from the slopes in Fig 2). This cannot be a Hawthorne effect because the data were collected for a time period before the clinicians were enrolled. It does not fit any nationwide trends. We speculate that it is explained by Diagnosis Shifting, perhaps by a misclassification by electronic health record coders. Until we understand this reduction, we can have no confidence in the smaller intervention effect.

      Chris Del Mar MD, professor of public health; Paul Glasziou PhD, professor of evidence based practice; Elaine Beller MAppStat, statistician. On behalf of the Centre for Research in Evidence Based Practice Journal Club, Bond University, Queensland 4229, Australia

      1 Meeker D, Linder JA, Fox CR, et al. Effect of behavioral interventions on inappropriate antibiotic prescribing among primary care practices: A randomized clinical trial. JAMA. 2016;315:562-70.

      2 Spurling GK, Del Mar CB, Dooley L, Foxlee R, Farley R. Delayed antibiotics for respiratory infections. Cochrane Database Syst Rev. 2013;CD004417. DOI: 10.1002/14651858.CD004417.pub3.

      3 Howie JG, Richardson IM, Gill G, Durno D. Respiratory illness and antibiotic use in general practice. J R Coll Gen Pract. 1971;21:657-63.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 27, Daniel Corcos commented:

      The answer of Welch et al. to Peter Eby is interesting: "Eby posits that the stable incidence of metastatic breast cancer derives from two countervailing trends: a steady increase in underlying (true) breast-cancer incidence and a steady decrease in metastatic-disease incidence resulting from screening. Such perfectly counterbalanced trends would be remarkable. Furthermore, the supposition that the 30% increase in overall incidence reflects true increased disease burden ignores the fact that most of it occurred during the 1980s — as mass screening was introduced. The fact that the increase persists suggests substantial over diagnosis." However, the evidence presented by Eby is clear. So the counterbalanced trends should be explained, which can be done by assuming that mass screening is responsible for the rising incidence of breast cancer. http://www.ncbi.nlm.nih.gov/myncbi/daniel.corcos.1/comments/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 05, Marko Premzl commented:

      The eutherian third-party data gene data sets FR734011-FR734074, HF564658-HF564785, HF564786-HF564815, HG328835-HG329089, HG426065-HG426183, HG931734-HG931849, LM644135-LM644234, LN874312-LN874522, LT548096-LT548244 and LT631550-LT631670 were deposited in the European Nucleotide Archive under the research project "Comparative genomic analysis of eutherian genes". The 1,293 complete coding sequences were curated using tests of reliability of eutherian public genomic sequences, as part of a eutherian comparative genomic analysis protocol that includes gene annotations, phylogenetic analysis and protein molecular evolution analysis (RRID:SCR_014401).

      Project leader: Marko Premzl PhD, ANU Alumni, 4 Kninski trg Sq., Zagreb, Croatia

      E-mail address: Marko.Premzl@alumni.anu.edu.au

      Internet: https://www.ncbi.nlm.nih.gov/myncbi/mpremzl/cv/130205/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 28, Christian J. Wiedermann commented:

      Flawed conclusion on sepsis and disseminated intravascular coagulation

      In this systematic review of antithrombin (AT) in critically ill patients, no statistically significant effect of AT concentrate on mortality was found in any of the studied patient groups including the group of severe sepsis and disseminated intravascular coagulation (DIC) in 12 randomized controlled trials (RCT) with a total of 2,858 participants.

      The KyberSept trial (Warren BL, 2001), a large-scale multicenter RCT directly assessing the effects of AT concentrate on mortality in patients with severe sepsis and septic shock, contributed 2,314 patients to the analysis in sepsis and DIC (weight, 81.4%); however, not all had DIC. In KyberSept patients, DIC was investigated only post hoc. Among the 563 participants on whom there were sufficient data to identify a subgroup of those with DIC, only 40.7% (229 of 563) had DIC at baseline (Kienast J, 2006). Consequently, in the subgroup analysis for patients with sepsis and DIC, the KyberSept trial is at high risk of bias, not at low risk.

      This implies that in the meta-analysis by Allingstrup et al. of sepsis and DIC, at least 334 (563 - 229) of the total of 2,858 participants definitely did not have DIC, thus invalidating its conclusions on the survival of sepsis and DIC patients after treatment with AT concentrate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 18, Scott Edmunds commented:

      The FTP links to the supporting oyster genome data in GigaDB are unfortunately likely to break during our next database migration, but the stable DOI that will always link to the data is here: http://dx.doi.org/10.5524/100030


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 15, Srinivasan Kannan commented:

      My name is given as Kannan Srinivasan in this article. Kindly read it as Srinivasan Kannan. Please consider this as Kannan S to access my publications on PubMed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 28, Christian J. Wiedermann commented:

      Flawed conclusion on sepsis and disseminated intravascular coagulation

      In this systematic review of antithrombin (AT) in critically ill patients, no statistically significant effect of AT concentrate on mortality was found in any of the studied patient groups including the group of severe sepsis and disseminated intravascular coagulation (DIC) in 12 randomized controlled trials (RCT) with a total of 2,858 participants.

      The KyberSept trial (Warren BL, 2001), a large-scale multicenter RCT directly assessing the effects of AT concentrate on mortality in patients with severe sepsis and septic shock, contributed 2,314 patients to the analysis in sepsis and DIC (weight, 81.4%); however, not all had DIC. In KyberSept patients, DIC was investigated only post hoc. Among the 563 participants on whom there were sufficient data to identify a subgroup of those with DIC, only 40.7% (229 of 563) had DIC at baseline (Kienast J, 2006). Consequently, in the subgroup analysis for patients with sepsis and DIC, the KyberSept trial is at high risk of bias, not at low risk.

      This implies that in the meta-analysis by Allingstrup et al. of sepsis and DIC, at least 334 (563 - 229) of the total of 2,858 participants definitely did not have DIC, thus invalidating its conclusions on the survival of sepsis and DIC patients after treatment with AT concentrate.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 04, Joe Newton commented:

      The absence of established biomarkers does indeed make diagnosis uncertain, and this uncertainty applies to other psychiatric anomalies. The missing heritability of gene events predisposing to anomalies can currently only be roughly determined. Also, there are expected factorial combinations (billions) of epistatic SNPs, suggesting similar numbers of distinct diagnoses. See Mellerup et al. 2004 (mania and reentry); Newton JR 2007 (gene ontology); Kim et al., 2010; Koefoed et al., 2011; Mellerup et al., 2012.

      The locations and times of particular regional volume changes and of dysmyelination extent are suggested places and times to start in the search for biomarkers.

      My congratulations to the authors.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 15, David Keller commented:

      Urologists have the most expertise at detecting curable prostate cancer

      The USPSTF recommendation against screening for prostate cancer using the PSA blood test was based on two large randomized studies which were hindered by poor compliance and design flaws [1]. Further, the widespread use of PSA testing has been accompanied by a 40% drop in deaths from prostate cancer, which is not adequately explained by any other factor. Now, this study informs us that urologists continue to employ the PSA test more often than primary care physicians. Since urologists generally have the most experience in the detection and early cure of prostate cancer, it is reasonable for primary-care physicians to emulate their practices, at least until credible results are obtained from properly designed and executed randomized trials.

      Reference

      1: Allan GM, Chetner MP, Donnelly BJ, Hagen NA, Ross D, Ruether JD, Venner P. Furthering the prostate cancer screening debate (prostate cancer specific mortality and associated risks). Can Urol Assoc J. 2011 Dec;5(6):416-21. doi: 10.5489/cuaj.11063. PubMed PMID: 22154638; PubMed Central PMCID: PMC3235209.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 09, David Keller commented:

      Screening for prostate cancer with PSA has never been properly evaluated

      The two randomized trials of PSA screening upon which the USPSTF based their D recommendation were ERSPC and PLCO. PLCO suffered from such high rates of statistical "contamination" (including off-protocol and pre-protocol screening of control subjects) that its negative result is not considered a valid assessment of PSA screening. [1]

      ERSPC reported a reduction in the prostate cancer mortality rate for screened men of 27% at 13 years by per-protocol analysis [2], versus a 21% reduction by intention-to-treat analysis. There is a 6-percentage-point discrepancy between these mortality reduction estimates because the latter counts men as having been screened based only on their initial randomization, even if they never had a single PSA test. Many consider the former figure a more realistic estimate for patients who actually get screened as directed.

      The high false-positive rate of PSA screening led to many unnecessary negative biopsies in the randomized trials. In clinical practice, the common-sense response to a high PSA is to repeat the test a week later for confirmation, because there are many benign causes of transient PSA elevation. If the repeated PSA is normal, the patient can be spared a biopsy. This approach has been demonstrated to reduce the harms of PSA screening compared with reflex biopsy based on a single elevated PSA level, as practiced in the randomized trials. [3]

      PSA velocity was not considered in the biopsy decision. For example, in centers using a biopsy threshold PSA of 3, a man whose PSA rose from 2.9 to 3.1 (a 6% increase) was biopsied, but a man whose PSA rose from 0.5 to 2.5 (a 400% increase) was not biopsied. A rapidly rising PSA is more likely to signal an aggressive prostate cancer than a higher but essentially stable PSA.

      Most subjects were screened with a PSA test about every 4 years in ERSPC, an interval long enough to allow aggressive tumors to metastasize before being detected. In clinical practice, PSA should be measured annually, or even more frequently, to better distinguish its inherent signal from its noise. The additional PSA data points can be used to establish a baseline PSA range, calculate PSA velocity, and to detect (and confirm) worrisome increases earlier, with the goal of intervening before metastasis occurs. Harms falsely attributed to frequent PSA measurements are actually caused by inappropriate reflex biopsies, which, as we have seen, are actually reduced by additional PSA data [3].
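      PSA velocity is commonly estimated as the least-squares slope of PSA against time over the most recent measurements; a minimal sketch with hypothetical values, not patient data:

          # One common way to estimate PSA velocity: the least-squares slope of PSA over time.
          import numpy as np

          years = np.array([0.0, 1.0, 2.0])   # annual tests (hypothetical)
          psa   = np.array([0.5, 1.4, 2.5])   # ng/mL (hypothetical)
          velocity = np.polyfit(years, psa, 1)[0]
          print(round(velocity, 2))           # -> 1.0 ng/mL per year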

      PSA screening has been associated with a greater than 40% decrease in mortality from prostate cancer [4], for which no other combination of interventions or population trends can account. This substantial decrease in prostate cancer mortality observed with PSA screening cannot be dismissed based on questionable results from flawed randomized trials. We should not abandon the intervention (PSA screening) most likely to have caused the bulk of the observed decrease in prostate cancer mortality. After all, we advise against smoking based on observational data, in the complete absence of randomized trial data.

      It was unwise for the USPSTF to issue their anti-PSA recommendation without a single urologist on their panel. As health systems implement bans on PSA testing, conservative models predict that "discontinuing PSA screening for all men may generate many avoidable cancer deaths. Continuing PSA screening for men aged <70 years could prevent greater than one-half of these avoidable cancer deaths while dramatically reducing over-diagnosis compared with continued PSA screening for all ages." [5] If these models are correct, the USPSTF recommendation will be responsible for many preventable prostate cancer deaths.

      Harms associated with prostate cancer treatments, such as erectile dysfunction and urinary incontinence, have been steadily reduced by advances in conformal radiation, brachytherapy, robotic surgery, imaging and watchful waiting. Men should be given a choice whether to have PSA screening. Such a choice requires thorough discussions with patients and more effort by clinicians to carefully track PSA levels over time, with the goal of maintaining or improving the reductions we have achieved in prostate cancer mortality by means of PSA screening.

      References

      1: Vickers AJ. Does Prostate-Specific Antigen Screening Do More Good Than Harm?: Depends on How You Do It. JAMA Oncol. 2016 Mar 24. doi: 10.1001/jamaoncol.2015.6276. [Epub ahead of print] PubMed PMID: 27010733.

      2: Schröder FH and ERSPC Investigators. Screening and prostate cancer mortality: results of the European Randomised Study of Screening for Prostate Cancer (ERSPC) at 13 years of follow-up. Lancet. 2014 Dec 6;384(9959):2027-35. doi:10.1016/S0140-6736(14)60525-0. Epub 2014 Aug 6. PubMed PMID: 25108889; PubMed Central PMCID: PMC4427906.

      3: Lavallée LT, Binette A, Witiuk K, Cnossen S, Mallick R, Fergusson DA, Momoli F, Morash C, Cagiannos I, Breau RH. Reducing the Harm of Prostate Cancer Screening: Repeated Prostate-Specific Antigen Testing. Mayo Clin Proc. 2016 Jan;91(1):17-22. doi: 10.1016/j.mayocp.2015.07.030. Epub 2015 Dec 10. PubMed PMID: 26688045.

      4: Howlader N, Noone AM, Krapcho M et al: SEER Cancer Statistics Review, 1975-2009 (Vintage 2009 Populations), National Cancer Institute. Bethesda, MD, http://seer.cancer.gov/csr/1975_2009_pops09/, based on November 2011 SEER data submission, posted to the SEER web site, April 2012

      5: Gulati R, Tsodikov A, Etzioni R, Hunter-Merrill RA, Gore JL, Mariotto AB, Cooperberg MR. Expected population impacts of discontinued prostate-specific antigen screening. Cancer. 2014 Nov 15;120(22):3519-26. doi: 10.1002/cncr.28932. Epub 2014 Jul 25. PubMed PMID: 25065910; PubMed Central PMCID: PMC4221407.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 30, Matthew Romo commented:

      Of important note, this article has two letters to the editor associated with it that have not been linked to it in PubMed as of August 2016:

      http://www.ncbi.nlm.nih.gov/pubmed/27262080

      http://www.ncbi.nlm.nih.gov/pubmed/27569897


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 07, Christine Carson commented:

      The study shows NO BENEFIT of lavender oil inhalation on fatigue levels in haemodialysis patients. The Conclusion in the Abstract is ambiguous. It reads:

      CONCLUSION: Our result does not support other studies suggesting that lavender essential oil is effective on fatigue in haemodialysis patients.

      which could be interpreted in two ways: (1) that other studies showed a benefit and this study does not, OR (2) that this result is counter to previous work and suggests that lavender is effective against fatigue

      It is clearer in the full text of the paper (from Conclusion on page 36): "...Although a few previous studies have demonstrated that lavender aromatherapy is an effective way to alleviate fatigue in dialysis patients, we found that lavender essential oil at a concentration of 5% does not positively affect fatigue levels in haemodialysis patients."

      But part of this statement conflicts with a statement in their Intro: "To the best of our knowledge, no published study has explored the effects of lavender essential oil on fatigue in haemodialysis patients"


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 03, Donna Berryman commented:

      While the authors of this article attempt to demystify the process of literature searching, they have made one glaring omission. The best way to improve literature searching is to work with a medical librarian. Medical librarians are professionals and know the ins and outs of databases, controlled vocabularies, and the nuances of searching. Working with a medical librarian will save time and ensure that the search has been done properly. Many hospitals and most academic medical centers have libraries staffed with professional librarians. All clinical nurse specialists should be encouraged to build collaborative relationships with their librarian.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 19, Marinos A Charalambous commented:

      How Did Andreas Get Here? An amazing story narrated by Dr. King, who was essentially the person who brought Andreas Gruentzig to the United States. A story that reminds us what America used to be and what America needs to continue to be: the place where you can make things happen!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 20, Cicely Saunders Institute Journal Club commented:

      This paper was discussed at a Journal Club at the Cicely Saunders Institute, King's College London, on Wednesday 1st June, 2016.

      We felt the subject matter under study was extremely important and has significant implications for clinical practice. The experience of medical professionals who support bereaved children is under-researched and this study makes an important contribution to the body of evidence in this field. We discussed the recruitment and sampling procedures used and felt that the self-selecting nature of the sample may limit the representativeness of the findings. The findings may exclude the experiences of certain groups such as men (as a large proportion of the sample were women) and people who experience significant distress due to bereavement. However, we acknowledge the difficulties in recruiting a population that is considered vulnerable and commend the authors for providing this evidence to support the need for more support for both bereaved children and the medical professionals supporting them.

      Moreover, we wondered how the findings from the two groups of participants (i.e. bereaved children and medical professionals) linked together and whether it would be worth separating the findings from these two participant groups and discussing them in more detail.

      The findings of this study can be used to make recommendations for clinical practice (e.g. train clinicians in bereavement support), as well as for hospitals (e.g. make hospital environments more child-friendly). We look forward to more research on the experience of bereavement so that we can support bereaved individuals better.

      Commentary by Cathryn Pinto (@CathrynPinto) and Steve Marshall (@hollowaystevo)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 10, Morten Oksvold commented:

      Before reading or citing this article, please read the conclusion from the expert group on misconduct at the Central Ethical Review Board, which found Paolo Macchiarini guilty of scientific misconduct:

      http://ki.se/en/news/macchiarini

      The specific report on the synthetic trachea transplantations:

      http://www.sll.se/Global/Verksamhet/Hälsa och vård/Nyhet bilaga/The Macchiarini Case Summary (eng).pdf


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 01, Misha Koksharov commented:

      The findings are really interesting! I hope there will be further publications elucidating how Per2 regulates response to food anticipation and fasting.

      I have a few questions/comments.

      1) To what extent is the peak of 3-OH-butyrate (3HB) at ZT4 (under RF) just due to fasting? Is there an entrainment contribution to the peak magnitude? For example, the peak level could decrease, increase or stay the same following entrainment over multiple days under RF.

      3HB levels are usually considerably increased in response to fasting. Here it was 16 hour fasting every day. The ~2-fold increase is similar to what was reported for C57BL after 14h fasting (Lin X, 2005). Interestingly, in the recent paper by Chikahisa S, 2014 there was a 4-fold increase after just 6h of fasting but looks like it's specific to their mouse strain (Jcl/IC).

      2) Do these results actually mean that mice with liver Per2 KO are unable to increase ketone body synthesis in response to fasting? Or does it just occur later than usual? It seems really interesting if Per2 is important for proper response to fasting.

      3) The title says "ketone bodies."

      So, have you looked at acetoacetate - the second ketone body? How did it change under AL, RF, fasting? It is known to be increased by fasting as well - similar to 3HB (Chikahisa S, 2014).

      Maybe, the release of the relevant amount (or compensation by higher concentration of 3HB) would further improve the rescue.

      4) Can the FA in response to 3HB essentially be an example of conditioned training, similar to what can be done with visual or sound signals?

      It takes 5 days for them to learn to have FA at ZT4. So one can imagine the following situation: (1) 3HB, AcAc levels are increased by 16h fasting every day; (2) mouse brain senses the 3HB levels in the blood and converts them into perceived hunger levels; (3) mice learn after some time that this peak (level of hungriness) corresponds to the long-sought appearance of food;

      In this case it would be possible to similarly train them to show FA in response to odors, sounds.

      5) Were the metabolites measured only at one specific day or also at the same ZT points several times throughout these 45 days?

      6) Fig. S5a

      Is there still this weak cycling of 3HB under AL in Per2 KO?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 18, Andrea Messori commented:

      First-line treatment of chronic myeloid leukemia with imatinib or nilotinib: modelling the achievement of major molecular response

      Andrea Messori

      In the study by Hochhaus et al. (1), the data reported in Figure 2 describe the cumulative molecular response rates observed in three patient cohorts treated with nilotinib 300 mg twice daily (n=282), nilotinib 400 mg twice daily (n=281), and imatinib 400 mg once daily (n=283). These data represent the 5-year results of the ENESTnd trial. Over a follow-up from 0 to 60 months after the start of TKI treatment, Panel A of Figure 2 shows, at yearly intervals, the percent rates of major molecular response (MMR) for the three agents. These percentages are the following:

      a) nilotinib 300mg bid: 55% at 12 mos, 71% at 24 mos, 73% at 36 mos, 76% at 48 mos, and 77% at 60 mos;

      b) nilotinib 400 mg twice daily: 51% at 12 mos, 67% at 24 mos, 70% at 36 mos, 73% at 48 mos, and 77% at 60 mos;

      c) imatinib 400 mg once daily: 27% at 12 mos, 44% at 24 mos, 53% at 36 mos, 56% at 48 mos, and 60% at 60 mos.

      These findings show that nilotinib achieves a greater cumulative response rate than imatinib and also that the rates with nilotinib grow faster than those with imatinib. Current therapeutic strategies for handling TKIs are aimed at testing whether drug discontinuation can safely lead to a durable condition of treatment-free remission. Studying this issue is complex, and simulation models based on Markov methodology are often needed for this purpose (2). To facilitate this type of modelling, we have fitted the data reported above under items (a), (b), and (c) to the following exponential equation:

      rate = 100 × (1 − e^(−Kt))

      where 100 represents the maximum percentage that can be achieved under this experimental condition; K is a first-order rate constant, the units of which are reciprocal months; and t is the time of follow-up, the values of which are 0, 12, 24, 36, 48, and 60 in Figure 2 of the referenced article. To fit the rate-vs-t data pairs, a standard least-squares fitting procedure can be employed. In the present analysis, we used the procedure available under Microsoft Excel; the original observations were first converted into values of (100 − rate) and then subjected to a logarithmic transformation; in this way, the function of these transformed values becomes linear in t and the (negative) slope of the line represents the value of K. We obtained the following results:

      a) nilotinib 300mg bid: K = 0.0307 reciprocal mos (half-time of the process = 22.6 mos);

      b) nilotinib 400 mg twice daily: K = 0.029 reciprocal mos (half-time of the process = 23.9 mos);

      c) imatinib 400 mg once daily: K = 0.0176 reciprocal mos (half-time of the process = 39.4 mos).

      Figure 1 (available at http://www.osservatorioinnovazione.net/papers/mmrbestfit.gif ) shows the graph of the three functions estimated for nilotinib 300 mg, nilotinib 400 mg, and imatinib, respectively.
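
      As a practical aside, the log-linear fitting procedure described above can be reproduced outside Excel in a few lines of Python. The sketch below is only illustrative: it assumes that the least-squares line through the transformed values ln(100 − rate) versus t is constrained through the origin (i.e., rate = 0 at t = 0), which closely reproduces the K values and half-times reported here.

      import numpy as np

      # Cumulative MMR rates (%) at yearly intervals, as read from Figure 2 of Hochhaus et al. (1)
      t = np.array([12, 24, 36, 48, 60], dtype=float)        # months of follow-up
      rates = {
          "nilotinib 300 mg bid": np.array([55, 71, 73, 76, 77], dtype=float),
          "nilotinib 400 mg bid": np.array([51, 67, 70, 73, 77], dtype=float),
          "imatinib 400 mg qd":   np.array([27, 44, 53, 56, 60], dtype=float),
      }

      for drug, rate in rates.items():
          # Linearise rate = 100 * (1 - exp(-K*t))  =>  ln(100) - ln(100 - rate) = K*t
          y = np.log(100.0) - np.log(100.0 - rate)
          # Least-squares slope with the line forced through the origin (rate = 0 at t = 0)
          K = np.sum(t * y) / np.sum(t ** 2)
          half_time = np.log(2.0) / K
          print(f"{drug}: K = {K:.4f} reciprocal months, half-time = {half_time:.1f} months")

      Running this sketch gives approximately 0.031, 0.029 and 0.018 reciprocal months, in line with the constants reported under items (a), (b) and (c) above.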

      References

      1) Hochhaus A, Saglio G, Hughes TP, Larson RA, Kim DW, Issaragrisil S, le Coutre PD, Etienne G, Dorlhiac-Llacer PE, Clark RE, Flinn IW, Nakamae H, Donohue B, Deng W, Dalal D, Menssen HD, Kantarjian HM. Long-term benefits and risks of frontline nilotinib vs imatinib for chronic myeloid leukemia in chronic phase: 5-year update of the randomized ENESTnd trial. Leukemia. 2016 May;30(5):1044-54.

      2) Marsh K, Xu P, Orfanos P, Gordon J, Griebsch I. Model-based cost-effectiveness analyses for the treatment of chronic lymphocytic leukaemia: a review of methods to model disease outcomes and estimate utility. Pharmacoeconomics. 2014 Oct;32(10):981-93.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 30, Ben Goldacre commented:

      This trial has the wrong trial registry ID associated with it on PubMed: both in the XML on PubMed, and in the originating journal article. The ID given is NCT01569255. We believe the correct ID, which we found in the free text of the article, is NCT02569255.

      This comment is being posted as part of the OpenTrials.net project<sup>[1]</sup> , an open database threading together all publicly accessible documents and data on each trial, globally. In the course of creating the database, and matching documents and data sources about trials from different locations, we have identified various anomalies in datasets such as PubMed, and in published papers. Alongside documenting the prevalence of problems, we are also attempting to correct these errors and anomalies wherever possible, by feeding back to the originators. We have corrected this data in the OpenTrials.net database; we hope that this trial’s text and metadata can also be corrected at source, in PubMed and in the accompanying paper.

      Many thanks,

      Jessica Fleminger, Ben Goldacre*

      [1] Goldacre, B., Gray, J., 2016. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials 17. doi:10.1186/s13063-016-1290-8 PMID: 27056367

      * Dr Ben Goldacre BA MA MSc MBBS MRCPsych<br> Senior Clinical Research Fellow<br> ben.goldacre@phc.ox.ac.uk<br> www.ebmDataLab.net<br> Centre for Evidence Based Medicine<br> Department of Primary Care Health Sciences<br> University of Oxford<br> Radcliffe Observatory Quarter<br> Woodstock Road<br> Oxford OX2 6GG


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 10, Marcus Munafò commented:

      Kim and colleagues highlight the role of genes on chromosome 12 in a range of cardio-metabolic traits, and argue that this reflects genetic pleiotropy. One prominent gene in this region is ALDH2, which has been shown to strongly influence alcohol consumption in East Asian samples, where the minor allele associated with reduced consumption is relatively common [1]. A more parsimonious explanation for the findings reported by Kim and colleagues, therefore, is simply that alcohol exerts a causal effect on cardio-metabolic traits.

      Genetic studies can tell us about modifiable behavioural risk factors that contribute to disease [2], and results of genetic studies should therefore be interpreted with this in mind. In particular, the distinction between biological (or horizontal) and mediated (or vertical) pleiotropy is critical. The former refers to a genetic variant influencing multiple separate biological pathways, while the latter refers to the effects of a genetic variant on multiple outcomes via a single biological pathway. Effects of ALDH2 on cardio-metabolic traits most likely are due to the latter.

      Marcus Munafò and George Davey Smith

      1. Quertemont E. Genetic polymorphism in ethanol metabolism: acetaldehyde contribution to alcohol abuse and alcoholism. Mol Psychiatry, 2004. 9: p. 570-581.
      2. Gage, S.H., et al. G = E: What GWAS Can Tell Us about the Environment. PLoS Genet, 2016. 12: e1005765.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 29, David Reardon commented:

      This is another unreliable result from the Turn Away Study. It is seriously flawed by the non-representative selection of women used in the study.

      Careful reading of the companion studies reveals that, of the women approached to participate, 62.5% declined. Another 15% dropped out before the baseline interview.

      After the baseline interview, women continued to drop out at each six-month follow-up period. While the number of women dropping out at each stage has not been revealed, the authors imply a high retention rate by declaring that 93% participated "in at least one" of the six-month follow-ups.

      What careful reading will reveal is that only 27.0% of the eligible women were interviewed at the three-year follow-up.

      There are well-known risk factors that predict which women are most likely to have negative reactions to abortion, many of which would also make women less likely to agree to participate in follow-up interviews . . . even if there was an offer to be paid.

      For example, from the APA list of risk factors: perceived need for secrecy; feelings of stigma; use of avoidance and denial coping strategies; low perceived ability to cope with the abortion; perceived pressure from others to terminate a pregnancy. All of these risk factors suggest that such women may be more likely to refuse to participate in a study with follow-up interviews.

      Other problems include the fact that the sample is disproportionately filled with women having late abortions. The sample used includes 413 women who had an abortion near the end of the second trimester compared to only 254 women having an abortion in the first trimester.

      In addition, women who had abortions due to suspected fetal anomalies were excluded, probably because research shows high rates of psychological disruption after abortion in these types of cases; excluding this segment of women was therefore a way to reduce the effects associated with abortion. This is extremely misleading, of course, since this is a common reason for abortion . . . especially in the second and third trimester.

      Another major confounding problem is that the comparison group, the Turn Away group (n=210), includes 50 women who later terminated at another facility or had a miscarriage. So 24% of the purported "no abortion" group actually includes women who experienced abortion or miscarriage. Yet the researchers barely disclose this fact, giving the false impression that their study is comparing women who had abortions to women who carried to term. In fact, they are comparing a group of women who had abortions to a group of women including those who (a) carried to term, (b) had abortions in a state other than where they first sought one, or (c) miscarried or had a stillbirth.

      In short, the findings of a study including only 27% of eligible patients in its follow-up (much less other exclusions imposed on the population by the researchers) simply cannot be used to draw any conclusions regarding the general population of women having abortions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 22, Vinay Prasad commented:

      The reader has invented the idea that we believe some participants should be excluded from the ODAC meeting. We do not say that in the manuscript.

      Yet, because a sizable percentage of speakers have financial conflicts with the Industry, one may question the representativeness of the public comment.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Apr 06, John Tucker commented:

      The article evaluates public commentators at ODAC meetings according to several characteristics that the authors believe are suggestive of conflict of interest.

      • Having the cancer for which the drug is being evaluated
      • Having received the drug in question
      • Representing an organization
      • Having been a principal investigator in the drug trials
      • Or having a financial relationship with the sponsor, directly or by being a member of an organization that receives funding from the sponsor.

      Overall, the authors appear to have misunderstood the purpose of the open comment period at FDA Advisory Committee meetings, which per the relevant guidance document, is to permit "participation from all public stakeholders in [the FDA's] decision-making processes".

      Should having cancer, or having benefited from (or been harmed by) the drug under consideration, be cause to exclude a speaker from presenting their view? Should being a member of an advocacy group related to the disease in question?

      Of course not. Hearing from a diverse group of stakeholders is exactly the purpose of the Open Public Hearing of ODAC meetings. The search for conflicts of interest has its place, but when it is used to exclude patients and disease advocates from having input into the process, the ivory tower has become too fortified.

      Having cancer is not a conflict-of-interest that demands that one's POV be excluded from ODAC meetings.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 09, Swapnil Hiremath commented:

      We read the meta-analysis by Subramaniam et al. with interest. The investigators undertook a herculean effort by including multiple treatments for the prevention of contrast-induced acute kidney injury (CI-AKI) within one report, and should be congratulated for their efforts. However, some of the results are open to being poorly understood or misinterpreted as presented, and we encourage readers to reflect on a few other aspects here.

      • Firstly, the focus on the surrogate outcome of change in creatinine is perplexing, when greater emphasis could be placed on the many trials which report clinically important outcomes (need for renal replacement therapy, cardiac events and mortality).

      • Secondly, it can be difficult to capture or identify important points of heterogeneity between studies for various treatments in the context of such a large undertaking. These points of heterogeneity have been characterized in more focused meta-analyses of these CI-AKI prevention strategies. We are concerned that these points of heterogeneity have not been adequately recognized in the present report and may have implications for the authors' conclusions. It has been noted that for many CI-AKI prevention trials, if not the majority for some treatments, the sample sizes are quite small, with extremely large treatment effects. When these small studies have been followed by larger ones, it has become evident that these early promising results were not readily reproducible. As such, analysis of trials by sample size has shown that the purported treatment benefit is greatest in smaller studies, and often not statistically significant in larger trials. Unfortunately, the larger studies are often considerably smaller in number, and the meta-analytic framework does not give them adequate weight. It may be argued that meta-analyses were not intended for marked imbalances of this sort, and if calculated, the summary estimates may not be accurate. Observing consistency between treatment effects in large and small studies would be reassuring. Unfortunately, there is no such consistency for many CI-AKI prevention treatments. This is particularly notable for N-acetylcysteine (NAC) and sodium bicarbonate. Multiple clinical trials were performed after initial reports of positive clinical trial results of sodium bicarbonate for the prevention of CI-AKI. The subsequent studies largely showed no significant benefit of sodium bicarbonate compared with sodium chloride fluid administration. Not unexpectedly, this practice has largely been abandoned by most practitioners.

      • Similarly, the efficacy of NAC was explored in a large trial, possibly of the best quality, which was done with high-dose NAC, included 2308 patients, and reported a hazard ratio of 1.00 for the primary outcome. Even in subgroups at higher risk for CI-AKI, diabetics and patients with estimated GFR less than 60, no significant benefit was observed. The result for the effectiveness of low-dose NAC, which includes older, smaller trials of lower methodological quality (36 trials, 4874 participants), as compared to high-dose NAC (usually larger, higher-quality trials; 18 trials, 4336 participants), makes little biological sense. The literature on statin therapy is similarly flawed. The largest clinical trial, with almost 3000 patients, was about six times larger than the next largest clinical trial. While this study did show a statistically significant benefit for statin treatment versus no statin treatment, the interpretation is complicated given that the majority of patients enrolled had CKD stage I or II and were therefore at very low risk for CI-AKI. When the subset of approximately 500 patients with estimated GFR less than 60 was analyzed, no significant benefit for statin therapy was observed.

      • Presentation of the results as NAC compared to intravenous (IV) saline (in Table 1) is also misleading, given that this was a comparison of patients who received NAC and IV saline compared to those who received IV saline alone (or with placebo). IV fluid expansion is the most effective measure reported in the literature, and to report NAC as being potentially superior to it may result in preferential use of NAC instead of IV saline, a concern which was also highlighted by Drs Weisbord and Pavelsky in a response to the journal.

      • Lastly, in the present report, stratification by contrast type further complicates the interpretation by introducing an additional subgroup in the analyses, and increases the risk for a type II error.

      These aspects of the literature can be readily lost when seen through the meta-analytical lens. The aim of a meta-analysis is not solely to generate a summary estimate, but also to provide a structured framework to explore and understand sources of heterogeneity, both statistical and clinical. Identifying important heterogeneity between studies can at times provide more insight than the summary estimates. Unfortunately, this can be very time-consuming, in particular when multiple treatments are under review, and requires expertise in the treatments being evaluated.

      Somjot Brar; Kaiser Permanente, Los Angeles

      Ayub Akbari, Swapnil Hiremath; University of Ottawa

      (This is a longer version of the comment posted online on the Annals website for this same study)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 05, Tamás Ferenci commented:

      The paper is available for download here: http://www.medstat.hu/vakcina/Cikkek/BencskoFerenci_Polio_EI.pdf. (Full bibliographical details: Bencskó G, Ferenci T. Effective case/infection ratio of poliomyelitis in vaccinated populations. Epidemiol Infect. 2016 Feb 2:1-10. [Epub ahead of print] DOI: 10.1017/S0950268816000078, Copyright Cambridge University Press 2016. Link to the online edition of the journal at Cambridge Journals Online: http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=10171600&fulltextType=RA&fileId=S0950268816000078.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 08, UFRJ Neurobiology and Reproducibility Journal Club commented:

      The study demonstrates that the presence of a conspecific animal during a fear conditioning task can influence fear behavior in mice, which is an interesting finding. However, we point out that, since training in the pair-exposed animals was performed with a conspecific, while testing was performed with an isolated animal, there is a possibility that the reduced freezing in this group when compared to single-exposed animals was merely due to a contextual change (i.e. presence of a conspecific or not), and not necessarily to social interaction. After all, the training and testing sessions were clearly more similar to each other in the single-exposed group, a fact that could account for the higher freezing with no need to postulate any kind of social modulation of fear. Thus, we feel that an important control missing from the study is a group in which pair-exposure occurred both in the learning and retrieval sessions. Alternatively, the demonstration that a non-social contextual element during training (e.g. an inanimate object) does not alter freezing would also strengthen the case that the observed difference is indeed socially mediated.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 06, Dan Chitwood commented:

      Erratum: In the Materials and Methods of this publication, the Vitis hybrids analyzed in this work are incorrectly referred to as "V. vinifera hybrids" when in fact they are hybrids either spontaneously occurring from wild North American Vitis species or of various parentages other than V. vinifera.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 21, Lydia Maniatis commented:

      The authors of this article never test the notion in which they are interested (if percepts are influenced by “undersampling”), but merely assume that it is the case, and crunch the data accordingly. Given that the rationale for the “undersampling” account is very tenuous to begin with, running the numbers seems moot. (I explain this assessment of the study below).

      The authors propose that perception is affected by something called “under-sampling” and that the locus of this effect is in the retina. It is clear that this proposal is speculative, but we are not given even a thumbnail sketch of arguments in its favor, only told that “Image sampling by cone photoreceptors is frequently cited as the neural limit to resolution acuity for central vision (Green, 1970; Williams, 1985a), whereas ganglion cells have been cited as the limiting array in peripheral vision (Anderson, 1996; Anderson, Drasdo, & Thompson, 1995; [etc].” Frequency of citation is, however, not an argument. As I've noted in many comments on vision publications, it is, unfortunately, very often used as a substitute. At any rate, it should be clear that the idea is speculative and that if there is a coherent rationale for it, it is to be found elsewhere. I would ask the authors where such a rationale is to be found.

      The applicability of the sampling notion is apparently rather narrow: “According to the sampling theory of visual resolution, when other limiting factors are avoided ... resolution acuity for sinusoidal gratings is set by the spatial density of neural sampling elements.” (Five of the references offered in support of this claim date from before 1960, and of these two are from the 1800's, so their relevance for this technical claim seems doubtful). In light of this statement, I would ask the authors why the notion of under-sampling is supposed to apply especially to sinusoidal gratings. Why do the authors say that “Double lines, double dots, geometrical figures, and letters are traditional stimuli for measuring acuity across the visual field (Aubert & Förster, 1857; Genter, Kandel, & Bedell, 1981; Weymouth, 1958) but resolution of these stimuli is not necessarily a sampling-limited task.”? How do they come to the conclusion that tasks using such stimuli are “not necessarily a sampling-limited,” especially given that the very existence of sampling limitations is what they are investigating? When would tasks with such very simple figures be predicted to be “sampling limited,” and when not?
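
      As a purely illustrative aside on the terminology at issue, the toy calculation below (the grating frequency and sampler density are arbitrary choices, not values from the paper) shows what sampling theory means by an "alias": a sinusoid whose frequency exceeds half the sampling rate produces exactly the same samples as a lower-frequency grating, so the two are indistinguishable to the sampler.

      import numpy as np

      # Toy demonstration of undersampling: a 7 cycles/deg grating sampled at
      # 10 samples/deg (below the Nyquist limit of 14 samples/deg) yields the
      # same sample values as a phase-reversed 3 cycles/deg grating.
      fs = 10.0                      # samples per degree (hypothetical sampler density)
      f_true, f_alias = 7.0, 3.0     # cycles per degree
      x = np.arange(0, 2, 1 / fs)    # sample positions across 2 degrees

      samples_true = np.sin(2 * np.pi * f_true * x)
      samples_alias = -np.sin(2 * np.pi * f_alias * x)

      print(np.allclose(samples_true, samples_alias))   # True: the samples are identical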

      The explanation for the authors' actual choice of stimuli is also thin: “we used sinusoidal gratings because they provide the simplest, most direct link to the sampling theory of visual resolution for the purpose of demarcating the neural bandwidth and local anisotropy of veridical perception.” More specifically, “sampling theory” has been tailored to specific stimuli, and there is no rationale for extending it beyond these. But does “sampling theory” provide a rationale for its claims for the use of sinusoidal gratings, and where might this be found?

      In fact, the broad underlying rationale for this study has a common flaw that makes it invalid on its face. This is the notion that particular stimuli can produce percepts that tap specific “low-level” neural sensitivities, and can reveal these to the investigator. So our perception of sinusoidal gratings, for example, are supposed to tell us about the properties of retinal ganglion cells, or V1 neurons, etc. As Teller (1984) has noted, such arguments lack face validity. Low-level neurons are the basis of all percepts, and any claim for a direct link between their properties and a particular percept carries with it the burden to explain how and why these mechanisms do, or do not, influence, or “muck up” all percepts. The fact that the effect of a neuron or neural population is contingent on the interaction of its activity with feedback and feedforward mechanisms of extraordinary complexity, and that percepts are highly inferential and go well beyond “the information given,” makes arguments linking percepts to neural function (in the sense that the former alone can reveal the latter) highly untenable.

      The problem persists into the discussion, which begins: “This study measured the highest spatial frequency of a sinusoidal grating stimulus that is perceived veridically at selected locations in the visual field. For gratings just beyond this resolution limit, the stimulus always remained visible but was misperceived as an alias that WE ATTRIBUTE [caps mine] to under-sampling by the retinal mosaic of neurons.” To the end, it appears that the presumption of under-sampling is never tested, merely assumed. The complex computations performed on the basis of the data generated carry no weight in corroborating a hypothesis that was not tested, does not seem to possess the level of specificity needed to be testable, and lacks face validity.

      Similarly: The authors seem to me to be confusing assumption and corroborated fact. They first tell us that aliasing is “defined as the misperception of scenes caused by insufficient density of sampling elements, [and] is more likely for peripheral than for central vision because the density of retinal neurons declines with eccentricity...” They then state that: “perceptual aliasing is the proof that resolution is sampling-limited.” If aliasing is defined on the basis of assumed under-sampling, then it is difficult to see how aliasing can also be the proof of under-sampling. That is, if the aliasing/ under-sampling link is presumptive, then aliasing cannot, without explicit theoretical arguments, be offered as a proof of under-sampling. If the under-sampling notion is false, then, by the proposed definition, aliasing is non-existent.

      Given that the sampling notion is speculative, I'm puzzled about how the authors can state with confidence that they use “methodology that ensures sampling-limited performance.” Again, if the assumption that “under-sampling” affects perception is what is being tested, then how can we assume that our methodology ensures “sampling-limited performance”? The claims seem circular.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 23, Ben Goldacre commented:

      This trial has the wrong trial registry ID associated with it on PubMed: both in the XML on PubMed, and in the originating journal article. The ID given is NCT015751. We believe the correct ID, which we have found by hand searching, is NCT01575197.

      This comment is being posted as part of the OpenTrials.net project<sup>[1]</sup> , an open database threading together all publicly accessible documents and data on each trial, globally. In the course of creating the database, and matching documents and data sources about trials from different locations, we have identified various anomalies in datasets such as PubMed, and in published papers. Alongside documenting the prevalence of problems, we are also attempting to correct these errors and anomalies wherever possible, by feeding back to the originators. We have corrected this data in the OpenTrials.net database; we hope that this trial’s text and metadata can also be corrected at source, in PubMed and in the accompanying paper.

      Many thanks,

      Jessica Fleminger, Ben Goldacre*

      [1] Goldacre, B., Gray, J., 2016. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials 17. doi:10.1186/s13063-016-1290-8 PMID: 27056367

      * Dr Ben Goldacre BA MA MSc MBBS MRCPsych<br> Senior Clinical Research Fellow<br> ben.goldacre@phc.ox.ac.uk<br> www.ebmDataLab.net<br> Centre for Evidence Based Medicine<br> Department of Primary Care Health Sciences<br> University of Oxford<br> Radcliffe Observatory Quarter<br> Woodstock Road<br> Oxford OX2 6GG


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 22, Karen Shashok commented:

      In the Correspondence section of Nature* we questioned the choice of the padlock-and-dagger illustration used for this article, because we felt it might send an incongruous message about open access despite Lewandowsky and Bishop’s favorable views.

      A senior subeditor acknowledged that there may have been an editorial oversight in not considering the possibility that some might misinterpret the illustration. Nature declined to consider publishing a correction because we did not point out any factual errors in the article.

      Karen Shashok kshashok@kshashok.com Remedios Melero rmelero@iata.csic.es


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 20, Ryckie Wade commented:

      Re: Lopez et al. “Surgical Timing and the Menstrual Cycle Affect Wound Healing in Young Breast Reduction Patients.” PRS 2016;137(2):406–10.

      On the face of it, this article appears interesting; however, it contains a number of irremediable flaws.[1] Most reputable medical journals, including Plastic and Reconstructive Surgery, mandate compliance with the STROBE statement [2, 3], but this article does not meet such standards. Thus this article has limited use for readers and its conclusions are unsubstantiated.

      Introduction

      The STROBE statement recommends a clear objective and hypothesis to test (STROBE item 1). The authors did not explain their basis for studying this topic. References to ankle injuries and striae after breast augmentation are cited, but their relevance to wound healing in breast reduction is lacking. Such omissions prevent readers from determining the importance of the study.

      Design & Sampling

      The methods section should include the population of interest, sampling strategy and eligibility criteria (STROBE items 5 and 6a). The authors did not explain their rationale for either the study design or sample size. I have no idea why they selected 49 from a potential 561 patients. It is well known that opportunistic sampling introduces bias and confounding (represented, for example, by the wide range of BMIs and different operations described in this study).

      Eligibility criteria

      The article lacks inclusion and exclusion criteria (STROBE item 6a). The abstract states that “studies have found an association between hormone levels and wound healing” and previous work by this group showed that the contraceptive pill affected breast striae after augmentation[4], but this time the authors excluded women taking oral contraception (who will have supraphysiological hormone levels) without any explanation. Also, they did not explain their reason for excluding smokers.

      Interventions

      The authors pooled women who had two different operations - breast amputation and free nipple grafting, and Wise pattern inferior pedicle reduction. These are different operations with different indications, which is obvious from the range of reduction weights (180 - 2525 grams). This heterogeneous group of women could confound comparisons. The authors could have performed subgroup analyses or excluded outliers.

      Statistics

      The SAMPL statement points out that statistical errors in scientific papers are long-standing, widespread, potentially serious and largely unsuspected by readers.[6] Both the International Committee of Medical Journal Editors (ICMJE)[2] and SAMPL[6] give clear guidance on manuscript preparation and recommendations of peer review from qualified persons, in order to avoid statistical errors.

      By dichotomising a continuous variable (i.e. changing the continuous variable of 0-28 days from ovulation into two groups, “pre-” and “post-ovulatory”) the authors sacrificed power for simplicity. Clearly, days 13 and 14 of the menstrual cycle are not the same as days 1 and 14; however, by dichotomising the data, the authors have treated them as the same. Dichotomisation is a well-recognised problem because information is lost, statistical power is reduced, the risk of Type 1 errors is increased, individuals juxtaposed to the cut-off point are categorised as different when in fact they may be similar (e.g., day 14 versus day 15 of the menstrual cycle is arguably no different) and non-linearity between exposure and outcome is lost.[5] The decision to arbitrarily categorise this important continuous variable should have been addressed by the authors in the discussion (STROBE items 19 and 20).
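
      To make the cost of dichotomisation concrete, a small simulation sketch is shown below. The effect size and noise level are arbitrary assumptions (only the sample size of 49 mirrors the study), so the exact power figures are illustrative; the point is that analysing cycle day as a continuous exposure retains more power than splitting it at a cut-off.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Illustrative simulation only: the linear effect of cycle day and the
      # noise level are assumptions, not values estimated from Lopez et al.
      n, n_sim, alpha = 49, 2000, 0.05
      effect = 1.0                      # assumed change in outcome across the full cycle

      hits_cont = hits_dich = 0
      for _ in range(n_sim):
          day = rng.uniform(0, 28, n)                          # cycle day, continuous exposure
          outcome = effect * day / 28 + rng.normal(0, 1, n)    # continuous outcome + noise

          # Analysis 1: keep cycle day continuous (simple linear regression)
          hits_cont += stats.linregress(day, outcome).pvalue < alpha

          # Analysis 2: dichotomise at the ovulation cut-off and compare group means
          pre, post = outcome[day <= 14], outcome[day > 14]
          hits_dich += stats.ttest_ind(pre, post).pvalue < alpha

      print(f"power with continuous exposure:   {hits_cont / n_sim:.2f}")
      print(f"power with dichotomised exposure: {hits_dich / n_sim:.2f}")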

      The authors improperly used the chi-square test for proportional comparisons of between-group complications and incorrectly used the term "correlation" in reference to such tests. Two assumptions were violated (cells had counts of zero and >25% had counts <5), so resampling methods (bootstrapping) or the Fisher exact test would have been better (incidentally, these methods yield statistically significant differences).
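
      For illustration, the mechanics of checking the expected counts and falling back on the Fisher exact test take only a few lines of Python; the 2×2 counts below are placeholders (the real complication counts must be taken from Lopez et al.), so no particular p value is implied.

      from scipy.stats import chi2_contingency, fisher_exact

      # Hypothetical 2x2 table: rows = pre-/post-ovulatory group,
      # columns = complication yes/no. Placeholder counts only.
      table = [[8, 17],
               [1, 23]]

      # Chi-square test, as used in the paper; its validity assumes adequate
      # expected cell counts, which should be inspected.
      chi2, p_chi2, dof, expected = chi2_contingency(table)
      print("expected counts:", expected)
      print(f"chi-square p = {p_chi2:.3f}")

      # Fisher's exact test does not rely on large-sample expected counts.
      odds_ratio, p_fisher = fisher_exact(table)
      print(f"Fisher exact p = {p_fisher:.3f}, odds ratio = {odds_ratio:.2f}")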

      Outcomes

      The Results section does not comply with STROBE guidance (items 13a, 13b, 13c, 14a, 14b, 16a and 17), and as the outcomes of interest were not described, their approach suggests data mining. In the absence of adequate between-group demographics, readers are unable to judge potential confounders. Therefore, the ‘statistically significant differences’ found are at high risk of Type 1 error, because when more statistical tests are performed, the odds of chance findings increase, especially when tests are underpowered and improperly used. The authors should have only analysed outcomes of interest or generated a family-wise error rate. While the Methods section describes a multivariate analysis, those results were missing.

      Conclusions

      Both the STROBE and ICMJE statements [2,3] recommend cautious interpretation of observational data, giving careful consideration to: potential sources of bias and confounding variables, the original objectives, limitations of the study design and execution, multiplicity of analyses, relationship to results from other studies and external validity (STROBE items 19, 20 and 21). Therefore, the conclusion that “wound healing is affected by the menstrual cycle” seems unfounded.

      Even if this research were of sterling quality, I am concerned it represents little practical value in light of the question “is it reasonable and feasible to organise surgery around a menstrual cycle?”

      Mr Ryckie G. Wade MBBS MClinEd MRCS FHEA

      NIHR Academic Clinical Fellow in Plastic Surgery, Leeds General Infirmary, UK

      References

      1. Lopez, Mariela M., Alexander Chase Castillo, Kyle Kaltwasser, Linda G. Phillips, and Clayton L. Moliver. 2016. “Surgical Timing and the Menstrual Cycle Affect Wound Healing in Young Breast Reduction Patients.” Plastic and Reconstructive Surgery 137(2):406–10.
      2. International Committee of Medical Journal Editors (ICMJE). Journals Following the ICMJE Recommendations. Available at http://www.strobe-statement.org/
      3. Vandenbroucke, Jan P. et al. 2007. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for Reporting. Annals of Internal Medicine 147(8):573–78. Checklists available at http://www.strobe-statement.org
      4. Tsai R, Castillo A, Moliver C. Breast striae after cosmetic augmentation. Aesthet Surg J. 2014;34:1050–1058
      5. Douglas G Altman and Patrick Royston. 2006. The Cost of Dichotomising Continuous Variables. British Medical Journal 332(7549):1080.
      6. Lang, TA and DG Altman. 2013. "Statistical Analyses and Methods in the Published Literature: The SAMPL Guidelines." Science Editors' Handbook 29–32. Available at http://inaspauthoraid.stage.aptivate.org


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 04, David Keller commented:

      Would the use of fondaparinux instead of LMWH reduce residual risk of HIT (and costs) even more?

      This study demonstrates an impressive reduction in the risk of heparin-induced thrombocytopenia (HIT), and associated costs, by the use of low-molecular-weight heparin (LMWH) instead of unfractionated heparin (UFH) whenever possible. While LMWH has a much lower risk of causing HIT than does UFH, the use of fondaparinux (Arixtra) has an even lower risk of causing HIT (although fondaparinux-induced HIT cases have been reported). How much of the residual risk of HIT could be decreased by the use of fondaparinux, when appropriate, instead of LMWH? Would overall costs of care decrease thereby, or would the higher cost of branded fondaparinux compared with generic LMWH dominate the cost-benefit analysis, at least until generic fondaparinux becomes available?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 04, David Keller commented:

      This important study should end "automatic" substitution of UFH for LMWH merely to save money

      It has been known for years that the use of low-molecular-weight heparin (LMWH) carries a lower risk of the feared condition heparin-induced thrombocytopenia (HIT) than does the use of unfractionated heparin (UFH). The first generic LMWH was approved by the FDA in 2010, but the cost of UFH remains lower still. Some skilled nursing facility (SNF) pharmacists have been allowed the cost-saving practice of "automatically" substituting subcutaneous (SC) UFH for the SC LMWH ordered by admitting physicians. This important paper demonstrates that the overall cost of using UFH is higher than that of using LMWH. Because SNFs are only subjected to the immediate costs of buying the heparin and administering it, while the overall costs (including re-hospitalization for HIT) are borne by society and the patient, regulatory agencies should end the substitution of UFH for physician-ordered LMWH by SNF pharmacists. The cost savings of UFH are illusory.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 26, Martin Holcik commented:

      We posted the following comment on the Science website, but are reposting it here as well:

      Internal initiation of translation writ large. S.D. Baird, Z. King and M. Holcik

      In the article “Systematic discovery of cap-independent translation sequences in human and viral genomes” (15 January 2016) (1), Weingarten-Gabbay et al. surveyed a large number of mRNA-derived 210 bp segments for their ability to internally initiate translation. Our own search for IRESes with a secondary structure similar to that of a known IRES from XIAP mRNA identified two new IRESes but, surprisingly, with little notable structural similarity (2). Was that serendipity? We therefore wondered how prevalent IRESes are within the human transcriptome, and asked how many IRESes there would be if we tested 10 random UTRs. We selected 10 random 5'-UTRs from the UTRdb (3) using a perl script with a randomization function and tested their ability to drive internal initiation using a previously characterized gal/CAT bicistronic reporter system (4). We identified one UTR from ZNF584 that showed bona fide IRES activity (without any cryptic promoter or splicing activity) but missed two IRESes discovered by the systematic survey (TEX2 and ZNF146). This is because the UTRs were improperly annotated at the time of our cloning; theoretically we had 3 genes out of 10 with IRES elements, or 30% of the transcriptome, which is a bit higher than the 10% reported by the survey. While Weingarten-Gabbay et al.'s test of small segments may have missed large structural IRESes like HCV, their discovery of so many IRES elements shows the IRES mechanism to be a common feature, and the fact that it frequently occurs within the coding sequence points to the ability of a cell to selectively express a more complex proteome than is evident from the transcriptome. Should the IRES elements be annotated into the RefSeq sequence features? Are there sequences that will recruit the ribosome and, coupled with RNA modification (such as the recently described adenosine methylation or hydroxymethylation of cytosine (5)), also regulate translation? The step from the transcriptome to the proteome is not a simple one, as ribosomal proteins and RNA-binding proteins that control IRES activity will determine which proteins are expressed. Nevertheless, these new insights suggest that internal initiation on cellular mRNAs is writ large.

      1. S. Weingarten-Gabbay et al., Comparative genetics. Systematic discovery of cap-independent translation sequences in human and viral genomes. Science. 351(6270): aad4939. doi: 10.1126/science.aad4939. Epub 2016 Jan 14. (2016).
      2. S. D. Baird, S. M. Lewis, M. Turcotte, M. Holcik, A search for structurally similar cellular internal ribosome entry sites. Nucleic Acids Res. 35, 4664-4677. (2007).
      3. F. Mignone et al., UTRdb and UTRsite: a collection of sequences and regulatory motifs of the untranslated regions of eukaryotic mRNAs. Nucleic Acids Res. 33, D141-146. (2005).
      4. M. Holcik et al., Spurious splicing within the XIAP 5' UTR occurs in the Rluc/Fluc but not the {beta}gal/CAT bicistronic reporter system. RNA 11, 1605-1609 (2005).
      5. B. Delatte et al., RNA biochemistry. Transcriptome-wide distribution and function of RNA hydroxymethylcytosine. Science. 351, 282-285. doi: 10.1126/science.aac5253. (2016).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Feb 04, Ivan Shatsky commented:

      It is a fruitful idea to use a high-throughput assay to fish out sequences that regulate translation initiation. I like this idea. It may result in very useful information provided that the experimental protocol is correctly designed to reach the goal of the study. However, while reading the text of the article I had the impression that the authors did not draw a clear distinction between the terms “IRES-driven translation” and “cap-independent” translation. In fact, cap-independent mechanisms may be of two kinds: a mechanism that absolutely requires the free 5’ end of mRNA (see e.g. Terenin et al. 2013. Nucleic Acids Res. 41(3):1807-16 and references therein and Meyer et al. 2015. Cell 163(4): 999-1010) and that which is based on internal initiation. Only in the latter case does a 5’ UTR enter the mRNA-binding channel of the ribosome with an internal segment of the mRNA rather than with a free 5’ end. Consequently, the experimental design should be distinct for these two modes of cap-independent translation. The method of bicistronic constructs used by the authors is suitable exclusively for identifying IRES elements. However, this approach is sufficiently reliable when it is employed in the format of bicistronic RNAs transfected into cultured cells. It has been repeatedly shown that the initial format of bicistronic DNAs is extremely prone to almost unavoidable artifacts (for literature, see ref. 48 in the paper and the review by Jackson, R.J. The current status of vertebrate cellular mRNA IRESs. Cold Spring Harb Perspect Biol, 2013; 5). The control tests to reveal these artifacts that are still used (unfortunately!) by many researchers are not sensitive enough to detect the formation of a few percent of monocistronic mRNAs. (To this end, one should perform precise and laborious experiments which are not realistic in the case of high-throughput assays.) The capping of these aberrant mono-mRNAs can produce a dramatic stimulation of their translation activity (20-100 fold, depending on cell line). Therefore, even a few percent of capped mono-mRNAs may result in high reporter activity compared with the almost zero activity of the empty vector (see Andreev et al. 2009. Nucleic Acids Res. 37(18):6135-47 and references therein). Real-time PCR assessment of mRNA integrity (Fig. S4) is an easy way to miss this small percentage of aberrant transcripts. The other concern is the genome-wide cDNA/gDNA estimation. The ratio for, e.g., the “c-myc IRES” is 2<sup>-1.6</sup>, which is roughly 1/3 (Fig. S3). Does this mean that 2/3 of c-myc transcripts are monocistronic rather than bicistronic? I had a general impression that the authors were not aware of the serious pitfalls inherent to the method of bicistronic DNA constructs and simply adapted this method to their high-throughput assay. At least, I did not find citations of papers that discussed this important point.

      The data in the Supplementary Materials section (Figs. S5 and S6) provide compelling evidence of this kind of artifact: indeed, some 174 nt long fragments from the EMCV IRES possessed IRES activity. Moreover, one of them, with a GNRA motif, had activity similar to that of the whole EMCV IRES (!?). This result is in absolute contradiction with our current knowledge of this picornaviral IRES, one of the best studied IRES elements! Parts of the EMCV IRES are known to have no activity at all! Thus, the most plausible explanation is that the EMCV fragments harbor cryptic splice sites. The same is true for other picornavirus IRESs examined in these assays. The HCV IRES tested by the authors in the same experiments worked only as a whole structure (Fig. S6B), in full agreement with the literature. However, this result may not encourage us, as it just means that the data obtained in this study may be a mixture of true regulatory sequences with artifacts.

      We should keep in mind that the existence of viral IRES elements is a firmly established fact. They have a complex and highly specific organization with well-defined boundaries and THEY ARE ONLY ACTIVE AS INTEGRAL STRUCTURES. The minimal size of IRESs from RNAs of animal viruses is >300 nts. Their shortening inactivates them and therefore they cannot be studied with cDNA fragments of 200 nt or less. Thus, I think it was a mistake to mix viral IRESs with cellular mRNA sequences. As to cellular IRESs, none of them has been characterized and hence we do not know what they are and whether they even exist. For none of them has it been shown that they do not need a free 5’ end of mRNA to locate the initiation codon. Some of them have already been disproved (c-Myc, eIF4G, Apaf-1, etc.). By the way, I do not know of any commercial vector that employs a cellular IRES. Thus, I think that we should first find adequate tools to identify cellular IRESs, characterize several of them, and only afterwards proceed to transcriptome-wide searching for cellular IRESs.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 12, Christopher Southan commented:

      This is an impressively thorough study of a new mouse smORF. However, despite the possible splicing out of codon five, database evidence indicates the major human sequence is not the 34 residues in the supplementary data of this paper and Swiss-Prot P0DN84 but rather the 35 residues represented in ENSG00000240045 (via Vega) and (circularly) in TrEMBL A0A1B0GTW0. There is an older independent cDNA translation as ACT64388 from 2009 as well as ~30 full-length matches from human ESTs. As the authors know, eventual submission (by them?) of complete mouse and human cloned cDNAs with CDS annotations to GenBank or EBI should eventually feed through the systems and thereby (on a good day) correct the protein/gene discrepancies (even if minor) and displace the misleading LncRNA annotations.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 14, NephJC - Nephrology Journal Club commented:

      This study about the longer-term follow-up of the original BENEFIT trial was discussed on February 23rd and 24th, 2016 in the open online nephrology journal club, #NephJC, on Twitter. Introductory comments written by Nikhil Shah are available at the NephJC website. The discussion was quite detailed, with over 50 participants, including general, transplant and pediatric nephrologists, fellows, residents and patients. The transcripts of the tweetchat are available from the NephJC website. The highlights of the tweetchat were:

      • The authors, the trial participants and the sponsor (Bristol-Myers Squibb, the makers of belatacept) should be commended for conducting the trial, and pursuing the long term follow up for 7 years.

      • There was considerable enthusiasm for having an effective alternative to calcineurin inhibitors, and the difference in GFR observed was very encouraging.

      • Several issues made the group cautious and skeptical with respect to the finding of a benefit of belatacept on graft and patient survival: the differential loss to follow-up and the small absolute difference in the number of events which drove the significance; the lack of improvement in survival seen in the 7-year follow-up BENEFIT-EXT study; the use of cyclosporine as the comparator rather than tacrolimus; the cost of the drug; and lastly, the fact that the data were analyzed by the sponsor and the manuscript was written by a medical writer paid by the sponsor.

      Interested individuals can track and join in the conversation by following @NephJC, #NephJC, signing up for the mailing list, or visiting the webpage at NephJC.com


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 28, Andrew Jones commented:

      Due to some over-vigorous editing, important descriptive statistics were missing from the paper. This was unfortunately missed by us (the authors). We thank Robert Hester (Melbourne) for drawing this to our attention after the paper was published. The data are reported below.

      Stop signal task indices split by time and group. Values are Means (SDs)

      Time 1:

                         Control          │  Stress
      SSRT:              219.23 (38.56)   │  216.18 (34.28)
      Go RTs:            515.72 (119.87)  │  525.53 (122.24)
      Go Errors:         2.60 (2.84)      │  2.00 (1.76)
      Stop Errors (%):   45.13 (9.11)     │  42.91 (10.83)

      Time 2:

                         Control          │  Stress
      SSRT:              215.02 (34.40)   │  212.97 (40.54)
      Go RTs:            520.64 (136.00)  │  528.09 (131.84)
      Go Errors:         2.94 (2.85)      │  3.00 (3.33)
      Stop Errors (%):   45.71 (11.01)    │  44.24 (11.67)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Oct 27, Stuart RAY commented:

      Is it absolutely clear that one can accurately infer "haplotype frequency counts" after PCR and 454 sequencing? As the Lorenzo-Redondo manuscript notes, the template concentrations are low.

      Liu SL, 1996 noted that template resampling is likely under such conditions; it's not clear to me that "clean up" procedures resolve this. "Controversial" seems appropriate - so does "intriguing", and I look forward to the next chapter in this vital story.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Oct 06, Steven M Wolinsky commented:

      In our view, the contrarian findings by Kearney et al. on the sequence data from our paper are entirely due to severely biased subsampling of the data (ignoring haplotype frequency counts), and inappropriate rooting of trees that can give rise to profoundly misleading inferences of lineage relationship and viral evolution. There is evidence of forward evolution for a notable subset of patients studied by many of the same authors as in Kearney et al. using single genome sequencing when rooting the trees on an appropriate outgroup. The study of Van Zyl et al., while clearly valuable and interesting, does not directly contradict our findings because: it only examined sequences from a single compartment (bloodstream); did not plumb sequences deep enough to ensure reliable detection of low-frequency viral variants; and used an entirely different study population.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2017 Sep 20, Stuart RAY commented:

      The findings described here are controversial. Others who analyzed the same sequence data did not reach the same conclusions - and some of the latter report's authors are also involved in another recent study suggesting evolutionary stasis in children receiving suppressive antiretroviral therapy Van Zyl GU, 2017. It will be important to resolve these discrepancies, because they bear on issues central to treatment efficacy and HIV cure.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 18, Kaikobad Irani commented:

      This valuable piece of work illustrates that skeletal muscle SIRT3 is a vital component of the metabolic program activated by inorganic nitrite which promotes glucose disposal in a rat model of metabolic syndrome/HFpEF. As such, it adds to the growing body of evidence that SIRT3 promotes insulin signaling in skeletal muscle.

      However, the conclusion in the title that SIRT3 activation by nitrite normalizes pulmonary hypertension associated with HFpEF should be interpreted with caution. The data presented in this regard show correlation, not causation, as acknowledged in the Discussion. Causality demands proof that lack of SIRT3 exacerbates pulmonary hypertension, or activation of SIRT3 ameliorates it, in a suitable model of HFpEF.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On date unavailable, commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 01, David Robert Grimes commented:

      To clarify a few points:

      (1) the paper in question never mentions thimerosal or any specific component of vaccines. Rather, it refers to the conception that vaccines are dangerous and that there is an active conspiracy to promote them and/or suppress evidence of their harms. This belief is quite persistent [1-4] in the anti-vaccination fringe, and is often presented as a rationale for why the medical establishment encourages vaccination when subscribers to the damage narrative strongly believe it is clearly harmful. It is essentially a way of reducing cognitive dissonance, and it is this argument the paper addresses.

      With regard to specific vaccine additives, as far as I am aware thimerosal has never been demonstrated to cause harm, and indeed there are alternatives. As the CDC [5] make very clear:

      "One vaccine ingredient that has been studied specifically is thimerosal, a mercury-based preservative used to prevent contamination of multidose vials of vaccines. Research shows that thimerosal does not cause ASD. In fact, a 2004 scientific review by the IOM concluded that "the evidence favors rejection of a causal relationship between thimerosal–containing vaccines and autism." Since 2003, there have been nine CDC-funded or conducted studies that have found no link between thimerosal-containing vaccines and ASD, as well as no link between the measles, mumps, and rubella (MMR) vaccine and ASD in children.

      Between 1999 and 2001, thimerosal was removed or reduced to trace amounts in all childhood vaccines except for some flu vaccines. This was done as part of a broader national effort to reduce all types of mercury exposure in children before studies were conducted that determined that thimerosal was not harmful. It was done as a precaution. Currently, the only childhood vaccines that contain thimerosal are flu vaccines packaged in multidose vials. Thimerosal-free alternatives are also available for flu vaccine. For more information, see the Timeline for Thimerosal in Vaccines."

      In any case, this factlet is not relevant to the argument put forward in the paper, where the claim being examined is defined explicitly.

      (2) Again, the paper is very specific about terminology - while the overwhelming scientific consensus is that climate change is an anthropogenic phenomenon, there are those who deny its very existence and claim it is a hoax perpetuated by scientists [6]; it is this hoax claim, rather than anything specific about the implications of climate change, that the paper addresses. The paper distinguishes this clearly in a few places:

      "Climate-change denial has a deep political dimension. Despite the overwhelming strength of evidence supporting the scientific consensus of anthropogenic global warming, there are many who reject this consensus. Of these, many claim that climate-change is a hoax staged by scientists and environmentalists, ostensibly to yield research income. Such beliefs are utterly negated by the sheer wealth of evidence against such a proposition, but remain popular due to an often-skewed false balance present in partisan media, resulting in public confusion and inertia."

      ...and also

      "The climate-change conspiracy narrative requires some clarification too; those sceptical of the scientific consensus on anthropogenic climate change may take either a “hard” position that climate-change is not occurring or a “soft” position that it may be occurring but isn’t anthropogenic. For this investigation, we’ll define climate change conspiracy as those taking a hard position for simplicity. "

      Hope this helps.

      DRG

      References

      1. http://embor.embopress.org/content/11/7/493

      2. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0089177

      3. http://www.jmir.org/2005/2/e17/

      4. http://www.sciencedirect.com/science/article/pii/S0264410X11019086

      5. http://www.cdc.gov/vaccinesafety/concerns/autism.html

      6. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0147905


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Feb 13, David Keller commented:

      Climate trends & harms of vaccination are openly debated by scientists, not "demonstrably false" conspiracy theories

      Thimerosal is a mercury-based preservative used to suppress the growth of bacteria and fungi in multi-dose vaccine vials. The Institute of Medicine conducted a comprehensive study which concluded that "exposure to thimerosal-containing vaccines could be associated with neurodevelopmental disorders" in children, and recommended "removing thimerosal from vaccines administered to infants, children, or pregnant women" [1]. Given this recommendation, it is reasonable for non-pregnant adults to also insist on receiving vaccines which do not contain thimerosal.

      Why is thimerosal still allowed in influenza vaccines and so many others? The small amount of money saved by packaging multiple doses of vaccine in one vial is negligible. Exposure to possible thimerosal toxicity can be eliminated simply by demanding a vaccine packaged in a single-dose vial or syringe, which does not require a preservative. Avoiding unnecessary mercury exposure is logical risk-minimization, not the irrational behavior of a paranoid "conspiracy theorist".

      Regarding man-made climate warming, it should be noted that there is disagreement among high-level climate experts regarding the implications of rising temperatures and sea levels. Global-warming "skeptics" argue against the catastrophic outcomes predicted by "alarmist" weather models which have gained widespread credence among the general public, and many weather scientists [2]. These skeptics argue that the recorded changes in temperatures and sea levels do not signal imminent danger requiring drastic reductions in the use of fossil fuels. Those who disagree with their conclusions should dispute them using scientific discourse, rather than dismissing their theories as "conspiratorial beliefs".

      In contrast, there is zero credible evidence that the moon landings were faked, or that doctors are suppressing a "cure for cancer" because it would reduce their incomes. Such conspiratorial beliefs are clearly non-viable, and indeed preposterous, in contrast to the reasonable debate over unresolved questions like the toxicity of vaccine preservatives and the implications of climate changes.

      References

      1: Institute of Medicine (US) Immunization Safety Review Committee; Stratton K,Gable A, McCormick MC, editors. Immunization Safety Review: Thimerosal-Containing Vaccines and Neurodevelopmental Disorders. Washington (DC): National Academies Press (US); 2001.PubMed PMID: 25057578.

      2: Idso CD, Carter RM, Singer SF. Why Scientists Disagree About Global Warming; 2015 Report by NIPCC (Non-governmental International Panel on Climate Change), published 11/23/2015 http://climatechangereconsidered.org/


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Feb 12, David Keller commented:

      Effective high-level conspiracies remain operational over decades to centuries

      Conspiracies are an effective means for a small group to exercise economic and political power over others for long periods of time, up to eons. For example, the divine right of kings to rule over commoners was supported by the Church for many centuries, thus yielding a highly effective and self-perpetuating means of controlling societies by the dual threats of force in this life (by the king's army) and the afterlife (where political dissenters would suffer hell-fire eternally).

      More recently, we have seen a market with 5 or more acid-reducing proton pump inhibitors (PPIs), all on patent and all "competing" on price, yet not one medication was priced below $5 per pill until the first generic hit the market. The same was seen with statins, which did not see any effective price competition until lovastatin went generic. How did 5 or more pharmaceutical companies arrive at the same floor price prior to generic competition? It was either telepathy (get back to me with the details, telepathically), or it was conspiracy among these allegedly competing big pharmaceutical companies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 15, Rima Obeid commented:

      On comparing the performance of single B12 markers (comparators) in detecting the cB12 indicator (separator), as shown in Figure 1 (ROC curves showing the diagnostic value of vitamin B-12, MMA and HC in predicting cB12): The study contained important results on possible disagreement between B12 markers in patients with cancer. However, the use of ROC analyses to assess the diagnostic accuracy of B12, methylmalonic acid and homocysteine in predicting cB12 (the separator, or outcome), and the interpretation of this approach, need further evaluation. The separator (cB12) is calculated from the three B12 markers. Thus, cB12 cannot be used to define the performance of the single components used for calculating it. Moreover, the three markers are not independent, but are affected by each other. Therefore, the three individual markers that are components of the new variable (cB12) participate to different degrees in determining cB12, and they cannot be used in ROC analyses to estimate sensitivity, specificity or best performance. ROC analyses aim at testing a 'new test or a new method' against established ones. A diagnosis that is based on this 'new test under evaluation' should be made using criteria independent of the established tests being compared against it.
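
      To illustrate the circularity concern in general terms, here is a minimal sketch with simulated data (illustrative only, not part of the original comment; the marker names and the composite formula are hypothetical stand-ins, not the actual cB12 definition). A marker that is itself a component of the outcome variable shows an inflated ROC AUC against that outcome by construction, whereas a truly independent marker does not:

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 1000
      marker_a = rng.lognormal(sigma=0.5, size=n)   # stands in for a B12-like marker
      marker_b = rng.lognormal(sigma=0.5, size=n)   # stands in for an MMA-like marker
      marker_c = rng.lognormal(sigma=0.5, size=n)   # stands in for an Hcy-like marker
      unrelated = rng.lognormal(sigma=0.5, size=n)  # a marker NOT used in the composite

      # Hypothetical composite built from its own components (NOT the cB12 formula),
      # with "deficiency" defined as the lowest 20% of the composite.
      composite = np.log10(marker_a / (marker_b * marker_c))
      deficient = (composite < np.percentile(composite, 20)).astype(int)

      # Scores are signed so that higher values point towards deficiency.
      for name, score in [("component A", -marker_a), ("component B", marker_b),
                          ("component C", marker_c), ("unrelated marker", unrelated)]:
          print(name, round(roc_auc_score(deficient, score), 2))
      # The three components score well above 0.5 against an outcome they helped
      # define; the unrelated marker sits near 0.5. High AUCs here reflect
      # construction, not independent diagnostic validity.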


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 23, Giuseppe Querques commented:

      We apologize, but at the time of submission the mutation was not referenced in the Retina International database, nor in HGMD (the Human Gene Mutation Database, in its free-access version).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 17, Heidi Schulz commented:

      The p.Asp304Gly mutation is not novel. It was previously published by Sodi et al. 2012 (PMID 2321327).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 26, benoit thirion commented:

      See also indexing errors like:

      "Cochrane Database Syst Rev"[Journal] AND depression[mh] = 206

      "Cochrane Database Syst Rev"[Journal] AND depressive disorders[mh] = 59

      The Cochrane Collaboration works almost exclusively on Depressive Disorders, not on Depression as a symptom.
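
      For anyone wanting to re-check these counts as indexing evolves, here is a minimal sketch (illustrative only, not part of the original comment) that queries the public NCBI E-utilities esearch endpoint; the query strings are copied verbatim from above, and the counts will drift over time:

      import json
      import urllib.parse
      import urllib.request

      # Minimal sketch: fetch PubMed hit counts from the NCBI E-utilities
      # esearch endpoint (counts change as MeSH indexing is updated).
      EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

      def pubmed_count(term):
          params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
          with urllib.request.urlopen(EUTILS + "?" + params) as resp:
              return int(json.load(resp)["esearchresult"]["count"])

      for term in ('"Cochrane Database Syst Rev"[Journal] AND depression[mh]',
                   '"Cochrane Database Syst Rev"[Journal] AND depressive disorders[mh]'):
          print(term, "->", pubmed_count(term))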

      Hope this helps.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 May 29, Mike KLYMKOWSKY commented:

      As part of its analysis, this paper describes the appearance of ciliated cells within the developing Xenopus laevis embryo. I add this note because a search for Xenopus + Cilia or Ciliogenesis does not find this paper (and perhaps it should).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Sep 08, Youhe Gao commented:

      A strategy named 4F-acts was proposed a few years ago to minimize false positives and false negatives. Fast Fixation is necessary to study real-time protein-protein interactions under physiological conditions: fast formaldehyde crosslinking can fix transient and weak protein interactions. With brief exposure to a high concentration of formaldehyde during the crosslinking, the complex is crosslinked only partially, so that it is small enough to be resolved by SDS-PAGE, and the uncrosslinked parts of the proteins can be used for identification by shotgun proteomics. Immunoaffinity purification can Fish out complexes that include the proteins of interest. Because the complex is covalently bound, it can be washed as harshly as the antibody-antigen interaction can withstand; the weak interactions will remain. Even if nonspecific binding persists on the beads or antibody, it will be eliminated at the next step. To Filter out these complexes, SDS-PAGE is used to disrupt non-covalent bonds, thereby eliminating uncrosslinked complexes and simultaneously providing molecular weight information for identification of the complex. The SDS-polyacrylamide gel can then be sliced on the basis of molecular weight without staining, so all the protein complexes can be identified with the sensitivity of mass spectrometry rather than that of the staining method.

      The advantages are the following: (i) the method does not involve tagging; (ii) it does not include overexpression; (iii) a weak interaction can be detected, because the complexes are crosslinked covalently and can be washed as stringently as the antigen-antibody interaction can withstand, and no new covalent bond can form to give a false positive result; (iv) the formaldehyde crosslinking can be performed at the cellular, tissue, or organ level fast enough that the protein complexes are fixed in situ in real time. Proteome Science 2014, 12:6 doi:10.1186/1477-5956-12-6


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 17, Randy Blakely commented:

      Please note that the name of Jeremy Veenstra-VanderWeele was misspelled in the author list of this paper.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 16, Ditte Caroline Andersen commented:

      Dear Dr. Bakker, thank you for your interest in our study. The fusion protein generated by the pACT vector encompasses a strong nuclear localization signal of the SV40 large T antigen, driving fusion proteins into the nucleus. Omitting the membrane signal peptide has been suggested to improve nuclear localization of the fusion proteins by Waller and co-workers, but they did not observe an increase in relative luciferase activity after removing membrane-targeting sequences (Waller et al, 2010). In our study, we verified nuclear localization of the proteins using immunohistochemistry, and the conclusion is that DLK1 and NOTCH1 do interact in that milieu. Whether this also occurs at the membrane remains to be determined, but our results, as well as those reported by others, suggest that this may be the case, since DLK1 seems to have an impact on Notch signaling.

      Waller H, Chatterji U, Gallay P, Parkinson T, Targett-Adams P (2010) The use of AlphaLISA technology to detect interaction between hepatitis C virus-encoded NS5A and cyclophilin A. Journal of Virological Methods 165: 202-210


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Feb 15, Hans Bakker commented:

      Can the authors explain how the mammalian two-hybrid system should work with proteins that are on the cell membrane?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 08, Lydia Maniatis commented:

      Thanks to the authors for taking the trouble to respond to my comments. However, I feel they're missing the point. Below, I've copy-pasted their reply and broken it up into statements by them and replies by me.

      Authors: Our conclusions do not depend on any model, falsifiable or otherwise. They were based on the mathematical incompatibility between certain visual computations and empirical measurements reported in our paper.

      Me: What visual computations?

      Authors: (It should also be noted that we also described a model consistent with our measurements: The Markovian Subsampler. However, that model was ignored by the commentator.)

      Me: Ad hoc models will usually be consistent with the data, that's their purpose. This doesn't make them relevant.

      Authors: Our conclusions follow logically from our measurements of efficiency.

      Me: Conclusions with respect to what?

      Authors: These measurements do not rest on any unproven theoretical constructs. Efficiency is a purely descriptive measure of performance in relation to the available information.

      Me: What does this measure tell us about the “visual computations” involved in the “performance?”

      Authors: Given any sample size N, efficiency is the ratio of M to N, where M is the sample size that the ideal observer would need in order to estimate a statistic with the same precision as a human observer. Since the ideal observer is, by definition, at least as good as any other observer, human decisions are necessarily based on M or more elements.

      Me: Why, as vision scientists, should we care about your measure of performance? What insights does it provide with respect to visual perception and related processes?

      Authors: Estimates of efficiency tell us nothing about visual appearance.

      Me: So the “visual computation” referred to in the first paragraph is not connected to “visual appearance.” What, then, is “visual” about the computation?

      Author: The relationship between appearance and performance would be something interesting to study, but our study was about performance alone.

      Me: The text refers to a “visual computation,” “visual statistics,” "visual information." What is the meaning of the term “visual” if not “pertaining to appearance?” Furthermore, your abstract clearly implies that you are interested in appearance, e.g. when you say that "With orientation...it is relatively safe to use an item's physical value as an approximation for its average perceived value." Your stimuli are contrasted with those used in studies where "observers are asked to make decisions on perceived average size," the idea being that in these circumstances the percept is not as reliable as in the case of orientation.

      Authors: Our observers were asked to estimate expected values, and they were pressed for time. Inferring the minimum sample sizes used by our observers is a purely mathematical exercise.

      Me: How is this exercise of interest to readers of a journal on visual perception?

      Authors: On average (i.e. across observers) we found that this minimum increased from approximately 2 to approximately 3 (average efficiencies increased from approximately 2/8 to approximately 3/8), as we eased the time constraint by providing longer presentations. Of course these numbers are subject to measurement error, so we performed statistical tests to see whether the 3 was significantly greater than the 2 and whether the 2 was significantly greater than 1. Both differences proved significant.

      Me: You might get similar results asking viewers the following question: Is the average of the numbers to the right of the colon smaller or larger than the number to the left of the colon?

      8: 5, 11, 3, 9, 13, 2, 9, 6.

      What would this tell us about visual perception (that we didn't already know)?

      Authors: Our conclusion against a purely parallel computation is valid, because our data unequivocally support an increase in efficiency with time.

      Me: Most tasks are easier given more time. However, saying that I'm able to perform better on a task given more time doesn't actually make me more efficient at the task, in the ordinary meaning of the term. If one becomes more efficient at a task, they can do it better in the same amount of time. I think at best you're measuring improvement in “accuracy.” People's answers become more accurate given more time, especially when some durations are very brief. This is a pretty sure thing regardless of the task. How did your experimental conditions give such a finding added value for visual perception?

      References to “parallel computation” are empty of content if, as you said above, you're not interested in process, but only in performance. “Computation” refers to process. There are obviously different types of processes involved in the task, and all mental processes involve both serial and parallel neural processes. So unless you're more specific about the type of computations that you're referring to, and how your method allowed you to isolate them, you aren't saying anything interesting.

      Authors: However, as noted in the published paper, our conclusion against a purely serial computation isn’t as strong. It is based on the second of the aforementioned significant differences, but there remains the possibility that our most stringent time constraint (0.1 s) wasn’t sufficiently stringent.

      Me: a) Why didn't you make conditions sufficiently stringent to achieve your goals? b) Similarly to what I said above, the phrase “a purely serial computation” is lumping observers' experience and mental activity and decision-making strategy together into an undifferentiated and wholly uninformative reference to “a computation.”

      All in all, you seem to be saying that you devised a task that observers are bad at (as ascertained by your special measure but as would be obvious using any measure) and do better at if given more time (as would be expected for most tasks), and that you don't care how they were doing it or how it aids in the understanding of visual perception.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 01, Joshua A Solomon commented:

      Our conclusions do not depend on any model, falsifiable or otherwise. They were based on the mathematical incompatibility between certain visual computations and empirical measurements reported in our paper. (It should also be noted that we also described a model consistent with our measurements: The Markovian Subsampler. However, that model was ignored by the commentator.)

      Our conclusions follow logically from our measurements of efficiency. These measurements do not rest on any unproven theoretical constructs. Efficiency is a purely descriptive measure of performance in relation to the available information. Given any sample size N, efficiency is the ratio of M to N, where M is the sample size that the ideal observer would need in order to estimate a statistic with the same precision as a human observer. Since the ideal observer is, by definition, at least as good as any other observer, human decisions are necessarily based on M or more elements.

      Estimates of efficiency tell us nothing about visual appearance. The relationship between appearance and performance would be something interesting to study, but our study was about performance alone.

      Our observers were asked to estimate expected values, and they were pressed for time. Inferring the minimum sample sizes used by our observers is a purely mathematical exercise. On average (i.e. across observers) we found that this minimum increased from approximately 2 to approximately 3 (average efficiencies increased from approximately 2/8 to approximately 3/8), as we eased the time constraint by providing longer presentations. Of course these numbers are subject to measurement error, so we performed statistical tests to see whether the 3 was significantly greater than the 2 and whether the 2 was significantly greater than 1. Both differences proved significant. 

      Our conclusion against a purely parallel computation is valid, because our data unequivocally support an increase in efficiency with time. However, as noted in the published paper, our conclusion against a purely serial computation isn’t as strong. It is based on the second of the aforementioned significant differences, but there remains the possibility that our most stringent time constraint (0.1 s) wasn’t sufficiently stringent.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Feb 10, Lydia Maniatis commented:

      Part 2 DATA, MODEL, AND MODEL-FITTING

      The authors propose to “quantify how well summary statistics like averages are calculated [using] an Equivalent Noise (Nagaraja, 1964; Pelli, 1990; Dakin, 2001) framework...” (p. 1). The first two references discuss “luminance noise” and contrast thresholds. The mathematical framework and supporting arguments seem chiefly provided by the Pelli (1990) reference (Dakin, 2001, takes the applicability of the Equivalent Noise paradigm to orientation for granted). However, the conclusion of the Pelli chapter includes the following statements: "It will be important to test the model of Fig. 1.7 [the most general, schematic expression of the proposed model, which nevertheless refers specifically to contrast-squared]. For gratings in dynamic white noise, [the main prediction of the model] has been confirmed by Pelli (1981), disconfirmed by Kersten (1984) and reconfirmed by Thomas (1985). More work is warranted.” (p. 18).

      Also, Pelli's arguments seem to overlook basic facts of vision, such as the inhibitory mechanisms at the retinal level. Has his model actually been tested in the 25 years since the chapter was written, with respect to contrast, with respect to orientation? Where are the supporting references? (It is worth noting that Pelli seems to be unfamiliar with the special significance of “disconfirmations,” i.e. falsifications, in the testing of scientific hypotheses. Newton's theory has been confirmed many times, and can continue to be confirmed indefinitely, but it stopped being an acceptable theory after the falsification of a necessary prediction).

      Agnostic as to the perceptual abilities, processes or functional mechanisms underlying observer performance (the method confounds perception, attention and cognition), and assuming that a “just-noticeable contrast level” is computationally interchangeable with a “just-comparable angle” (obtained via "averaging"), the authors proceed to fit the data to a mathematical model.

      From data points at two locations on the x-axis, they construct non-linear curves, which differ significantly from observer to observer. If the curves mean anything at all, they predict performance at intermediate levels of x-axis values - unless we are required to assume a priori that the model makes accurate predictions (in which case it is a metaphysical, not an empirical, model). The problem, as mentioned above, is that there is high inter-observer variability, such that the curves differ significantly from one observer to the next. (I also suspect that there was high intra-observer variability, though this statistic is not reported.) Thus, a test of the model predictions for intermediate x-values would seem to require that we retest the same observers at new levels of the independent variable. (Why weren't observers tested with at least one more x-value?) I'm not at all sure that the results would confirm the predictions, but even if they did, this is supposed to be a general model. So what if we wanted to test it on new observers at new, intermediate levels of the independent variable? How would the investigators arrive at their predictions for this case?
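
      For readers unfamiliar with the framework, here is a minimal sketch of what such a fit involves, using the standard Dakin (2001)-style equivalent-noise form and purely hypothetical numbers (the paper's actual model, with its many additional parameters, is not reproduced here). With two free parameters and only two x-axis values, the curve necessarily passes through both points, which is exactly the testability problem raised above:

      import numpy as np
      from scipy.optimize import curve_fit

      # Equivalent-noise form: observed averaging threshold as a function of
      # external orientation noise, with internal noise (sigma_int) and an
      # effective sample size (n_eff) as the two free parameters.
      def equivalent_noise(sigma_ext, sigma_int, n_eff):
          return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_eff)

      # Hypothetical thresholds measured at only two external-noise levels (deg).
      sigma_ext = np.array([0.0, 16.0])
      thresholds = np.array([4.0, 12.0])

      (sigma_int_hat, n_eff_hat), _ = curve_fit(equivalent_noise, sigma_ext,
                                                thresholds, p0=[4.0, 2.0])
      print(sigma_int_hat, n_eff_hat)  # the curve passes through both points exactly
      # Only thresholds at intermediate noise levels could actually test the model.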

      If there are no criteria for testing (i.e. potentially rejecting) the model - if any two data points can always be - can ONLY be - fitted post hoc - then this type of model-fitting exercise lies outside the domain of empirical science.

      It is always possible to compare “models” purporting to answer the wrong question, to investigate a nonexistent phenomenon. To use a rough example, we could ask,”Is the Sun's orbit around the Earth more consistent with a circular or an elliptical model?” Using the apparent movements of the Sun in relation to the Earth and other cosmic landmarks, we could compare models and conclude that one of them “better fits” the data or that it "fits the data well" (it's worth noting that the model being fitted here has dozens of free parameters: "The model with the fewest parameters had 55 free parameters" (p. 5)). But this wouldn't amount to a theoretical advance. I think that this is the kind of thing going on here.

      Asking what later turns out to be the wrong question is par for the course in science, and excusable if you have a solid rationale consistent with knowledge at the time. Here, this does not seem to be the case.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Feb 10, Lydia Maniatis commented:

      Part 1 The authors are drawing conclusions about a non-existent “visual computation” using an uncorroborated, unfalsifiable model.

      In this study, Solomon, May and Tyler (2016) investigate how observers arrive at a “statistical summary of visual input” (p. 1) which they refer to as “orientation averaging.” They ask “whether observers...can process the feature content of multiple items in parallel, or...cognitively combine serial estimates from individual items in order to attain an estimate for the desired statistic (in our case, the average orientation of an array of [striped discs]).” They propose to quantify “how well summary statistics like average orientation are calculated [using] an Equivalent noise (Nagaraja, 1964; Pelli, 1990; Dakin, 2001) framework”.

      There are two major problems with such a project. First, the authors offer no theoretical or empirical evidence in support of the notion that observers can or do actually calculate average orientations. On the contrary, studies cited by the authors seem to indicate that they cannot: “Solomon (2010) describes one professional psychophysicist who completed 2,000 trials, yet achieved an effective set size no greater than 1 [i.e. only a single item was “averaged.”]” The authors speculate that this poor result may have been due to the memory challenges involved in that particular study task, but here they claim that the results of the present study falsify this hypothesis.

      Readers may judge for themselves whether they personally possess the ability to estimate the average orientation of the discs in the stimuli presented by Solomon, May and Tyler (2016) by inspecting their Figure 1. Do you perceive an average orientation of the striped discs? If you were asked to decide whether the “average orientation” of the eight surrounding discs is clockwise or counterclockwise to the orientation of the “probe” in the center, how would you go about it?

      The method used by the investigators does not allow a decision as to whether observers are actually averaging anything, or whether they are using a rule of thumb such as: “Look at the first comparison disc, and if it's clockwise say clockwise”; or: “Look at the first disc, and then at a second, and if they're both clockwise say clockwise, otherwise look at a third tie-breaker disc.” Such a strategy is consistent with the authors' conclusions that observers are using a very small subsample of discs (“an effective set size of 2” (p. 6)) to generate their responses.

      Given that there does not seem to be something in our perceptual experience corresponding to an “average orientation of a set of striped discs,” perhaps we are supposed to be dealing with a kind of blindsight, where what feels like guessing yields an uncannily high percentage of correct responses. This also does not seem to be the case, given the “inefficiency” of the performance.

      Interestingly, the authors' own description of the problem, quoted in the first paragraph of this comment, doesn't necessarily imply actual averaging. When I look at Figure 1, I am perceiving multiple items in parallel (simultaneously, at least in experience), but I am still not seeing averages. Likewise, serial attention to the orientation of individual items does not equal generating an average.

      Thus, the potential “mechanisms” the authors claim to be evaluating are not necessarily “averaging mechanisms.” In other words, in stating that it is possible that “the same mechanism is responsible for computing the average orientation of crowded and uncrowded Gabors (p. 6)” the authors may well be referring to a mechanism underlying a process that doesn't exist.

      Given that the task involves perception, attention, and cognition, there can be no question that both “serial and parallel” (neural) processes underlie it, so it's not clear why the authors suggest either “purely serial” or “purely parallel” as possibilities, especially since they appear to be unconcerned with distinguishing among the various processes/functions that are engaged between the stimulus “input” and the response “output.” Confusingly, however, and in contradiction to their titular conclusions, they acknowledge that: “It is conceivable [that there was enough time, even at the smallest stimulus duration] for a serial mechanism to utilize two items (p. 6)” If the data are to be described as compatible with a “purely serial” mechanism, then we are probably talking about the known-to-be-serial inspection of items via the conscious shifting of attention, which, again, does not necessarily imply any averaging.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 26, Donald Forsdyke commented:

      This is an accurate account of the exciting lead-up to the 1977 discovery of split genes, and obviously, as indicated by the title, Arnold Berk's fine perspective review does not deal with the alternative hypotheses that then appeared. However, readers are left to conclude that the "original suggestion" of Gilbert is now backed by "considerable evidence," so that perhaps the alternatives are disposed of. We should not forget that, in step with Gilbert, Darryl Reanney in Australia was fostering a viewpoint for which "considerable evidence" has also accumulated [1].

      [1] Forsdyke DR (2013) Introns first. Biological Theory 7:196-203 http://post.queensu.ca/~forsdyke/introns3.htm


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 28, Jochen P Müller commented:

      Cell adhesion was not the focus of this study.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 25, Robert Eibl commented:

      The publication could have included a reference to the first specific AFM measurements of cell adhesion molecules on living cells:

      Eibl RH, Benoit M. IEE Proc Nanobiotechnol. 2004 Jun;151(3):128-32. Molecular resolution of cell adhesion forces.

      PMID: 16475855


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 26, Leigh Jackson commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Apr 26, Leigh Jackson commented:

      Acupressure requires finger pressure across an extensive surface area of the skin, whereas the pressure of the Park blunt needle is concentrated at a small point on the skin. If the pressure of the Park needle implies acupressure, then so must the pressure involved in acupuncture, since pressure is required in order to puncture the skin. In which case, acupuncture appears to be redundant.

      If puncturing is not an essential aspect of acupuncture, how can the serious risks involved be justified?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Apr 21, Arthur Yin Fan commented:

      Trial suggests both acupuncture and acupressure are effective at reducing menopausal hot flashes

      In their recent report in the Annals of Internal Medicine, Ee et al1 state that Chinese medical acupuncture was no better than non-insertive sham acupuncture for women with moderately severe menopausal hot flashes in a randomised controlled trial. The authors conclude that they “cannot recommend skin-penetrating acupuncture as an efficacious treatment of this indication”.1 In my opinion, the authors might have misinterpreted the results.

      The ‘sham acupuncture’ used in this clinical trial was the Park sham device, which is supposed to serve as a placebo treatment. It uses a 0.35×40 mm blunt needle supported by a plastic ring and guide tube (base unit) attached to the skin with a double-sided adhesive ring. The needle telescopes into itself and shortens on manipulation, giving the visual and physical impression of insertion into the skin.1 Although the blunt needle does not insert into the skin, it does cause considerable pressure and thereby mechanical stimulation, especially given the small diameter at its tip. This Park sham device should arguably be relabelled as an acupressure device, instead of a form of sham acupuncture treatment. Indeed, this type of device and needling method is historically recognised as an active form of treatment; it is otherwise known as a Di needle (鍉针 or Di Zhen, a style of pressing needle that does not penetrate the skin), as documented in The Spiritual Pivot: Nine Needles and Twelve Source Points (Ling Shu: Jiu Zhen Shi Er Yuan) in the second part of the Yellow Emperor's Inner Classics, which was published 2000 years ago.2 For this reason, the trial design contained an obvious weakness; it compared acupuncture with acupressure, rather than acupuncture with truly inert sham acupuncture.

      According to the trial's results, hot flash scores decreased after both interventions by about 40% between baseline and the end of treatment (10 sessions, ending after 8 weeks), and these effects were sustained for 6 months. Statistically, there is no evidence that acupuncture was better than acupressure (called ‘sham acupuncture’ in the paper) in its impact on quality of life, anxiety or depression.1 This can equally be interpreted as evidence that both acupuncture and acupressure effectively decrease hot flashes and related symptoms, and improve quality of life, if we compare the results immediately after treatment (8 weeks) and at the 3- and 6-month follow-up with baseline in the same group (self-control) or with the comparator group (as a waiting list-like control).

      As regards the placebo effect, evidence from the literature3 and a review of multiple trials4 shows that patients receiving placebo interventions exhibit an average decrease of 21–25% in hot flash frequency and intensity. Therefore, a 40% decrease in hot flash symptom scores with either acupuncture or acupressure treatment is notably higher than that expected with a placebo and likely to be clinically significant. Further research with a more appropriate control group is needed. Meanwhile, however, if a patient declines or cannot tolerate conventional drug treatment, then it would not be unreasonable to offer either acupuncture or acupressure as an alternative treatment for this condition.

      References

      1. Ee C, Xue C, Chondros P, et al. Acupuncture for menopausal hot flashes: a randomized trial. Ann Intern Med 2016;164:146–54. doi:10.7326/M15-1380

      2. Wu JN (translator). Ling Shu or The Spiritual Pivot. University of Hawaii Press, 2002.

      3. Loprinzi CL, Michalak JC, Quella SK, et al. Megestrol acetate for the prevention of hot flashes. N Engl J Med 1994;331:347–52. doi:10.1056/NEJM199408113310602

      4. Sloan JA, Loprinzi CL, Novotny PJ, et al. Methodologic lessons learned from hot flash studies. J Clin Oncol 2001;19:4280–90.

      Fan AY. Trial suggests both acupuncture and acupressure are effective at reducing menopausal hot flashes. Acupunct Med doi:10.1136/acupmed-2016-011119.

      http://aim.bmj.com/content/early/2016/04/19/acupmed-2016-011119.full


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Mar 03, Arthur Yin Fan commented:

      The acupuncture points used in this trial were based on Kidney Yin Deficiency in Chinese medicine theory.

      I agree with the authors that Kidney Yin Deficiency is the root cause of the hot flashes; however, the signs of hot flashes are more closely related to Heart Fire and Liver Qi. If some acupuncture points related to Heart Fire and Liver Qi had been added, the treatment might have been more effective.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Nov 06, David Mage commented:

      Perhaps the authors are unfamiliar with the pediatric literature on SIDS and SUDC. They reported 94 male, 57 female SUDC cases for a male fraction of 0.6225, which has been modeled as a recessive X-linkage in Hardy-Weinberg Equilibrium for a dominant allele protective of respiratory failure with frequency p = 1/3. The XY male will be at risk of possessing a non-protective recessive allele with frequency q = 2/3, and the XX female will be at risk with frequency (2/3)(2/3) = 4/9, leading to the observed 50% male excess for equal numbers of males and females at risk. San Diego during the years 1999-2011 had a male excess in that age group, 1 to 4 years, of 4.48%, leading to a predicted male fraction, for SIDS (< 1 year) and SUDC (> 1 year), of (60 x 1.048/[40 + 60 x 1.048]) = 0.610.
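
      As a minimal sketch of the arithmetic described above (illustrative only, not part of the original comment; all numbers are taken from it):

      # X-linkage arithmetic under the stated Hardy-Weinberg assumptions.
      p = 1 / 3            # frequency of the protective dominant allele
      q = 1 - p            # frequency of the non-protective recessive allele (2/3)

      male_risk = q        # XY males carry one X, so risk = q
      female_risk = q * q  # XX females need two copies, so risk = q^2 = 4/9

      # Expected male fraction with equal numbers of males and females at risk:
      print(male_risk / (male_risk + female_risk))                    # 0.6

      # Adjusting for the 4.48% male excess in the 1-4 year population:
      excess = 1.0448
      print(male_risk * excess / (male_risk * excess + female_risk))  # ~0.61

      # Observed SUDC male fraction for comparison:
      print(94 / (94 + 57))                                           # ~0.6225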

      Even Dr. Brad Thach (PMID 18246101), a co-author with one of these authors (PMID 19692691), reported this 0.61 prediction and 0.61 observation for SIDS, which we have published on for over 20 years (PMID 8748092, 9076995, 15384886, 20050322, 20042039, 24164639, 27188625). We wonder, then, why the authors of this SUDC paper still seem to ignore the persistent 0.61 male fraction that we and so many others have now reported, and what explication they have for it other than an X-linkage or pure random happenstance.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 02, Clive Bates commented:

      It is worth drawing out the most telling criticism made by Hajek, McRobbie and Bullen (above) in their response in the Lancet Respiratory Medicine.

      There are other problems—such as selective inclusion of studies, and selective reporting of data from studies that were included—and limitations the authors acknowledge in the text but ignore in their conclusions. Detailed criticism of the methods is, however, not needed, because lumping incongruous studies together—which were mostly not designed to evaluate the efficacy of e-cigarettes, and contain no useful information on this topic unless misinterpreted—makes no scientific sense in the first place.

      The fundamental problem really is the lumping together of completely different studies designed to observe different behaviours in different populations with different outcome measures (mostly not to see how well e-cigarettes help smokers quit). This problem is fatal; it has not been convincingly addressed by the authors anywhere, and cannot be. Meta-analysis is fine for pooling, for example, several drug trials conducted with almost identical methodology, but not as it is used here.

      Once all the studies that should not be 'lumped together' are not lumped together, we come back to something more like the Cochrane review (Can electronic cigarettes help people stop smoking or reduce the amount they smoke, and are they safe to use for this purpose?), which is tentatively positive, but weak given the small number of relevant studies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 01, Peter Hajek commented:

      There are several serious problems with this review, see http://www.thelancet.com/pdfs/journals/lanres/PIIS2213-2600(16)30024-8.pdf

      The most obvious issue is that the result is based on studies that have no bearing on whether e-cigarettes are effective or not. This is because vapers who successfully quit smoking were excluded and only those who failed to do so were retained. The studies were not at fault; they were just not set up to evaluate quit rates in smokers who did and did not try vaping. The fault is with misinterpreting their results. The letter in LRM referenced above provides more details.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Apr 25, Stanton A Glantz commented:

      A detailed response to the "expert" criticism is available here: http://tobacco.ucsf.edu/our-new-meta-analysis-entire-relevant-literature-shows-e-cigarettes-used-are-associated-less-not-more-quit#comment-17171


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Apr 24, Clive Bates commented:

      This meta-analysis has been subject to considerable criticism from the moment of its publication, see Expert reaction to meta-analysis looking at e-cigarette use and smoking cessation via the Science Media Centre.

      For example, Professor Robert West, Professor of Health Psychology at University College London, commented: “Publication of this study represents a major failure of the peer review system in this journal.”

      A pre-publication version of this meta-analysis was severely criticised in evidence to the U.S. Food and Drug Administration by experts at the Truth Initiative, which describes itself as "America’s largest non-profit public health organization dedicated to making tobacco use a thing of the past". In the Truth Initiative submission to the FDA, the examination of the methodological issues begins on page 8, and the following comment, referring to the analysis subsequently published in Lancet Respiratory Medicine, appears on page 12.

      "While the majority of the studies we reviewed are marred by poor measurement of exposures and unmeasured confounders, many of them have been included in a meta-analysis that claims to show that smokers who use e-cigarettes are less likely to quit smoking compared to those who do not. [73] This meta-analysis simply lumps together the errors of inference from these correlations. As described in detail above, quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid."

      I hope that in due course The Lancet Respiratory Medicine will publish a critique and reconsider its decision to publish this paper. In the meantime, my own critique is available on my blog.

      I would also like to draw readers' attention to a thoughtful discussion of the failings of this meta-analysis by Carl V. Phillips on his blog.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 03, Ewen Gallagher commented:

      Here we present a new genetic model (Lck(Cre/+) Map3k1(f/f)) used to analyse Mekk1 (encoded by Map3k1) conditionally in T cells. Our research shows new roles for Map3k1 in T cell development and in unconventional T cells. There is also an expanded view on the research (1). This work continues the genetic analysis of Map3k1 in T cells (2-5). Other related works are available analysing Mekk1 activation by cytokines and hyperosmotic stress (6-8).

      Related works:

      (1) Suddason T, Gallagher E. Genetic insights into Map3k-dependent proliferative expansion of T cells. Cell Cycle 2016:0.

      (2) Gao M, Labuda T, Xia Y, Gallagher E, Fang D, Liu YC, Karin M. Jun turnover is controlled through JNK-dependent phosphorylation of the E3 ligase Itch. Science 2004; 306:271-5.

      (3) Gallagher E, Gao M, Liu YC, Karin M. Activation of the E3 ubiquitin ligase Itch through a phosphorylation-induced conformational change. Proceedings of the National Academy of Sciences of the United States of America 2006; 103:1717-22.

      (4) Venuprasad K, Elly C, Gao M, Salek-Ardakani S, Harada Y, Luo JL, Yang C, Croft M, Inoue K, Karin M, et al. Convergence of Itch-induced ubiquitination with MEKK1-JNK signaling in Th2 tolerance and airway inflammation. J Clin Invest 2006; 116:1117-26.

      (5) Gallagher E, Enzler T, Matsuzawa A, Anzelon-Mills A, Otero D, Holzer R, Janssen E, Gao M, Karin M. Kinase MEKK1 is required for CD40-dependent activation of the kinases Jnk and p38, germinal center formation, B cell proliferation and antibody production. Nature Immunology 2007; 8:57-63.

      (6) Matsuzawa A, Tseng PH, Vallabhapurapu S, Luo JL, Zhang W, Wang H, Vignali DA, Gallagher E, Karin M. Essential cytoplasmic translocation of a cytokine receptor-assembled signaling complex. Science 2008; 321:663-8.

      (7) Steed E, Elbediwy A, Vacca B, Dupasquier S, Hemkemeyer SA, Suddason T, Costa AC, Beaudry JB, Zihni C, Gallagher E, et al. MarvelD3 couples tight junctions to the MEKK1-JNK pathway to regulate cell behavior and survival. The Journal of Cell Biology 2014; 204:821-38.

      (8) Charlaftis N, Suddason T, Wu X, Anwar S, Karin M, Gallagher E. The MEKK1 PHD ubiquitinates TAB1 to activate MAPKs in response to cytokines. The EMBO Journal 2014; 33:2581-96.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 02, Arnaud Chiolero MD PhD commented:

      A brief and excellent review for getting up to date on the tobacco epidemic.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 07, Laura E Kwako commented:

      Thank you for the question. We plan to administer the Digit Span task after the Trier Social Stress Test to explore the effects of stress on working memory, and thus included it in the Negative Emotionality domain (although it measures an aspect of Executive Function). We appreciate the opportunity to clarify this issue.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jan 30, Francisco Xavier Castellanos commented:

      Can the authors clarify why Digit Span is included as part of the Negative Emotionality construct? It would seem to be better located within the Executive Function domain.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 26, Christopher Southan commented:

      Update July 2016: the team above have published the Zika virus protease structure: http://www.ncbi.nlm.nih.gov/pubmed/27386922

      Thanks to the author's response in sending InChI strings for the compounds reviewed, most are now mapped into PubChem: http://cdsouthan.blogspot.se/2016/02/med-chem-starting-points-for-zika.html


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 28, Donald Forsdyke commented:

      HYPOTHESIS-DRIVEN RESEARCH

      The discoveries of a cytosolic microbial adaptive immune system (CRISPR) and its applications to genome editing are major scientific advances. A review of the history of this magnificent achievement, made mainly by young people close to those with abundant research funds, is welcome. But the implication that this history supports the non-hypothesis-driven approach to research is questionable.

      Backed by inexpensive bioinformatic analyses, a hypothesis of cytosolic innate immunity was developed in the 1990s [1-3]. Had this CRISPR-analogous hypothesis been backed by funding, CRISPR and its applications might have been achieved more expeditiously. Thus, there are many roads to Rome. Because the well-equipped army that took route A arrived first, it does not follow that route A is superior to route B. Likewise, this comment could have been written in prose or poetry. Your liking (perhaps) of the present prose rendition does not disprove the proposition that a poetic version might have been superior.

      [1] Forsdyke & Mortimer (2000) Chargaff’s legacy. Gene 261, 127-137. Forsdyke DR, 2000

      [2] Cristillo et al. (2001) Double-stranded RNA as a not-self alarm signal: to evade, most viruses purine-load their RNAs, but some (HTLV-1, Epstein-Barr) pyrimidine-load. J Theor Biol 208, 475-491. Cristillo AD, 2001

      [3] Forsdyke, Madill & Smith (2002) Immunity as a function of the unicellular state: implications of emerging genomic data. Trends Immunol 23, 575-579. Forsdyke DR, 2002


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Jan 26, Donald Forsdyke commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.