14 Matching Annotations
  1. Jul 2018
    1. On 2015 Dec 03, Joshua L Cherry commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Nov 30, John P A Ioannidis commented:

      Dear Joshua,

      Thank you again for your comments. I am worried that you continue to cut and paste my sentences in ways that distort them.

      1. The headline over my text was written by the Nature editors as their introduction to the paper, so perhaps you should blame them and ask them to replace it with "Here follows a horrible paper by Ioannidis". Yet, I think you would still be unfair to blame them, because their headline says "most innovative and influential", not just "most innovative". The terms "influential", "influence", and "major influence" pervade my paper, but you pick one sentence with "innovative" instead, and interpret it entirely out of its context.
      2. The phrases "the most important" and "very important" are not identical. Very important papers may not necessarily be THE most important. But they are very important - and influential. [As an aside, honestly, this repeated cross-examining quotation-comment style makes me feel as if I am answering the Spanish Inquisition. Am I going to be burnt at the stake now (please!), or is there one more round of torture?]
      3. We agree we need evidence, more evidence - evidence is good, on everything, including the current NIH funding system, which has practically no evidence that it is better than other options, but still distributes tens of billions of dollars per year. Wisely, I am sure.
      4. "your list contains...". This is not my list. This is the Scopus list. Right or wrong, I preferred not to manipulate it. Your colleagues did manipulate it and did not even share the data on how exactly they manipulated it.
      5. You continue to use the term "innovative thinker" out of its context. I again scanned my paper carefully, and I cannot find the word "excellent". In my mind, a student who has authored as first author a paper that got over 1000 citations (and the paper is not wrong/refuted) is already worthy of being given a shot as a principal investigator. If you disagree, what can I say: feel free not to fund him/her. And please don't worry, most of these guys are not currently funded anyhow; many of them even quit science. Hundreds of principal investigators who publish absolutely nothing, or nothing with any substantial impact, get funded again and again. Hurray!

      I am afraid it is unlikely there will be more convergence in our views at this point. A million thanks once again; I have learnt a lot from your comments.

      John



    3. On 2015 Nov 03, Joshua L Cherry commented:

      John,

      I, too, am wary of an endless discussion. In my view, my straightforward original point remains unscathed by our exchange. Your latest reply does compel me to make or repeat a few points.

      1) It is perplexing that you seemingly deny that your earlier work "claimed that NIH does a poor job of funding innovators, based on the assumption that the highly cited authors that you identified were innovators". Nicholson and Ioannidis say of these authors that

      Such innovative thinkers should not have so much trouble obtaining funding as principal investigators.

      The claim that the papers are highly innovative even appears in a headline above your text. This has nothing to do with how I might interpret "innovative"; you unambiguously asserted that these authors were highly innovative, according to your own meaning, based solely on their authorship of a very highly cited article.

      2) As already noted, the two publications are at odds with each other even if we consider "importance" rather than innovativeness. The discrepancy is reflected in your reply, where you confirm your belief that

      It is an open question whether “the most highly cited papers are the most important ones”

      and yet you write that

      My assumption was that papers with over 1000 citations (i.e. in the top 0.01% of citations) are very important

      I would add that one may rationally believe that there is a correlation between citation and importance while doubting that every primary author of every one of these papers should be funded as a principal investigator. (I add this because of your subtle replacement of "whether" with "the exact extent to which".)

      3) Again, I have not asserted, much less insisted, that anybody should not be funded. I have merely questioned whether a certain criterion is a reliable indicator that a scientist is among those most worthy of funding. Those who assert that it is bear the burden of proof.

      4) Much of your latest reply is an attack on others that has nothing to do with my comments here or with anything that I have written or done. I speak only for myself. I will note as a bystander that the letter to the editor that you criticize clearly does not say what you claim it does. Among other things, it characterizes only 11% of the papers as unrelated to human health. (I imagine that you have noticed that your list of "life sciences" papers includes some that clearly appear to belong to other disciplines.)

      5)

      Let us please collect more empirical evidence and fewer opinions.

      Indeed. Let us also acknowledge that the leap from "author of a very highly cited paper" to "innovative thinker", "excellent scientist", or "person who should certainly be a principal investigator" is based largely on opinion rather than empirical evidence, as your later statements about what we do not know would suggest.



    4. On 2015 Oct 31, John P A Ioannidis commented:

      Dear Joshua,

      Thank you for your additional insights; I suspect this topic can be hotly debated ad infinitum. Let me please try again to convey what I think:

      You claim that my earlier paper was “based on the assumption that the highly cited authors identified were innovators.” I think that either your interpretation of my assumption is misleading and/or we understand the term “innovators” differently. My assumption was that papers with over 1000 citations (i.e. in the top 0.01% of citations) are very important and thus typically their leading authors merit support (unless the papers were wrong). Importance could include disruptive innovation narrowly defined, but also other equally major qualities that merit recognition and funding. I certainly do not believe that NIH should fund only “disruptive innovators” if the definition of “disruptive innovators” excludes influential experimental studies, randomized trials, other forms of evidence-based research, and interdisciplinary research. These are the types of work that were excluded by the letter to the editor coauthored by David Lipman from your team, whom I greatly admire, but which threw out of the NIH-relevant agenda almost all health research that is important and almost everything that matters for health. I should also confess that I am disappointed by the stance of the authors of that letter. Their lead author had asked for my raw data, and I had shared everything with him within 10 minutes of his request. Then, when I saw in his re-analysis that he had excluded two-thirds of the most extremely cited papers on the grounds that they are not within the remit of NIH (even though they are objectively categorized by Scopus as belonging to the life and health sciences), I asked him to share his raw data so that I could understand his subjective arbitrations. I was very curious to see how leading NIH officials determined that the majority of the most influential medical and health-relevant research is not within the remit of NIH. Almost three years later, I still have not had the pleasure of seeing their raw data. Perhaps they did not want to reveal that their re-analysis was embarrassing: it was based on the untenable assumption that the National Institutes of Health have almost nothing to do with health or with the majority of the most influential health-related research!

      I think our inability to converge in our views lies in our difficulty agreeing on what is “innovative” and “important”. For example, I argue that randomized trials and other experimental studies, meta-analyses, guidelines, implementation research, team science, and interdisciplinary science can be extremely innovative and important for NIH to fund, while probably much or most of the R01-type bench work and so-called “mechanistic” research currently funded by NIH is neither "innovative" nor "important", no matter how you want to define these terms within the confines of common sense.

      The exact extent to which “the most highly cited papers are the most important ones” is indeed an open question, and the 2014 Nature paper tried to contribute towards answering it. I hope that other scientists will revisit this question and improve on what we did. I do not expect a perfect correlation between citations and importance, but this does not mean that we know nothing about citations or that they have no value. When selecting papers in the top 0.01% of citations, it is hard to claim that they would not typically rank even in the top 10-15% of importance, and thus be worth funding. As I already said in my previous response, the papers assessed in the 2014 Nature analysis were less cited than the ones analyzed in the 2012 Nature analysis. The median number of citations in the papers analyzed in the 2014 Nature paper was 180. Only 39 of these papers (3%) had over 1000 citations, i.e. in the same range as the extremely cited papers evaluated in the 2012 Nature analysis. All of these 39 papers actually scored well in at least one of the 5 dimensions of importance assessed (excluding publication difficulty), with an average maximum score of 83/100. This means that practically all of these papers were considered to be important; thus it is fair to assume that the work would be worth funding by NIH. Nevertheless, if you still insist that NIH should not fund the people behind papers that are so extremely influential (provided they are not wrong), I am afraid I have run out of arguments and there is no way that I will ever convince you.

      I trust that both of us, and the previous letter writers, all want to support science and celebrate the best science possible. It is fine to disagree on how to achieve this noble goal. Let us please collect more empirical evidence and fewer opinions. I would welcome more robust evidence, even if it were to prove me wrong. Thank you for giving me an opportunity to discuss this interesting topic again.



    5. On 2015 Oct 28, Joshua L Cherry commented:

      Thank you, John, for your response. As I see it, the apparent inconsistencies between the two publications remain unresolved.

      The earlier work claimed that NIH does a poor job of funding innovators, based on the assumption that the highly cited authors that you identified were innovators. The later work not only provides evidence suggesting otherwise, but explicitly states that the relationship between citations and innovation is unknown. Would you agree, then, that the earlier claim about funding of innovators is unfounded? I am asking about the particular case presented there, not about other arguments or claims that might or might not involve innovativeness.

      I would note, since it seems to be necessary, that the emphasis on innovativeness does not originate with me, but with your earlier publication. Your reply warns against a focus on disruptive innovativeness, but Nicholson and Ioannidis (2012) focused on innovativeness. I am merely responding to your argument. We can certainly consider whether highly cited authors necessarily have other qualities, but the shift in argument should be acknowledged.

      What, then, of the argument that your list of highly cited authors can be assumed to be among the best of the best on grounds other than innovativeness? According to your later piece there is much that we do not know about citation patterns and it is an open question whether "the most highly cited papers [are] the most important ones". How, then, can you be so certain that these authors are all exceptionally good scientists who should undoubtedly have been funded as principal investigators?

      The final paragraph of your response seems to suggest that by pointing out inconsistencies between these two publications I have laid the groundwork for an argument against funding of biomedical research. If this were correct, it is not clear how it would be relevant. (Surely you, of all people, are not suggesting self-censorship on those grounds.) But it is incorrect, and in fact backwards. I have never suggested, any more than you have, that anybody is unworthy of funding. Rather, we are discussing how to identify those most worthy of funding. Your remarks rely on the very point that we are debating: your conviction that your list of highly cited authors reliably identifies extraordinarily good scientists. Unlike my comment, your "Conform and be funded" piece, which claimed to have demonstrated empirically that NIH spends its funds unwisely, might make it difficult to argue for greater NIH funding. By pointing out that this claim was based on an unfounded assumption, I do, if anything, the opposite.

      Thank you again for engaging in this discussion.



    6. On 2015 Oct 25, John P A Ioannidis commented:

      Dear Joshua,

      Thank you for trying to make a connection between these two papers. I welcome further brainstorming and investigation on these topics. My interpretation of our results in the current paper is that highly cited papers may be important for various reasons. Disruptive innovation is one of them, but continued progress, broader interest, and greater synthesis are more prominent features of these influential papers. This does not mean that extremely highly cited papers are not important (even if some are more important than others), or that I would feel happy if the largest biomedical funding agency in the world does not have sufficient funds to support even the leading authors of extremely highly cited papers. Also of note, the articles evaluated in the 2012 Nature analysis were far more cited, on average, than the papers evaluated in the 2014 Nature analysis, and the sampling was very different, so the connection between the two analyses is even more tenuous.

      I continue to think that if someone has been a leader on a paper that has reached the top 0.01% of citation impact in the scientific literature (as with the articles in the 2012 analysis), that person deserves to have his/her research funded, unless this research has in the meantime been clearly proven wrong and a dead end. I never argued that the authors of the top 0.01% of cited papers should be the only researchers to be funded, that only disruptive innovative research should be supported, or that all great work is extremely highly cited. I believe that it is important to support research that is innovative, but it is also important to support research that achieves continued progress, broader interest, and greater synthesis. Scientific excellence has many faces, and focusing only on disruptive innovation may even limit scientific progress and may lead to exaggerated expectations and exaggerated claims by researchers and funders who try to justify their existence in a societal environment that is unfortunately not very supportive of science.

      There is also a wider issue raised by your criticism. I have always tried to make the strongest case for public support of science; I never tire of saying that science is the best thing that has happened to human beings. In my humble opinion, the 2012 analysis should have offered strong support for increasing the NIH funding budget, since funds are currently so limited that NIH cannot even fund many of the people standing behind papers with the most extreme citation impact. I really do not see how it helps the case for more support for science to claim that even the people behind the top 0.01% of cited work in the biomedical literature do not merit funding because their extremely high-impact work is unimportant or not relevant to NIH.

      Thank you again for your comments.



    7. On 2015 Oct 16, Joshua L Cherry commented:

      This piece is quite astounding in light of earlier claims made by the first author in the pages of the same periodical. In the earlier piece, Nicholson and Ioannidis (Nicholson JM, 2012) based a harsh indictment of NIH funding decisions, along with a recommendation for a new policy, on a questionable assumption: that a scientist's authorship of a very highly cited article is a reliable indicator of excellence or innovativeness. This more recent work by Ioannidis et al. suggests that this assumption is false, or at best unsubstantiated, seemingly undermining the earlier work, while creating the impression that the authors never had any definite opinion on the matter.

      In a piece with the provocative title "Research grants: Conform and be funded", Nicholson and Ioannidis (Nicholson JM, 2012) analyzed the pattern of subsequent NIH funding of the primary authors (first, last, or sole authors) of very highly cited articles. Because the fraction funded as NIH principal investigators was lower than they believed it should be, they concluded that NIH does a poor job of funding innovative research. They also suggested that such authors, whom they regarded as having demonstrated exceptional innovativeness or excellence, be automatically funded as principal investigators. Several people, myself among them, argued that such authors are not necessarily innovative or exceptional scientists, but Nicholson and Ioannidis staunchly maintained their position (http://tinyurl.com/npojxk2; http://tinyurl.com/ozme26j).

      This more recent piece paints a very different picture. It begins by telling us how little we know about the meaning of citation patterns, posing such questions as "Are the most highly cited papers the most important ones?" If, as Ioannidis et al. have it, these were open questions, what basis could there have been for the conclusions of Nicholson and Ioannidis? Furthermore, to the extent that the evidence presented here tells us anything about what citation patterns actually mean, it tells us that very highly cited publications do not tend to be highly innovative, contrary to the assertions of Nicholson and Ioannidis. Strikingly, in the concluding section Ioannidis et al. tell us that

      It would be particularly useful to know whether successful out-of-the-box ideas are generated and defended largely by the most influential scientists or by colleagues lower on the citation rankings.

      Such knowledge would, indeed, be useful. It would, in fact, seem to be a prerequisite for the arguments of Nicholson and Ioannidis, a prerequisite that Ioannidis et al. tell us was unfulfilled.

      This piece makes no mention of the earlier arguments that it appears to undermine. This leaves one wondering whether Ioannidis maintains his earlier conclusions. If so, it is not clear how this can be reconciled with the present publication. If not, an indication of the change in position would be helpful.



  2. Feb 2018
    1. On 2015 Oct 16, Joshua L Cherry commented:

      This piece is quite astounding in light of earlier claims made by the first author in the pages of the same periodical. In the earlier piece, Nicholson and Ioannidis (Nicholson JM, 2012) based a harsh indictment of NIH funding decisions, along with a recommendation for a new policy, on a questionable assumption: that a scientist's authorship of a very highly cited article is a reliable indicator of excellence or innovativeness. This more recent work by Ioannidis et al. suggests that this assumption is false, or at best unsubstantiated, seemingly undermining the earlier work, while creating the impression that the authors never had any definite opinion on the matter.

      In a piece with the provocative title "Research grants: Conform and be funded", Nicholson and Ioannidis (Nicholson JM, 2012) analyzed the pattern of subsequent NIH funding of the primary authors (first, last, or sole authors) of very highly cited articles. Because the fraction funded as NIH principal investigators was lower than they believed it should be, they concluded that NIH does a poor job of funding innovative research. They also suggested that such authors, whom they regarded as having demonstrated exceptional innovativeness or excellence, be automatically funded as principal investigators. Several people, myself among them, argued that such authors are not necessarily innovative or exceptional scientists, but Nicholson and Ioannidis staunchly maintained their position (http://tinyurl.com/npojxk2; http://tinyurl.com/ozme26j).

      This more recent piece paints a very different picture. It begins by telling us how little we know about the meaning of citation patterns, posing such questions as "Are the most highly cited papers the most important ones?" If, as Ioannidis et al. have it, these were open questions, what basis could there have been for the conclusions of Nicholson and Ioannidis? Furthermore, to the extent that the evidence presented here tells us anything about what citation patterns actually mean, it tells us that very highly cited publications do not tend to be highly innovative, contrary to the assertions of Nicholson and Ioannidis. Strikingly, in the concluding section Ioannidis et al. tell us that

      It would be particularly useful to know whether successful out-of-the-box ideas are generated and defended largely by the most influential scientists or by colleagues lower on the citation rankings.

      Such knowledge would, indeed, be useful. It would, in fact, seem to be a prerequisite for the arguments of Nicholson and Ioannidis, a prerequisite that Ioannidis et al. tell us was unfulfilled.

      This piece makes no mention of the earlier arguments that it appears to undermine. This leaves one wondering whether Ioannidis maintains his earlier conclusions. If so, it is not clear how this can be reconciled with the present publication. If not, an indication of the change in position would be helpful.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Oct 25, John P A Ioannidis commented:

      Dear Joshua,

      thank you for trying to make a connection between these two papers. I welcome further brainstorming and investigation on these topics. My interpretation of our results in the current paper is that highly-cited papers may be important for various reasons. Disruptive innovation is one of them, but continued progress, broader interest, and greater synthesis are more prominent features of these influential papers. This does not mean that extremely highly-cited papers are not important (even if some are more important than others), or that I would feel happy if the largest biomedical funding agency in the world does not have sufficient funds to support even the leading authors of extremely highly-cited papers. Also of note, the articles evaluated in the 2012 Nature analysis were far more cited, on average, than the papers evaluated in the 2014 Nature analysis and the sampling was very different, so the connection between the two analyses is even more tenuous.

      I continue to think that if someone has been a leader in a paper that has reached the top 0.01% of citation impact in the scientific literature (as in the articles in the 2012 analysis), that person warrants to have his/her research funded, unless this research has been clearly proven wrong and a dead end in the meanwhile. I never argued that the authors of the top-0.01% of cited papers should be the only researchers to be funded, that only disruptive innovative research should be supported, or that all great work is extremely highly-cited. I believe that it is important to support research that is innovative, but it is also important to support research that achieves continued progress, broader interest, and greater synthesis. Scientific excellence has many faces, and focusing only on disruptive innovation may even limit scientific progress and may lead to exaggerated expectations and exaggerated claims by researchers and funders who try to justify their existence in a societal environment that is unfortunately not very supportive of science.

      There is also a wider issue to be discussed in your criticism. I have always tried to make the strongest case for public support for science, I never tire to say that science is the best thing that has happened to human beings. In my humble opinion, the 2012 analysis should have offered strong support that the funding budget of NIH should be increased, since currently funds are so limited that NIH cannot even fund many of the people standing behind papers with the utmost extreme citation impact. I really do not see how it helps to make a case for more support for science, when one claims that even people behind the top-0.01% of cited work in the biomedical literature do not merit funding because their extremely high-impact work is unimportant or not relevant to NIH.

      Thank you again for your comments.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2015 Oct 28, Joshua L Cherry commented:

      Thank you, John, for your response. As I see it, the apparent inconsistencies between the two publications remain unresolved.

      The earlier work claimed that NIH does a poor job of funding innovators, based on the assumption that the highly cited authors that you identified were innovators. The later work not only provides evidence suggesting otherwise, but explicitly states that the relationship between citations and innovation is unknown. Would you agree, then, that the earlier claim about funding of innovators is unfounded? I am asking about the particular case presented there, not about other arguments or claims that might or might not involve innovativeness.

      I would note, since it seems to be necessary, that the emphasis on innovativeness does not originate with me, but with your earlier publication. Your reply warns against a focus on disruptive innovativeness, but Nicholson and Ioannidis (2012) focused on innovativeness. I am merely responding to your argument. We can certainly consider whether highly cited authors necessarily have other qualities, but the shift in argument should be acknowledged.

      What, then, of the argument that your list of highly cited authors can be assumed to be among the best of the best on grounds other than innovativeness? According to your later piece there is much that we do not know about citation patterns and it is an open question whether "the most highly cited papers [are] the most important ones". How, then, can you be so certain that these authors are all exceptionally good scientists who should undoubtedly have been funded as principal investigators?

      The final paragraph of your response seems to suggest that by pointing out inconsistencies between these two publications I have laid the groundwork for an argument against funding of biomedical research. If this were correct, it is not clear how it would be relevant. (Surely you, of all people, are not suggesting self-censorship on those grounds.) But it is incorrect, and in fact backwards. I have never suggested, any more than you have, that anybody is unworthy of funding. Rather, we are discussing how to identify those most worthy of funding. Your remarks rely on the very point that we are debating: your conviction that your list of highly cited authors reliably identifies extraordinarily good scientists. Unlike my comment, your Conform and Be Funded piece, which claimed to have demonstrated empirically that NIH spends its funds unwisely, might make it difficult to argue for greater NIH funding. By pointing out that this claim was based on an unfounded assumption, I do, if anything, the opposite.

      Thank you again for engaging in this discussion.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2015 Oct 31, John P A Ioannidis commented:

      Dear Joshua,

      Thank you for your additional insights, I suspect this topic can be hotly debated ad infinitum. Let me please try again to convey what I think:

      You claim that my earlier paper was “based on the assumption that the highly cited authors identified were innovators.” I think that either your interpretation of my assumption is misleading and/or we understand the term “innovators” differently. My assumption was that papers with over 1000 citations (i.e. in the top 0.01% of citations) are very important and thus typically their leading authors merit support (unless the papers were wrong). Importance could include disruptive innovation narrowly defined, but also other equally major qualities that merit recognition and funding. I certainly do not believe that NIH should fund only “disruptive innovators” if the definition of “disruptive innovators” excludes influential experimental studies, randomized trials, other forms of evidence-based research, and interdisciplinary research – the types of work that were excluded by the letter-to-the-editor which was coauthored by David Lipman from your team whom I greatly admire but who threw out of the NIH-relevant agenda almost all health research that is important and almost everything that matters for health. I should also confess that I am disappointed by the stance of the authors of that letter. Their lead author had asked for my raw data and I had shared everything with him within 10 minutes of his request. Then, when I saw in his re-analysis that he had excluded two-thirds of the most extremely-cited papers with the excuse that they are not within the remit of NIH (even though they are objectively categorized by Scopus as belonging to life and health sciences), I asked him to share his raw data to understand his subjective arbitrations. I was very curious to see how leading NIH officials determined that the majority of the most influential medical and health-relevant research is not within the remit of NIH. Almost three years later, I still have not had the pleasure to see their raw data. 
Perhaps they did not want to reveal that their re-analysis was embarrassing: the re-analysis was based on the untenable assumption that the National Institutes of Health have almost nothing to do with health and with the majority of the most influential health-related research!

      I think our inability to converge in our views lies in our difficulty to agree on what is “innovative” and “important”. For example, I argue that randomized trials and other experimental studies, meta-analyses, guidelines, implementation research, team science, and interdisciplinary science can be extremely innovative and important to fund by NIH, while probably much/most of the funded R01 type of bench work and so-called “mechanistic” research currently funded by NIH is neither "innovative" nor "important", no matter how you want to define these terms within the confines of common sense.

      The exact extent to which “the most highly cited papers are the most important ones” is indeed an open question, and the 2014 Nature paper tried to contribute towards answering it. I hope that other scientists will revisit this question and improve on what we did. I do not expect a perfect correlation between citations and importance, but this does not mean that we know nothing about citations or that they have no value. When papers are selected from the top 0.01% of citations, it is hard to claim that they would not typically rank even in the top 10-15% of importance and thus be worth funding. As I said already in my previous response, the papers assessed in the 2014 Nature analysis were less cited than the ones analyzed in the 2012 Nature analysis. The median number of citations in the papers analyzed in the 2014 Nature paper was 180. Only 39 of these papers (3%) had over 1000 citations, i.e. in the same range as the extremely cited papers evaluated in the 2012 Nature analysis. All 39 of these papers actually scored well in at least one of the 5 dimensions of importance assessed (excluding publication difficulty), with an average maximum score of 83/100. This means that practically all of these papers were considered to be important; thus it is fair to assume that the work would be worth funding by NIH. Nevertheless, if you still insist that NIH should not fund the people behind papers that are so extremely influential (provided they are not wrong), I am afraid I have run out of arguments and there is no way that I will ever convince you.

      I trust that you, I, and the previous letter writers all want to support science and celebrate the best science possible. It is fine to disagree on how to achieve this noble goal. Let us please collect more empirical evidence and fewer opinions. I would cherish having more robust evidence, even if it were to prove me wrong. Thank you for giving me an opportunity to discuss this interesting topic again.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    5. On 2015 Nov 03, Joshua L Cherry commented:

      John,

      I, too, am wary of an endless discussion. In my view, my straightforward original point remains unscathed by our exchange. Your latest reply does compel me to make or repeat a few points.

      1) It is perplexing that you seemingly deny that your earlier work "claimed that NIH does a poor job of funding innovators, based on the assumption that the highly cited authors that you identified were innovators". Nicholson and Ioannidis say of these authors that

      Such innovative thinkers should not have so much trouble obtaining funding as principal investigators.

      The claim that the papers are highly innovative even appears in a headline above your text. This has nothing to do with how I might interpret "innovative"; you unambiguously asserted that these authors were highly innovative, according to your own meaning, based solely on their authorship of a very highly cited article.

      2) As already noted, the two publications are at odds with each other even if we consider "importance" rather than innovativeness. The discrepancy is reflected in your reply, where you confirm your belief that

      It is an open question whether “the most highly cited papers are the most important ones”

      and yet you write that

      My assumption was that papers with over 1000 citations (i.e. in the top 0.01% of citations) are very important

      I would add that one may rationally believe that there is a correlation between citation and importance while doubting that every primary author of every one of these papers should be funded as a principal investigator. (I add this because of your subtle replacement of "whether" with "the exact extent to which".)

      3) Again, I have not asserted, much less insisted, that anybody should not be funded. I have merely questioned whether a certain criterion is a reliable indicator that a scientist is among those most worthy of funding. Those who assert that it is bear the burden of proof.

      4) Much of your latest reply is an attack on others that has nothing to do with my comments here or with anything that I have written or done. I speak only for myself. I will note as a bystander that the letter to the editor that you criticize clearly does not say what you claim it does. Among other things, it characterizes only 11% of the papers as unrelated to human health. (I imagine that you have noticed that your list of "life sciences" papers includes some that clearly appear to belong to other disciplines.)

      5)

      Let us please collect more empirical evidence and fewer opinions.

      Indeed. Let us also acknowledge that the leap from "author of a very highly cited paper" to "innovative thinker", "excellent scientist", or "person who should certainly be a principal investigator" is based largely on opinion rather than empirical evidence, as your later statements about what we do not know would suggest.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    6. On 2015 Nov 30, John P A Ioannidis commented:

      Dear Joshua,

      thank you again for your comments. I am worried that you continue to cut and paste to distort my sentences.

      1. The headline over my text was written by the Nature editors as their introduction to the paper, so perhaps you should blame them and ask them to replace it with "Here follows a horrible paper by Ioannidis". Yet, I think you would still be unfair to blame them, because their headline says "most innovative and influential", not just "most innovative". The terms "influential", "influence", and "major influence" pervade my paper, but you pick one sentence with "innovative" instead and interpret it entirely out of its context.
      2. The phrases "the most important" and "very important" are not identical. Very important papers may not necessarily be THE most important. But they are very important - and influential. [As an aside, honestly, this repeated cross-examining quotation-comment style makes me feel as if I am answering the Spanish Inquisition. Am I going to be burnt at the stake now (please!) or is there one more round of torture?]
      3. We agree we need evidence, more evidence - evidence is good, on everything, including the current NIH funding system, which has practically no evidence that it is better than other options, but still distributes tens of billions of dollars per year. Wisely, I am sure.
      4. "Your list contains...": this is not my list. This is the Scopus list. Right or wrong, I preferred not to manipulate it. Your colleagues did manipulate it and did not even share the data on how exactly they manipulated it.
      5. You continue to use the term "innovative thinker" out of its context. I again scanned my paper carefully and I can't find the word "excellent". In my mind, a student who is first author of a paper that got over 1000 citations (and the paper is not wrong/refuted) is already worthy of being given a shot as a principal investigator. If you disagree, what can I say, feel free not to fund him/her. And please don't worry, most of these people are currently not funded anyhow; many of them even quit science. Hundreds of principal investigators who publish absolutely nothing, or publish nothing with any substantial impact, get funded again and again. Hurray!

      I am afraid it is unlikely there will be more convergence in our views at this point. A million thanks once again, I have learnt a lot from your comments.

      John


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    7. On 2015 Dec 03, Joshua L Cherry commented:

      None


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.