Peer review report
Reviewers: Adam Marcus, co-founder of Retraction Watch, and Alison Abritis, PhD, researcher at Retraction Watch
General comments
Major Problems: I found serious deficits in this article, and thus I have serious concerns as to its usefulness. I have therefore not proceeded with a line-by-line review, as I consider the overall problems grave enough to require attention and revision before getting to lesser items of clarity.
I would like to point out that the authors show marvelous attention to their work, they have much to contribute to the field of retraction studies, and I honestly look forward to their future work. However, for the field to move ahead with accuracy and validity, we must no longer rely on superficial number crunching alone; we must start including the complexities of publishing in our analyses, as difficult and labor-intensive as that might be.
1) The authors state that they used the same search protocol (and therefore presumably the same dataset) as described in Toma & Padureanu, 2021, and do not indicate any process to compensate for its weaknesses. In the referenced study, the authors (the same as for this article) used a PubMed search with only “Retracted Publication” as the Publication Type. This search method is insufficient on its face, as some retracted articles are not bannered or indexed as retracted in PubMed. The issue is well understood among scholars who search databases for retractions, and by now one would expect such searches to strive to be more comprehensive.
A better method, if one insists on restricting the search to PubMed, would have been to use Publication Type to search for “retracted publication,” then to search for “retraction of publication,” and to compare the output to eliminate duplications. There are even more comprehensive ways to search PubMed, especially since some articles are retitled as “Withdrawn” – Elsevier, for example, uses that term instead of “Retracted” for papers removed within a year of their publication date – but such articles do not come up in searches for either publication type. Even better would have been to use databases with more comprehensive indexing of retractions.
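To illustrate, the two-search-and-deduplicate approach described above could be sketched against NCBI's E-utilities `esearch` endpoint. This is a minimal sketch, not the authors' actual protocol; the endpoint URL, the `[Publication Type]` field tag, and the JSON response shape are as documented by NCBI, while the function names and the 10,000-record cap handling are illustrative assumptions.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# NCBI E-utilities search endpoint (documented public API).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def build_query(pub_type: str) -> str:
    """Field-tagged PubMed query for a given publication type."""
    return f'"{pub_type}"[Publication Type]'


def fetch_pmids(pub_type: str, retmax: int = 10_000) -> set:
    """Fetch PMIDs matching a publication type (network call).

    Note: esearch returns at most 10,000 IDs per request; larger
    result sets would need paging via the retstart parameter.
    """
    params = urlencode({
        "db": "pubmed",
        "term": build_query(pub_type),
        "retmode": "json",
        "retmax": retmax,
    })
    with urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return set(data["esearchresult"]["idlist"])


def merge_pmid_sets(a: set, b: set) -> set:
    """Union of the two searches, eliminating duplicate PMIDs."""
    return a | b


# Usage (requires network access):
#   retracted = fetch_pmids("Retracted Publication")
#   notices = fetch_pmids("Retraction of Publication")
#   combined = merge_pmid_sets(retracted, notices)
```

Even this broader union would still miss the “Withdrawn” articles noted above, which is why the reviewers recommend databases with more comprehensive retraction indexing.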
2) The authors use the time from publication to retraction, based on notice dates, as an indicator of the efficacy of publisher oversight. This approach is seriously problematic. First, it takes no account of when the publisher was first informed that the article was potentially compromised. Publishers who respond rapidly to information affecting years- or decades-old publications will inevitably show worse scores than those who are advised of an article’s faults immediately upon publication but drag their heels for a few months in dealing with the problem.
Second, there is little consistency in dealing with retractions between publishers, within the same publisher, or even within the same journal. Under the same publisher, one journal editor may be highly responsive during their term while the next is not. Problems with articles are quite often first raised with the authors and/or journal editors, and publishers – especially those with hundreds of journals – may not be aware of the ensuing problem for weeks or months, if at all. Larger publishers would therefore be far more likely to show worse scores than publishers with only a few journals to oversee.
Third, the dates on retraction notices do not always reflect when an article was watermarked or otherwise marked as retracted. Elsevier journals often overwrite the HTML page of the original article with the retraction notice, leaving the original article’s publication date unchanged. A separate retraction notice may not be published until days, weeks, or even years after the article has been retracted. Springer and Sage have done this as well, as have other publishers – though not (yet) to the same extent.
Historically, The Journal of Biological Chemistry would publish a retraction notice and link it immediately to the original article, but a check of the article’s PDF would show it had been retracted days to weeks earlier. The journal has recently been acquired by Elsevier, so it is unknown how this practice will play out. Keep in mind that this is not in itself a bad thing, as it gives the reader quicker notice that an article is unsuitable for citation even while the notice itself is still undergoing revisions. It does, however, make tracking the time from publication to retraction especially difficult.
3) As best as can be determined, the authors take the notices at face value, an approach that has repeatedly been shown to be flawed. Many notices are written as a cooperative effort between the authors and the journal, regardless of who initiated the retraction, and under the looming specter of potential litigation.
Trying to establish who initiated a retraction strictly by analyzing the notice language is destined to produce faulty conclusions. Looking just at PubPeer comments, questions about data quality may be raised days, months, or years before a retraction, with indications that the journal or publisher was contacted. Yet the ensuing notice may state that the authors requested the retraction because of concerns about the data or images – where the backstory clearly shows the retraction was prompted by a journal’s investigation of outside complaints. As an example, notices for the recent glut of retractions of papers from paper mills often suggest that the authors requested the retraction. That interpretation would be false: as those familiar with the backstory are aware, the driving force behind many of these retractions was independent investigators contacting the journals and publishers to request retraction of these manuscripts.
Assigning the reason for retraction from the text of the notice alone will absolutely skew results. As already stated, in many cases journal editors and authors work together to produce the language. Thus, the notice may convey an innocuous but indisputable cause (e.g., results not reproducible) because the fundamental reason (e.g., fabricated or falsified data or images) is too difficult to prove to a reasonable degree. Even the word “plagiarism” is considered damaging to authors’ reputations – and notices have been crafted with euphemisms that steer well clear of the “p” word. Furthermore, it is well documented that some retractions required by institutional findings of misconduct have used notice language indicating simple error or other innocuous reasons as the definitive cause.
The authors also discuss the quality of notices increasing or decreasing across publishers – but without knowing the backstory. Having more words in a notice, or giving one or two specific causes, cannot in itself be an indicator of the quality (i.e., accuracy) of the notice.
4) The authors tend to infer that the lack of a retraction in a journal implies a degree of superiority over journals with retractions. Although they qualify it a bit (“Are over 90% of journals without a retracted article perfect? It is a question that is quite difficult to answer at this time, but we believe that the opinion that, in reality, there are many more articles that should be retracted (Oransky et al. 2021) is justified and covered by the actual figures.”), the inference is naive. First, they have not looked at the number of corrections within these journals. Even setting aside that corrections may be disproportionate across journals and require responsive editorial staff, some journals have gone through what can only be called great contortions to issue corrections rather than retractions.
Second, the lack of retractions in a journal says nothing about the quality of the articles therein. Predatory journals generally avoid issuing retractions, even when presented with outright proof of data fabrication or plagiarism. Meanwhile, high-quality journals are likely to have more, and possibly more astute, readers, who may be more adept at spotting errors that require retraction.
Third, smaller publishers/journals may not have the fiscal resources to deal with the issues that come with a retraction. As an example, at least one journal declined to issue a retraction for an article by Joachim Boldt (who has more than 160 retractions for misconduct), even though an institutional investigation had found data fabrication, after his attorneys threatened litigation.
Simply put, the presence or absence of a retraction in a journal is no longer a reasonable basis for speculation about the quality of its manuscripts or the efficiency of its editorial process.
5) I am concerned that the authors appear to have made significant errors in their analysis of publishers. For example, they claim that neither PLOS nor Elsevier retracted papers in 2020 for problematic images. That assertion is demonstrably false.
Decision
Requires revisions: The manuscript contains objective errors or fundamental flaws that must be addressed and/or major revisions are suggested.