On 2015 May 29, Michal Kicinski commented:
I thank Dr. Hilda Bastian for her interest in our recent study (Kicinski M, 2015). I strongly believe that post-publication comments very often raise important issues and help readers better understand the merits and limitations of a study. However, I was disappointed to see that Dr. Hilda Bastian's comments do not correspond with the content of our study. For this reason, I feel obliged to clarify a number of issues.
The study of Ioannidis JP, 2007 points out an important limitation of many publication bias methods based on the asymmetry of the funnel plot, namely that they do not take between-study heterogeneity into account. This limitation has also been discussed by other researchers (Song F, 2010). However, please note that we did not rely on the asymmetry of the funnel plot in our analysis. Moreover, our model is an extension of the standard random-effects meta-analysis model, which is a valid approach when between-study variability is present. In fact, the study of Ioannidis JP, 2007 is one of the contributions that motivated our approach to modeling publication bias, since our model takes heterogeneity into account.
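To make the baseline concrete for readers: the standard random-effects model that our approach extends explicitly estimates the between-study variance and incorporates it into the pooled estimate. Below is a minimal sketch of the textbook DerSimonian-Laird version of that baseline (function and variable names are illustrative; this is not our selection model, only the random-effects component it builds on):

```python
import numpy as np

def random_effects_pool(effects, ses):
    """Minimal DerSimonian-Laird random-effects pooling sketch.

    effects : per-study effect estimates
    ses     : per-study standard errors
    Returns the pooled effect and the estimated between-study
    variance tau^2, which the model uses in addition to the
    within-study variances ses**2.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    Q = np.sum(w * (effects - fixed) ** 2)    # Cochran's heterogeneity statistic
    k = len(effects)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)        # between-study variance (truncated at 0)
    w_re = 1.0 / (ses**2 + tau2)              # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    return pooled, tau2
```

When the studies are homogeneous, tau^2 is estimated as zero and the method reduces to fixed-effect pooling; when heterogeneity is present, tau^2 > 0 widens the weights accordingly.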
Dr. Hilda Bastian correctly points out that our study is not the first study on publication bias. There are many valuable studies on this topic, and we discussed those most relevant to our research questions in our article. The contribution of our study is that we analyzed a very large number of meta-analyses using a model with strong theoretical foundations; it is the largest study of publication bias in meta-analyses to date. Please note that previous studies, e.g., Ioannidis JP, 2007, which Dr. Hilda Bastian mentioned, considered small-study effects, a phenomenon that may have many different causes, including publication bias (Song F, 2010, Sterne JA, 2011). Another merit of our study is that we estimated the association between the size of publication bias and the publication year of the studies included in the meta-analyses.
I completely agree that the best solution to the problem of publication bias is the complete reporting of study results. In fact, our finding that publication bias is smaller in the meta-analyses of more recent studies supports the effectiveness of the measures used to reduce publication bias in clinical trials. I strongly advocate the introduction of new policies aimed at completely eliminating reporting biases from clinical trials and, as written in our article, the implementation of measures to reduce publication bias in research domains other than clinical trials, such as observational studies and preclinical research.
Although we did not investigate the use of publication bias methods in the meta-analyses from the Cochrane Library, it is clear from previous research that the potential presence of publication bias is often ignored by researchers performing meta-analyses and that methods accounting for publication bias based on statistical significance are hardly ever used (Song F, 2010, Onishi A, 2014). When publication bias is present in a meta-analysis, ignoring the problem leads to biased estimates of the effect size (Normand SL, 1999). Therefore, similar to others (Sterne JA, 2011), we argue that researchers should investigate the presence of publication bias and perform sensitivity analyses taking publication bias into account. One difficulty with the use of publication bias methods is that they require researchers to make certain assumptions about the nature of publication bias. For example, the trim and fill method defines publication bias as the suppression of a certain number of the most extreme negative studies (Duval S, 2000). The use of Egger's test (Egger M, 1997) as a publication bias detection tool requires the assumption that publication bias leads to a negative association between effect size and precision. The performance of a given publication bias method depends on whether or not its assumptions are met. For example, it has been demonstrated that publication bias detection tests based on the funnel plot have very low power when publication bias based on statistical significance is present and the mean effect size equals zero (Kicinski M, 2014). Publication bias based on statistical significance is the best-documented form of publication bias (Song F, 2009, Dwan K, 2013). The results of our study add to this body of evidence. Therefore, we argue that researchers should use publication bias tools designed to handle publication bias based on statistical significance.
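To make the assumption behind Egger's test explicit: the test regresses the standardized effect (effect divided by its standard error) on precision (one over the standard error), and an intercept far from zero is taken as evidence of funnel-plot asymmetry. The following is a minimal illustrative sketch of that regression using ordinary least squares (names are illustrative; published implementations such as those in standard meta-analysis packages should be preferred in practice):

```python
import numpy as np

def egger_regression(effects, ses):
    """Sketch of Egger's regression test for funnel-plot asymmetry.

    Regresses z_i = effect_i / se_i on precision_i = 1 / se_i.
    Returns the intercept and its standard error; an intercept
    far from zero (relative to its SE) suggests small-study effects.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses                 # standardized effects
    precision = 1.0 / ses
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    n, p = X.shape
    resid = z - X @ beta
    sigma2 = resid @ resid / (n - p)  # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0], np.sqrt(cov[0, 0])
```

Note how the sketch encodes the assumption discussed above: the test can only detect publication bias that manifests as an association between effect size and precision, which is exactly why significance-based selection with a zero mean effect can escape it.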
In the tweet with the link to her comment on PubMed, Dr. Hilda Bastian wrote on the 25th of May: ‘27% of cochranecollab reviews over-estimate effects cos of publication bias? Hmm.’ Please note that our study did not investigate the proportion of meta-analyses that overestimate effects. In fact, the objectives of our study were completely different. In the meta-analyses of efficacy, we estimated the ratio of the probability of including statistically significant outcomes favoring treatment to the probability of including other outcomes. In the meta-analyses of safety, we estimated the ratio of the probability of including results showing no evidence of adverse effects to the probability of including results demonstrating the presence of adverse effects.
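To illustrate what this kind of inclusion-probability ratio means, consider a toy simulation of significance-based selection (all parameter values and names here are hypothetical illustrations, not the model or estimates from our article): significant results favoring treatment are included with probability 1, other results with probability 1/rho, so the ratio of the two inclusion probabilities is rho.

```python
import numpy as np

def simulate_publication_bias(n_studies=2000, rho=4.0, seed=42):
    """Toy illustration of significance-based selection.

    Statistically significant results favoring treatment are rho
    times as likely to be included as other results. Returns the
    mean effect among included studies and among all studies, so
    the upward pull of the selection mechanism can be seen.
    """
    rng = np.random.default_rng(seed)
    true_effect = 0.2                              # hypothetical true mean effect
    ses = rng.uniform(0.05, 0.5, n_studies)        # per-study standard errors
    effects = rng.normal(true_effect, ses)         # observed study effects
    significant_positive = effects / ses > 1.96    # z-test at the 5% level
    # Inclusion probabilities: 1 for significant results favoring
    # treatment, 1/rho otherwise -- their ratio is rho.
    p_include = np.where(significant_positive, 1.0, 1.0 / rho)
    included = rng.random(n_studies) < p_include
    return effects[included].mean(), effects.mean()
```

The point of the sketch is that the estimand is a property of the selection process (the ratio rho), not a statement about what fraction of meta-analyses overestimate their effects.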
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.