On 2018 Jan 01, Hilda Bastian commented:
It is great to see randomized trials testing the effects of an infographic. However, I have concerns with the interpretation of the results of this set of 3 trials. The abstract states that these were randomized trials of 171 students, 99 consumers, and 64 doctors. However, those are the numbers of people who completed the knowledge and reading experience questions, not the numbers randomized: 171 students, 212 consumers, and 108 doctors were randomized. The extremely high dropout rate (e.g. 53% for consumers) leaves only the trial in students as a reliable basis for conclusions. And for the students, there was no difference in knowledge or reported reading experience - they did not prefer the infographic.
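For context, here is a minimal sketch of the attrition implied by those counts. The randomized and completer numbers are the ones quoted above; the student figures assume that trial had essentially no dropout.

```python
# Attrition implied by the randomized vs. completer counts quoted above.
# These counts are taken from the comment; the student numbers assume
# essentially no dropout in that trial.
groups = {
    "students": {"randomized": 171, "completed": 171},
    "consumers": {"randomized": 212, "completed": 99},
    "doctors": {"randomized": 108, "completed": 64},
}

for name, n in groups.items():
    dropout = (n["randomized"] - n["completed"]) / n["randomized"]
    print(f"{name}: {dropout:.0%} dropout "
          f"({n['randomized'] - n['completed']} of {n['randomized']})")

# Output: students 0% dropout; consumers 53% dropout (113 of 212);
# doctors 41% dropout (44 of 108).
```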
The authors point out that the high dropout rate may have affected the results for consumers and doctors, especially as they faced a numeracy test after being given the infographic or summary to read. That must have skewed the results. In particular, since the infographic (here) has such different content from the plain language summary (here), this seems inevitably related to the issue of numeracy: the plain language summary is almost number-free, while the infographic is number-heavy (16 additional numerical expressions).
The knowledge test comprised 10 questions, one of which related to the quality of the evidence included in the systematic review. The infographic and plain language summary contained very different information on this. The article's appendix suggests that the expected correct answer was included in the infographic but not in the plain language summary. It would be helpful to know whether this affected the knowledge scores for readers of the plain language summary.
Cohen's d effect sizes are not reported for the 3 trials separately, and given the heterogeneity in those results, it is not accurate to use the combined result to conclude that all 3 participant groups preferred the infographic and the experience of reading it. (In addition, the method used for the meta-analysis of effect sizes across the 3 trials is not reported.)
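For illustration only, the sketch below shows one conventional approach, an inverse-variance random-effects (DerSimonian-Laird) pooling of Cohen's d across three trials. This may or may not be what the authors did, and the per-trial effect sizes and group sizes are entirely hypothetical placeholders, not the study's data.

```python
import math

# Hypothetical per-trial Cohen's d values and group sizes (placeholders only,
# NOT taken from the article, which does not report per-trial effect sizes).
trials = [
    {"d": 0.10, "n1": 85, "n2": 86},   # students (hypothetical)
    {"d": 0.45, "n1": 50, "n2": 49},   # consumers (hypothetical)
    {"d": 0.30, "n1": 32, "n2": 32},   # doctors (hypothetical)
]

def var_d(d, n1, n2):
    # Approximate sampling variance of Cohen's d for two independent groups.
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

ds = [t["d"] for t in trials]
vs = [var_d(t["d"], t["n1"], t["n2"]) for t in trials]
ws = [1 / v for v in vs]

# Fixed-effect (inverse-variance) pooled estimate, used to compute Q.
d_fixed = sum(w * d for w, d in zip(ws, ds)) / sum(ws)

# DerSimonian-Laird estimate of between-trial variance (tau squared).
q = sum(w * (d - d_fixed) ** 2 for w, d in zip(ws, ds))
c = sum(ws) - sum(w**2 for w in ws) / sum(ws)
tau2 = max(0.0, (q - (len(trials) - 1)) / c)

# Random-effects pooled estimate and its standard error.
ws_re = [1 / (v + tau2) for v in vs]
d_random = sum(w * d for w, d in zip(ws_re, ds)) / sum(ws_re)
se_random = math.sqrt(1 / sum(ws_re))

print(f"Q = {q:.2f}, tau^2 = {tau2:.3f}")
print(f"Pooled d (random effects) = {d_random:.2f} +/- {1.96 * se_random:.2f}")
```

The point is that heterogeneity (Q, tau squared) should be reported alongside any pooled effect size; a single combined d can mask the fact that one or more of the individual trials showed no effect.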
The specific summary and infographic, although high quality, also point to some of the underlying challenges in communicating with these media to consumers. For example, the infographic uses a coffin as a pictograph for mortality, which I don't believe is appropriate in patient information. This highlights the risks inherent in using graphic elements where there aren't well-established conventions. Both the infographic and the plain language summary focus on information about the baby's wellbeing and the birth - but not the impact of the intervention on the pregnant woman, or her views of it. Whatever the format, issues remain with the process of determining the content of research summaries for consumers. (I have written more about the evidence on infographics and this study here.)
Disclosure: The Cochrane (text) plain language summaries were an initiative of mine in the early days of the Cochrane Collaboration, when I was a consumer advocate. Although I wrote or edited most of those early Cochrane summaries, I had no involvement with the one studied here.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.