The solution to all these problems is the same as the answer to “How do I organise my journals if I don’t use cornflakes boxes?” Use the internet. We can change papers into mini-websites (sometimes called “notebooks”) that openly report the results of a given study. Not only does this give everyone a view of the full process from data to analysis to write-up – the dataset would be appended to the website along with all the statistical code used to analyse it, and anyone could reproduce the full analysis and check they get the same numbers – but any corrections could be made swiftly and efficiently, with the date and time of all updates publicly logged.
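The reproducibility step this passage describes — a reader takes the appended dataset and analysis code, re-runs it, and checks that they get the same numbers — can be as small as a short script. A minimal sketch, where the dataset, the effect-size function, and all numbers are invented purely for illustration:

```python
# Hypothetical "reproducible paper": the raw data ships alongside the
# exact code that produced the reported statistic, so anyone can re-run it.
from statistics import mean, stdev

# The (made-up) dataset that would be appended to the paper's mini-website.
treatment = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]
control = [4.2, 4.5, 4.0, 4.8, 4.3, 4.1]

def effect_size(a, b):
    """Cohen's d using a pooled standard deviation (equal group sizes)."""
    pooled_sd = ((stdev(a) ** 2 + stdev(b) ** 2) / 2) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# A reader reproduces the analysis and compares against the reported value.
d = effect_size(treatment, control)
print(f"Cohen's d = {d:.2f}")  # prints: Cohen's d = 2.57
```

The point is not this particular statistic but the workflow: because data and code travel together, "check they get the same numbers" becomes a one-command operation rather than an email to the authors.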
This seems infeasible for a few reasons: a lack of tools, a lack of incentives for researchers, and inconsistent output quality for readers. The author already mentions the lack of tools and skills needed to make this vision possible. This is perhaps the most easily solvable problem: over time, new tools could emerge that demand fewer skills, and scientists could teach themselves new ones. Scientists are already a highly skilled group, and it is plausible that the next generation could become familiar with publishing these kinds of notebook reports.

The bigger problems are the lack of incentives for researchers and the variable output quality this would generate for readers. Academia moves slowly, and prestige is still granted through journal articles and conference papers; any individual researcher who moves away from these will struggle to sustain a career. At the same time, since writing journal articles and conference papers is already so time-consuming, researchers rarely have time to build an additional interactive website to document their findings, let alone to set up the new infrastructure it would require.

Even if we could solve both of those problems, it is still not clear that this would actually be an improvement for readers. Yes, the interactive articles from the New York Times are indeed more pleasant and more informative than the average journal article. But take an average Jupyter notebook: it meanders between code and figures, the computations are often not in the order that tells the clearest story, and the figures are often non-interactive and thus really no better than those in journal articles. What do we really gain here? A strong standard for such computational notebooks could resolve this, but it might constrain researchers just as much as journal articles do. Perhaps a peer-review or mentoring process could enforce readability, although it would need some external funding to really happen.