1 Matching Annotation
  1. Jul 2018
    1. On 2014 Jan 04, Dorothy V M Bishop commented:

I was pleased to see that Professor Farthing took the opportunity to tackle the subject of research misconduct in his lecture. He cogently describes the nature of the problem and makes suggestions for dealing with it. I thought his analysis was generally on target, but I was concerned about his second suggested solution, enhanced monitoring and audit, and about his failure to consider an additional approach: changing the incentive structure for researchers. The following points are taken from a blog post I wrote on these topics (http://deevybee.blogspot.co.uk/2013/06/research-fraud-more-scrutiny-by.html).

      I agree we need to think about how to fix science, and that many of our current practices lead to non-replicable findings. I just don't think more scrutiny by administrators is the solution.

      So what would I do? The answers fall into three main categories: incentives, publication practices, and research methods.

Incentives: Currently, research stardom, as assessed by REF criteria, is all-important. Farthing notes that RAE/REF criteria have been devised to stress quality rather than quantity of research, which is a good thing, but too much emphasis still falls on the prestige of journals (see http://deevybee.blogspot.co.uk/2013/01/journal-impact-factors-and-ref-2014.html).

Instead of valuing papers in top journals, we should be valuing research replicability. This would entail a massive change in our culture, but a start has already been made in my discipline of psychology; see http://www.nature.com/news/psychologists-strike-a-blow-for-reproducibility-1.14232.

Publication practices: The top journals prioritize exciting results over methodological rigour. There is therefore a strong temptation to do post hoc analyses of data until an exciting result emerges. I agree with Farthing that pre-registration of research projects is a good way of dealing with this. I'm pleased to say that here too psychology is leading the way in extending research registration beyond the domain of clinical trials: http://blogs.lse.ac.uk/impactofsocialsciences/tag/registered-reports/.

Research methods: We need better training of scientists to make them more aware of the limitations of the methods they use. Too often, statistical training is dry and inaccessible. All scientists should be taught how to generate random datasets: nothing instils a proper understanding of p-values quite like seeing the apparent patterns that will inevitably arise if you look hard enough at some random numbers (a simple simulation of this is sketched below). In addition, not enough researchers receive training in best practices for ensuring the quality of data entry, or in exploratory data analysis to check that the numbers are coherent and meet the assumptions of the analytic approach.
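To make this concrete, here is a minimal sketch of the kind of exercise I have in mind (my own illustration, in Python with numpy and scipy; the sample sizes and number of tests are arbitrary choices, not taken from the lecture). It runs many two-group t-tests on pure noise and counts how often a "significant" result appears by chance:

# Illustrative exercise: simulate many two-group comparisons on pure
# random noise and count how often p < .05 arises by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # fixed seed so the exercise is repeatable
n_tests = 1000                   # number of independent comparisons
n_per_group = 20                 # observations per group (arbitrary)

false_positives = 0
for _ in range(n_tests):
    # Both groups are drawn from the SAME normal distribution,
    # so any "effect" found is spurious by construction.
    group_a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    group_b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} tests gave p < .05 on pure noise")

With no true effect anywhere, roughly 5% of the tests still come out "significant", which is exactly the kind of apparent pattern that tempts researchers into post hoc storytelling.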

Finally, before any new regulation is introduced, there should be a cold-blooded cost-benefit analysis that considers, among other things, the cost of the regulation both in terms of the salaries of the people who implement it and in terms of the time and other costs to those affected by it. My concern is that among the 'other costs' is something rather nebulous that could easily be missed. Quite simply, doing good research takes time and mental space. Most researchers are geeks who like nothing better than staring at data and thinking about complicated problems. Requiring them to spend time satisfying bureaucratic requirements saps the spirit and reduces creativity.

I think we can learn much from the way ethics regulations have panned out. When a new system was first introduced in response to the Alder Hey scandal, I'm sure many thought it was a good idea; it has taken several years for the full impact to be appreciated. The problems are documented in a report by the Academy of Medical Sciences, which noted: "Urgent changes are required to the regulation and governance of health research in the UK because unnecessary delays, bureaucracy and complexity are stifling medical advances, without additional benefits to patient safety."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
