Tuesday, 27 May 2014

How reliable are medical research checks?

[Figure: number of publications increasing over the last 100 years (note the log scale).]
A recent report from the BBC highlights a surprising increase in the number of scientific papers that passed peer review but were subsequently discovered to contain errors.

Peer review is a time-honoured methodology used when a scientist submits a body of work for publication in a scientific journal. Since 1665, when the Royal Society in London published its first journal, peer review has been used to check the accuracy and veracity of scientific work, and as such it is the bedrock of scientific research. It sets the minimum standard of acceptance that other scientists rely on when they design their new experiments.

Many experiments give us interesting observations, but a subsequent experiment might give a different result, throwing reasonable doubt over the first. How do we know which result we can believe? The answer is usually peer review. Once your work has been (anonymously) picked apart by others working in your field of research and found to be sound, we can start to believe the result is good.

However, peer review isn't a perfect system. There are clear weaknesses: those who review a paper must be your peers, so in practice it is often possible to deduce who they are, undermining the anonymity of the process. Once anonymity is lost, the system is vulnerable to accusations of bias and lack of objectivity.

There is a steady (very small) trickle of known failures of peer review resulting in the publication of flawed papers. One outstanding recent case concerned a new, simple method claimed to generate stem cells, though in that instance we are told fraud was the cause rather than simple error or bias.

There are of course further checks on the quality of research, particularly when results are far reaching and would be widely used. The first step is often to attempt to repeat the experiment in other laboratories. If the results cannot be repeated, the doubt is circulated widely in the scientific community and/or formally presented as a 'letter to the editor' of the journal concerned, and ultimately the paper can be withdrawn.

The BBC article tells us that the number of papers withdrawn each year has soared from around 30 to 400 in 2010. We could say that this huge increase is a consequence of the whole scientific community focusing its attention on ensuring poor papers do not get through, though that tends to suggest something is wrong with the system of review rather than that we are getting it right!

Having witnessed the writing of many scientific papers, up to and including the point where an author weighed whether or not to include a piece of data that would improve the paper's chances of acceptance by a 'better journal', I don't think there are major problems. There may be lingering doubt about some data, and the decision to include it can be a very subjective one. Every now and then there will be honest mistakes, though in my experience the doubt was always over some additional point rather than the main body of work. After all, others would be repeating the work, and if it didn't hold up your name would be ruined: no-one would trust your data for a long time!

There is probably a case for introducing open review for some journals. In this system the paper is made available to the whole community to read, and to openly criticize, before publication. It is hard to see this being used for hugely important papers, as entire careers are at stake and competition between researchers is fierce. Many would not want to give competitors access to their data unless it was absolutely necessary, and open review would mean they could read your data much earlier than usual (even a year or so earlier).

The system works: frauds are detected. There are many, many more papers needing review now than there were even 20-30 years ago, never mind 350 years ago when the peer review system was invented. Reviewers have to limit the amount of time they give each paper, and the more senior the reviewer, the greater the problem. Perhaps attention needs to be paid to training reviewers, spreading the load a little, and providing administrative assistance to reviewers?



Contact us at admin@aspergillus.org.uk