
Dr Richard Smith is a physician who served as editor of the British Medical Journal (BMJ) between 1991 and 2004. We can safely assume that during his tenure at such a prestigious journal he sent many thousands of papers out to be reviewed by experts. Yet, in the past five years he has become one of the most outspoken critics of the process, suggesting that peer review is “a sacred cow that is ready to be slain.”
In 2010 the journal Breast Cancer Research published a special supplement on controversies in breast cancer, to which Smith contributed a short communication. He quoted another journal editor who had said that if peer review were a drug it would never have been allowed onto the market, because there was no convincing evidence of its benefits.
Smith continued, “To my continuing surprise, almost no scientists know anything about the evidence on peer review. It is a process that is central to science – deciding which grant proposals will be funded, which papers will be published, who will be promoted, and who will receive a Nobel prize. We might thus expect that scientists, people who are trained to believe nothing until presented with evidence, would want to know all the evidence available on this important process. Yet not only do scientists know little about the evidence on peer review, but most continue to believe in peer review, thinking it essential for the progress of science.”
Smith gave two important examples in which peer review was ineffective and had led to great harm: a paper in The Lancet that erroneously suggested a causative link between the MMR vaccine and autism; and an article in the New England Journal of Medicine that understated the cardiovascular risks of an arthritis drug, contributing to thousands of patients suffering heart attacks. He identified six problems with the peer review process: (1) it is expensive, costing $3 billion per annum, excluding the time of the reviewers; (2) it is exceedingly slow; (3) it can be a lottery; (4) it does not detect errors; (5) bias is prevalent, especially against truly original work; and (6) the system is open to abuse.
He made an interesting distinction between “pre-publication” peer review – the usual meaning – and “post-publication” review by peers. Smith believes the best approach would be to publish everything and then let the world decide what is important. That is a radical idea, but one unlikely to be adopted anytime soon.
It is, nonetheless, an interesting concept, and one that would itself be well suited to scientific evaluation.