Peer Review at a Crossroads

Posted on: September 29th, 2023 by admin

What are the origins of peer review? In September 2015 I addressed this question in my Friday blog, pointing out that The Royal Society, founded in London in 1660, played a key role. In 1665, the first edition of the Philosophical Transactions of the Royal Society was published, with Henry Oldenburg as its founding editor. Oldenburg, a German theologian and natural philosopher who had settled in England, served as the Society's first secretary, and it was he who introduced the practice of sending a manuscript to knowledgeable experts who could judge the quality of the science before publication.

Later in 2015, I published a book entitled On the Shoulders of Oldenburg, with the subtitle A Biography of the Academic Rating System in South Africa. Beginning in 1984, the Council for Scientific and Industrial Research (CSIR) introduced a system in which researchers applied for a rating that determined the amount of grant support they received. There were three categories: A = researchers performing at the very highest international level; B = researchers of considerable distinction; and C = researchers of proven accomplishment. As the book's title indicates, a rating was determined by the assessment of international peer reviewers.

In 1986, at about the time the CSIR's rating system was gathering steam, the first International Congress on Peer Review and Scientific Publication was held in Chicago. The 10th congress will be held in two years' time and, to drum up support, the organisers have published an editorial in the Journal of the American Medical Association (JAMA). In the editorial, entitled "Peer review and scientific publication at a crossroads," John Ioannidis and his colleagues warn that now is the time to ensure that peer review and the scientific dissemination process become more efficient, fair, open, transparent, reliable, and equitable.

They said the Covid-19 pandemic was a major quake that shook how research was conducted, and that "advances in AI and large language models may be another, potentially even larger, seismic force," with some scientists "believing them to be an existential threat to truth and all of humanity."

Despite these concerns, the authors believe there are opportunities for "novel empirical investigations of processes, biases, policies, and innovations." They are seeking submissions of research papers on various topics, including AI in peer review and editorial decision-making, conflicts of interest, effects of sponsorship, evaluation of censorship, use of bibliometrics to assess quality, and the effects of social media. The meeting will be organised under the auspices of JAMA and the British Medical Journal, with contributions from all scientific disciplines encouraged.

2 Responses

  1. Daniel Kopans says:

    Although I feel that peer review is critical for trying to ensure the accuracy of medical publications, as far as breast cancer screening is concerned, peer review has failed. A number of high-visibility medical journals have taken an undeclared position against breast cancer screening, particularly for women ages 40-49, and have refused to publish papers in support of screening while publishing anti-screening material that has no scientific support. I was able to publish an article exposing some of these publications (Kopans DB. More misinformation on breast cancer screening. Gland Surg. 2017 Feb;6(1):125-129.), but journals do not want to criticize each other and so are reluctant to publish such critiques.

    I have contacted ethics committees, only to find that they refuse to do anything, claiming the journals have to be completely "independent". Somehow we need to return to objective and high-quality peer review, but, unfortunately, I do not see this on the horizon. Readers should not simply rely on abstracts but should read publications very carefully, at least to be certain that the conclusions are justified by the data.

    Another associated problem that has been missed by most is the fact that the “Journal of the National Cancer Institute” is not the NCI’s journal. It was sold to Oxford University Press in 1998. The media do not understand this and, incorrectly, think that JNCI articles have the imprimatur of the NCI.

  2. Kit Vaughan says:

    Many thanks for this feedback, Dan. As a colleague of mine commented in response to this blog, “Peer review is like democracy (and science) – the best system we have until we find a better one.”

    That said, I can certainly understand your frustration that there are journal editors who use their positions of influence to decide what does and does not get published — the very antithesis of peer review!

    Because peer review is, by definition, highly subjective, there is plenty of scope for personal bias, and even jealousy, to creep in. That shows through not only in journal publications but also in grant reviews and academic promotions. This, of course, is one of the downsides of the peer review process.