In August last year, I published a blog post entitled “Machines that mimic human creativity” and showed how an AI algorithm was able to generate two remarkable paragraphs. These described how engineers might go about combining X-rays and ultrasound in a single device to diagnose breast cancer early. Three months later, in November 2022, OpenAI released a new version of its natural language processing tool called ChatGPT, where GPT stands for “generative pre-trained transformer.” The huge potential of ChatGPT has been recognised by Microsoft, which has already invested $3 billion and is now poised to invest a further $10 billion in OpenAI.
Earlier this week, four editors working for the Journal of the American Medical Association (JAMA) published an editorial entitled, “Nonhuman ‘authors’ and implications for the integrity of scientific publication and medical knowledge.” They pointed out that the release of ChatGPT “has prompted immediate excitement about its many potential uses but also trepidation about potential misuse” such as cheating on homework assignments and writing student essays. In fact, the tool has already been employed to take, and pass, medical licensing examinations.
Last month, Nature raised concerns about articles published in the health field in which ChatGPT was listed as a co-author, complete with an institutional affiliation and even an e-mail address! Although Nature has acknowledged this was “an error that will soon be corrected,” the articles and their nonhuman “authors” have already been indexed in Google Scholar and PubMed. The publisher has now instituted a policy that prohibits an algorithm from being listed as an author because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”
In response to these concerns, JAMA has also revised its instructions for authors to state that “nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.” The guidelines do allow the use of these tools for the creation of content or assistance with writing and editing, but the onus is on authors to disclose this in the Acknowledgement section of the manuscript.
While the JAMA editors are clear that tools such as ChatGPT hold great promise, they recognise there are nevertheless risks for all who are involved in the scientific enterprise. They are particularly concerned about the current era of pervasive misinformation and mistrust and called for the responsible use of AI language models to “promote and protect the credibility and integrity of medical research and trust in medical knowledge.” Now that is something we can all strive for.