
It was 30 years ago, when I was a professor of biomedical engineering at the University of Virginia in Charlottesville, that I drove up to Washington, DC. I was there at the invitation of the National Science Foundation (NSF), having been assigned 20 grant proposals to review. It had been a busy time, and I arrived at my hotel – which, incidentally, was across the road from the infamous Watergate Hotel (seen right) – having written up reviews for only half of the applications. I subsequently worked into the wee hours of the next morning to complete the rest.
Interestingly, I had just published an original paper, together with two postgraduate students, on applying artificial neural networks to model the human locomotor system. Little did I realise that three decades later the same underlying technology would be used to develop a product known as ChatGPT that could generate critiques of grant proposals. As Jocelyn Kaiser reported in today’s edition of Science, ChatGPT could be a major time saver: “Drafting a review might entail just posting parts of a proposal, such as the abstract, aims, and research strategy, into the AI and asking it to evaluate the information.”
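Out of curiosity, I considered what such a shortcut would actually look like in practice. The sketch below is a minimal illustration in Python, assuming OpenAI’s publicly documented chat-completions client; the model name, prompt wording and proposal text are all placeholders of my own, not anyone’s real review pipeline. It shows how little effort is involved – and also why the confidentiality worry discussed below is well founded, since everything pasted into the prompt is transmitted to a third-party server.

# A minimal sketch of the workflow Kaiser describes, using OpenAI's
# Python client (pip install openai). The model name, prompt wording and
# proposal excerpt are illustrative placeholders; OPENAI_API_KEY must be
# set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

proposal_excerpt = """[Paste the abstract, aims and research strategy
here - the very act that sends confidential material off-site.]"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any chat model would do
    messages=[
        {"role": "system",
         "content": "You are a scientific grant reviewer. Critique the "
                    "proposal excerpt below for significance, innovation "
                    "and approach."},
        {"role": "user", "content": proposal_excerpt},
    ],
)

print(response.choices[0].message.content)  # a draft critique, in seconds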
Greg Siegle (seen left), a neuroscientist at the University of Pittsburgh, was alarmed when he overheard another scientist at a conference describe how ChatGPT had become an indispensable tool for drafting the grant reviews he submitted to the National Institutes of Health (NIH). Siegle wrote to the NIH authorities, expressing his deep concerns, and on 23 June 2023 the agency banned the use of online generative AI tools like ChatGPT “for analysing and formulating peer-review critiques.”
Topping the list of concerns is confidentiality – once sections of a proposal are fed into an online AI tool, that information leaves the confidential review process, and the author and the agency lose all control over where it ends up. “The originality of thought that we value is lost and homogenised with this process and may even constitute plagiarism,” wrote concerned NIH officials.
On 7 July 2023, the Australian Research Council (ARC) banned grant reviewers from using generative AI tools after an anonymous Twitter account called @ARC_Tracker reported that “scientists had received reviews that appeared to be written by ChatGPT.” Not all scientists believe that AI-assisted reviews set a dangerous precedent, however. Jake Michaelson of the University of Iowa (seen right) sees AI becoming the first line of the peer-review process, commenting, “I would rather have my own proposals reviewed by ChatGPT than a lazy human reviewer.”
Hi Kit,
I enjoyed your reminiscing about your neural networks research at UVa and the (in)famous Watergate hotel! That researcher clearly violated the confidentiality agreement (among other failings, such as the fact that ChatGPT is often erroneous). It is a great resource for researchers who understand its limitations. Also, closer to your own work, my mammography center is now using AI to review the scans – which I am sure you are working on at CapeRay.
Hi Diane,
It’s good to hear from you! Yes, this was a good opportunity for me to reminisce about the early 90s when we were at UVA together.
I agree that the scientist using ChatGPT to draft NIH reviews broke confidentiality rules and should probably have been sanctioned in some way.
And, yes, we are certainly working on AI as a tool to detect breast cancer. It will undoubtedly assist – but not replace – radiologists. The next decade will be an exciting one!