Learning from Past Mistakes

Posted on: March 4th, 2022 by admin

Almost 25 years ago, in 1998, the FDA approved computer-aided detection (CAD) tools as an adjunct to mammography. This was before the FDA had approved full-field digital mammography (FFDM), which meant that plain-film (i.e. analogue) mammograms had to be scanned and converted into digital format before the CAD algorithm could be applied. The technology was rapidly adopted by breast screening clinics in the USA, prompting Hologic, an emerging manufacturer of FFDM systems, to acquire R2 Technology for $220 million in 2006. However, just a year later, a seminal article reported that CAD failed to improve mammography performance.

In the past few years, a new type of CAD based on artificial intelligence (AI) and deep learning has emerged, and radiologists are again jumping on the bandwagon. Every few months another company secures FDA approval for its AI algorithm to detect breast cancer on digital mammograms. In a recent opinion piece published in JAMA Health Forum, Joann Elmore from Los Angeles and Christoph Lee from Seattle suggested the industry should learn from its past mistakes and prevent history from repeating itself.

The authors identified four key lessons. First, they pointed out that when a radiologist studies a mammogram to make a diagnosis, there is a complex interaction with the AI algorithm, and they advocated for a better understanding of different user interfaces. Second, they argued that reimbursement for using an AI algorithm should be based on better patient outcomes, not simply on improved performance in an artificial setting. This would require large, real-world clinical trials with longitudinal data collection.

Third, Elmore and Lee suggested that the FDA needs to revise its clearance process, recognising that AI algorithms are not static and that their performance can change over time. In fact, the FDA is already addressing this issue and has developed an action plan for software as a medical device (SaMD). Fourth is the age-old issue, in America at least, of litigation: should the radiologist be liable for damages if the AI algorithm misses a cancer?

The authors concluded, “We need to learn from our past embrace of emerging computer support tools and make conscientious changes. Inaction now risks repeating past mistakes.” In a comment about the article, Daniel Kopans stated that he had a “healthy” skepticism about AI, particularly since it is a “black box” and is unable to explain the rationale for a particular diagnostic decision. He argued we need to understand why decisions are made and not simply rely on “Trust me”!
