Earlier this week in the United States, the Food and Drug Administration (FDA) published an action plan for the use of artificial intelligence (AI) algorithms and machine learning (ML) software. The FDA plays a critical role in the regulation of medical devices, and its plan highlights five goals: (1) introduce a framework for AI/ML to be considered Software as a Medical Device (SaMD); (2) develop good ML practices; (3) support a patient-centered approach; (4) develop methodologies for improving AI algorithms; and (5) advance pilot studies of real-world performance. Breast imaging appears poised to embrace these goals.
There has been considerable interest recently in applying deep-learning AI algorithms to full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT) images, but these efforts have highlighted two problems: acquiring sufficiently large sets of training data, and generalising across different equipment, imaging modalities and patient populations. Researchers from DeepHealth, collaborating with clinicians in the USA and China, have just published a paper in Nature Medicine in which their AI algorithm outperformed five radiologists specialised in reading FFDM and DBT images.
Lead author Bill Lotter commented: “By leveraging prior information learned in each successive training stage, this strategy results in AI that detects cancer accurately. Our approach and validation encompass DBT, which is particularly important given the growing use of DBT.” As seen at left (© DeepHealth), the algorithm was able to detect breast cancer up to two years before it was diagnosed during conventional interpretation. In fact, there was a 14% improvement in sensitivity compared to the five breast-imaging specialists.
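The paper's training pipeline is not reproduced here, but the strategy Lotter describes, carrying forward what was learned at each stage, is essentially stage-wise transfer learning: a model is first trained on an abundant, strongly labelled task (e.g. lesion patches) and its weights are then fine-tuned on the scarcer target task (e.g. whole images). A minimal, self-contained sketch of that idea using toy logistic regression (all data, stages and learning rates below are made up for illustration):

```python
import math

def train_stage(w, data, lr=0.1, epochs=200):
    """One training stage: plain gradient descent on logistic loss.
    data is a list of (feature_vector, label) pairs with labels in {0, 1}."""
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1 / (1 + math.exp(-z))          # sigmoid prediction
            g = p - y                            # gradient of logistic loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

# Stage 1: abundant "patch-level" examples (hypothetical toy data)
patches = [([1.0, 0.2], 1), ([0.1, 1.0], 0), ([0.9, 0.3], 1), ([0.2, 0.8], 0)]
# Stage 2: scarcer "image-level" examples for the target task
images = [([0.8, 0.1], 1), ([0.2, 0.9], 0)]

w0 = [0.0, 0.0]
w1 = train_stage(w0, patches)               # learn on the easy, plentiful task
w2 = train_stage(w1, images, lr=0.02)       # fine-tune: start from w1, not w0;
                                            # a smaller lr preserves prior knowledge
```

The key design choice is that stage 2 initialises from `w1` rather than from scratch, so information learned on the patch task is retained when labelled whole-image data is limited.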
One of the problems when using hand-held ultrasound (HHUS) or automated breast ultrasound (ABUS) to screen for cancer is the high number of false-positive findings that result in unnecessary biopsies. The major challenge faced by clinicians is to correctly characterise the morphological features of a lesion and separate benign from malignant. In a paper published this week in Scientific Reports, researchers from Korea have presented an AI diagnostic model that reduced the number of false-positive findings by 40% without compromising sensitivity.
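How a model can cut false positives without losing sensitivity comes down to where its decision threshold sits on the ROC curve: among all thresholds that still catch every cancer the baseline catches, the highest one flags the fewest benign lesions. The sketch below illustrates this with made-up scores and labels (it is not the Korean group's method or data):

```python
def sensitivity(scores, labels, thr):
    """Fraction of true positives (label 1) scoring at or above thr."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= thr)
    return tp / sum(labels)

def false_positives(scores, labels, thr):
    """Benign cases (label 0) that would still be flagged for biopsy."""
    return sum(1 for s, y in zip(scores, labels) if y == 0 and s >= thr)

def pick_threshold(scores, labels, target_sensitivity):
    """Scan thresholds from high to low; the first (highest) one meeting
    the sensitivity target yields the fewest false positives."""
    for thr in sorted(set(scores), reverse=True):
        if sensitivity(scores, labels, thr) >= target_sensitivity:
            return thr
    return min(scores)

# Hypothetical malignancy scores from a validation set
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,    0,   0]

thr = pick_threshold(scores, labels, target_sensitivity=1.0)
# At this operating point every malignancy is still detected, but fewer
# benign lesions exceed the threshold than at a lower baseline cut-off.
```

In practice the threshold would be chosen on an independent validation set and then verified prospectively, which is exactly the kind of follow-up study the authors call for.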
Seen at right (© Nature) is a malignant breast mass that has been correctly classified by the AI model. While the authors acknowledged that they still need to conduct prospective studies with larger patient numbers, they have demonstrated the significant potential of AI algorithms applied to ultrasound data. At CapeRay we believe the ultimate power of AI will be unleashed when well-trained algorithms are applied to co-registered DBT and ABUS images of the breast.