As we have previously noted, ABUS – automated breast ultrasound – is an acronym to remember when considering imaging modalities for the early detection of breast cancer. In fact, it is one of the two modalities in our patented Aceso system that integrates ABUS and full-field digital mammography (FFDM) in a single platform (seen at right). In the recent past, we have reviewed the potential of AI – artificial intelligence – to assist radiologists when reading FFDM images. Perhaps it was inevitable, then, that AI algorithms would be used to help clinicians classify lesions seen on ABUS exams.
Engineers and radiologists from Canada and South Korea have just published their findings in Ultrasound in Medicine and Biology. They trained a convolutional neural network (CNN) to differentiate between malignant and benign lesions seen on ABUS images and demonstrated that their algorithm was able to improve the diagnostic performance of their clinical readers. This led them to observe, “Our results suggest that the proposed CNN has advantages as a second reviewer to support diagnostic decisions of human reviewers and provide opportunities to compare decisions.”
Because ABUS acquires multiple 2D slices, it is possible to reconstruct 3D images of the underlying breast tissue. While this provides valuable additional insight, the sheer volume of extra data also presents a significant challenge for the radiologist: the time required to make a diagnosis. And this is where AI can play a vital role. The researchers validated their CNN on a dataset of 263 patients with 316 breast lesions – 135 malignant and 181 benign – with an average lesion size of 13 mm.
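To illustrate the idea of reconstruction from slices, here is a minimal sketch in Python with NumPy. The array shapes and data are made up for the example and are not taken from the study; the point is simply that stacking the acquired transverse frames yields a 3D volume, which can then be re-sliced along another axis to produce a coronal view.

```python
import numpy as np

# Hypothetical ABUS sweep: a sequence of 2D transverse frames
# (shapes invented for illustration: height x width per frame).
rng = np.random.default_rng(0)
slices = [rng.random((200, 300)) for _ in range(150)]  # 150 transverse frames

# Stacking the frames along a new axis reconstructs a 3D volume.
volume = np.stack(slices, axis=0)  # shape: (150, 200, 300)

# A coronal plane is recovered by re-slicing the volume along a
# different axis (here, at a fixed depth index).
coronal = volume[:, 100, :]  # shape: (150, 300)

print(volume.shape, coronal.shape)  # → (150, 200, 300) (150, 300)
```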
The ABUS system used was the Siemens Acuson S2000, with patients scanned in the supine position (seen above left) and the ultrasound slices acquired in the transverse plane. Illustrated on the right is a benign lesion in the transverse plane (top) and the reconstructed coronal plane (© World Federation for Ultrasound in Medicine & Biology). The AI algorithm yielded a sensitivity of 88.6% and specificity of 87.6%, with an area under the curve (AUC) – a measure of diagnostic success – equal to 0.95.
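For readers unfamiliar with these metrics, the sketch below shows how sensitivity, specificity and AUC are computed from a classifier's outputs. The labels and scores are invented for illustration and bear no relation to the study's data; the AUC here is computed directly as the probability that a malignant case is scored above a benign one (the Mann–Whitney formulation).

```python
# Illustrative toy data: 1 = malignant, 0 = benign, with made-up CNN scores.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.75, 0.3, 0.6, 0.2, 0.1, 0.05, 0.4]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5

# Confusion-matrix counts at that threshold.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))

sensitivity = tp / (tp + fn)  # fraction of malignant lesions detected
specificity = tn / (tn + fp)  # fraction of benign lesions correctly cleared

# AUC: probability a malignant score exceeds a benign score (ties count 0.5).
pos = [s for t, s in zip(y_true, scores) if t == 1]
neg = [s for t, s in zip(y_true, scores) if t == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

print(sensitivity, specificity, auc)  # → 0.75 0.8 0.9
```

Unlike sensitivity and specificity, which depend on the chosen decision threshold, the AUC summarises performance across all thresholds, which is why it is often quoted as the single measure of diagnostic success.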
Five reviewers – a breast radiologist, three radiology students and a physician – interpreted the ABUS images before and after seeing the AI findings, and all readers except the experienced radiologist had a statistically significant improvement in AUC. The authors therefore concluded, “In future, it would be worthwhile to evaluate our present work with dedicated detection algorithms for breast lesion detection and classification as an end-to-end computer aided detection solution.”