
In some countries, hand-held ultrasound (HHUS) examinations are widely used to screen healthy women for breast cancer because the modality is convenient, inexpensive and does not expose women to ionising radiation. Given the growth in the number of ultrasound examinations and the increased workload on sonographers, artificial intelligence (AI) offers a way to ease the burden. Heqing Zhang and colleagues from Sichuan University in Chengdu, China have recently published a paper in the Journal of Digital Imaging entitled “Diagnostic efficiency of the breast ultrasound computer-aided prediction model based on convolutional neural network in breast cancer.”
First, the authors carried out a retrospective analysis in which they used 5,000 breast ultrasound images (2,500 benign and 2,500 malignant) to train their convolutional neural network (CNN). The images were labelled, sensitive information was removed, and a region of interest was marked manually (see below right). Four different prediction models – all well established in the AI field – were constructed using the CNN. The next step was to test these models on a different set of 1,007 images (788 benign and 219 malignant). Finally, the best-performing model was compared with sonographers on a separate set of 683 images (493 benign, 190 malignant).
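By way of illustration, here is a minimal sketch in Python/Keras of how one such prediction model might be built, assuming an ImageNet-pretrained InceptionV3 backbone fine-tuned for binary benign-vs-malignant classification of the ROI crops. The function name, input size and hyperparameters are illustrative assumptions, not details taken from the paper:

    # Hypothetical sketch of one CNN prediction model: an ImageNet-pretrained
    # InceptionV3 backbone fine-tuned for benign-vs-malignant classification.
    # The authors' exact architecture and training settings may differ.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import InceptionV3

    def build_model(input_shape=(299, 299, 3)):
        # Pretrained backbone; original classifier head removed.
        backbone = InceptionV3(weights="imagenet", include_top=False,
                               input_shape=input_shape)
        backbone.trainable = True  # fine-tune all layers on ultrasound ROIs

        model = models.Sequential([
            backbone,
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.5),
            layers.Dense(1, activation="sigmoid"),  # output = P(malignant)
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(name="auc")])
        return model

    # model = build_model()
    # model.fit(train_images, train_labels,
    #           validation_data=(val_images, val_labels))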
The receiver operating characteristic (ROC) curves, plotted as sensitivity against 1 – specificity, were used to determine the corresponding area under the curve (AUC). As seen in the diagram at left (© Society for Imaging Informatics in Medicine), the InceptionV3 model performed best with an AUC of 0.905, compared with values of 0.866, 0.851 and 0.847 for the other three models. The differences between InceptionV3 and the other three models were statistically significant.
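For readers unfamiliar with the metric, the AUC can be computed from a model's predicted malignancy probabilities on the test set. A brief sketch using scikit-learn, with placeholder data rather than the study's actual predictions:

    # Illustrative AUC computation from predicted malignancy probabilities.
    # y_true: 1 = malignant, 0 = benign; y_score: model output P(malignant).
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true = np.array([0, 0, 1, 1, 0, 1])               # placeholder labels
    y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.6])  # placeholder scores

    fpr, tpr, thresholds = roc_curve(y_true, y_score)   # fpr = 1 - specificity
    auc = roc_auc_score(y_true, y_score)
    print(f"AUC = {auc:.3f}")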
The InceptionV3 model was therefore used to compare the performance of the CNN with that of the sonographers. In the diagnosis of malignant breast lesions in the comparison group, the CNN achieved an AUC of 0.913, significantly higher than the value of 0.846 obtained by the sonographers. The sensitivity and specificity of the CNN were 85.8% and 81.5% respectively, compared with 93.2% and 76.1% for the sonographers.
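Sensitivity and specificity follow directly from the confusion matrix once the model's probabilities are thresholded into hard decisions. A small self-contained sketch, again with placeholder data and an assumed threshold of 0.5:

    # Sensitivity and specificity at a fixed decision threshold (illustrative
    # placeholder data; not the study's actual predictions).
    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([0, 0, 1, 1, 0, 1])               # 1 = malignant
    y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.6])  # P(malignant)
    y_pred = (y_score >= 0.5).astype(int)               # apply threshold

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)  # malignant lesions correctly flagged
    specificity = tn / (tn + fp)  # benign lesions correctly cleared
    print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")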
Although the findings of this study revealed that the prediction accuracy of the CNN model was higher than that of the sonographers, the authors acknowledged that its lower sensitivity (85.8% vs 93.2%) meant the model produced more missed diagnoses (false negatives). The success of any AI algorithm depends on its training data and, since HHUS image quality depends on operator skill, future algorithms built on the repeatability of automated breast ultrasound (ABUS) are likely to be a valuable addition to screening programmes.