Shear-Wave Elastography and Deep Learning

Posted on: November 18th, 2016 by admin

It is well known that malignant breast tumours are mechanically stiffer than benign tumours and the surrounding healthy tissue. While standard ultrasound can identify anatomical changes caused by breast cancer, the images do not convey functional information about the lesion. With a technique called shear-wave elastography (SWE), an ultrasound system generates shear waves in the tissue and measures how quickly they propagate: the stiffer the tissue, the faster the waves travel, so the measured speed can be converted into a quantitative estimate of elasticity.
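
As a rough illustration of the underlying physics (not drawn from the paper itself), a shear wave travelling at speed c through soft tissue of density ρ implies a shear modulus μ = ρc², and for nearly incompressible tissue a Young's modulus of roughly E ≈ 3ρc². A minimal Python sketch, assuming the standard soft-tissue density of about 1000 kg/m³:

```python
# Minimal sketch (not from the paper): converting a measured shear-wave
# speed into a tissue elasticity estimate, as SWE systems do in principle.
# Assumes incompressible, isotropic, locally homogeneous tissue with a
# density of ~1000 kg/m^3 -- the standard soft-tissue approximation.

TISSUE_DENSITY = 1000.0  # kg/m^3, typical value assumed for soft tissue


def shear_modulus_kpa(shear_wave_speed_m_s: float) -> float:
    """Shear modulus mu = rho * c^2, returned in kilopascals."""
    return TISSUE_DENSITY * shear_wave_speed_m_s ** 2 / 1000.0


def youngs_modulus_kpa(shear_wave_speed_m_s: float) -> float:
    """Young's modulus E ~= 3 * mu for nearly incompressible tissue."""
    return 3.0 * shear_modulus_kpa(shear_wave_speed_m_s)


if __name__ == "__main__":
    for speed in (1.5, 3.0, 6.0):  # m/s, spanning soft to stiff tissue
        print(f"c = {speed:.1f} m/s  ->  E ~ {youngs_modulus_kpa(speed):.0f} kPa")
```

At typical shear-wave speeds of a few metres per second this yields elasticities from a few kilopascals up to a few hundred kilopascals, the range displayed by clinical SWE systems.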

An early pioneer of SWE for imaging the breast is SuperSonic Imagine (SSI), based in Aix-en-Provence, whose Aixplorer system is seen above right. Another company that has been promoting SWE for quantifying tissue stiffness is Toshiba Medical. Computer-aided diagnosis (CAD) for distinguishing benign from malignant tumours can take advantage of SWE, although the traditional approach relies on statistical features such as shape, texture and tissue heterogeneity that must be defined and extracted by experts. This is labour intensive, and the classification performance depends heavily on the choice of features.
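
To make that contrast concrete, here is a minimal numpy sketch of the kind of hand-crafted statistics such a CAD pipeline might compute over the lesion region of an elasticity map. The particular features (mean, maximum, standard deviation and histogram entropy of elasticity) are illustrative assumptions, not the feature set used in the study's comparison method:

```python
import numpy as np


def swe_region_features(elasticity_map: np.ndarray, lesion_mask: np.ndarray) -> dict:
    """Hand-crafted statistics over the lesion region of an SWE elasticity map (kPa).

    elasticity_map : 2-D array of per-pixel elasticity values
    lesion_mask    : boolean array of the same shape marking the lesion ROI
    """
    roi = elasticity_map[lesion_mask]
    counts, _ = np.histogram(roi, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "mean_kpa": float(roi.mean()),             # overall stiffness
        "max_kpa": float(roi.max()),               # peak stiffness
        "std_kpa": float(roi.std()),               # heterogeneity
        "entropy": float(-(p * np.log(p)).sum()),  # texture irregularity
    }


# Toy usage with synthetic data standing in for a real elasticity map
rng = np.random.default_rng(0)
emap = rng.gamma(shape=2.0, scale=20.0, size=(128, 128))  # fake elasticity map, kPa
mask = np.zeros((128, 128), dtype=bool)
mask[40:90, 40:90] = True                                 # fake lesion ROI
print(swe_region_features(emap, mask))
```

Each such feature must be chosen, implemented and validated by hand, which is precisely the burden a deep learning approach is meant to remove.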

Researchers from Shanghai have just published a paper in Ultrasonics in which they applied an algorithm known as deep learning to SWE images for classifying breast tumours. Their experimental evaluation was performed on a set of 227 SWE images: 135 benign and 92 malignant tumours from 121 patients. All examinations were carried out with an Aixplorer system from SSI, with the image below left (© Elsevier) illustrating a malignant lesion: on top, standard B-mode ultrasound; at bottom, SWE. All lesions underwent excisional biopsy, core-needle biopsy or fine-needle aspiration for pathologic diagnosis, which served as the gold standard for comparison.

Zhang and colleagues designed their deep learning algorithm to learn discriminative features automatically from the SWE images and then classify each lesion as either benign or malignant. Their classification performance was impressive – accuracy of 93%, sensitivity of 89%, and specificity of 97% – which was statistically superior to a traditional CAD approach that employed hand-crafted statistical features and principal components analysis, for which the comparable metrics were 88%, 81% and 93% respectively.
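
For readers curious about what such a classifier looks like in code, below is a minimal PyTorch sketch of a convolutional network that maps an image to a benign/malignant decision, together with the sensitivity and specificity calculation. It is a generic illustration with arbitrary input and layer sizes, not the architecture or training procedure described in the paper:

```python
# Minimal sketch of a binary deep-learning classifier for SWE images.
# Generic illustration only -- NOT the network used by Zhang et al.;
# the 64x64 input size and layer widths are arbitrary assumptions.
import torch
import torch.nn as nn


class TinySWENet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # single logit: malignant vs benign

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def sensitivity_specificity(pred: torch.Tensor, target: torch.Tensor):
    """pred and target are 0/1 tensors; 1 = malignant."""
    tp = ((pred == 1) & (target == 1)).sum().item()
    tn = ((pred == 0) & (target == 0)).sum().item()
    fp = ((pred == 1) & (target == 0)).sum().item()
    fn = ((pred == 0) & (target == 1)).sum().item()
    return tp / max(tp + fn, 1), tn / max(tn + fp, 1)


# Toy forward pass on random "images" just to show the plumbing.
model = TinySWENet()
images = torch.rand(8, 3, 64, 64)            # batch of fake RGB SWE patches
labels = torch.randint(0, 2, (8,)).float()   # fake benign/malignant labels
logits = model(images).squeeze(1)
loss = nn.BCEWithLogitsLoss()(logits, labels)
preds = (torch.sigmoid(logits) > 0.5).long()
print(loss.item(), sensitivity_specificity(preds, labels.long()))
```

Sensitivity is the fraction of malignant lesions correctly flagged and specificity the fraction of benign lesions correctly cleared, which is how the figures quoted above should be read.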

The researchers acknowledge that in future their deep learning architecture should be applied to a much larger group of patients with a greater variety of tumour types and histopathologic severity. They believe their approach could be adapted for other ultrasound techniques, including B-mode, Doppler and contrast-enhanced ultrasound. They conclude, “Combining multiple modalities could be valuable for diagnosis of breast tumours. The combination of deep learning with multi-view learning seems to be a promising technique.”
