Twelve years ago, researchers at IBM established the DREAM (Dialogue for Reverse Engineering Assessment and Methods) Challenges, an open-science, crowdsourcing effort to tackle difficult problems in medicine. These have ranged from predicting drug behaviour to the treatment of acute myeloid leukemia, with participants leveraging the wisdom of the crowd to discover novel, superior computational models and to make them widely available. Last year saw the completion of the Digital Mammography DREAM Challenge, which asked: Can machine learning help improve accuracy in breast cancer screening?
The team that placed second in the Challenge was from Hungary, and the scientists have just published “Detecting and classifying lesions in mammograms with Deep Learning” in Scientific Reports. First author Dezső Ribli, a doctoral candidate in physics at Eötvös Loránd University in Budapest, said: “We think that a deep-learning tool will be very useful for doctors. It could help improve cancer detections, and it could also alleviate the pressure on radiologists.”
Computer-aided diagnosis (CAD) has been around since the dawn of digital mammography two decades ago, but adoption has been limited, with considerable room for improvement. During the past six years, convolutional neural networks (CNNs) have enjoyed tremendous success in complex image-recognition tasks, approaching human performance and finding application on our smartphones, for example. As Ribli told AuntMinnie.com: “Improving CAD in mammography with deep learning is especially interesting because this is one of the most challenging image analysis tasks in medicine.”
Classification performance is measured as the area under the receiver operating characteristic (ROC) curve, or AUC, obtained when sensitivity is plotted against the false positive rate. In the DREAM Challenge, Ribli’s team achieved an AUC of 0.85, but their latest results have established the state of the art in classification performance, with an AUC of 0.95 on the publicly available INbreast database (see below right, © Scientific Reports). Their CNN algorithm not only identifies the precise location of lesions, but also classifies them as benign or malignant.
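To make the metric above concrete, here is a minimal sketch of how an AUC can be computed from a classifier’s scores, using the rank-sum (Mann–Whitney) formulation: the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The function name and the toy scores are illustrative only; they are not taken from the paper.

```python
def roc_auc(labels, scores):
    """Compute the area under the ROC curve via the rank-sum formulation.

    labels: iterable of 0/1 ground-truth labels (1 = positive, e.g. malignant)
    scores: iterable of classifier scores (higher = more likely positive)
    """
    pairs = sorted(zip(scores, labels))          # sort cases by score
    n_pos = sum(labels)
    n_neg = len(pairs) - n_pos

    # Sum the (tie-averaged, 1-based) ranks of all positive cases.
    rank_sum = 0.0
    i = 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1                               # group tied scores
        avg_rank = (i + j + 1) / 2.0             # average of ranks i+1 .. j
        rank_sum += avg_rank * sum(lab for _, lab in pairs[i:j])
        i = j

    # Mann-Whitney U statistic, normalised to a probability in [0, 1].
    return (rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)


# Toy example: two benign (0) and two malignant (1) cases.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 corresponds to random guessing and 1.0 to a perfect classifier, which is why the jump from 0.85 to 0.95 reported above is a substantial gain.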
In the spirit of open science, the Hungarians have made their source code and trained model freely available online, including a plugin for the OsiriX diagnostic viewing station. As described in our recently published brochure, CapeRay’s AcesoFusion product – which enables the co-registration of X-ray and ultrasound images – has also been implemented as a plugin for OsiriX, giving us an opportunity to build on Ribli’s optimism: “Deep learning is not just a promise for medical image analysis, it is already the best way to do it.”