By Julie Steenhuysen
CHICAGO (Reuters) – A Google artificial intelligence system proved as good as expert radiologists at determining from screening mammograms which women had breast cancer, and showed promise at reducing errors, according to researchers in the United States and Britain.
The study, published on Wednesday in the journal Nature, suggests that artificial intelligence (AI) could improve the accuracy of screening for breast cancer, which affects one in eight women worldwide.
According to the American Cancer Society, radiologists miss about 20% of breast cancers on mammograms, and half of all women screened over a 10-year period receive a false-positive result.
The results of the study, developed with Alphabet Inc’s DeepMind AI unit, which merged with Google Health in September, represent a major advance in the potential for earlier detection of breast cancer, said Dr. Mozziyar Etemadi of Northwestern Medicine in Chicago, a co-author of the study.
The team, which included researchers from Imperial College London and the British National Health Service, trained the system to identify breast cancer in tens of thousands of mammograms.
They then compared the system’s performance to the actual results of 25,856 mammograms in the UK and 3,097 mammograms in the USA.
The study showed that the AI system identified cancers with a degree of accuracy similar to that of expert radiologists, while reducing the number of false-positive results by 5.7% in the U.S.-based group and by 1.2% in the UK-based group.
It also cut the number of false negatives, in which tests are wrongly classified as normal, by 9.4% in the U.S. group and by 2.7% in the UK group.
These differences reflect the ways in which mammograms are read. In the United States, only one radiologist reads the results and the tests are done every one to two years. In the UK, the tests are done every three years, and each is read by two radiologists. When they disagree, a third is consulted.
In a separate test, the group pitted the AI system against six radiologists and found that it outperformed them in accurately detecting breast cancer.
Connie Lehman, chief of the breast imaging department at Harvard’s Massachusetts General Hospital, said the results are in line with findings from several groups using AI to improve cancer detection in mammograms, including her own work.
The idea of using computers to improve cancer diagnostics is decades old, and computer-aided detection (CAD) systems are common in mammography clinics, yet CAD programs have not improved clinical performance.
Lehman said current CAD programs were trained to identify things human radiologists can see, whereas with AI, computers learn to spot cancers based on the actual outcomes of thousands of mammograms.
This has the potential to “outperform human ability to identify subtle clues that the human eye and brain cannot perceive,” added Lehman.
Although computers have not been “super helpful” so far, “what we’ve shown at least in tens of thousands of mammograms is the tool can actually make a very well-informed decision,” Etemadi said.
The study has some limitations. Most of the tests were done using the same type of imaging equipment, and the U.S. group contained a lot of patients with confirmed breast cancers.
Importantly, the team has not yet shown that the tool improves patient care, said Dr. Lisa Watanabe, chief medical officer of CureMetrix, whose AI mammogram program won U.S. approval last year.
“AI software is only helpful when it moves the radiologist’s dial,” she said.
Etemadi agreed that those studies are needed, as is regulatory approval, a process that could take several years.
(Reporting by Julie Steenhuysen in Chicago; editing by Alexander Smith and Matthew Lewis)