Artificial intelligence (AI) has permeated modern society in many ways, and now a low-cost AI system could potentially screen for cervical cancer better than humans.
 
After a decade of work, scientists from Lehigh University have created a cervical cancer screening technique that has the potential to outperform humans at a significantly lower cost.
 
The investigators believe the technique could be used in developing countries, and they are currently seeking funding to conduct clinical trials of the data-driven method.
 
The screening system uses image-based classifiers built from a large set of Cervigram images.
 
“Cervigrams have great potential as a screening tool in resource-poor regions where clinical tests such as Pap and HPV are too expensive to be made widely available,” said Sharon Xiaolei Huang, author of the paper published in Medical Image Analysis. “However, there is concern about Cervigrams’ overall effectiveness due to reports of poor correlation between visual lesion recognition and high-grade disease, as well as disagreement among experts when grading visual findings.”
 
To identify characteristics that are most beneficial in screening for cancer, the investigators created hand-crafted pyramid features and examined the performance of convolutional neural networks for cervical disease classification.
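Pyramid features of this kind work by histogramming a low-level image property (such as local binary patterns, color, or gradient orientation) over successively finer grid subdivisions of the image and concatenating the per-cell histograms, so the descriptor captures both coarse and fine spatial layout. A minimal sketch of the idea, using a plain intensity histogram as the low-level property; the function name and parameters here are hypothetical, not from the paper:

```python
import numpy as np

def pyramid_histogram(feature_map, levels=3, bins=8, value_range=(0.0, 1.0)):
    """Concatenate per-cell histograms over a spatial pyramid.

    Level l splits the map into a 2**l x 2**l grid; each cell's
    histogram is L1-normalized, and all histograms are concatenated.
    (Illustrative sketch only; names and parameters are hypothetical.)
    """
    h, w = feature_map.shape
    descriptor = []
    for level in range(levels):
        cells = 2 ** level
        ys = np.linspace(0, h, cells + 1, dtype=int)
        xs = np.linspace(0, w, cells + 1, dtype=int)
        for i in range(cells):
            for j in range(cells):
                cell = feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                hist, _ = np.histogram(cell, bins=bins, range=value_range)
                total = hist.sum()
                descriptor.append(hist / total if total else hist.astype(float))
    return np.concatenate(descriptor)

# A 3-level pyramid yields (1 + 4 + 16) * 8 = 168 values for bins=8.
img = np.random.default_rng(0).random((64, 64))
feat = pyramid_histogram(img)
print(feat.shape)  # (168,)
```

Swapping the intensity map for a local-binary-pattern map, a color channel, or a gradient-orientation map gives pyramid descriptors in the same spirit as the PLBP, PLAB, and PHOG components named in the study.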
 
The investigators used data from the US National Cancer Institute covering 1112 patient visits to build the screening tool. Of these, 345 visits involved lesions positive for moderate to severe dysplasia and 767 involved negative lesions.
 
“The program we’ve created automatically segments tissue regions seen in photos of the cervix, correlating visual features from the images to the development of precancerous lesions,” Huang said. “In practice, this could mean that medical staff analyzing a new patient’s Cervigram could retrieve data about similar cases––not only in terms of optics, but also pathology since the dataset contains information about the outcomes of women at various stages of pathology.”
 
Huang said the PLBP-PLAB-PHOG feature descriptor outperformed both the Pap test and the HPV test when achieving a specificity of 90%.
 
“When not constrained by the 90% specificity requirement, our image-based classifier can achieve even better overall accuracy,” she said. “For example, our fine-tuned CNN features with Softmax classifier can achieve an accuracy of 78.41% with 80.87% sensitivity and 75.94% specificity at the default probability threshold of 0.5. Consequently, on this dataset, our lower-cost image-based classifiers can perform comparably or better than human interpretation based on widely-used Pap and HPV tests…”
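Operating points like these come from choosing a probability threshold on the classifier's outputs: raising the threshold trades sensitivity for specificity. The metrics at a given threshold can be computed as in this generic sketch (the scores and labels below are made up for illustration, not the study's data):

```python
import numpy as np

def metrics_at_threshold(probs, labels, threshold=0.5):
    """Sensitivity, specificity, and accuracy for a binary classifier.

    probs: predicted probability of disease; labels: 1 = positive case.
    (Illustrative sketch; the inputs below are invented, not study data.)
    """
    preds = probs >= threshold
    labels = labels.astype(bool)
    tp = np.sum(preds & labels)    # true positives
    tn = np.sum(~preds & ~labels)  # true negatives
    fn = np.sum(~preds & labels)   # missed disease
    fp = np.sum(preds & ~labels)   # false alarms
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy

probs = np.array([0.9, 0.8, 0.4, 0.7, 0.2, 0.6, 0.3, 0.1])
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
sens, spec, acc = metrics_at_threshold(probs, labels)
print(sens, spec, acc)  # 0.75 0.75 0.75
```

For a screening tool, the threshold would typically be tuned on held-out data to hit a target specificity (such as the 90% figure above) while keeping sensitivity as high as possible.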
 
The classifiers achieve higher sensitivity, particularly in detecting moderate and severe dysplasia.