Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures

Uddin, A. Hasib and Chen, Yen-Lin and Akter, Miss Rokeya and Ku, Chin Soon and Yang, Jing and Por, Lip Yee (2024) Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures. Heliyon, 10 (9). e30625. ISSN 2405-8440, DOI https://doi.org/10.1016/j.heliyon.2024.e30625.

Full text not available from this repository.
Official URL: https://doi.org/10.1016/j.heliyon.2024.e30625

Abstract

Automatic classification of colon and lung cancer images is crucial for early detection and accurate diagnosis. However, there is still room to improve accuracy and thereby diagnostic precision. This study introduces two novel dense architectures (D1 and D2) and demonstrates their effectiveness, resilience, and efficiency in classifying colon and lung cancer from diverse images across multiple datasets. The architectures were tested on several datasets: NCT-CRC-HE-100K (a set of 100,000 non-overlapping image patches from hematoxylin and eosin (H&E)-stained histological images of human colorectal cancer (CRC) and normal tissue), CRC-VAL-HE-7K (a set of 7180 image patches from N = 50 patients with colorectal adenocarcinoma, with no patient overlap with NCT-CRC-HE-100K), LC25000 (Lung and Colon Cancer Histopathological Image dataset), and IQ-OTHNCCD (Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases), showcasing their effectiveness in classifying colon and lung cancers from both histopathological and Computed Tomography (CT) scan images. This underscores the multi-modal image classification capability of the proposed models. Moreover, the study addresses imbalanced datasets, particularly CRC-VAL-HE-7K and IQ-OTHNCCD, with a specific focus on model resilience and robustness.

To assess overall performance, the study conducted experiments in different scenarios. The D1 model achieved an impressive 99.80% accuracy on the NCT-CRC-HE-100K dataset, with a Jaccard Index (J) of 0.8371, a Matthews Correlation Coefficient (MCC) of 0.9073, a Cohen's Kappa (Kp) of 0.9057, and a Critical Success Index (CSI) of 0.8213. Under 10-fold cross-validation on LC25000, the D1 model averaged 99.96% accuracy (average J, MCC, Kp, and CSI of 0.9993, 0.9987, 0.9853, and 0.9990), surpassing recently reported performances. Furthermore, the ensemble of D1 and D2 reached 93% accuracy (J, MCC, Kp, and CSI of 0.7556, 0.8839, 0.8796, and 0.7140) on the IQ-OTHNCCD dataset, exceeding recent benchmarks and aligning with other reported results.

Efficiency evaluations were conducted in various scenarios. For instance, training on only 10% of LC25000 yielded high accuracies of 99.19% (J, MCC, Kp, and CSI of 0.9840, 0.9898, 0.9898, and 0.9837) for D1 and 99.30% (J, MCC, Kp, and CSI of 0.9863, 0.9913, 0.9913, and 0.9861) for D2. On NCT-CRC-HE-100K, D2 achieved an impressive 99.53% accuracy (J, MCC, Kp, and CSI of 0.9906, 0.9946, 0.9946, and 0.9906) when trained on only 30% of the dataset and tested on the remaining 70%. When tested on CRC-VAL-HE-7K, D1 and D2 achieved 95% accuracy (J, MCC, Kp, and CSI of 0.8845, 0.9455, 0.9452, and 0.8745) and 96% accuracy (J, MCC, Kp, and CSI of 0.8926, 0.9504, 0.9503, and 0.8798), respectively, outperforming previously reported results and aligning closely with others. Lastly, training D2 on just 10% of NCT-CRC-HE-100K and testing on CRC-VAL-HE-7K significantly outperformed InceptionV3, Xception, and DenseNet201 benchmarks, achieving an accuracy of 82.98% (J, MCC, Kp, and CSI of 0.7227, 0.8095, 0.8081, and 0.6671).

Finally, explainable AI algorithms such as Grad-CAM, Grad-CAM++, Score-CAM, and Faster Score-CAM, along with their emphasized versions, were used to visualize the features from the last layer of DenseNet201 for histopathological as well as CT-scan image samples. The proposed dense models, with their multi-modality, robustness, and efficiency in cancer image classification, hold the promise of significant advancements in medical diagnostics. They have the potential to revolutionize early cancer detection and improve healthcare accessibility worldwide.
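The abstract reports four agreement metrics alongside accuracy: the Jaccard Index (J), the Matthews Correlation Coefficient (MCC), Cohen's Kappa (Kp), and the Critical Success Index (CSI). As a point of reference only (not the paper's evaluation code), the binary forms of these metrics can all be computed from a confusion matrix. Note that for a single binary class, CSI is identical to the Jaccard index of the positive class; the slightly different J and CSI values in the abstract reflect averaging over multiple classes. A minimal stdlib-Python sketch with hypothetical labels:

```python
def confusion(y_true, y_pred):
    """Binary confusion counts (positive class = 1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def jaccard(tp, fp, fn):
    # Intersection over union of the positive class; for binary labels
    # this equals the Critical Success Index (CSI, or threat score).
    return tp / (tp + fp + fn)

def mcc(tp, tn, fp, fn):
    # Matthews Correlation Coefficient: correlation between predicted
    # and true labels, ranging over [-1, 1].
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / den if den else 0.0

def kappa(tp, tn, fp, fn):
    # Cohen's Kappa: observed agreement corrected for chance agreement.
    n = tp + tn + fp + fn
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    return (po - pe) / (1 - pe)

# Toy labels for illustration only (not from the paper's datasets)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
tp, tn, fp, fn = confusion(y_true, y_pred)
print(jaccard(tp, fp, fn), mcc(tp, tn, fp, fn), kappa(tp, tn, fp, fn))
# → 0.6 0.5 0.5
```

For multi-class results like those reported here, each metric would be computed per class and macro-averaged, which is why J and CSI can diverge.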
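The visualization step in the abstract applies Grad-CAM-family methods to the last layer of DenseNet201. To make the core idea concrete — this is a generic sketch of classic Grad-CAM, not the paper's implementation or its emphasized variants — each channel of the final feature maps is weighted by the spatially averaged gradient of the class score with respect to that channel, the weighted maps are summed, and a ReLU keeps only regions with positive evidence for the class. In plain Python, with nested lists standing in for tensors:

```python
def grad_cam(feature_maps, grads):
    """Classic Grad-CAM heatmap from last-layer activations and gradients.

    feature_maps, grads: C channel maps, each H x W (nested lists),
    where grads[c][h][w] is d(class score)/d(feature_maps[c][h][w]).
    """
    C, H, W = len(feature_maps), len(feature_maps[0]), len(feature_maps[0][0])
    # alpha_c: global-average-pooled gradient, one weight per channel
    weights = [sum(sum(row) for row in grads[c]) / (H * W) for c in range(C)]
    # Weighted sum over channels, then ReLU to keep positive evidence
    cam = [[max(0.0, sum(weights[c] * feature_maps[c][h][w] for c in range(C)))
            for w in range(W)] for h in range(H)]
    # Normalize to [0, 1] so the map can be overlaid on the input image
    peak = max(max(row) for row in cam)
    if peak > 0:
        cam = [[v / peak for v in row] for row in cam]
    return cam

# Tiny 2-channel, 2x2 example: channel 0 gets full gradient weight,
# channel 1 gets none, so the heatmap follows channel 0's activations.
fmaps = [[[1.0, 2.0], [3.0, 4.0]], [[1.0, 1.0], [1.0, 1.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[0.0, 0.0], [0.0, 0.0]]]
print(grad_cam(fmaps, grads))
# → [[0.25, 0.5], [0.75, 1.0]]
```

Grad-CAM++ and Score-CAM, also cited in the abstract, differ only in how the channel weights are derived (higher-order gradient terms and forward-pass channel scoring, respectively); the weighted-sum-plus-ReLU step above is common to the family.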

Item Type: Article
Funders: National Science and Technology Council in Taiwan (NSTC-112-2221-E-027-088-MY2; NSTC-112-2622-8-027-008), Ministry of Education, Taiwan (1122302319), UTAR
Uncontrolled Keywords: Dense neural networks (DNN); Cancer image classification; Multi-modal network; Histopathological imaging; CT-Scan imaging; Lung cancer; Colon cancer
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
R Medicine
Divisions: Faculty of Computer Science & Information Technology > Department of Computer System & Technology
Depositing User: Ms. Juhaida Abd Rahim
Date Deposited: 30 Sep 2024 02:13
Last Modified: 30 Sep 2024 02:13
URI: http://eprints.um.edu.my/id/eprint/45236
