Copyright © The Author(s) 2024.
World J Gastroenterol. Oct 21, 2024; 30(39): 4267-4280
Published online Oct 21, 2024. doi: 10.3748/wjg.v30.i39.4267
| Ref. | Data sets | Endoscopic modality | Main study aim | AI algorithms | Best result |
|------|-----------|---------------------|----------------|---------------|-------------|
| Yuan et al[37], 2022 | Image data set: 53933 images from 2621 patients; video data set: 142 videos from 19 patients | WLI, NM-NBI, ME-NBI | Detection of superficial ESCC | DCNN | Image data set: Sensitivity: 92.5%-99.7%; specificity: 78.5%-89.0%; AUC: 0.906-0.989; video data set: Sensitivity: 89.5%-100%; specificity: 73.7%-89.5% |
| Zhao et al[38], 2019 | Image data set: 1383 images from 219 patients | ME-NBI | Classification of IPCLs | FCN | Mean accuracy of senior IPCL: 92.0%; mid-level IPCL: 82.0%; junior IPCL: 73.3% |
| Li et al[39], 2021 | Image data set: 5367 images | NM-NBI | Detection of early ESCC | FCN | AUC: 0.9761; sensitivity: 91.0%; specificity: 96.7%; accuracy: 94.3% |
| Fukuda et al[41], 2020 | Image data set: 28333 images | ME-NBI | Detection and characterization of early ESCC | CNN | Detection: Sensitivity: 91.0%; specificity: 51.0%; accuracy: 63.0%; characterization: Sensitivity: 86.0%; specificity: 89.0%; accuracy: 88.0% |
| Liu et al[42], 2022 | Image data set: 13083 WLI images from 1239 patients | WLI | Detection and delineation of ESCC margins | DCNN | Detection: Accuracy: 85.7% (internal validation) and 84.5% (external validation); delineation: Accuracy: 93.4% (internal validation) and 95.7% (external validation) |
| Ikenoyama et al[44], 2021 | Image data set: 7301 images from 667 patients | LCE | Prediction of multiple Lugol-voiding lesions | GoogLeNet deep neural network | Sensitivity: 84.4%; specificity: 70.0%; accuracy: 76.4% |
| Yuan et al[47], 2023 | Image data set: 10047 images from 1112 patients; video data set: 140 videos from 1183 lesions | ME-NBI | Detection and delineation of ESCC margins | DCNN | Detection: Accuracy: 92.4% (internal validation) and 89.9% (external validation); delineation: Accuracy: 88.9% (internal validation) and 87.0% (external validation) |
| Shimamoto et al[48], 2020 | Image data set: 23977 images from 909 patients; video data set: 102 videos | NM-NBI | Assessment of tumor infiltration depth | CNN | Sensitivity: 50.0%; specificity: 99.0%; accuracy: 87.0% |
| Wang et al[49], 2023 | ME-BLI data set: 2887 images from 246 patients; ME-NBI data set: 493 images from 81 patients | ME-BLI, ME-NBI | Identification of IPCLs | R-CNN | Recall: 79.25%; precision: 75.54%; F1-score: 0.764; mean average precision: 74.95% |
| Zhang et al[53], 2023 | Image data set: 5119 images from 581 patients; video data set: 33 videos | ME-NBI | Infiltration depth prediction | AI-IDPS | For differentiating SM2-3 lesions: Image data set: Sensitivity: 85.7%; specificity: 86.3%; accuracy: 86.2%; video data set: Sensitivity: 87.5%; specificity: 84.0%; accuracy: 84.9% |
| Yuan et al[54], 2022 | Image data set: 7094 images from 685 patients | ME-NBI | Classification of IPCLs | DCNN | Accuracy (internal validation): 91.3%; accuracy (external validation): 89.8% |
- Citation: Zhang WY, Chang YJ, Shi RH. Artificial intelligence enhances the management of esophageal squamous cell carcinoma in the precision oncology era. World J Gastroenterol 2024; 30(39): 4267-4280
- URL: https://www.wjgnet.com/1007-9327/full/v30/i39/4267.htm
- DOI: https://dx.doi.org/10.3748/wjg.v30.i39.4267