1
Huang YH, Lin Q, Jin XY, Chou CY, Wei JJ, Xing J, Guo HM, Liu ZF, Lu Y. Classification of pediatric video capsule endoscopy images for small bowel abnormalities using deep learning models. World J Gastroenterol 2025; 31:107601. [DOI: 10.3748/wjg.v31.i21.107601]
Abstract
BACKGROUND Video capsule endoscopy (VCE) is a noninvasive technique used to examine small bowel abnormalities in both adults and children. However, manual review of VCE images is time-consuming and labor-intensive, making it crucial to develop deep learning methods to assist in image analysis.
AIM To employ deep learning models for the automatic classification of small bowel lesions using pediatric VCE images.
METHODS We retrospectively analyzed VCE images from 162 pediatric patients who underwent VCE between January 2021 and December 2023 at the Children's Hospital of Nanjing Medical University. A total of 2298 high-resolution images were extracted, including normal mucosa and lesions (erosions/erythema, ulcers, and polyps). The images were split into training and test datasets in a 4:1 ratio. Four deep learning models (DenseNet121, Visual Geometry Group (VGG)-16, ResNet50, and vision transformer) were trained using 5-fold cross-validation, with hyperparameters adjusted for optimal classification performance. The models were evaluated based on accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AU-ROC). Lesion visualization was performed using gradient-weighted class activation mapping.
RESULTS Abdominal pain was the most common indication for VCE, accounting for 62% of cases, followed by diarrhea, vomiting, and gastrointestinal bleeding. Abnormal lesions were detected in 93 children, with 38 diagnosed with inflammatory bowel disease. Among the deep learning models, DenseNet121 and ResNet50 demonstrated excellent classification performance, achieving accuracies of 90.6% [95% confidence interval (CI): 89.2-92.0] and 90.5% (95%CI: 89.9-91.2), respectively. The AU-ROC values for these models were 93.7% (95%CI: 92.9-94.5) for DenseNet121 and 93.4% (95%CI: 93.1-93.8) for ResNet50.
CONCLUSION The deep learning-based diagnostic tool developed in this study effectively classified lesions in pediatric VCE images, contributing to more accurate diagnoses and greater diagnostic efficiency.
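As a rough, hedged illustration of the workflow described in this abstract (not the authors' code), the sketch below fine-tunes an ImageNet-pretrained DenseNet121 for a four-class VCE image classifier and evaluates it with stratified 5-fold cross-validation. The dataset layout, class names, and hyperparameters are assumptions for the example only.

```python
# Illustrative sketch only: DenseNet121 fine-tuning for 4-class VCE image
# classification with stratified 5-fold cross-validation. Paths, class names,
# and hyperparameters are assumptions, not the authors' settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms
from sklearn.model_selection import StratifiedKFold

CLASSES = ["normal", "erosion_erythema", "ulcer", "polyp"]  # assumed labels

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
full_ds = datasets.ImageFolder("vce_images/", transform=tfm)  # assumed folder layout
labels = [y for _, y in full_ds.samples]

device = "cuda" if torch.cuda.is_available() else "cpu"
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

for fold, (tr_idx, va_idx) in enumerate(skf.split(full_ds.samples, labels)):
    model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    model.classifier = nn.Linear(model.classifier.in_features, len(CLASSES))
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    tr_loader = DataLoader(Subset(full_ds, tr_idx), batch_size=32, shuffle=True)
    va_loader = DataLoader(Subset(full_ds, va_idx), batch_size=32)

    for epoch in range(10):  # assumed epoch count
        model.train()
        for x, y in tr_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in va_loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    print(f"fold {fold}: accuracy = {correct / total:.3f}")
```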
Affiliation(s)
- Yi-Hsuan Huang
- Department of Gastroenterology, Children’s Hospital of Nanjing Medical University, Nanjing 210008, Jiangsu Province, China
- Qian Lin
- Department of Gastroenterology, Children’s Hospital of Nanjing Medical University, Nanjing 210008, Jiangsu Province, China
- Xin-Yan Jin
- School of Electronic Science and Engineering, Nanjing University, Nanjing 210023, Jiangsu Province, China
- Chih-Yi Chou
- College of Medicine, National Taiwan University, Taipei 100, Taiwan
- Jia-Jie Wei
- Department of Gastroenterology, Children’s Hospital of Nanjing Medical University, Nanjing 210008, Jiangsu Province, China
- Jiao Xing
- Department of Gastroenterology, Children’s Hospital of Nanjing Medical University, Nanjing 210008, Jiangsu Province, China
- Hong-Mei Guo
- Department of Gastroenterology, Children’s Hospital of Nanjing Medical University, Nanjing 210008, Jiangsu Province, China
- Zhi-Feng Liu
- Department of Gastroenterology, Children’s Hospital of Nanjing Medical University, Nanjing 210008, Jiangsu Province, China
- Yan Lu
- Department of Gastroenterology, Children’s Hospital of Nanjing Medical University, Nanjing 210008, Jiangsu Province, China
2
Dhali A, Kipkorir V, Maity R, Srichawla B, Biswas J, Rathna R, Bharadwaj H, Ongidi I, Chaudhry T, Morara G, Waithaka M, Rugut C, Lemashon M, Cheruiyot I, Ojuka D, Ray S, Dhali G. Artificial Intelligence-Assisted Capsule Endoscopy Versus Conventional Capsule Endoscopy for Detection of Small Bowel Lesions: A Systematic Review and Meta-Analysis. J Gastroenterol Hepatol 2025; 40:1105-1118. [PMID: 40083189] [PMCID: PMC12062924] [DOI: 10.1111/jgh.16931]
Abstract
BACKGROUND Capsule endoscopy (CE) is a valuable tool used in the diagnosis of small intestinal lesions. This study aims to systematically review the literature and provide a meta-analysis of the diagnostic accuracy, specificity, sensitivity, and negative and positive predictive values of artificial intelligence (AI)-assisted CE in the diagnosis of small bowel lesions in comparison to conventional CE.
METHODS Literature searches were performed through PubMed, SCOPUS, and EMBASE to identify studies eligible for inclusion. All publications up to 24 November 2024 were included. Original articles (including observational studies and randomized controlled trials), systematic reviews, meta-analyses, and case series reporting outcomes of AI-assisted CE in the diagnosis of small bowel lesions were included. The extracted data were pooled, and a meta-analysis was performed for the appropriate variables, considering the clinical and methodological heterogeneity among the included studies. Comprehensive Meta-Analysis v4.0 (Biostat Inc.) was used for the analysis.
RESULTS A total of 14 studies were included. The mean age of participants across the studies was 54.3 years (SD 17.7), with 55.4% men and 44.6% women. The pooled accuracy for conventional CE was 0.966 (95% CI: 0.925-0.988), whereas for AI-assisted CE it was 0.9185 (95% CI: 0.9138-0.9233). Conventional CE exhibited a pooled sensitivity of 0.860 (95% CI: 0.786-0.934) compared with AI-assisted CE at 0.9239 (95% CI: 0.8648-0.9870). The positive predictive value (PPV) for conventional CE was 0.982 (95% CI: 0.976-0.987), whereas AI-assisted CE had a PPV of 0.8928 (95% CI: 0.7554-0.999). The pooled specificity for conventional CE was 0.998 (95% CI: 0.996-0.999) compared with 0.5367 (95% CI: 0.5244-0.5492) for AI-assisted CE. Negative predictive values were higher with AI-assisted CE at 0.9425 (95% CI: 0.9389-0.9462) versus 0.760 (95% CI: 0.577-0.943) for conventional CE.
CONCLUSION AI-assisted CE displays superior pooled sensitivity and negative predictive value, albeit with lower pooled specificity, in comparison with conventional CE. Its use would support accurate detection of small bowel lesions and further enhance their management.
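The pooled estimates above were produced with Comprehensive Meta-Analysis v4.0. Purely as an illustration of the kind of computation involved, the sketch below pools per-study sensitivities with a DerSimonian-Laird random-effects model on the logit scale; the study counts are invented placeholders, not data from this review.

```python
# Illustrative sketch: DerSimonian-Laird random-effects pooling of per-study
# sensitivities on the logit scale. The study counts below are invented
# placeholders, NOT data from the meta-analysis described above.
import numpy as np

# (true positives, false negatives) per study -- placeholder values
studies = [(45, 5), (88, 12), (30, 2), (60, 9)]

tp = np.array([s[0] for s in studies], dtype=float)
fn = np.array([s[1] for s in studies], dtype=float)
n = tp + fn

p = tp / n
y = np.log(p / (1 - p))              # logit-transformed sensitivity
v = 1 / tp + 1 / fn                  # approximate variance of the logit

w = 1 / v                            # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)   # Cochran's Q
df = len(studies) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)        # between-study variance

w_star = 1 / (v + tau2)              # random-effects weights
y_re = np.sum(w_star * y) / np.sum(w_star)
se_re = np.sqrt(1 / np.sum(w_star))

lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re
expit = lambda x: 1 / (1 + np.exp(-x))
print(f"pooled sensitivity = {expit(y_re):.3f} "
      f"(95% CI {expit(lo):.3f}-{expit(hi):.3f}), tau^2 = {tau2:.3f}")
```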
Affiliation(s)
- Arkadeep Dhali
- Academic Unit of Gastroenterology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- School of Medicine and Population Health, University of Sheffield, Sheffield, UK
- Rick Maity
- Institute of Post Graduate Medical Education and Research, Kolkata, India
- Ibsen Ongidi
- Faculty of Health Sciences, University of Nairobi, Nairobi, Kenya
- Talha Chaudhry
- Faculty of Health Sciences, University of Nairobi, Nairobi, Kenya
- Gisore Morara
- Faculty of Health Sciences, University of Nairobi, Nairobi, Kenya
- Clinton Rugut
- Faculty of Health Sciences, University of Nairobi, Nairobi, Kenya
- Daniel Ojuka
- Faculty of Health Sciences, University of Nairobi, Nairobi, Kenya
- Sukanta Ray
- Institute of Post Graduate Medical Education and Research, Kolkata, India
3
Piccirelli S, Salvi D, Pugliano CL, Tettoni E, Facciorusso A, Rondonotti E, Mussetto A, Fuccio L, Cesaro P, Spada C. Unmet Needs of Artificial Intelligence in Small Bowel Capsule Endoscopy. Diagnostics (Basel) 2025; 15:1092. [PMID: 40361910] [PMCID: PMC12071857] [DOI: 10.3390/diagnostics15091092]
Abstract
Small bowel capsule endoscopy (SBCE) has emerged in the past two decades as the cornerstone for assessing small bowel disorders, and its use is supported by several guidelines. However, it has several limitations, such as the considerable time required for gastroenterologists to review the recorded videos and reach a diagnosis. To address these limitations, researchers have explored the integration of artificial intelligence into the interpretation of these videos. In this review, we explore the evolving role of artificial intelligence in SBCE and examine the latest advancements and ongoing studies in this area, aiming to overcome current limitations.
Affiliation(s)
- Stefania Piccirelli
- Department of Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, 25124 Brescia, Italy
- Daniele Salvi
- Department of Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, 25124 Brescia, Italy
- Cecilia Lina Pugliano
- Department of Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, 25124 Brescia, Italy
- Enrico Tettoni
- Department of Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, 25124 Brescia, Italy
- Antonio Facciorusso
- Department of Experimental Medicine, Università del Salento, 73100 Lecce, Italy
- Alessandro Mussetto
- Gastroenterology Unit, Santa Maria delle Croci Hospital, 48121 Ravenna, Italy
- Lorenzo Fuccio
- Gastroenterology Unit, University of Bologna, 40136 Bologna, Italy
- Paola Cesaro
- Department of Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, 25124 Brescia, Italy
- Cristiano Spada
- Digestive Endoscopy Unit, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Rome, Italy
- Department of Translational Medicine and Surgery, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
4
Parikh M, Tejaswi S, Girotra T, Chopra S, Ramai D, Tabibian JH, Jagannath S, Ofosu A, Barakat MT, Mishra R, Girotra M. Use of Artificial Intelligence in Lower Gastrointestinal and Small Bowel Disorders: An Update Beyond Polyp Detection. J Clin Gastroenterol 2025; 59:121-128. [PMID: 39774596] [DOI: 10.1097/mcg.0000000000002115]
Abstract
Machine learning and its specialized forms, such as Artificial Neural Networks and Convolutional Neural Networks, are increasingly being used for detecting and managing gastrointestinal conditions. Recent advancements involve using Artificial Neural Network models to enhance predictive accuracy for severe lower gastrointestinal (LGI) bleeding outcomes, including the need for surgery. To this end, artificial intelligence (AI)-guided predictive models have shown promise in improving management outcomes. While much literature focuses on AI in early neoplasia detection, this review highlights AI's role in managing LGI and small bowel disorders, including risk stratification for LGI bleeding, quality control, evaluation of inflammatory bowel disease, and video capsule endoscopy reading. Overall, the integration of AI into routine clinical practice is still developing, with ongoing research aimed at addressing current limitations and gaps in patient care.
Affiliation(s)
- Sooraj Tejaswi
- University of California, Davis
- Sutter Health, Sacramento
5
Houdeville C, Souchaud M, Leenhardt R, Goltstein LC, Velut G, Beaumont H, Dray X, Histace A. Toward automated small bowel capsule endoscopy reporting using a summarizing machine learning algorithm: The SUM UP study. Clin Res Hepatol Gastroenterol 2025; 49:102509. [PMID: 39622290] [DOI: 10.1016/j.clinre.2024.102509]
Abstract
BACKGROUND AND OBJECTIVES Deep learning (DL) algorithms demonstrate excellent diagnostic performance for the detection of vascular lesions via small bowel (SB) capsule endoscopy (CE), including vascular abnormalities with high (P2), intermediate (P1), or low (P0) bleeding potential, while dramatically decreasing the reading time. We aimed to improve the performance of a DL algorithm by characterizing vascular abnormalities using a machine learning (ML) classifier and by selecting the most relevant images for insertion into reports.
MATERIALS AND METHODS A training dataset of 75 SB CE videos was created, containing 401 sequences of interest that encompassed 1,525 images of various vascular lesions. Several image classification algorithms were tested to discriminate "typical angiodysplasia" (P2/P1) from "other vascular lesion" (P0) and to select the most relevant image within sequences of repetitive images. The performance of the best-fitting algorithms was subsequently assessed on an independent test dataset of 73 full-length SB CE video recordings.
RESULTS Following DL detection, a random forest (RF) method demonstrated a specificity of 91.1%, an area under the receiver operating characteristic curve of 0.873, and an accuracy of 84.2% for discriminating P2/P1 from P0 lesions, while allowing an 83.2% reduction in the number of reported images. In the independent testing database, after RF was applied, the number of output images decreased by 91.6%, from 216 (IQR 108-432) to 12 (IQR 5-33). The RF algorithm achieved 98.0% agreement with initial, conventional (human) reporting.
CONCLUSION Following DL detection, the RF method allowed better characterization and accurate selection of images of relevant (P2/P1) SB vascular abnormalities for CE reporting without impairing diagnostic accuracy. These findings pave the way for automated SB CE reporting.
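A minimal sketch of the second-stage idea (a machine learning classifier applied on top of deep learning detections) is shown below, assuming detected frames have already been converted to fixed-length feature vectors. The features and labels are synthetic placeholders, not the SUM UP pipeline.

```python
# Illustrative sketch: a random forest that labels detected vascular-lesion
# frames as "typical angiodysplasia" (P2/P1) vs "other vascular lesion" (P0).
# Features and data here are synthetic placeholders, not the SUM UP pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_features = 1525, 256          # e.g. CNN embeddings per detected frame
X = rng.normal(size=(n_frames, n_features))
y = rng.integers(0, 2, size=n_frames)     # 1 = P2/P1, 0 = P0 (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0,
                                           stratify=y)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)

prob = rf.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("ROC AUC :", roc_auc_score(y_te, prob))

# Within a sequence of near-duplicate frames, one could keep the single frame
# the classifier is most confident about, mirroring the image-selection step:
sequence_probs = prob[:10]                 # pretend these 10 frames form one sequence
best_frame = int(np.argmax(sequence_probs))
print("frame selected for the report:", best_frame)
```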
Affiliation(s)
- Charles Houdeville
- Sorbonne University, Center for Digestive Endoscopy, Saint-Antoine Hospital, APHP, 75012 Paris, France; Équipes Traitement de l'Information et Systèmes, ETIS UMR 8051, CY Paris Cergy University, ENSEA, CNRS, 95000 Cergy, France
- Marc Souchaud
- Équipes Traitement de l'Information et Systèmes, ETIS UMR 8051, CY Paris Cergy University, ENSEA, CNRS, 95000 Cergy, France
- Romain Leenhardt
- Sorbonne University, Center for Digestive Endoscopy, Saint-Antoine Hospital, APHP, 75012 Paris, France
- Lia CMJ Goltstein
- Department of Gastroenterology and Hepatology, Radboud University Medical Center, Nijmegen, Netherlands
- Guillaume Velut
- Sorbonne University, Center for Digestive Endoscopy, Saint-Antoine Hospital, APHP, 75012 Paris, France; Department of Gastroenterology, CHU Nantes, Hotel Dieu, Nantes, France
- Hanneke Beaumont
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Center, Amsterdam, Netherlands
- Xavier Dray
- Sorbonne University, Center for Digestive Endoscopy, Saint-Antoine Hospital, APHP, 75012 Paris, France; Équipes Traitement de l'Information et Systèmes, ETIS UMR 8051, CY Paris Cergy University, ENSEA, CNRS, 95000 Cergy, France
- Aymeric Histace
- Équipes Traitement de l'Information et Systèmes, ETIS UMR 8051, CY Paris Cergy University, ENSEA, CNRS, 95000 Cergy, France
6
Xu T, Li YY, Huang F, Gao M, Cai C, He S, Wu ZX. A Multi-task Neural Network for Image Recognition in Magnetically Controlled Capsule Endoscopy. Dig Dis Sci 2024; 69:4231-4239. [PMID: 39407081] [DOI: 10.1007/s10620-024-08681-6]
Abstract
BACKGROUND AND AIMS Physicians are required to spend a significant amount of time reading magnetically controlled capsule endoscopy images. However, current deep learning models are limited to completing a single recognition task and cannot replicate the diagnostic process of a physician. This study aims to construct a multi-task model that can simultaneously recognize gastric anatomical sites and gastric lesions.
METHODS A multi-task recognition model named Mul-Recog-Model was established. Capsule endoscopy image data from 886 patients were used to construct a training set and a test set for training and testing the model. On the same test set, the model was compared with current single-task recognition models with good performance.
RESULTS The sensitivity and specificity of the model were 99.8% (95% confidence interval [CI]: 99.7-99.8) and 98.5% (95% CI: 98.3-98.7) for recognizing gastric anatomical sites, and 98.8% (95% CI: 98.3-99.2) and 99.4% (95% CI: 99.1-99.7) for gastric lesions. Moreover, the positive predictive value (PPV), negative predictive value (NPV), and accuracy of the model were more than 95% in recognizing gastric anatomical sites and gastric lesions. Compared with current single-task recognition models, our model showed comparable sensitivity, specificity, PPV, NPV, and accuracy (p < 0.01, except for the NPV of ResNet, p > 0.05). The areas under the curve of our model were 0.985 and 0.989 in recognizing gastric anatomical sites and gastric lesions, respectively. Furthermore, the model had 49.1M parameters, required 38.1G floating-point operations, and took 15.5 ms to recognize an image, which was less than the combination of multiple single-task models (p < 0.01).
CONCLUSIONS The Mul-Recog-Model exhibited high sensitivity, specificity, PPV, NPV, and accuracy, and performed well in terms of parameter count, floating-point computation, and computing time. Using the model to recognize gastric images can improve the efficiency of physicians' reporting and meet complex diagnostic requirements.
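As a rough sketch of what a multi-task recognizer of this kind can look like, the snippet below attaches two classification heads (anatomical site and lesion type) to a shared backbone; the backbone choice and class counts are assumptions, not the published Mul-Recog-Model architecture.

```python
# Illustrative sketch: a shared backbone with two heads, one for gastric
# anatomical sites and one for gastric lesions. Backbone and class counts are
# assumptions, not the published Mul-Recog-Model.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskNet(nn.Module):
    def __init__(self, n_sites: int = 6, n_lesions: int = 5):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep pooled features only
        self.backbone = backbone
        self.site_head = nn.Linear(feat_dim, n_sites)
        self.lesion_head = nn.Linear(feat_dim, n_lesions)

    def forward(self, x):
        feats = self.backbone(x)
        return self.site_head(feats), self.lesion_head(feats)

model = MultiTaskNet()
images = torch.randn(4, 3, 224, 224)            # dummy batch
site_targets = torch.tensor([0, 1, 2, 3])
lesion_targets = torch.tensor([1, 0, 4, 2])

site_logits, lesion_logits = model(images)
loss = nn.CrossEntropyLoss()(site_logits, site_targets) \
     + nn.CrossEntropyLoss()(lesion_logits, lesion_targets)
loss.backward()                                  # both tasks update the shared backbone
print(site_logits.shape, lesion_logits.shape)
```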
Affiliation(s)
- Ting Xu
- Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, No. 74 Linjiang Road, Chongqing, China
- Yuan-Yi Li
- The Bioengineering College of Chongqing University, Chongqing, China
- Fang Huang
- Chongqing Jinshan Technology (Group) Co., Ltd., Chongqing, China
- Min Gao
- Chongqing Jinshan Technology (Group) Co., Ltd., Chongqing, China
- Can Cai
- Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, No. 74 Linjiang Road, Chongqing, China
- Song He
- Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, No. 74 Linjiang Road, Chongqing, China
- Zhi-Xuan Wu
- Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, No. 74 Linjiang Road, Chongqing, China
7
Xie X, Xiao YF, Yang H, Peng X, Li JJ, Zhou YY, Fan CQ, Meng RP, Huang BB, Liao XP, Chen YY, Zhong TT, Lin H, Koulaouzidis A, Yang SM. A new artificial intelligence system for both stomach and small-bowel capsule endoscopy. Gastrointest Endosc 2024; 100:878.e1-878.e14. [PMID: 38851456] [DOI: 10.1016/j.gie.2024.06.004]
Abstract
BACKGROUND AND AIMS Despite the benefits of artificial intelligence in small-bowel (SB) capsule endoscopy (CE) image reading, information on its application to combined stomach and SB CE is lacking.
METHODS In this multicenter, retrospective diagnostic study, gastric imaging data were added to the deep learning-based SmartScan (SS), which has been described previously. A total of 1069 magnetically controlled GI CE examinations (comprising 2,672,542 gastric images) were used in the training phase for recognizing gastric pathologies, producing a new artificial intelligence algorithm named SS Plus. A total of 342 fully automated, magnetically controlled CE examinations were included in the validation phase. The performance of both senior and junior endoscopists with the SS Plus-assisted reading (SSP-AR) and conventional reading (CR) modes was assessed.
RESULTS SS Plus was designed to recognize 5 types of gastric lesions and 17 types of SB lesions. SS Plus reduced the number of CE images required for review to a mean of 873.90 (median, 1000; interquartile range [IQR], 814.50-1000) versus 44,322.73 (median, 42,393; IQR, 31,722.75-54,971.25) for CR. Furthermore, with SSP-AR, endoscopists took a mean of 9.54 minutes (median, 8.51; IQR, 6.05-13.13) to complete the CE video reading. In the 342 CE videos, SS Plus identified 411 gastric and 422 SB lesions, whereas 400 gastric and 368 intestinal lesions were detected with CR. Moreover, junior endoscopists remarkably improved their CE image reading ability with SSP-AR.
CONCLUSIONS Our study shows that the newly upgraded deep learning-based algorithm SS Plus can detect GI lesions and help improve the diagnostic performance of junior endoscopists in interpreting CE videos.
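The reading-burden reduction reported here depends on the model pre-selecting frames for human review. A minimal, hypothetical sketch of such a triage step is shown below: keep only frames whose highest lesion-class probability exceeds a threshold. The model, threshold, and class layout are assumptions, not the SS Plus implementation.

```python
# Illustrative sketch: keep only frames the classifier flags with high
# confidence, so the reader reviews hundreds rather than tens of thousands of
# images. Model, threshold, and class layout are assumptions, not SS Plus.
import torch
import torch.nn.functional as F

LESION_CLASSES = list(range(1, 6))   # assume index 0 = "normal", 1..5 = lesion types
THRESHOLD = 0.5                      # assumed triage threshold

@torch.no_grad()
def select_frames(model, frames: torch.Tensor, batch_size: int = 64):
    """Return indices of frames whose best lesion probability exceeds THRESHOLD."""
    model.eval()
    keep = []
    for start in range(0, frames.shape[0], batch_size):
        batch = frames[start:start + batch_size]
        probs = F.softmax(model(batch), dim=1)
        lesion_prob, _ = probs[:, LESION_CLASSES].max(dim=1)
        for offset, p in enumerate(lesion_prob.tolist()):
            if p > THRESHOLD:
                keep.append(start + offset)
    return keep

# Usage with a dummy 6-class classifier and a random "video" of 500 frames:
dummy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 6))
video = torch.randn(500, 3, 64, 64)
selected = select_frames(dummy_model, video)
print(f"{len(selected)} of {video.shape[0]} frames forwarded to the human reader")
```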
Affiliation(s)
- Xia Xie
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Yu-Feng Xiao
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Huan Yang
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Xue Peng
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Jian-Jun Li
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Yuan-Yuan Zhou
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Chao-Qiang Fan
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Rui-Ping Meng
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Bao-Bao Huang
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Xi-Ping Liao
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Yu-Yang Chen
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Ting-Ting Zhong
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Hui Lin
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China; Department of Epidemiology, The Third Military Medical University, Chongqing, China
- Anastasios Koulaouzidis
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Centre for Clinical Implementation of Capsule Endoscopy, Store Adenomer Tidlige Cancere Centre, Svendborg, Denmark
- Shi-Ming Yang
- Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
8
Spada C, Piccirelli S, Hassan C, Ferrari C, Toth E, González-Suárez B, Keuchel M, McAlindon M, Finta Á, Rosztóczy A, Dray X, Salvi D, Riccioni ME, Benamouzig R, Chattree A, Humphries A, Saurin JC, Despott EJ, Murino A, Johansson GW, Giordano A, Baltes P, Sidhu R, Szalai M, Helle K, Nemeth A, Nowak T, Lin R, Costamagna G. AI-assisted capsule endoscopy reading in suspected small bowel bleeding: a multicentre prospective study. Lancet Digit Health 2024; 6:e345-e353. [PMID: 38670743] [DOI: 10.1016/s2589-7500(24)00048-7]
Abstract
BACKGROUND Capsule endoscopy reading is time consuming, and readers are required to maintain attention so as not to miss significant findings. Deep convolutional neural networks can recognise relevant findings, possibly exceeding human performance and reducing the reading time of capsule endoscopy. Our primary aim was to assess the non-inferiority of artificial intelligence (AI)-assisted reading versus standard reading for potentially small bowel bleeding lesions (high P2, moderate P1; Saurin classification) at per-patient analysis. The mean reading time in both reading modalities was evaluated among the secondary endpoints.
METHODS Patients aged 18 years or older with suspected small bowel bleeding (anaemia with or without melena or haematochezia, and negative bidirectional endoscopy) were prospectively enrolled at 14 European centres. Patients underwent small bowel capsule endoscopy with the NaviCam SB system (Ankon, China), which is provided with a deep neural network-based AI system (ProScan) for automatic detection of lesions. Initial reading was performed in standard reading mode. A second, blinded reading was performed with AI assistance (the AI performed a first automated reading, and only AI-selected images were assessed by human readers). The primary endpoint was to assess the non-inferiority of AI-assisted reading versus standard reading in the detection (diagnostic yield) of potentially small bowel bleeding P1 and P2 lesions in a per-patient analysis. This study is registered with ClinicalTrials.gov, NCT04821349.
FINDINGS From Feb 17, 2021, to Dec 29, 2021, 137 patients were prospectively enrolled. 133 patients were included in the final analysis (73 [55%] female; mean age 66·5 years [SD 14·4]; 112 [84%] completed capsule endoscopy). At per-patient analysis, the diagnostic yield of P1 and P2 lesions with AI-assisted reading (98 [73·7%] of 133 patients) was non-inferior (p<0·0001) and superior (p=0·0213) to standard reading (82 [62·4%] of 133; 95% CI 3·6-19·0). Mean small bowel reading time was 33·7 min (SD 22·9) with standard reading and 3·8 min (3·3) with AI-assisted reading (p<0·0001).
INTERPRETATION AI-assisted reading might provide more accurate and faster detection of clinically relevant small bowel bleeding lesions than standard reading.
FUNDING ANKON Technologies, China, and AnX Robotica, USA, provided the NaviCam SB system.
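For readers who want to see the shape of the per-patient comparison, the sketch below computes the paired difference in diagnostic yield and a Wald-type 95% CI from discordant pairs; the pair counts and the non-inferiority margin are invented for illustration and are not the trial's data or design parameters.

```python
# Illustrative sketch: paired comparison of diagnostic yield (AI-assisted vs
# standard reading) in the same patients, with a Wald-type 95% CI for the
# difference. Counts below are invented, not the trial's patient-level data.
import math

n = 133          # patients analysed
both_pos = 80    # positive on both readings          (placeholder)
ai_only = 18     # positive on AI-assisted only (b)   (placeholder)
std_only = 2     # positive on standard only (c)      (placeholder)

yield_ai = (both_pos + ai_only) / n
yield_std = (both_pos + std_only) / n
diff = yield_ai - yield_std

b, c = ai_only, std_only
se = math.sqrt(b + c - (b - c) ** 2 / n) / n       # Wald SE for paired proportions
lo, hi = diff - 1.96 * se, diff + 1.96 * se

margin = 0.10                                      # assumed non-inferiority margin
print(f"yield AI = {yield_ai:.3f}, yield standard = {yield_std:.3f}")
print(f"difference = {diff:.3f} (95% CI {lo:.3f} to {hi:.3f})")
print("non-inferior at 10% margin:", lo > -margin)
```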
Affiliation(s)
- Cristiano Spada
- Department of Medicine, Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Stefania Piccirelli
- Department of Medicine, Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Cesare Hassan
- IRCCS Humanitas Research Hospital, Department of Biomedical Sciences, Rozzano, Milan, Italy
- Clarissa Ferrari
- Unit of Research and Clinical Trials, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy
- Ervin Toth
- Skåne University Hospital, Lund University, Department of Gastroenterology, Malmö, Sweden
- Begoña González-Suárez
- Hospital Clínic of Barcelona, Endoscopy Unit, Gastroenterology Department, Barcelona, Spain
- Martin Keuchel
- Agaplesion Bethesda Krankenhaus Bergedorf, Academic Teaching Hospital of the University of Hamburg, Clinic for Internal Medicine, Hamburg, Germany
- Marc McAlindon
- Sheffield Teaching Hospitals NHS Trust, Academic Department of Gastroenterology and Hepatology, Sheffield, UK; Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Ádám Finta
- Endo-Kapszula Health Centre and Endoscopy Unit, Department of Gastroenterology, Székesfehérvár, Hungary
- András Rosztóczy
- University of Szeged, Department of Internal Medicine, Szeged, Hungary
- Xavier Dray
- Sorbonne University, Saint Antoine Hospital, APHP, Centre for Digestive Endoscopy, Paris, France
- Daniele Salvi
- Department of Medicine, Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Maria Elena Riccioni
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Digestive Endoscopy Unit, Rome, Italy
- Robert Benamouzig
- Hôpital Avicenne, Université Paris 13, Service de Gastroenterologie, Bobigny, France
- Amit Chattree
- South Tyneside and Sunderland NHS Foundation Trust, Gastroenterology, Stockton-on-Tees, UK
- Adam Humphries
- St Mark's Hospital and Academic Institute, Department of Gastroenterology, Middlesex, UK
- Jean-Christophe Saurin
- Hospices Civils de Lyon-Centre Hospitalier Universitaire, Gastroenterology Department, Lyon, France
- Edward J Despott
- The Royal Free Hospital and University College London (UCL) Institute for Liver and Digestive Health, Royal Free Unit for Endoscopy, London, UK
- Alberto Murino
- The Royal Free Hospital and University College London (UCL) Institute for Liver and Digestive Health, Royal Free Unit for Endoscopy, London, UK
- Antonio Giordano
- Hospital Clínic of Barcelona, Endoscopy Unit, Gastroenterology Department, Barcelona, Spain
- Peter Baltes
- Agaplesion Bethesda Krankenhaus Bergedorf, Academic Teaching Hospital of the University of Hamburg, Clinic for Internal Medicine, Hamburg, Germany
- Reena Sidhu
- Sheffield Teaching Hospitals NHS Trust, Academic Department of Gastroenterology and Hepatology, Sheffield, UK; Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Milan Szalai
- Endo-Kapszula Health Centre and Endoscopy Unit, Department of Gastroenterology, Székesfehérvár, Hungary
- Krisztina Helle
- University of Szeged, Department of Internal Medicine, Szeged, Hungary
- Artur Nemeth
- Skåne University Hospital, Lund University, Department of Gastroenterology, Malmö, Sweden
- Rong Lin
- Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Department of Gastroenterology, Wuhan, China
- Guido Costamagna
- Department of Medicine, Gastroenterology and Endoscopy, Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
9
Xie W, Hu J, Liang P, Mei Q, Wang A, Liu Q, Liu X, Wu J, Yang X, Zhu N, Bai B, Mei Y, Liang Z, Han W, Cheng M. Deep learning-based lesion detection and severity grading of small-bowel Crohn's disease ulcers on double-balloon endoscopy images. Gastrointest Endosc 2024; 99:767-777.e5. [PMID: 38065509] [DOI: 10.1016/j.gie.2023.11.059]
Abstract
BACKGROUND AND AIMS Double-balloon endoscopy (DBE) is widely used in diagnosing small-bowel Crohn's disease (CD). However, CD misdiagnosis frequently occurs if inexperienced endoscopists cannot accurately detect the lesions. CD evaluation may also be inaccurate owing to the subjectivity of endoscopists. This study aimed to use artificial intelligence (AI) to accurately detect and objectively assess small-bowel CD for more refined disease management.
METHODS We collected 28,155 small-bowel DBE images from 628 patients from January 2018 to December 2022. Four expert gastroenterologists labeled the images, and at least 2 endoscopists made the final decision by agreement. A state-of-the-art deep learning model, EfficientNet-b5, was trained to detect CD lesions and evaluate CD ulcers. Detection covered ulcers, noninflammatory stenosis, and inflammatory stenosis. Ulcer grading included ulcerated surface, ulcer size, and ulcer depth. Model performance was compared with that of endoscopists.
RESULTS The EfficientNet-b5 achieved high accuracies of 96.3% (95% confidence interval [CI], 95.7%-96.7%), 95.7% (95% CI, 95.1%-96.2%), and 96.7% (95% CI, 96.2%-97.2%) for the detection of ulcers, noninflammatory stenosis, and inflammatory stenosis, respectively. In ulcer grading, the EfficientNet-b5 exhibited average accuracies of 87.3% (95% CI, 84.6%-89.6%) for grading the ulcerated surface, 87.8% (95% CI, 85.0%-90.2%) for grading ulcer size, and 85.2% (95% CI, 83.2%-87.0%) for ulcer depth assessment.
CONCLUSIONS The EfficientNet-b5 achieved high accuracy in detecting CD lesions and grading CD ulcers. The AI model can provide expert-level accuracy and objective evaluation of small-bowel CD to optimize clinical treatment plans.
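The accuracies above are reported with 95% CIs. One common way to obtain such intervals from a fixed test set is a percentile bootstrap over per-image correctness, sketched below on synthetic predictions (not the study's data or its actual CI method).

```python
# Illustrative sketch: a percentile bootstrap 95% CI for classification
# accuracy, computed on synthetic per-image predictions (not the study's data).
import numpy as np

rng = np.random.default_rng(1)
n_images = 2000
y_true = rng.integers(0, 3, size=n_images)                  # 3 placeholder classes
y_pred = np.where(rng.random(n_images) < 0.9, y_true,       # ~90% correct
                  rng.integers(0, 3, size=n_images))

correct = (y_true == y_pred).astype(float)
point = correct.mean()

boot = np.empty(2000)
for i in range(boot.size):
    sample = rng.choice(correct, size=correct.size, replace=True)
    boot[i] = sample.mean()
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"accuracy = {point:.3f} (bootstrap 95% CI {lo:.3f}-{hi:.3f})")
```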
Affiliation(s)
- Wanqing Xie
- Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China; Beth Israel Deaconess Medical Center, Harvard Medical School, Harvard University, Boston, Massachusetts, USA
- Jing Hu
- Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Pengcheng Liang
- Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Qiao Mei
- Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Aodi Wang
- Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Qiuyuan Liu
- Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiaofeng Liu
- Gordon Center for Medical Imaging, Harvard Medical School and Massachusetts General Hospital, Boston, Massachusetts, USA
- Juan Wu
- Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiaodong Yang
- Department of General Surgery, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Nannan Zhu
- Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Bingqing Bai
- Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Yiqing Mei
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhen Liang
- Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Wei Han
- Department of Gastroenterology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Mingmei Cheng
- Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China
10
Mascarenhas M, Martins M, Afonso J, Ribeiro T, Cardoso P, Mendes F, Andrade P, Cardoso H, Mascarenhas-Saraiva M, Ferreira J, Macedo G. Deep learning and capsule endoscopy: Automatic multi-brand and multi-device panendoscopic detection of vascular lesions. Endosc Int Open 2024; 12:E570-E578. [PMID: 38654967] [PMCID: PMC11039033] [DOI: 10.1055/a-2236-7849]
Abstract
Background and study aims Capsule endoscopy (CE) is commonly used as the initial exam for suspected mid-gastrointestinal bleeding after normal upper and lower endoscopy. Although assessment of the small bowel is the primary focus of CE, detecting upstream or downstream vascular lesions may also be clinically significant. This study aimed to develop and test a convolutional neural network (CNN)-based model for panendoscopic automatic detection of vascular lesions during CE.
Patients and methods This multicentric AI model development study was based on 1022 CE exams. Our group used 34,655 frames from seven types of CE devices, of which 11,091 were considered to have vascular lesions (angiectasia or varices) after triple validation. We divided the data into a training set and a validation set, the latter being used to evaluate the model's performance. At the time of division, all frames from a given patient were assigned to the same dataset. Our primary outcome measures were sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and the area under the precision-recall curve (AUC-PR).
Results Sensitivity and specificity were 86.4% and 98.3%, respectively. PPV was 95.2%, while NPV was 95.0%. Overall accuracy was 95.0%. The AUC-PR value was 0.96. The CNN processed 115 frames per second.
Conclusions This is the first proof-of-concept artificial intelligence deep learning model developed for panendoscopic automatic detection of vascular lesions during CE. The diagnostic performance of this CNN on multi-brand devices addresses an essential issue of technological interoperability, allowing it to be replicated in multiple technological settings.
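A methodological detail worth noting is that all frames from a patient were assigned to the same dataset, which prevents near-identical frames leaking across splits. The sketch below shows one way to enforce such a grouped split with scikit-learn and to compute the reported metrics; all arrays are synthetic placeholders, not the study's data.

```python
# Illustrative sketch: patient-level (grouped) train/validation split plus the
# metrics reported above, on synthetic placeholder arrays.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import confusion_matrix, precision_recall_curve, auc

rng = np.random.default_rng(7)
n_frames = 5000
patient_id = rng.integers(0, 300, size=n_frames)     # which exam each frame came from
y = rng.integers(0, 2, size=n_frames)                # 1 = vascular lesion (placeholder)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=7)
train_idx, val_idx = next(splitter.split(np.zeros(n_frames), y, groups=patient_id))
assert set(patient_id[train_idx]).isdisjoint(patient_id[val_idx])  # no patient overlap

# Pretend these are CNN output scores on the validation frames:
scores = np.clip(y[val_idx] * 0.7 + rng.normal(0.2, 0.2, size=val_idx.size), 0, 1)
pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y[val_idx], pred).ravel()
sens, spec = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
prec, rec, _ = precision_recall_curve(y[val_idx], scores)
auc_pr = auc(rec, prec)

print(f"sensitivity={sens:.3f} specificity={spec:.3f} "
      f"PPV={ppv:.3f} NPV={npv:.3f} AUC-PR={auc_pr:.3f}")
```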
Affiliation(s)
- Miguel Mascarenhas
- Gastroenterology, Centro Hospitalar Universitário de São João, Porto, Portugal
- João Afonso
- Gastroenterology, Centro Hospitalar Universitário de São João, Porto, Portugal
- Tiago Ribeiro
- Gastroenterology, Centro Hospitalar Universitário de São João, Porto, Portugal
- Pedro Cardoso
- Gastroenterology, Centro Hospitalar Universitário de São João, Porto, Portugal
- Francisco Mendes
- Gastroenterology, Centro Hospitalar Universitário de São João, Porto, Portugal
- Patrícia Andrade
- Gastroenterology, Centro Hospitalar Universitário de São João, Porto, Portugal
- Helder Cardoso
- Gastroenterology, Centro Hospitalar Universitário de São João, Porto, Portugal
- João Ferreira
- Department of Mechanical Engineering, Faculty of Engineering, University of Porto, Porto, Portugal
- Guilherme Macedo
- Gastroenterology, Centro Hospitalar Universitário de São João, Porto, Portugal
11
Yokote A, Umeno J, Kawasaki K, Fujioka S, Fuyuno Y, Matsuno Y, Yoshida Y, Imazu N, Miyazono S, Moriyama T, Kitazono T, Torisu T. Small bowel capsule endoscopy examination and open access database with artificial intelligence: The SEE-artificial intelligence project. DEN Open 2024; 4:e258. [PMID: 37359150] [PMCID: PMC10288072] [DOI: 10.1002/deo2.258]
Abstract
OBJECTIVES Artificial intelligence (AI) may be practical for image classification in small bowel capsule endoscopy (CE). However, creating a functional AI model is challenging. We attempted to create a dataset and an object detection CE AI model to explore modeling problems and to assist in reading small bowel CE.
METHODS We extracted 18,481 images from 523 small bowel CE procedures performed at Kyushu University Hospital from September 2014 to June 2021. We annotated 12,320 images with 23,033 disease lesions, combined them with 6161 normal images as the dataset, and examined its characteristics. Based on the dataset, we created an object detection AI model using YOLO v5 and performed test validation.
RESULTS We annotated the dataset with 12 types of annotations, and multiple annotation types were observed in the same image. We test-validated our AI model with 1396 images; sensitivity across all 12 types of annotations was about 91%, with 1375 true positives, 659 false positives, and 120 false negatives detected. The highest sensitivity for an individual annotation was 97%, and the highest area under the receiver operating characteristic curve was 0.98, but the quality of detection varied depending on the specific annotation.
CONCLUSIONS An object detection AI model for small bowel CE using YOLO v5 may provide effective and easy-to-understand reading assistance. In this SEE-AI project, we release our dataset, the weights of the AI model, and a demonstration to experience our AI. We look forward to further improving the AI model in the future.
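As a small, hedged illustration of running a YOLOv5 detector of the kind described (not the SEE-AI weights themselves unless they are substituted in), the snippet below loads a model through torch.hub and prints box-level detections for one image; the weight path and image path are placeholders.

```python
# Illustrative sketch: object-detection inference with YOLOv5 via torch.hub.
# The weights and image paths are placeholders; substitute the released
# SEE-AI weights and a real capsule-endoscopy frame to reproduce the idea.
import torch

# Either a generic pretrained model ...
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
# ... or custom weights, e.g. the released ones (path is a placeholder):
# model = torch.hub.load("ultralytics/yolov5", "custom", path="see_ai_weights.pt")

model.conf = 0.25                       # confidence threshold (assumed)
results = model("capsule_frame.jpg")    # placeholder image path

# One row per detection: x1, y1, x2, y2, confidence, class index
detections = results.xyxy[0]
for x1, y1, x2, y2, conf, cls in detections.tolist():
    print(f"class={int(cls)} conf={conf:.2f} box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```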
Affiliation(s)
- Akihito Yokote
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Junji Umeno
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Keisuke Kawasaki
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Shin Fujioka
- Department of Endoscopic Diagnostics and Therapeutics, Kyushu University Hospital, Fukuoka, Japan
- Yuta Fuyuno
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Yuichi Matsuno
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Yuichiro Yoshida
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Noriyuki Imazu
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Satoshi Miyazono
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Tomohiko Moriyama
- International Medical Department, Kyushu University Hospital, Fukuoka, Japan
- Takanari Kitazono
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Takehiro Torisu
- Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
12
Zhao SQ, Liu WT. Progress in artificial intelligence assisted digestive endoscopy diagnosis of digestive system diseases. World Chinese Journal of Digestology 2024; 32:171-181. [DOI: 10.11569/wcjd.v32.i3.171]
13
George AA, Tan JL, Kovoor JG, Lee A, Stretton B, Gupta AK, Bacchi S, George B, Singh R. Artificial intelligence in capsule endoscopy: development status and future expectations. Mini-invasive Surgery 2024. [DOI: 10.20517/2574-1225.2023.102]
Abstract
In this review, we aim to illustrate state-of-the-art artificial intelligence (AI) applications in the field of capsule endoscopy. AI has made significant strides in gastrointestinal imaging, particularly in capsule endoscopy, a non-invasive procedure for capturing gastrointestinal tract images. However, manual analysis of capsule endoscopy videos is labour-intensive and error-prone, prompting the development of automated computational algorithms and AI models. While currently serving as a supplementary observer, AI has the capacity to evolve into an autonomous, integrated reading system, potentially reducing capsule reading time significantly while surpassing human accuracy. We searched the Embase, PubMed, Medline, and Cochrane databases from inception to 6 July 2023 for studies investigating the use of AI for capsule endoscopy and screened retrieved records for eligibility. Quantitative and qualitative data were extracted and synthesised to identify current themes. The search retrieved 824 articles, from which 291 duplicates and 31 abstracts were removed. After a double-screening process and full-text review, 106 publications were included in the review. Themes pertaining to AI for capsule endoscopy included active gastrointestinal bleeding, erosions and ulcers, vascular lesions and angiodysplasias, polyps and tumours, inflammatory bowel disease, coeliac disease, hookworms, bowel preparation assessment, and multiple lesion detection. This review provides current insights into the impact of AI on capsule endoscopy as of 2023. AI holds the potential for faster and more precise readings and the prospect of autonomous image analysis. However, careful consideration of diagnostic requirements and potential challenges is crucial. The untapped potential within vision transformer technology hints at further evolution and even greater patient benefit.
14
Rosa B, Cotter J. Capsule endoscopy and panendoscopy: A journey to the future of gastrointestinal endoscopy. World J Gastroenterol 2024; 30:1270-1279. [PMID: 38596501] [PMCID: PMC11000081] [DOI: 10.3748/wjg.v30.i10.1270]
Abstract
In 2000, the small bowel capsule revolutionized the management of patients with small bowel disorders. Currently, the technological development achieved by the new models of double-headed endoscopic capsules, as miniaturized devices to evaluate the small bowel and colon [pan-intestinal capsule endoscopy (PCE)], makes this non-invasive procedure a disruptive concept for the management of patients with digestive disorders. This technology is expected to identify which patients will require conventional invasive endoscopic procedures (colonoscopy or balloon-assisted enteroscopy), based on the lesions detected by the capsule, i.e., those with an indication for biopsies or endoscopic treatment. The use of PCE in patients with inflammatory bowel diseases, namely Crohn's disease, as well as in patients with iron deficiency anaemia and/or overt gastrointestinal (GI) bleeding, after a non-diagnostic upper endoscopy (esophagogastroduodenoscopy), enables an effective, safe and comfortable way to identify patients with relevant lesions, who should undergo subsequent invasive endoscopic procedures. The recent development of magnetically controlled capsule endoscopy to evaluate the upper GI tract, is a further step towards the possibility of an entirely non-invasive assessment of all the segments of the digestive tract, from mouth-to-anus, meeting the expectations of the early developers of capsule endoscopy.
Affiliation(s)
- Bruno Rosa
- Department of Gastroenterology, Hospital da Senhora da Oliveira, Guimarães 4835-044, Portugal
- Life and Health Sciences Research Institute, School of Medicine, University of Minho, Braga 4710-057, Portugal
- ICVS/3B's, PT Government Associate Laboratory, Braga 4710-057, Portugal
- José Cotter
- Department of Gastroenterology, Hospital da Senhora da Oliveira, Guimarães 4835-044, Portugal
- Life and Health Sciences Research Institute, School of Medicine, University of Minho, Braga 4710-057, Portugal
- ICVS/3B's, PT Government Associate Laboratory, Braga 4710-057, Portugal
15
Jiang B, Dorosan M, Leong JWH, Ong MEH, Lam SSW, Ang TL. Development and validation of a deep learning system for detection of small bowel pathologies in capsule endoscopy: a pilot study in a Singapore institution. Singapore Med J 2024; 65:133-140. [PMID: 38527297] [PMCID: PMC11060635] [DOI: 10.4103/singaporemedj.smj-2023-187]
Abstract
INTRODUCTION Deep learning models can assess the quality of images and discriminate among abnormalities in small bowel capsule endoscopy (CE), reducing fatigue and the time needed for diagnosis. They serve as a decision support system, partially automating the diagnosis process by providing probability predictions for abnormalities.
METHODS We demonstrated the use of deep learning models in CE image analysis, specifically by piloting a bowel preparation model (BPM) and an abnormality detection model (ADM) to determine frame-level view quality and the presence of abnormal findings, respectively. We used convolutional neural network-based models pretrained on large-scale open-domain data to extract spatial features of CE images, which were then fed into a dense feed-forward neural network classifier. We combined the open-source Kvasir-Capsule dataset (n = 43) with locally collected CE data (n = 29).
RESULTS Model performance was compared using averaged five-fold and two-fold cross-validation for BPMs and ADMs, respectively. The best BPM model, based on a pretrained ResNet50 architecture, had areas under the receiver operating characteristic and precision-recall curves of 0.969±0.008 and 0.843±0.041, respectively. The best ADM model, also based on ResNet50, had top-1 and top-2 accuracies of 84.03±0.051 and 94.78±0.028, respectively. The models could process approximately 200-250 images per second and showed good discrimination on time-critical abnormalities such as bleeding.
CONCLUSION Our pilot models showed the potential to improve time to diagnosis in CE workflows. To our knowledge, our approach is unique to the Singapore context. The value of our work can be further evaluated in a pragmatic manner that is sensitive to existing clinician workflow and resource constraints.
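A minimal sketch of the described design, a pretrained CNN used as a frozen spatial-feature extractor feeding a dense feed-forward classifier, is given below; the hidden-layer size and class count are assumptions rather than the pilot's exact configuration.

```python
# Illustrative sketch: frozen, ImageNet-pretrained ResNet50 as a feature
# extractor feeding a small dense feed-forward classifier. Layer sizes and the
# number of classes are assumptions, not the pilot study's configuration.
import torch
import torch.nn as nn
from torchvision import models

class FrozenResNetClassifier(nn.Module):
    def __init__(self, n_classes: int = 2, hidden: int = 256):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        backbone.fc = nn.Identity()              # expose 2048-d pooled features
        for p in backbone.parameters():          # freeze: only the head is trained
            p.requires_grad = False
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2048, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        with torch.no_grad():                    # backbone stays fixed
            feats = self.backbone(x)
        return self.head(feats)

model = FrozenResNetClassifier(n_classes=2)      # e.g. adequate vs inadequate view
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
logits = model(torch.randn(8, 3, 224, 224))      # dummy batch of CE frames
print(logits.shape)                              # torch.Size([8, 2])
```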
Affiliation(s)
- Bochao Jiang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Michael Dorosan
- Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Justin Wen Hao Leong
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Marcus Eng Hock Ong
- Health Services and Systems Research, Duke-NUS Medical School, Singapore
- Department of Emergency Medicine, Singapore General Hospital, Singapore
- Sean Shao Wei Lam
- Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
16
Guo X, Xu L, Liu Z, Hao Y, Wang P, Zhu H, Du Y. Automated classification of ulcerative lesions in small intestine using densenet with channel attention and residual dilated blocks. Phys Med Biol 2024; 69:055017. [PMID: 38316034] [DOI: 10.1088/1361-6560/ad2637]
Abstract
Objective. Ulceration of the small intestine, which has a high incidence, includes Crohn's disease (CD), intestinal tuberculosis (ITB), primary small intestinal lymphoma (PSIL), cryptogenic multifocal ulcerous stenosing enteritis (CMUSE), and non-specific ulcer (NSU). However, the ulceration morphology can easily be misdiagnosed through enteroscopy.
Approach. In this study, DRCA-DenseNet169, which is based on DenseNet169 with residual dilated blocks and a channel attention block, is proposed to identify CD, ITB, PSIL, CMUSE, and NSU intelligently. In addition, a novel loss function that incorporates dynamic weights is designed to enhance precision on imbalanced datasets with limited samples. DRCA-DenseNet169 was evaluated using 10,883 enteroscopy images, including 5375 ulcer images and 5508 normal images, obtained from Shanghai Changhai Hospital.
Main results. DRCA-DenseNet169 achieved an overall accuracy of 85.27% ± 0.32%, a weighted precision of 83.99% ± 2.47%, a weighted recall of 84.36% ± 0.88%, and a weighted F1-score of 84.07% ± 2.14%.
Significance. The results demonstrate that DRCA-DenseNet169 has high recognition accuracy and strong robustness in identifying different types of ulcers when providing immediate and preliminary diagnoses.
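To make the two named ingredients concrete, the sketch below shows a squeeze-and-excitation style channel attention block and a cross-entropy loss whose class weights are derived from class frequencies. This is a generic illustration under stated assumptions, not the DRCA-DenseNet169 implementation or its dynamic-weight scheme; the class counts are placeholders.

```python
# Illustrative sketch: (1) a squeeze-and-excitation style channel attention
# block and (2) a cross-entropy loss weighted by inverse class frequency.
# This is a generic illustration, not the DRCA-DenseNet169 implementation.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights feature-map channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pool
        return x * w.view(b, c, 1, 1)            # excite: per-channel scaling

# Class-weighted loss for an imbalanced dataset (counts are placeholders for
# normal mucosa plus the five ulcer aetiologies):
class_counts = torch.tensor([5508., 2100., 1300., 900., 600., 475.])
weights = class_counts.sum() / (len(class_counts) * class_counts)
loss_fn = nn.CrossEntropyLoss(weight=weights)

feats = torch.randn(4, 64, 32, 32)               # dummy feature maps
attended = ChannelAttention(64)(feats)
logits = torch.randn(4, 6, requires_grad=True)   # dummy logits for 6 classes
loss = loss_fn(logits, torch.tensor([0, 1, 2, 5]))
loss.backward()
print(attended.shape, loss.item())
```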
Affiliation(s)
- Xudong Guo
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China
- Lei Xu
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China
- Zhang Liu
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China
- Youguo Hao
- Department of Rehabilitation, Shanghai Putuo People's Hospital, Shanghai 200060, People's Republic of China
- Peng Wang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China
- Huiyun Zhu
- Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, People's Republic of China
- Yiqi Du
- Department of Gastroenterology, Changhai Hospital, Naval Medical University, Shanghai 200433, People's Republic of China
17
Mota J, Almeida MJ, Mendes F, Martins M, Ribeiro T, Afonso J, Cardoso P, Cardoso H, Andrade P, Ferreira J, Mascarenhas M, Macedo G. From Data to Insights: How Is AI Revolutionizing Small-Bowel Endoscopy? Diagnostics (Basel) 2024; 14:291. [PMID: 38337807] [PMCID: PMC10855436] [DOI: 10.3390/diagnostics14030291]
Abstract
The role of capsule endoscopy and enteroscopy in managing various small-bowel pathologies is well established. However, their broader application has been hampered mainly by their lengthy reading times. As a result, there is growing interest in employing artificial intelligence (AI) in these diagnostic and therapeutic procedures, driven by the prospect of overcoming some major limitations and enhancing healthcare efficiency, while maintaining high accuracy levels. In the past two decades, the applicability of AI to gastroenterology has been increasing, mainly because of the field's strong imaging component. Nowadays, a multitude of studies using AI, specifically convolutional neural networks, demonstrate the potential applications of AI to these endoscopic techniques, achieving remarkable results. These findings suggest that there is ample opportunity for AI to expand its presence in the management of gastroenterological diseases and, in the future, catalyze a game-changing transformation in clinical activities. This review provides an overview of the current state of the art of AI in the study of the small bowel, with a particular focus on capsule endoscopy and enteroscopy.
Affiliation(s)
- Joana Mota
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Maria João Almeida
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Francisco Mendes
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Martins
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Tiago Ribeiro
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- João Afonso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Pedro Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Helder Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Patrícia Andrade
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- João Ferreira
- Department of Mechanical Engineering, Faculty of Engineering, University of Porto, R. Dr. Roberto Frias, 4200-465 Porto, Portugal
- Digestive Artificial Intelligence Development, R. Alfredo Allen 455-461, 4200-135 Porto, Portugal
- Miguel Mascarenhas
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- ManopH Gastroenterology Clinic, R. de Sá da Bandeira 752, 4000-432 Porto, Portugal
- Guilherme Macedo
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
Collapse
|
18
|
Zhang RY, Qiang PP, Cai LJ, Li T, Qin Y, Zhang Y, Zhao YQ, Wang JP. Automatic detection of small bowel lesions with different bleeding risks based on deep learning models. World J Gastroenterol 2024; 30:170-183. [PMID: 38312122 PMCID: PMC10835517 DOI: 10.3748/wjg.v30.i2.170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Revised: 12/15/2023] [Accepted: 12/26/2023] [Indexed: 01/12/2024] Open
Abstract
BACKGROUND Deep learning provides an efficient automatic image recognition method for small bowel (SB) capsule endoscopy (CE) that can assist physicians in diagnosis. However, the existing deep learning models present some unresolved challenges. AIM To propose a novel and effective classification and detection model to automatically identify various SB lesions and their bleeding risks, and label the lesions accurately so as to enhance the diagnostic efficiency of physicians and the ability to identify high-risk bleeding groups. METHODS The proposed model is a two-stage method that combines image classification with object detection. First, we utilized the improved ResNet-50 classification model to classify endoscopic images into SB lesion images, normal SB mucosa images, and invalid images. Then, the improved YOLO-V5 detection model was utilized to detect the type of lesion and its risk of bleeding, and the location of the lesion was marked. We constructed training and testing sets and compared model-assisted reading with physician reading. RESULTS The accuracy of the model constructed in this study reached 98.96%, which was higher than the accuracy of other systems using only a single module. The sensitivity, specificity, and accuracy of model-assisted reading across all images were 99.17%, 99.92%, and 99.86%, respectively, significantly higher than those of the endoscopists' diagnoses. The image processing time of the model was 48 ms/image, and the image processing time of the physicians was 0.40 ± 0.24 s/image (P < 0.001). CONCLUSION The deep learning model of image classification combined with object detection exhibits a satisfactory diagnostic effect on a variety of SB lesions and their bleeding risks in CE images, which enhances the diagnostic efficiency of physicians and improves the ability of physicians to identify high-risk bleeding groups.
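A minimal sketch of the two-stage pipeline described above: a ResNet-50 classifier first sorts frames into lesion, normal mucosa, or invalid, and only frames flagged as lesions are handed to a YOLOv5 detector for localization. The checkpoints loaded here are generic pretrained weights and the class names are placeholders, not the authors' fine-tuned models.

```python
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["lesion", "normal_mucosa", "invalid"]      # placeholder labels

# stage 1: frame classifier (generic ImageNet weights, new 3-class head)
classifier = models.resnet50(weights="IMAGENET1K_V2")
classifier.fc = torch.nn.Linear(classifier.fc.in_features, len(CLASSES))
classifier.eval()

# stage 2: lesion detector (generic YOLOv5 weights fetched via torch.hub)
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def read_frame(path: str):
    """Classify one endoscopic frame; run detection only on lesion frames."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        probs = classifier(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
    label = CLASSES[int(probs.argmax())]
    if label != "lesion":
        return label, None
    return label, detector(img).pandas().xyxy[0]      # boxes, scores, classes
```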
Collapse
Affiliation(s)
- Rui-Ya Zhang
- Department of Gastroenterology, The Fifth Clinical Medical College of Shanxi Medical University, Taiyuan 030012, Shanxi Province, China
| | - Peng-Peng Qiang
- School of Computer and Information Technology, Shanxi University, Taiyuan 030006, Shanxi Province, China
| | - Ling-Jun Cai
- Department of Gastroenterology, The Fifth Clinical Medical College of Shanxi Medical University, Taiyuan 030012, Shanxi Province, China
| | - Tao Li
- School of Life Sciences and Technology, Mudanjiang Normal University, Mudanjiang 157011, Heilongjiang Province, China
| | - Yan Qin
- Department of Gastroenterology, The Fifth Clinical Medical College of Shanxi Medical University, Taiyuan 030012, Shanxi Province, China
| | - Yu Zhang
- Department of Gastroenterology, The Fifth Clinical Medical College of Shanxi Medical University, Taiyuan 030012, Shanxi Province, China
| | - Yi-Qing Zhao
- Department of Gastroenterology, The Fifth Clinical Medical College of Shanxi Medical University, Taiyuan 030012, Shanxi Province, China
| | - Jun-Ping Wang
- Department of Gastroenterology, The Fifth Clinical Medical College of Shanxi Medical University, Taiyuan 030012, Shanxi Province, China
| |
Collapse
|
19
|
Zhu Y, Lyu X, Tao X, Wu L, Yin A, Liao F, Hu S, Wang Y, Zhang M, Huang L, Wang J, Zhang C, Gong D, Jiang X, Zhao L, Yu H. A newly developed deep learning-based system for automatic detection and classification of small bowel lesions during double-balloon enteroscopy examination. BMC Gastroenterol 2024; 24:10. [PMID: 38166722 PMCID: PMC10759410 DOI: 10.1186/s12876-023-03067-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/08/2023] [Accepted: 11/28/2023] [Indexed: 01/05/2024] Open
Abstract
BACKGROUND Double-balloon enteroscopy (DBE) is a standard method for diagnosing and treating small bowel disease. However, DBE may yield false-negative results due to oversight or inexperience. We aim to develop a computer-aided diagnostic (CAD) system for the automatic detection and classification of small bowel abnormalities in DBE. DESIGN AND METHODS A total of 5201 images were collected from Renmin Hospital of Wuhan University to construct a detection model for localizing lesions during DBE, and 3021 images were collected to construct a classification model for classifying lesions into four classes: protruding lesion, diverticulum, erosion & ulcer, and angioectasia. The performance of the two models was evaluated using 1318 normal images, 915 abnormal images, and 65 videos from independent patients and then compared with that of 8 endoscopists. Expert consensus served as the reference standard. RESULTS For the image test set, the detection model achieved a sensitivity of 92% (843/915) and an area under the curve (AUC) of 0.947, and the classification model achieved an accuracy of 86%. For the video test set, the accuracy of the system was significantly better than that of the endoscopists (85% vs. 77 ± 6%, p < 0.01); the proposed system was superior to novices and comparable to experts. CONCLUSIONS We established a real-time CAD system for detecting and classifying small bowel lesions in DBE with favourable performance. ENDOANGEL-DBE has the potential to help endoscopists, especially novices, in clinical practice and may reduce the miss rate of small bowel lesions.
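For context, the frame-level figures quoted above (sensitivity and area under the curve) are typically computed from per-image scores as in this small sketch; the arrays below are dummy stand-ins for the real test set.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])            # 1 = abnormal frame
y_score = np.array([0.9, 0.7, 0.2, 0.8, 0.4, 0.1, 0.6, 0.3])
y_pred = (y_score >= 0.5).astype(int)                   # operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity = {sensitivity:.2f}, AUC = {auc:.3f}")
```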
Collapse
Affiliation(s)
- Yijie Zhu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Xiaoguang Lyu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Xiao Tao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Lianlian Wu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Anning Yin
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Fei Liao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Shan Hu
- School of Computer Science, Wuhan University, Wuhan, China
| | - Yang Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Mengjiao Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Li Huang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Junxiao Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Chenxia Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Dexin Gong
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Xiaoda Jiang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Liang Zhao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China.
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China.
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China.
| | - Honggang Yu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China.
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China.
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China.
| |
Collapse
|
20
|
Aoki T, Yamada A, Oka S, Tsuboi M, Kurokawa K, Togo D, Tanino F, Teshima H, Saito H, Suzuki R, Arai J, Abe S, Kondo R, Yamashita A, Tsuboi A, Nakada A, Niikura R, Tsuji Y, Hayakawa Y, Matsuda T, Nakahori M, Tanaka S, Kato Y, Tada T, Fujishiro M. Comparison of clinical utility of deep learning-based systems for small-bowel capsule endoscopy reading. J Gastroenterol Hepatol 2024; 39:157-164. [PMID: 37830487 DOI: 10.1111/jgh.16369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 09/01/2023] [Accepted: 09/17/2023] [Indexed: 10/14/2023]
Abstract
BACKGROUND AND AIM Convolutional neural network (CNN) systems that automatically detect abnormalities from small-bowel capsule endoscopy (SBCE) images are still experimental, and no studies have directly compared the clinical usefulness of different systems. We compared endoscopist readings using an existing and a novel CNN system in a real-world SBCE setting. METHODS Thirty-six complete SBCE videos, including 43 abnormal lesions (18 mucosal breaks, 8 angioectasia, and 17 protruding lesions), were retrospectively prepared. Three reading processes were compared: (A) endoscopist readings without CNN screening, (B) endoscopist readings after an existing CNN screening, and (C) endoscopist readings after a novel CNN screening. RESULTS The mean number of small-bowel images was 14 747 per patient. Among these images, existing and novel CNN systems automatically captured 24.3% and 9.4% of the images, respectively. In this process, both systems extracted all 43 abnormal lesions. Next, we focused on the clinical usefulness. The detection rates of abnormalities by trainee endoscopists were not significantly different across the three processes: A, 77%; B, 67%; and C, 79%. The mean reading time of the trainees was the shortest during process C (10.1 min per patient), followed by processes B (23.1 min per patient) and A (33.6 min per patient). The mean psychological stress score while reading videos (scale, 1-5) was the lowest in process C (1.8) but was not significantly different between processes B (2.8) and A (3.2). CONCLUSIONS Our novel CNN system significantly reduced endoscopist reading time and psychological stress while maintaining the detectability of abnormalities. CNN performance directly affects clinical utility and should be carefully assessed.
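The screening step being compared can be viewed as a simple filter: the CNN scores every small-bowel frame and only frames above a probability threshold are kept for the endoscopist, so the review workload shrinks to the captured fraction (24.3% vs. 9.4% above). The threshold and the synthetic scores below are illustrative.

```python
from typing import Iterable, List, Tuple

def screen_frames(scored_frames: Iterable[Tuple[str, float]],
                  threshold: float = 0.5) -> List[str]:
    """Keep only frame IDs whose abnormality score reaches the threshold."""
    return [frame_id for frame_id, score in scored_frames if score >= threshold]

# e.g. ~14 747 frames per patient, of which a stricter model keeps roughly 9%
scored = [(f"frame_{i}", 0.94 if i % 11 == 0 else 0.05) for i in range(14747)]
kept = screen_frames(scored, threshold=0.5)
print(f"{len(kept)} of {len(scored)} frames kept for human review")
```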
Collapse
Affiliation(s)
- Tomonori Aoki
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Division of Next-Generation Endoscopic Computer Vision, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Atsuo Yamada
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Shiro Oka
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Mayo Tsuboi
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Ken Kurokawa
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Daichi Togo
- Department of Gastroenterology, Sendai Kousei Hospital, Sendai, Japan
| | - Fumiaki Tanino
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Hajime Teshima
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Hiroaki Saito
- Department of Gastroenterology, Sendai Kousei Hospital, Sendai, Japan
| | - Ryuta Suzuki
- Department of Gastroenterology, Sendai Kousei Hospital, Sendai, Japan
| | - Junya Arai
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Sohei Abe
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Ryo Kondo
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Aya Yamashita
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Akiyoshi Tsuboi
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Ayako Nakada
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Ryota Niikura
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Department of Gastroenterological Endoscopy, Tokyo Medical University, Tokyo, Japan
| | - Yosuke Tsuji
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Division of Next-Generation Endoscopic Computer Vision, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Yoku Hayakawa
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Tomoki Matsuda
- Department of Gastroenterology, Sendai Kousei Hospital, Sendai, Japan
| | - Masato Nakahori
- Department of Gastroenterology, Sendai Kousei Hospital, Sendai, Japan
| | - Shinji Tanaka
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | | | - Tomohiro Tada
- AI Medical Service Inc, Tokyo, Japan
- Department of Surgical Oncology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
| | - Mitsuhiro Fujishiro
- Department of Gastroenterology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| |
Collapse
|
21
|
Ahn JC, Shah VH. Artificial intelligence in gastroenterology and hepatology. ARTIFICIAL INTELLIGENCE IN CLINICAL PRACTICE 2024:443-464. [DOI: 10.1016/b978-0-443-15688-5.00016-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
|
22
|
Bordbar M, Helfroush MS, Danyali H, Ejtehadi F. Wireless capsule endoscopy multiclass classification using three-dimensional deep convolutional neural network model. Biomed Eng Online 2023; 22:124. [PMID: 38098015 PMCID: PMC10722702 DOI: 10.1186/s12938-023-01186-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Accepted: 11/29/2023] [Indexed: 12/17/2023] Open
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is a patient-friendly and non-invasive technology that scans the whole of the gastrointestinal tract, including difficult-to-access regions like the small bowel. A major drawback of this technology is that the visual inspection of the large number of video frames produced during each examination makes the physician's diagnostic process tedious and prone to error. Several computer-aided diagnosis (CAD) systems, such as deep network models, have been developed for the automatic recognition of abnormalities in WCE frames. Nevertheless, most of these studies have only focused on spatial information within individual WCE frames, missing the crucial temporal data within consecutive frames. METHODS In this article, an automatic multiclass classification system based on a three-dimensional deep convolutional neural network (3D-CNN) is proposed, which utilizes spatiotemporal information to facilitate the WCE diagnosis process. The 3D-CNN model is fed with a series of sequential WCE frames, in contrast to the two-dimensional (2D) model, which treats frames as independent. Moreover, the proposed 3D deep model is compared with several pre-trained networks. The proposed models are trained and evaluated on WCE videos from 29 subjects (14,691 frames before augmentation). The performance advantages of 3D-CNN over 2D-CNN and pre-trained networks are verified in terms of sensitivity, specificity, and accuracy. RESULTS 3D-CNN outperforms the 2D technique in all evaluation metrics (sensitivity: 98.92 vs. 98.05, specificity: 99.50 vs. 86.94, accuracy: 99.20 vs. 92.60). CONCLUSION A novel 3D-CNN model for lesion detection in WCE frames is proposed in this study. The results indicate the superior performance of 3D-CNN over 2D-CNN and some well-known pre-trained classifier networks. The proposed 3D-CNN model uses the rich temporal information in adjacent frames as well as spatial data to develop an accurate and efficient model.
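A minimal sketch of the spatiotemporal idea: a 3D convolutional network takes a short clip of consecutive WCE frames, adding a time (depth) dimension that a 2D model lacks. The layer sizes and clip length are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                     # halves time, height and width
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):                     # clip: (N, 3, T, H, W)
        return self.classifier(self.features(clip).flatten(1))

model = Tiny3DCNN()
clip = torch.randn(2, 3, 8, 112, 112)            # 2 clips of 8 consecutive frames
print(model(clip).shape)                          # torch.Size([2, 3])
```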
Collapse
Affiliation(s)
- Mehrdokht Bordbar
- Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
| | | | - Habibollah Danyali
- Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
| | - Fardad Ejtehadi
- Department of Internal Medicine, Gastroenterohepatology Research Center, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
| |
Collapse
|
23
|
Mascarenhas M, Martins M, Afonso J, Ribeiro T, Cardoso P, Mendes F, Andrade P, Cardoso H, Ferreira J, Macedo G. The Future of Minimally Invasive Capsule Panendoscopy: Robotic Precision, Wireless Imaging and AI-Driven Insights. Cancers (Basel) 2023; 15:5861. [PMID: 38136403 PMCID: PMC10742312 DOI: 10.3390/cancers15245861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Revised: 12/04/2023] [Accepted: 12/13/2023] [Indexed: 12/24/2023] Open
Abstract
In the early 2000s, the introduction of single-camera wireless capsule endoscopy (CE) redefined small bowel study. Progress continued with the development of double-camera devices, first for the colon and rectum, and then, for panenteric assessment. Advancements continued with magnetic capsule endoscopy (MCE), particularly when assisted by a robotic arm, designed to enhance gastric evaluation. Indeed, as CE provides full visualization of the entire gastrointestinal (GI) tract, a minimally invasive capsule panendoscopy (CPE) could be a feasible alternative, despite its time-consuming nature and learning curve, assuming appropriate bowel cleansing has been carried out. Recent progress in artificial intelligence (AI), particularly in the development of convolutional neural networks (CNN) for CE auxiliary reading (detecting and diagnosing), may provide the missing link in fulfilling the goal of establishing the use of panendoscopy, although prospective studies are still needed to validate these models in actual clinical scenarios. Recent CE advancements will be discussed, focusing on the current evidence on CNN developments, and their real-life implementation potential and associated ethical challenges.
Collapse
Affiliation(s)
- Miguel Mascarenhas
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
| | - Miguel Martins
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
| | - João Afonso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
| | - Tiago Ribeiro
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
| | - Pedro Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
| | - Francisco Mendes
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
| | - Patrícia Andrade
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
| | - Helder Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
| | - João Ferreira
- Department of Mechanic Engineering, Faculty of Engineering, University of Porto, 4200-065 Porto, Portugal;
- DigestAID—Digestive Artificial Intelligence Development, 455/461, 4200-135 Porto, Portugal
| | - Guilherme Macedo
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (M.M.); (J.A.); (T.R.); (P.C.); (F.M.); (P.A.); (H.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
- Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
| |
Collapse
|
24
|
Brzeski A, Dziubich T, Krawczyk H. Visual Features for Improving Endoscopic Bleeding Detection Using Convolutional Neural Networks. SENSORS (BASEL, SWITZERLAND) 2023; 23:9717. [PMID: 38139563 PMCID: PMC10748269 DOI: 10.3390/s23249717] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Revised: 11/19/2023] [Accepted: 12/04/2023] [Indexed: 12/24/2023]
Abstract
The presented paper investigates the problem of endoscopic bleeding detection in endoscopic videos in the form of a binary image classification task. A set of definitions of high-level visual features of endoscopic bleeding is introduced, which incorporates domain knowledge from the field. The high-level features are coupled with respective feature descriptors, enabling automatic capture of the features using image processing methods. Each of the proposed feature descriptors outputs a feature activation map in the form of a grayscale image. Acquired feature maps can be appended in a straightforward way to the original color channels of the input image and passed to the input of a convolutional neural network during the training and inference steps. An experimental evaluation is conducted to compare the classification ROC AUC of feature-extended convolutional neural network models with baseline models using regular color image inputs. The advantage of feature-extended models is demonstrated for the Resnet and VGG convolutional neural network architectures.
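The input-extension idea can be sketched as follows: a descriptor produces a grayscale activation map that is stacked onto the RGB channels, and the network's first convolution is widened to accept the extra channel. The toy "redness" descriptor below is a simplified stand-in for the paper's domain-specific bleeding features.

```python
import torch
import torch.nn as nn
from torchvision import models

def redness_map(rgb: torch.Tensor) -> torch.Tensor:
    """Toy descriptor: how much red dominates green and blue, scaled to [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return (r - 0.5 * (g + b)).clamp(0, 1).unsqueeze(1)     # (N, 1, H, W)

model = models.vgg16(weights=None)
model.features[0] = nn.Conv2d(4, 64, kernel_size=3, padding=1)   # RGB + feature map

rgb = torch.rand(2, 3, 224, 224)
x = torch.cat([rgb, redness_map(rgb)], dim=1)                    # (N, 4, H, W)
print(model(x).shape)                                            # torch.Size([2, 1000])
```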
Collapse
Affiliation(s)
- Adam Brzeski
- Faculty of Electronics, Telecommunications and Informatics, Gdańsk University of Technology, 80-233 Gdańsk, Poland; (T.D.); (H.K.)
| | | | | |
Collapse
|
25
|
Sumioka A, Tsuboi A, Oka S, Kato Y, Matsubara Y, Hirata I, Takigawa H, Yuge R, Shimamoto F, Tada T, Tanaka S. Disease surveillance evaluation of primary small-bowel follicular lymphoma using capsule endoscopy images based on a deep convolutional neural network (with video). Gastrointest Endosc 2023; 98:968-976.e3. [PMID: 37482106 DOI: 10.1016/j.gie.2023.07.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 07/01/2023] [Accepted: 07/09/2023] [Indexed: 07/25/2023]
Abstract
BACKGROUND AND AIMS Capsule endoscopy (CE) is useful in evaluating disease surveillance for primary small-bowel follicular lymphoma (FL), but some cases are difficult to evaluate objectively. This study evaluated the usefulness of a deep convolutional neural network (CNN) system using CE images for disease surveillance of primary small-bowel FL. METHODS We enrolled 26 consecutive patients with primary small-bowel FL diagnosed between January 2011 and January 2021 who underwent CE before and after a watch-and-wait strategy or chemotherapy. Disease surveillance by the CNN system was evaluated by the percentage of FL-detected images among all CE images of the small-bowel mucosa. RESULTS Eighteen cases (69%) were managed with a watch-and-wait approach, and 8 cases (31%) were treated with chemotherapy. Among the 18 cases managed with the watch-and-wait approach, the outcome of lesion evaluation by the CNN system was almost the same in 13 cases (72%), aggravation in 4 (22%), and improvement in 1 (6%). Among the 8 cases treated with chemotherapy, the outcome of lesion evaluation by the CNN system was improvement in 5 cases (63%), almost the same in 2 (25%), and aggravation in 1 (12%). The physician and CNN system reported similar results regarding disease surveillance evaluation in 23 of 26 cases (88%), whereas a discrepancy between the 2 was found in the remaining 3 cases (12%), attributed to poor small-bowel cleansing level. CONCLUSIONS Disease surveillance evaluation of primary small-bowel FL using CE images by the developed CNN system was useful under the condition of excellent small-bowel cleansing level.
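The surveillance measure used here, the percentage of FL-detected images among all small-bowel CE images before and after management, reduces to a simple per-study calculation; the 5% band used below to call two studies "almost the same" is an illustrative assumption, not the paper's rule.

```python
from typing import List

def fl_fraction(frame_flags: List[bool]) -> float:
    """Fraction of small-bowel frames the CNN flags as follicular lymphoma."""
    return sum(frame_flags) / len(frame_flags)

def surveillance_outcome(before: List[bool], after: List[bool],
                         tolerance: float = 0.05) -> str:
    delta = fl_fraction(after) - fl_fraction(before)
    if abs(delta) <= tolerance:
        return "almost the same"
    return "aggravation" if delta > 0 else "improvement"

# toy example: 30% of frames flagged before chemotherapy, 10% after
print(surveillance_outcome([True] * 30 + [False] * 70,
                           [True] * 10 + [False] * 90))   # improvement
```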
Collapse
Affiliation(s)
- Akihiko Sumioka
- Department of Gastroenterology, Hiroshima University Hospital, Hiroshima, Japan
| | - Akiyoshi Tsuboi
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Shiro Oka
- Department of Gastroenterology, Hiroshima University Hospital, Hiroshima, Japan
| | | | - Yuka Matsubara
- Department of Gastroenterology, Hiroshima University Hospital, Hiroshima, Japan
| | - Issei Hirata
- Department of Gastroenterology, Hiroshima University Hospital, Hiroshima, Japan
| | - Hidehiko Takigawa
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Ryo Yuge
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Fumio Shimamoto
- Faculty of Health Sciences, Hiroshima Shudo University, Hiroshima, Japan
| | - Tomohiro Tada
- AI Medical Service Inc, Tokyo, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
| | - Shinji Tanaka
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| |
Collapse
|
26
|
Turck D, Dratsch T, Schröder L, Lorenz F, Dinter J, Bürger M, Schiffmann L, Kasper P, Allo G, Goeser T, Chon SH, Nierhoff D. A convolutional neural network for bleeding detection in capsule endoscopy using real clinical data. MINIM INVASIV THER 2023; 32:335-340. [PMID: 37640056 DOI: 10.1080/13645706.2023.2250445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Accepted: 08/14/2023] [Indexed: 08/31/2023]
Abstract
BACKGROUND The goal of the present study was to develop a convolutional neural network for the detection of bleeding in capsule endoscopy videos using realistic clinical data from a single centre. METHODS Capsule endoscopy videos from all 133 patients (79 male, 54 female; mean age = 53.73 years, SD = 26.13) who underwent capsule endoscopy at our institution between January 2014 and August 2018 were included. All videos were screened for pathology by two independent capsule experts, and confirmed findings were checked again by a third capsule expert. From these videos, 125 pathological findings (individual episodes of bleeding spanning a total of 5696 images) and 103 non-pathological findings (sections of normal mucosal tissue without pathologies spanning a total of 7420 images) were used to develop and validate a neural network (Inception V3) using transfer learning. RESULTS The overall accuracy of the model for the detection of bleeding was 90.6% [95%CI: 89.4%-91.7%], with a sensitivity of 89.4% [95%CI: 87.6%-91.2%] and a specificity of 91.7% [95%CI: 90.1%-93.2%]. CONCLUSION Our results show that neural networks can detect bleeding in capsule endoscopy videos under realistic, clinical conditions with an accuracy of 90.6%, potentially reducing reading time per capsule and helping to improve diagnostic accuracy.
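A transfer-learning sketch of the approach described: an ImageNet-pretrained Inception V3 with its final layer replaced for the binary bleeding versus non-bleeding task. The frozen backbone and hyperparameters are illustrative assumptions, not the authors' training recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(weights="IMAGENET1K_V1")
for p in model.parameters():               # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)     # bleeding vs. normal mucosa
model.AuxLogits = None                            # drop the auxiliary training head
model.aux_logits = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.eval()                                      # inference on one (random) frame
with torch.no_grad():
    frame = torch.rand(1, 3, 299, 299)            # Inception V3 expects 299 x 299
    probs = model(frame).softmax(dim=1)
print(probs)
```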
Collapse
Affiliation(s)
- Dorothee Turck
- Department of Medicine, University of Cologne, Cologne, Germany
| | - Thomas Dratsch
- Institute of Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
| | - Lorenz Schröder
- Department of Medicine, University of Cologne, Cologne, Germany
| | - Florian Lorenz
- Department of Gastroenterology and Hepatology, University Hospital Cologne, Cologne, Germany
| | - Johanna Dinter
- Gastroenterologische Schwerpunktpraxis Stähler, Cologne, Germany
| | - Martin Bürger
- Department of Gastroenterology and Hepatology, University Hospital Cologne, Cologne, Germany
| | - Lars Schiffmann
- Department of General, Visceral, Cancer, and Transplant Surgery, University Hospital Cologne, Cologne, Germany
| | - Philipp Kasper
- Department of Gastroenterology and Hepatology, University Hospital Cologne, Cologne, Germany
| | - Gabriel Allo
- Department of Gastroenterology and Hepatology, University Hospital Cologne, Cologne, Germany
| | - Tobias Goeser
- Department of Gastroenterology and Hepatology, University Hospital Cologne, Cologne, Germany
| | - Seung-Hun Chon
- Department of Gastroenterology and Hepatology, University Hospital Cologne, Cologne, Germany
- Department of General, Visceral, Cancer, and Transplant Surgery, University Hospital Cologne, Cologne, Germany
| | - Dirk Nierhoff
- Department of Gastroenterology and Hepatology, University Hospital Cologne, Cologne, Germany
| |
Collapse
|
27
|
Singeap AM, Sfarti C, Minea H, Chiriac S, Cuciureanu T, Nastasa R, Stanciu C, Trifan A. Small Bowel Capsule Endoscopy and Enteroscopy: A Shoulder-to-Shoulder Race. J Clin Med 2023; 12:7328. [PMID: 38068379 PMCID: PMC10707315 DOI: 10.3390/jcm12237328] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2023] [Revised: 11/17/2023] [Accepted: 11/24/2023] [Indexed: 01/11/2025] Open
Abstract
Traditional methods have their limitations when it comes to unraveling the mysteries of the small bowel, an area historically seen as the "black box" of the gastrointestinal tract. This is where capsule endoscopy and enteroscopy have stepped in, offering a remarkable synergy that transcends the sum of their individual capabilities. From their introduction, small bowel capsule endoscopy and device-assisted enteroscopy have consistently evolved and improved, both on their own and interdependently. Each technique's history may be told as a success story, and their interaction has revolutionized the approach to the small bowel. Both have advantages that could be ideally combined into a perfect technique: safe, non-invasive, and capable of examining the entire small bowel, taking biopsies, and applying therapeutical interventions. Until the realization of this perfect tool becomes a reality, the key for an optimal approach lies in the right selection of exploration method. In this article, we embark on a journey through the intertwined development of capsule endoscopy and enteroscopy, exploring the origins, technological advancements, clinical applications, and evolving inquiries that have continually reshaped the landscape of small bowel imaging.
Collapse
Affiliation(s)
- Ana-Maria Singeap
- Department of Gastroenterology, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (A.-M.S.); (C.S.); (S.C.); (T.C.); (R.N.); (C.S.); (A.T.)
- Institute of Gastroenterology and Hepatology, “St. Spiridon” University Hospital, 700111 Iasi, Romania
| | - Catalin Sfarti
- Department of Gastroenterology, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (A.-M.S.); (C.S.); (S.C.); (T.C.); (R.N.); (C.S.); (A.T.)
- Institute of Gastroenterology and Hepatology, “St. Spiridon” University Hospital, 700111 Iasi, Romania
| | - Horia Minea
- Department of Gastroenterology, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (A.-M.S.); (C.S.); (S.C.); (T.C.); (R.N.); (C.S.); (A.T.)
- Institute of Gastroenterology and Hepatology, “St. Spiridon” University Hospital, 700111 Iasi, Romania
| | - Stefan Chiriac
- Department of Gastroenterology, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (A.-M.S.); (C.S.); (S.C.); (T.C.); (R.N.); (C.S.); (A.T.)
- Institute of Gastroenterology and Hepatology, “St. Spiridon” University Hospital, 700111 Iasi, Romania
| | - Tudor Cuciureanu
- Department of Gastroenterology, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (A.-M.S.); (C.S.); (S.C.); (T.C.); (R.N.); (C.S.); (A.T.)
- Institute of Gastroenterology and Hepatology, “St. Spiridon” University Hospital, 700111 Iasi, Romania
| | - Robert Nastasa
- Department of Gastroenterology, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (A.-M.S.); (C.S.); (S.C.); (T.C.); (R.N.); (C.S.); (A.T.)
- Institute of Gastroenterology and Hepatology, “St. Spiridon” University Hospital, 700111 Iasi, Romania
| | - Carol Stanciu
- Department of Gastroenterology, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (A.-M.S.); (C.S.); (S.C.); (T.C.); (R.N.); (C.S.); (A.T.)
- Institute of Gastroenterology and Hepatology, “St. Spiridon” University Hospital, 700111 Iasi, Romania
| | - Anca Trifan
- Department of Gastroenterology, Faculty of Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (A.-M.S.); (C.S.); (S.C.); (T.C.); (R.N.); (C.S.); (A.T.)
- Institute of Gastroenterology and Hepatology, “St. Spiridon” University Hospital, 700111 Iasi, Romania
| |
Collapse
|
28
|
Popa SL, Stancu B, Ismaiel A, Turtoi DC, Brata VD, Duse TA, Bolchis R, Padureanu AM, Dita MO, Bashimov A, Incze V, Pinna E, Grad S, Pop AV, Dumitrascu DI, Munteanu MA, Surdea-Blaga T, Mihaileanu FV. Enteroscopy versus Video Capsule Endoscopy for Automatic Diagnosis of Small Bowel Disorders-A Comparative Analysis of Artificial Intelligence Applications. Biomedicines 2023; 11:2991. [PMID: 38001991 PMCID: PMC10669430 DOI: 10.3390/biomedicines11112991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Revised: 10/26/2023] [Accepted: 11/05/2023] [Indexed: 11/26/2023] Open
Abstract
BACKGROUND Small bowel disorders present a diagnostic challenge due to the limited accessibility of the small intestine. Accurate diagnosis is made with the aid of specific procedures, like capsule endoscopy or double-balloon enteroscopy, but these procedures are not routinely requested and are not widely accessible. This study aims to assess and compare the diagnostic effectiveness of enteroscopy and video capsule endoscopy (VCE) when combined with artificial intelligence (AI) algorithms for the automatic detection of small bowel diseases. MATERIALS AND METHODS We performed an extensive literature search for relevant studies about AI applications capable of identifying small bowel disorders using enteroscopy and VCE, published between 2012 and 2023, employing PubMed, Cochrane Library, Google Scholar, Embase, Scopus, and ClinicalTrials.gov databases. RESULTS Our search identified a total of 27 publications, of which 21 studies assessed the application of VCE, while the remaining 6 articles analyzed the enteroscopy procedure. The included studies showed that both investigations, when enhanced by AI, exhibited a high level of diagnostic accuracy. Enteroscopy demonstrated superior diagnostic capability, providing precise identification of small bowel pathologies with the added advantage of enabling immediate therapeutic intervention. The choice between these modalities should be guided by clinical context, patient preference, and resource availability. Studies with larger sample sizes and prospective designs are warranted to validate these results and optimize the integration of AI in small bowel diagnostics. CONCLUSIONS The current analysis demonstrates that both enteroscopy and VCE with AI augmentation exhibit comparable diagnostic performance for the automatic detection of small bowel disorders.
Collapse
Affiliation(s)
- Stefan Lucian Popa
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (S.L.P.); (A.I.); (S.G.); (A.-V.P.); (T.S.-B.)
| | - Bogdan Stancu
- 2nd Surgical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania;
| | - Abdulrahman Ismaiel
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (S.L.P.); (A.I.); (S.G.); (A.-V.P.); (T.S.-B.)
| | - Daria Claudia Turtoi
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Vlad Dumitru Brata
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Traian Adrian Duse
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Roxana Bolchis
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Alexandru Marius Padureanu
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Miruna Oana Dita
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Atamyrat Bashimov
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Victor Incze
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Edoardo Pinna
- Faculty of Medicine, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (D.C.T.); (V.D.B.); (T.A.D.); (R.B.); (A.M.P.); (M.O.D.); (A.B.); (V.I.); (E.P.)
| | - Simona Grad
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (S.L.P.); (A.I.); (S.G.); (A.-V.P.); (T.S.-B.)
| | - Andrei-Vasile Pop
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (S.L.P.); (A.I.); (S.G.); (A.-V.P.); (T.S.-B.)
| | - Dinu Iuliu Dumitrascu
- Department of Anatomy, “Iuliu Hatieganu“ University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania;
| | - Mihai Alexandru Munteanu
- Department of Medical Disciplines, Faculty of Medicine and Pharmacy, University of Oradea, 410087 Oradea, Romania;
| | - Teodora Surdea-Blaga
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania; (S.L.P.); (A.I.); (S.G.); (A.-V.P.); (T.S.-B.)
| | - Florin Vasile Mihaileanu
- 2nd Surgical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania;
| |
Collapse
|
29
|
O'Hara FJ, Mc Namara D. Capsule endoscopy with artificial intelligence-assisted technology: Real-world usage of a validated AI model for capsule image review. Endosc Int Open 2023; 11:E970-E975. [PMID: 37828977 PMCID: PMC10567136 DOI: 10.1055/a-2161-1816] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Accepted: 08/25/2023] [Indexed: 10/14/2023] Open
Abstract
Background and study aims Capsule endoscopy is a time-consuming procedure with a significant error rate. Artificial intelligence (AI) can potentially reduce reading time significantly by reducing the number of images that need human review. An AI-enabled reading model for the OMOM small-bowel capsule has recently been trained and validated for capsule endoscopy video review. This study aimed to assess its performance in a real-world setting in comparison with standard reading methods. Patients and methods In this single-center retrospective study, 40 patient studies performed using the OMOM capsule were analyzed first with standard reading methods and later using AI-assisted reading. Reading time, pathology identified, intestinal landmark identification, and bowel preparation assessment (Brotz Score) were compared. Results Overall diagnoses agreed 100% between the two reading methods. In a per-lesion analysis, 1293 images of significant lesions were identified combining standard and AI-assisted reading methods. AI-assisted reading captured 1268 (98.1%, 95% CI 97.15-98.7) of these findings while standard reading mode captured 1114 (86.2%, 95% confidence interval 84.2-87.9), P < 0.001. Mean reading time went from 29.7 minutes with standard reading to 2.3 minutes with AI-assisted reading (P < 0.001), for an average time saving of 27.4 minutes per study. Time of first cecal image showed a wide discrepancy between AI and standard reading of 99.2 minutes (r = 0.085, P = 0.68). Bowel cleansing evaluation agreed in 97.4% (r = 0.805, P < 0.001). Conclusions AI-assisted reading has shown significant time savings without reducing sensitivity in this study. Limitations remain in the evaluation of other indicators.
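The per-lesion capture rates above are reported with 95% confidence intervals; the Wilson score interval is one common way to obtain such intervals and gives figures very close to those quoted (1268 of 1293 findings captured). This is a generic statistical sketch, not the authors' code.

```python
from math import sqrt
from typing import Tuple

def wilson_ci(successes: int, n: int, z: float = 1.96) -> Tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    centre = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - margin) / denom, (centre + margin) / denom

lo, hi = wilson_ci(1268, 1293)
print(f"capture rate 95% CI: {lo:.1%} - {hi:.1%}")   # roughly 97.2% - 98.7%
```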
Collapse
Affiliation(s)
- Fintan John O'Hara
- Gastroenterology, Tallaght University Hospital, Dublin, Ireland
- Medicine, Trinity College Dublin School of Medicine, Dublin, Ireland
| | - Deirdre Mc Namara
- Gastroenterology, Tallaght University Hospital, Dublin, Ireland
- Medicine, Trinity College Dublin School of Medicine, Dublin, Ireland
| |
Collapse
|
30
|
Musha A, Hasnat R, Mamun AA, Ping EP, Ghosh T. Computer-Aided Bleeding Detection Algorithms for Capsule Endoscopy: A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2023; 23:7170. [PMID: 37631707 PMCID: PMC10459126 DOI: 10.3390/s23167170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2023] [Revised: 08/08/2023] [Accepted: 08/10/2023] [Indexed: 08/27/2023]
Abstract
Capsule endoscopy (CE) is a widely used medical imaging tool for the diagnosis of gastrointestinal tract abnormalities like bleeding. However, CE captures a huge number of image frames, constituting a time-consuming and tedious task for medical experts to manually inspect. To address this issue, researchers have focused on computer-aided bleeding detection systems to automatically identify bleeding in real time. This paper presents a systematic review of the available state-of-the-art computer-aided bleeding detection algorithms for capsule endoscopy. The review was carried out by searching five different repositories (Scopus, PubMed, IEEE Xplore, ACM Digital Library, and ScienceDirect) for all original publications on computer-aided bleeding detection published between 2001 and 2023. The Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) methodology was used to perform the review, and 147 full texts of scientific papers were reviewed. The contributions of this paper are: (I) a taxonomy for computer-aided bleeding detection algorithms for capsule endoscopy is identified; (II) the available state-of-the-art computer-aided bleeding detection algorithms, including various color spaces (RGB, HSV, etc.), feature extraction techniques, and classifiers, are discussed; and (III) the most effective algorithms for practical use are identified. Finally, the paper is concluded by providing future direction for computer-aided bleeding detection research.
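Many of the reviewed classical pipelines follow the same pattern: convert the frame to another colour space, summarize its colour distribution as a feature vector, and feed the vector to a conventional classifier. The sketch below uses assumed choices (HSV histograms and an SVM) purely for illustration.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def hsv_histogram(bgr_frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel HSV histograms as a 48-dim colour descriptor."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    feats = [np.histogram(hsv[..., c], bins=bins, range=(0, 256),
                          density=True)[0] for c in range(3)]
    return np.concatenate(feats)

# toy training data: random "frames" with labels 1 = bleeding, 0 = normal
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(20, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=20)

X = np.stack([hsv_histogram(f) for f in frames])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```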
Collapse
Affiliation(s)
- Ahmmad Musha
- Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh; (A.M.); (R.H.)
| | - Rehnuma Hasnat
- Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh; (A.M.); (R.H.)
| | - Abdullah Al Mamun
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia;
| | - Em Poh Ping
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia;
| | - Tonmoy Ghosh
- Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA;
| |
Collapse
|
31
|
Ukashi O, Soffer S, Klang E, Eliakim R, Ben-Horin S, Kopylov U. Capsule Endoscopy in Inflammatory Bowel Disease: Panenteric Capsule Endoscopy and Application of Artificial Intelligence. Gut Liver 2023; 17:516-528. [PMID: 37305947 PMCID: PMC10352070 DOI: 10.5009/gnl220507] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 01/23/2023] [Accepted: 01/30/2023] [Indexed: 06/13/2023] Open
Abstract
Video capsule endoscopy (VCE) of the small bowel has been proven to accurately diagnose small-bowel inflammation and to predict future clinical flares among patients with Crohn's disease (CD). In 2017, the panenteric capsule (PillCam Crohn's system) was first introduced, enabling a reliable evaluation of the whole small and large intestines. The ability to visualize both parts of the gastrointestinal tract in a single, feasible procedure holds significant promise for patients with CD, enabling determination of disease extent and severity and potentially optimizing disease management. In recent years, applications of machine learning for VCE have been well studied, demonstrating impressive performance and high accuracy for the detection of various gastrointestinal pathologies, among them inflammatory bowel disease lesions. The use of artificial neural network models has been proven to accurately detect, classify, and grade CD lesions and to shorten the VCE reading time, resulting in a less tedious process with the potential to minimize missed diagnoses and better predict clinical outcomes. Nevertheless, prospective and real-world studies are essential to precisely examine artificial intelligence applications in real-life inflammatory bowel disease practice.
Collapse
Affiliation(s)
- Offir Ukashi
- Gastroenterology Institute, Sheba Medical Center, Tel Hashomer, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Department of Internal Medicine A, Sheba Medical Center, Tel Hashomer, Israel
| | - Shelly Soffer
- Deep Vision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Internal Medicine B, Assuta Medical Center, Ashdod, Israel
- Faculty of Health Sciences, Ben Gurion University of the Negev, Beer Sheva, Israel
| | - Eyal Klang
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Deep Vision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel
| | - Rami Eliakim
- Gastroenterology Institute, Sheba Medical Center, Tel Hashomer, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Shomron Ben-Horin
- Gastroenterology Institute, Sheba Medical Center, Tel Hashomer, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Uri Kopylov
- Gastroenterology Institute, Sheba Medical Center, Tel Hashomer, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
| |
Collapse
|
32
|
Chung J, Oh DJ, Park J, Kim SH, Lim YJ. Automatic Classification of GI Organs in Wireless Capsule Endoscopy Using a No-Code Platform-Based Deep Learning Model. Diagnostics (Basel) 2023; 13:diagnostics13081389. [PMID: 37189489 DOI: 10.3390/diagnostics13081389] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2023] [Revised: 04/03/2023] [Accepted: 04/10/2023] [Indexed: 05/17/2023] Open
Abstract
The first step in reading a capsule endoscopy (CE) study is determining the gastrointestinal (GI) organ. Because CE produces many inappropriate and repetitive images, automatic organ classification cannot be applied directly to CE videos. In this study, we developed a deep learning algorithm to classify GI organs (the esophagus, stomach, small bowel, and colon) using a no-code platform, applied it to CE videos, and proposed a novel method to visualize the transitional area of each GI organ. We used training data (37,307 images from 24 CE videos) and test data (39,781 images from 30 CE videos) for model development. This model was validated using 100 CE videos that included "normal", "blood", "inflamed", "vascular", and "polypoid" lesions. Our model achieved an overall accuracy of 0.98, precision of 0.89, recall of 0.97, and F1 score of 0.92. When we validated this model on the 100 CE videos, it produced average accuracies for the esophagus, stomach, small bowel, and colon of 0.98, 0.96, 0.87, and 0.87, respectively. Increasing the cut-off of the AI score improved most performance metrics in each organ (p < 0.05). To locate a transitional area, we visualized the predicted results over time; setting the cut-off of the AI score to 99.9% resulted in a more intuitive presentation than the baseline. In conclusion, the GI organ classification AI model demonstrated high accuracy on CE videos. The transitional area could be located more easily by adjusting the cut-off of the AI score and visualizing the results over time.
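The cut-off-and-visualize idea described in this abstract can be illustrated with a short sketch. The following Python example (synthetic data and an illustrative threshold; not the authors' code) marks frames whose top softmax score falls below a high AI-score cut-off as uncertain, then scans the confident predictions over time to locate an organ transition such as the pylorus, and plots the per-frame predictions so the transitional area is visible at a glance.

```python
# Minimal sketch, assuming per-frame softmax outputs from an organ classifier.
import numpy as np
import matplotlib.pyplot as plt

ORGANS = ["esophagus", "stomach", "small_bowel", "colon"]

def predicted_organ_per_frame(probs, cutoff=0.999):
    """probs: (n_frames, 4) softmax outputs. Frames whose top score is below
    the cut-off are marked -1 (uncertain), so only confident frames define
    organ transitions."""
    best = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= cutoff
    return np.where(confident, best, -1)

def first_transition(frame_labels, from_idx, to_idx):
    """First frame index where the confident prediction switches from one
    organ to the next, or None if no such switch exists."""
    for i in range(1, len(frame_labels)):
        if frame_labels[i - 1] == from_idx and frame_labels[i] == to_idx:
            return i
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic video: 1000 frames passing from the stomach into the small bowel.
    probs = np.zeros((1000, 4))
    probs[:400, 1] = 1.0      # stomach
    probs[400:, 2] = 1.0      # small bowel
    probs += rng.uniform(0, 1e-4, probs.shape)
    probs /= probs.sum(axis=1, keepdims=True)

    labels = predicted_organ_per_frame(probs)
    print("stomach -> small bowel transition near frame:",
          first_transition(labels, ORGANS.index("stomach"), ORGANS.index("small_bowel")))

    # Plotting predictions over time makes the transitional area easy to spot.
    plt.plot(labels, ".", markersize=2)
    plt.yticks(range(-1, 4), ["uncertain"] + ORGANS)
    plt.xlabel("frame index")
    plt.ylabel("predicted organ")
    plt.savefig("organ_timeline.png")
```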
Collapse
Affiliation(s)
- Joowon Chung
- Department of Internal Medicine, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul 01830, Republic of Korea
| | - Dong Jun Oh
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
| | - Junseok Park
- Department of Internal Medicine, Digestive Disease Center, Institute for Digestive Research, Soonchunhyang University College of Medicine, Seoul 04401, Republic of Korea
| | - Su Hwan Kim
- Department of Internal Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul 07061, Republic of Korea
| | - Yun Jeong Lim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
| |
Collapse
|
33
|
Dhaliwal J, Walsh CM. Artificial Intelligence in Pediatric Endoscopy: Current Status and Future Applications. Gastrointest Endosc Clin N Am 2023; 33:291-308. [PMID: 36948747 DOI: 10.1016/j.giec.2022.12.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/24/2023]
Abstract
The application of artificial intelligence (AI) has great promise for improving pediatric endoscopy. The majority of preclinical studies have been undertaken in adults, with the greatest progress being made in the context of colorectal cancer screening and surveillance. This development has only been possible with advances in deep learning, like the convolutional neural network model, which has enabled real-time detection of pathology. Comparatively, the majority of deep learning systems developed in inflammatory bowel disease have focused on predicting disease severity and were developed using still images rather than videos. The application of AI to pediatric endoscopy is in its infancy, thus providing an opportunity to develop clinically meaningful and fair systems that do not perpetuate societal biases. In this review, we provide an overview of AI, summarize the advances of AI in endoscopy, and describe its potential application to pediatric endoscopic practice and education.
Collapse
Affiliation(s)
- Jasbir Dhaliwal
- Division of Pediatric Gastroenterology, Hepatology and Nutrition, Cincinnati Children's Hospital Medical Center, University of Cincinnati, OH, USA.
| | - Catharine M Walsh
- Division of Gastroenterology, Hepatology, and Nutrition, and the SickKids Research and Learning Institutes, The Hospital for Sick Children, Toronto, ON, Canada; Department of Paediatrics and The Wilson Centre, University of Toronto, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
34
|
Kim HJ, Sritandi W, Xiong Z, Ho JS. Bioelectronic devices for light-based diagnostics and therapies. BIOPHYSICS REVIEWS 2023; 4:011304. [PMID: 38505817 PMCID: PMC10903427 DOI: 10.1063/5.0102811] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Accepted: 12/28/2022] [Indexed: 03/21/2024]
Abstract
Light has broad applications in medicine as a tool for diagnosis and therapy. Recent advances in optical technology and bioelectronics have opened opportunities for wearable, ingestible, and implantable devices that use light to continuously monitor health and precisely treat diseases. In this review, we discuss recent progress in the development and application of light-based bioelectronic devices. We summarize the key features of the technologies underlying these devices, including light sources, light detectors, energy storage and harvesting, and wireless power and communications. We investigate the current state of bioelectronic devices for the continuous measurement of health and on-demand delivery of therapy. Finally, we highlight major challenges and opportunities associated with light-based bioelectronic devices and discuss their promise for enabling digital forms of health care.
Collapse
Affiliation(s)
| | - Weni Sritandi
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
| | | | - John S. Ho
- Author to whom correspondence should be addressed:
| |
Collapse
|
35
|
Mascarenhas Saraiva M, Afonso J, Ribeiro T, Ferreira J, Cardoso H, Andrade P, Gonçalves R, Cardoso P, Parente M, Jorge R, Macedo G. Artificial intelligence and capsule endoscopy: automatic detection of enteric protruding lesions using a convolutional neural network. REVISTA ESPANOLA DE ENFERMEDADES DIGESTIVAS 2023; 115:75-79. [PMID: 34517717 DOI: 10.17235/reed.2021.7979/2021] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
BACKGROUND AND AIMS capsule endoscopy (CE) revolutionized the study of the small intestine. Nevertheless, reviewing CE images is time-consuming and prone to error. Artificial intelligence algorithms, particularly convolutional neural networks (CNN), are expected to overcome these drawbacks. Protruding lesions of the small intestine exhibit enormous morphological diversity in CE images. This study aimed to develop a CNN-based algorithm for the automatic detection of small-bowel protruding lesions. METHODS a CNN was developed using a pool of CE images containing protruding lesions or normal mucosa from 1,229 patients. A training dataset was used for the development of the model. The performance of the network was evaluated using an independent dataset, by calculating its sensitivity, specificity, accuracy, and positive and negative predictive values. RESULTS a total of 18,625 CE images (2,830 showing protruding lesions and 15,795 normal mucosa) were included. Training and validation datasets were built with an 80 %/20 % distribution, respectively. After optimizing the architecture of the network, our model automatically detected small-bowel protruding lesions with an accuracy of 92.5 %. The CNN had a sensitivity and specificity of 96.8 % and 96.5 %, respectively. The CNN analyzed the validation dataset in 53 seconds, at a rate of approximately 70 frames per second. CONCLUSIONS we developed an accurate CNN for the automatic detection of enteric protruding lesions with a wide range of morphologies. The development of these tools may enhance the diagnostic efficiency of CE.
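For readers who want to reproduce this style of evaluation, the reported metrics (sensitivity, specificity, accuracy, positive and negative predictive values) all derive from a single 2 x 2 confusion matrix. A minimal sketch with scikit-learn, using simulated labels rather than the study's data:

```python
# Minimal sketch: binary CE-classifier metrics from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

def binary_ce_metrics(y_true, y_pred):
    """y_true/y_pred: 1 = protruding lesion, 0 = normal mucosa."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, 1000)                               # simulated ground truth
    y_pred = np.where(rng.random(1000) < 0.95, y_true, 1 - y_true)  # ~95% correct predictions
    print(binary_ce_metrics(y_true, y_pred))
```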
Collapse
Affiliation(s)
| | - João Afonso
- Gastroenterology, Centro Hospitalar Universitário de São João
| | - Tiago Ribeiro
- Gastroenterology, Centro Hospitalar Universitário de São João
| | | | - Hélder Cardoso
- Gastroenterology, Centro Hospitalar Universitário de São João
| | | | | | - Pedro Cardoso
- Gastroenterology, Centro Hospitalar Universitário de São João
| | | | | | | |
Collapse
|
36
|
Ding Z, Shi H, Zhang H, Zhang H, Tian S, Zhang K, Cai S, Ming F, Xie X, Liu J, Lin R. Artificial intelligence-based diagnosis of abnormalities in small-bowel capsule endoscopy. Endoscopy 2023; 55:44-51. [PMID: 35931065 DOI: 10.1055/a-1881-4209] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
BACKGROUND: Further development of deep learning-based artificial intelligence (AI) technology to automatically diagnose multiple abnormalities in small-bowel capsule endoscopy (SBCE) videos is necessary. We aimed to develop an AI model, to compare its diagnostic performance with doctors of different experience levels, and to further evaluate its auxiliary role for doctors in diagnosing multiple abnormalities in SBCE videos. METHODS: The AI model was trained using 280 426 images from 2565 patients, and the diagnostic performance was validated in 240 videos. RESULTS: The sensitivity of the AI model for red spots, inflammation, blood content, vascular lesions, protruding lesions, parasites, diverticulum, and normal variants was 97.8 %, 96.1 %, 96.1 %, 94.7 %, 95.6 %, 100 %, 100 %, and 96.4 %, respectively. The specificity was 86.0 %, 75.3 %, 87.3 %, 77.8 %, 67.7 %, 97.5 %, 91.2 %, and 81.3 %, respectively. The accuracy was 95.0 %, 88.8 %, 89.2 %, 79.2 %, 80.8 %, 97.5 %, 91.3 %, and 93.3 %, respectively. For junior doctors, the assistance of the AI model increased the overall accuracy from 85.5 % to 97.9 % (P < 0.001, Bonferroni corrected), comparable to that of experts (96.6 %, P > 0.0125, Bonferroni corrected). CONCLUSIONS: This well-trained AI diagnostic model automatically diagnosed multiple small-bowel abnormalities simultaneously based on video-level recognition, with potential as an excellent auxiliary system for less-experienced endoscopists.
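The abstract emphasizes video-level recognition, i.e., turning many per-frame predictions into one diagnosis per video. A minimal sketch of one common aggregation rule follows; the finding names, thresholds, and counts are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: aggregate per-frame multi-label scores into a video-level diagnosis.
import numpy as np

FINDINGS = ["red_spot", "inflammation", "blood", "vascular",
            "protruding", "parasite", "diverticulum", "normal_variant"]

def video_level_diagnosis(frame_probs, prob_threshold=0.5, min_positive_frames=5):
    """frame_probs: (n_frames, n_findings) sigmoid outputs for one video.
    A finding is reported when at least `min_positive_frames` frames score
    above `prob_threshold`; both values are illustrative, not from the paper."""
    positive_counts = (frame_probs >= prob_threshold).sum(axis=0)
    return [name for name, count in zip(FINDINGS, positive_counts)
            if count >= min_positive_frames]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    probs = rng.random((20000, len(FINDINGS))) * 0.3        # mostly negative frames
    probs[1200:1260, FINDINGS.index("vascular")] = 0.9      # a run of vascular-lesion frames
    print(video_level_diagnosis(probs))                     # -> ['vascular']
```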
Collapse
Affiliation(s)
- Zhen Ding
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Huiying Shi
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Hang Zhang
- Ankon Technologies (Wuhan) Co., Ltd, Wuhan, China
| | - Hao Zhang
- Ankon Technologies (Wuhan) Co., Ltd, Wuhan, China
| | - Shuxin Tian
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Department of Gastroenterology, the First Affiliated Hospital of Shihezi University School of Medicine, Shihezi 832008, China
| | - Kun Zhang
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Sicheng Cai
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Fanhua Ming
- Ankon Technologies (Wuhan) Co., Ltd, Wuhan, China
| | - Xiaoping Xie
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Jun Liu
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Rong Lin
- Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
37
|
Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022; 3:117-141. [DOI: 10.35712/aig.v3.i5.117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Revised: 11/21/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Collapse
Affiliation(s)
- Jonathan S Galati
- Department of Medicine, NYU Langone Health, New York, NY 10016, United States
| | - Robert J Duve
- Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
| | - Matthew O'Mara
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
| | - Seth A Gross
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
| |
Collapse
|
38
|
Deep Learning Multi-Domain Model Provides Accurate Detection and Grading of Mucosal Ulcers in Different Capsule Endoscopy Types. Diagnostics (Basel) 2022; 12:diagnostics12102490. [PMID: 36292178 PMCID: PMC9600959 DOI: 10.3390/diagnostics12102490] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 10/06/2022] [Accepted: 10/08/2022] [Indexed: 11/17/2022] Open
Abstract
Background and Aims: The aim of our study was to create an accurate patient-level combined algorithm for the identification of ulcers on CE images from two different capsules. Methods: We retrospectively collected CE images from the PillCam SB3 capsule and the PillCam Crohn's capsule. ML algorithms were trained to classify small bowel CE images into either normal or ulcerated mucosa: a separate model for each capsule type, a cross-domain model (training the model on one capsule type and testing on the other), and a combined model. Results: The dataset included 33,100 CE images: 20,621 PillCam SB3 images and 12,479 PillCam Crohn's images, of which 3582 were colonic images. There were 15,684 normal mucosa images and 17,416 ulcerated mucosa images. While the separate model for each capsule type achieved excellent accuracy (average AUC 0.95 and 0.98, respectively), the cross-domain model achieved a wide range of accuracies (0.569–0.88) with an AUC of 0.93. The combined model achieved the best results, with an average AUC of 0.99 and a mean patient-level accuracy of 0.974. Conclusions: A combined model for two different capsules provided high and consistent diagnostic accuracy. Creating a holistic AI model for automated capsule reading is an essential part of the refinement required in ML models on the way to adapting them to clinical practice.
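The single-domain versus cross-domain versus combined comparison can be framed as evaluating one classifier's scores separately on each capsule type. A small sketch with scikit-learn's roc_auc_score on simulated scores (domain labels, effect sizes, and noise levels are assumptions, not the study's data):

```python
# Minimal sketch: overall and per-capsule-type AUC for a pooled ulcer classifier.
import numpy as np
from sklearn.metrics import roc_auc_score

def per_domain_auc(y_true, y_score, domain):
    """domain: array of strings such as 'SB3' or 'Crohn' for each test image."""
    out = {"combined": roc_auc_score(y_true, y_score)}
    for d in np.unique(domain):
        mask = domain == d
        out[d] = roc_auc_score(y_true[mask], y_score[mask])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 4000
    domain = np.where(rng.random(n) < 0.6, "SB3", "Crohn")
    y_true = rng.integers(0, 2, n)                            # 1 = ulcerated mucosa
    # Scores loosely correlated with the label, slightly noisier for one domain.
    noise = np.where(domain == "Crohn", 0.35, 0.25)
    y_score = np.clip(y_true * 0.6 + rng.normal(0.2, noise, n), 0, 1)
    print(per_domain_auc(y_true, y_score, domain))
```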
Collapse
|
39
|
Kodaka Y, Futagami S, Watanabe Y, Shichijo S, Uedo N, Aono H, Kirita K, Kato Y, Ueki N, Agawa S, Yamawaki H, Iwakiri K, Tada T. Determination of gastric atrophy with artificial intelligence compared to the assessments of the modified Kyoto and OLGA classifications. JGH Open 2022; 6:704-710. [PMID: 36262541 PMCID: PMC9575326 DOI: 10.1002/jgh3.12810] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Revised: 07/19/2022] [Accepted: 08/09/2022] [Indexed: 12/03/2022]
Abstract
BACKGROUND AND AIM Gastric atrophy is a precancerous lesion. We aimed to clarify whether gastric atrophy determined by artificial intelligence (AI) correlates with the diagnosis made by expert endoscopists using several endoscopic classifications, with the Operative Link on Gastritis Assessment (OLGA) classification based on histological findings, and with genotypes associated with gastric atrophy and cancer. METHODS Two hundred seventy Helicobacter pylori-positive outpatients were enrolled. All patients' endoscopy data were retrospectively evaluated based on the Kimura-Takemoto, modified Kyoto, and OLGA classifications. The AI-trained neural network generated a continuous number between 0 and 1 for gastric atrophy. Nucleotide variance of some candidate genes was confirmed or selectively assessed for a variety of genotypes, including the COX-2 1195, IL-1β 511, and mPGES-1 genotypes. RESULTS There were significant correlations between determinations of gastric atrophy by AI and by expert endoscopists using not only the Kimura-Takemoto classification (P < 0.001), but also the modified Kyoto classification (P = 0.046 and P < 0.001 for the two criteria). Moreover, there was a significant correlation with the OLGA classification (P = 0.009). Nucleotide variance of the COX-2, IL-1β, and mPGES-1 genes was not significantly associated with gastric atrophy determined by AI. The area under the curve values of the combinations of AI and the modified Kyoto classification (0.746) and of AI and the OLGA classification (0.675) were higher than that of AI alone (0.665). CONCLUSION Combinations of AI with the modified Kyoto classification or with the OLGA classification could be useful tools for evaluating gastric atrophy, a risk factor for gastric cancer, in patients with H. pylori infection.
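The comparison of the AI score alone against the AI score combined with an endoscopic classification can be reproduced in outline by feeding both scores into a simple logistic regression and comparing AUCs. A sketch on synthetic data (variable names, grades, and effect sizes are assumptions, not the study's values):

```python
# Minimal sketch: AUC of a continuous AI atrophy score alone vs combined with
# an ordinal endoscopic grade via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 270
atrophy = rng.integers(0, 2, n)                                        # 1 = confirmed atrophy
ai_score = np.clip(0.45 * atrophy + rng.normal(0.3, 0.2, n), 0, 1)     # AI output in [0, 1]
kyoto_grade = np.clip(atrophy + rng.integers(-1, 2, n), 0, 2)          # ordinal endoscopic grade

auc_ai = roc_auc_score(atrophy, ai_score)

X = np.column_stack([ai_score, kyoto_grade])
combined = LogisticRegression().fit(X, atrophy).predict_proba(X)[:, 1]
auc_combined = roc_auc_score(atrophy, combined)

print(f"AUC, AI alone: {auc_ai:.3f};  AI + endoscopic grade: {auc_combined:.3f}")
```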
Collapse
Affiliation(s)
| | - Seiji Futagami
- Division of GastroenterologyNippon Medical SchoolTokyoJapan
| | | | - Satoki Shichijo
- Department of Gastrointestinal OncologyOsaka International Cancer InstituteOsakaJapan
| | - Noriya Uedo
- Department of Gastrointestinal OncologyOsaka International Cancer InstituteOsakaJapan
| | | | - Kumiko Kirita
- Division of GastroenterologyNippon Medical SchoolTokyoJapan
| | | | - Nobue Ueki
- Division of GastroenterologyNippon Medical SchoolTokyoJapan
| | - Shuhei Agawa
- Division of GastroenterologyNippon Medical SchoolTokyoJapan
| | | | | | - Tomohiro Tada
- AI Medical Service Inc.TokyoJapan
- Tada Tomohiro Institute of Gastroenterology and ProctologySaitamaJapan
| |
Collapse
|
40
|
Hanscom M, Cave DR. Endoscopic capsule robot-based diagnosis, navigation and localization in the gastrointestinal tract. Front Robot AI 2022; 9:896028. [PMID: 36119725 PMCID: PMC9479458 DOI: 10.3389/frobt.2022.896028] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 08/08/2022] [Indexed: 01/10/2023] Open
Abstract
The proliferation of video capsule endoscopy (VCE) would not have been possible without continued technological improvements in imaging and locomotion. Advancements in imaging include both software and hardware improvements but perhaps the greatest software advancement in imaging comes in the form of artificial intelligence (AI). Current research into AI in VCE includes the diagnosis of tumors, gastrointestinal bleeding, Crohn’s disease, and celiac disease. Other advancements have focused on the improvement of both camera technologies and alternative forms of imaging. Comparatively, advancements in locomotion have just started to approach clinical use and include onboard controlled locomotion, which involves miniaturizing a motor to incorporate into the video capsule, and externally controlled locomotion, which involves using an outside power source to maneuver the capsule itself. Advancements in locomotion hold promise to remove one of the major disadvantages of VCE, namely, its inability to obtain targeted diagnoses. Active capsule control could in turn unlock additional diagnostic and therapeutic potential, such as the ability to obtain targeted tissue biopsies or drug delivery. With both advancements in imaging and locomotion has come a corresponding need to be better able to process generated images and localize the capsule’s position within the gastrointestinal tract. Technological advancements in computation performance have led to improvements in image compression and transfer, as well as advancements in sensor detection and alternative methods of capsule localization. Together, these advancements have led to the expansion of VCE across a number of indications, including the evaluation of esophageal and colon pathologies including esophagitis, esophageal varices, Crohn’s disease, and polyps after incomplete colonoscopy. Current research has also suggested a role for VCE in acute gastrointestinal bleeding throughout the gastrointestinal tract, as well as in urgent settings such as the emergency department, and in resource-constrained settings, such as during the COVID-19 pandemic. VCE has solidified its role in the evaluation of small bowel bleeding and earned an important place in the practicing gastroenterologist’s armamentarium. In the next few decades, further improvements in imaging and locomotion promise to open up even more clinical roles for the video capsule as a tool for non-invasive diagnosis of lumenal gastrointestinal pathologies.
Collapse
|
41
|
Adjei PE, Lonseko ZM, Du W, Zhang H, Rao N. Examining the effect of synthetic data augmentation in polyp detection and segmentation. Int J Comput Assist Radiol Surg 2022; 17:1289-1302. [PMID: 35678960 DOI: 10.1007/s11548-022-02651-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Accepted: 04/21/2022] [Indexed: 12/17/2022]
Abstract
PURPOSE As with several medical image analysis tasks based on deep learning, gastrointestinal image analysis is plagued with data scarcity, privacy concerns and an insufficient number of pathology samples. This study examines the generation and utility of synthetic samples of colonoscopy images with polyps for data augmentation. METHODS We modify and train a pix2pix model to generate synthetic colonoscopy samples with polyps to augment the original dataset. Subsequently, we create a variety of datasets by varying the quantity of synthetic samples and traditional augmentation samples, to train a U-Net network and Faster R-CNN model for segmentation and detection of polyps, respectively. We compare the performance of the models when trained with the resulting datasets in terms of F1 score, intersection over union, precision and recall. Further, we compare the performances of the models with unseen polyp datasets to assess their generalization ability. RESULTS The average F1 coefficient and intersection over union are improved with an increasing number of synthetic samples in U-Net over all test datasets. The performance of the Faster R-CNN model is also improved in terms of polyp detection, while decreasing the false-negative rate. Further, the experimental results for polyp detection outperform similar studies in the literature on the ETIS-LaribPolypDB dataset. CONCLUSION By varying the quantity of synthetic and traditional augmentation, there is the potential to control the sensitivity of deep learning models in polyp segmentation and detection. Further, GAN-based augmentation is a viable option for improving the performance of models for polyp segmentation and detection.
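The core experimental manipulation here is the proportion of GAN-generated images mixed into the training set. A minimal sketch of that dataset-mixing step follows; the directory layout and file naming are assumptions, and the pix2pix generation itself is out of scope.

```python
# Minimal sketch: build training lists with a controllable share of synthetic images.
import random
from pathlib import Path

def build_training_list(real_dir, synthetic_dir, synthetic_fraction, seed=0):
    """synthetic_fraction: desired share of synthetic images in the final list
    (0.0 = real images only). Directory layout and naming are illustrative."""
    rng = random.Random(seed)
    real = sorted(Path(real_dir).glob("*.png")) if Path(real_dir).is_dir() else []
    synthetic = sorted(Path(synthetic_dir).glob("*.png")) if Path(synthetic_dir).is_dir() else []
    if synthetic_fraction <= 0 or not synthetic:
        return real
    # Number of synthetic images needed so that they make up the requested share.
    n_synth = int(len(real) * synthetic_fraction / (1.0 - synthetic_fraction))
    picked = rng.sample(synthetic, k=min(n_synth, len(synthetic)))
    mixed = real + picked
    rng.shuffle(mixed)
    return mixed

# Example: three mixes used to probe how sensitive U-Net / Faster R-CNN training
# is to the amount of pix2pix-generated data (paths are placeholders).
for frac in (0.0, 0.25, 0.5):
    files = build_training_list("data/real_polyps", "data/pix2pix_polyps", frac)
    print(f"synthetic fraction {frac:.2f}: {len(files)} training images")
```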
Collapse
Affiliation(s)
- Prince Ebenezer Adjei
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China.,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China.,Department of Computer Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
| | - Zenebe Markos Lonseko
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China.,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Wenju Du
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China.,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Han Zhang
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China.,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Nini Rao
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China. .,School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China.
| |
Collapse
|
42
|
Alemanni LV, Fabbri S, Rondonotti E, Mussetto A. Recent developments in small bowel endoscopy: the "black box" is now open! Clin Endosc 2022; 55:473-479. [PMID: 35831981 PMCID: PMC9329645 DOI: 10.5946/ce.2022.113] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 04/28/2022] [Accepted: 05/11/2022] [Indexed: 12/09/2022] Open
Abstract
Over the last few years, capsule endoscopy has become established as a fundamental device in the practicing gastroenterologist's toolbox. Its use in diagnostic algorithms for suspected small-bowel bleeding, Crohn's disease, and small-bowel tumors has been endorsed by several guidelines. The advent of double-balloon enteroscopy significantly expanded the therapeutic possibilities, and the release of multiple devices (single-balloon enteroscopy and spiral enteroscopy) has aimed at improving the performance of small-bowel enteroscopy. Recently, several important innovations have appeared on the small-bowel endoscopy scene, driving its evolution further. Artificial intelligence in capsule endoscopy should increase diagnostic accuracy and reading efficiency, and the introduction of motorized spiral enteroscopy into clinical practice could also improve the therapeutic yield. This review focuses on the most recent studies of artificial-intelligence-assisted capsule endoscopy and motorized spiral enteroscopy.
Collapse
Affiliation(s)
- Luigina Vanessa Alemanni
- Gastroenterology Unit, Santa Maria delle Croci Hospital, Ravenna, Italy
- Department of Medical and Surgical Sciences, S. Orsola-Malpighi Hospital, Bologna, Italy
| | - Stefano Fabbri
- Gastroenterology Unit, Santa Maria delle Croci Hospital, Ravenna, Italy
- Department of Medical and Surgical Sciences, S. Orsola-Malpighi Hospital, Bologna, Italy
| | | | | |
Collapse
|
43
|
Hosoe N, Horie T, Tojo A, Sakurai H, Hayashi Y, Limpias Kamiya KJL, Sujino T, Takabayashi K, Ogata H, Kanai T. Development of a Deep-Learning Algorithm for Small Bowel-Lesion Detection and a Study of the Improvement in the False-Positive Rate. J Clin Med 2022; 11:3682. [PMID: 35806969 PMCID: PMC9267395 DOI: 10.3390/jcm11133682] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 02/04/2023] Open
Abstract
Deep learning has recently been gaining attention as a promising technology to improve the identification of lesions, and deep-learning algorithms for lesion detection have been actively developed for small-bowel capsule endoscopy (SBCE). We developed an algorithm for the detection of abnormal findings by training a deep-learning model (a convolutional neural network) on the SBCE imaging data of 30 cases with abnormal findings. To enable the detection of a wide variety of abnormal findings, the training data were balanced to include all major findings identified in SBCE (bleeding, angiodysplasia, ulceration, and neoplastic lesions). To reduce the false-positive rate, "findings that may be responsible for hemorrhage" and "findings that may require therapeutic intervention" were extracted from the images of abnormal findings and added to the training dataset. For the performance evaluation, the sensitivity was calculated using 271 detectable findings in 35 cases, and the specificity was calculated using 68,494 images of non-abnormal findings. The sensitivity and specificity were 93.4% and 97.8%, respectively. The average number of images detected by the algorithm as showing abnormal findings was 7514. We developed an image-reading support system using deep learning for SBCE and obtained good detection performance.
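Balancing the training data across finding types, as described above, is commonly done with a weighted sampler so that rarer findings are seen about as often as common ones during training. A minimal PyTorch sketch with toy tensors (inverse-frequency weighting is a generic recipe, not the authors' pipeline):

```python
# Minimal sketch: class-balanced sampling of an imbalanced SBCE training set.
import torch
from collections import Counter
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy, imbalanced labels: 0 = bleeding, 1 = angiodysplasia, 2 = ulceration, 3 = neoplastic.
labels = torch.tensor([0] * 5000 + [1] * 800 + [2] * 1500 + [3] * 300)
features = torch.zeros(len(labels), 1)          # placeholder stand-in for image tensors

counts = Counter(labels.tolist())
weights = torch.tensor([1.0 / counts[int(l)] for l in labels], dtype=torch.double)
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

loader = DataLoader(TensorDataset(features, labels), batch_size=256, sampler=sampler)
_, batch_labels = next(iter(loader))
print(Counter(batch_labels.tolist()))           # classes now drawn in roughly equal numbers
```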
Collapse
Affiliation(s)
- Naoki Hosoe
- Center for Diagnostic and Therapeutic Endoscopy, Keio University School of Medicine, 35 Shinanomachi, Shinjuku, Tokyo 160-8582, Japan; (T.S.); (K.T.); (H.O.)
| | - Tomofumi Horie
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keio University School of Medicine, Tokyo 160-8582, Japan; (T.H.); (A.T.); (H.S.); (Y.H.); (K.J.-L.L.K.); (T.K.)
| | - Anna Tojo
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keio University School of Medicine, Tokyo 160-8582, Japan; (T.H.); (A.T.); (H.S.); (Y.H.); (K.J.-L.L.K.); (T.K.)
| | - Hinako Sakurai
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keio University School of Medicine, Tokyo 160-8582, Japan; (T.H.); (A.T.); (H.S.); (Y.H.); (K.J.-L.L.K.); (T.K.)
| | - Yukie Hayashi
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keio University School of Medicine, Tokyo 160-8582, Japan; (T.H.); (A.T.); (H.S.); (Y.H.); (K.J.-L.L.K.); (T.K.)
| | - Kenji Jose-Luis Limpias Kamiya
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keio University School of Medicine, Tokyo 160-8582, Japan; (T.H.); (A.T.); (H.S.); (Y.H.); (K.J.-L.L.K.); (T.K.)
| | - Tomohisa Sujino
- Center for Diagnostic and Therapeutic Endoscopy, Keio University School of Medicine, 35 Shinanomachi, Shinjuku, Tokyo 160-8582, Japan; (T.S.); (K.T.); (H.O.)
| | - Kaoru Takabayashi
- Center for Diagnostic and Therapeutic Endoscopy, Keio University School of Medicine, 35 Shinanomachi, Shinjuku, Tokyo 160-8582, Japan; (T.S.); (K.T.); (H.O.)
| | - Haruhiko Ogata
- Center for Diagnostic and Therapeutic Endoscopy, Keio University School of Medicine, 35 Shinanomachi, Shinjuku, Tokyo 160-8582, Japan; (T.S.); (K.T.); (H.O.)
| | - Takanori Kanai
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keio University School of Medicine, Tokyo 160-8582, Japan; (T.H.); (A.T.); (H.S.); (Y.H.); (K.J.-L.L.K.); (T.K.)
| |
Collapse
|
44
|
Performance of a Deep Learning System for Automatic Diagnosis of Protruding Lesions in Colon Capsule Endoscopy. Diagnostics (Basel) 2022; 12:diagnostics12061445. [PMID: 35741255 PMCID: PMC9222144 DOI: 10.3390/diagnostics12061445] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Revised: 06/05/2022] [Accepted: 06/07/2022] [Indexed: 12/18/2022] Open
Abstract
Background: Colon capsule endoscopy (CCE) is an alternative for patients unwilling or with contraindications for conventional colonoscopy. Colorectal cancer screening may benefit greatly from widespread acceptance of a non-invasive tool such as CCE. However, reviewing CCE exams is a time-consuming process, with risk of overlooking important lesions. We aimed to develop an artificial intelligence (AI) algorithm using a convolutional neural network (CNN) architecture for automatic detection of colonic protruding lesions in CCE images. An anonymized database of CCE images collected from a total of 124 patients was used. This database included images of patients with colonic protruding lesions or patients with normal colonic mucosa or with other pathologic findings. A total of 5715 images were extracted for CNN development. Two image datasets were created and used for training and validation of the CNN. The AUROC for detection of protruding lesions was 0.99. The sensitivity, specificity, PPV and NPV were 90.0%, 99.1%, 98.6% and 93.2%, respectively. The overall accuracy of the network was 95.3%. The developed deep learning algorithm accurately detected protruding lesions in CCE images. The introduction of AI technology to CCE may increase its diagnostic accuracy and acceptance for screening of colorectal neoplasia.
Collapse
|
45
|
Hirata I, Tsuboi A, Oka S, Sumioka A, Iio S, Hiyama Y, Kotachi T, Yuge R, Hayashi R, Urabe Y, Tanaka S. Diagnostic yield of proximal jejunal lesions with third-generation capsule endoscopy. DEN OPEN 2022; 3:e134. [PMID: 35898830 PMCID: PMC9307735 DOI: 10.1002/deo2.134] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 05/13/2022] [Accepted: 05/15/2022] [Indexed: 12/09/2022]
Abstract
Objectives Capsule endoscopy (CE) has been shown to have poor diagnostic performance when the capsule passes quickly through the small bowel, especially the proximal jejunum. This study aimed to evaluate the diagnostic yield for proximal jejunal lesions with third-generation CE technology. Methods We retrospectively examined 138 consecutive patients, 76 (55.0%) of whom were men. The patients' median age was 70 years, and proximal jejunal lesions were detected by CE and/or double-balloon endoscopy at Hiroshima University Hospital between January 2011 and June 2021. We analyzed the diagnostic accuracy of CE for proximal jejunal lesions and compared the characteristics of discrepancies between CE and double-balloon endoscopy for the PillCam SB2 (SB2) and PillCam SB3 (SB3) capsules. Results SB2 and SB3 were used in 48 (35%) and 90 (65%) patients, respectively. There was no difference in baseline characteristics between these groups. Small-bowel lesions in the proximal jejunum comprised 75 tumors (54%), 50 vascular lesions (36%), and 13 inflammatory lesions (9%). The diagnostic rate was significantly higher in the SB3 group than in the SB2 group for tumors (91% vs. 72%, p < 0.05) and vascular lesions (97% vs. 69%, p < 0.01). For vascular lesions in particular, the diagnostic rate for angioectasia improved in the SB3 group (100%) compared with that in the SB2 group (69%). Conclusions SB3 use improved the detection of proximal jejunal tumors and vascular lesions compared with SB2 use.
Collapse
Affiliation(s)
- Issei Hirata
- Department of Gastroenterology and MetabolismHiroshima University HospitalHiroshimaJapan
| | - Akiyoshi Tsuboi
- Department of EndoscopyHiroshima University HospitalHiroshimaJapan
| | - Shiro Oka
- Department of Gastroenterology and MetabolismHiroshima University HospitalHiroshimaJapan
| | - Akihiko Sumioka
- Department of Gastroenterology and MetabolismHiroshima University HospitalHiroshimaJapan
| | - Sumio Iio
- Department of Gastroenterology and MetabolismHiroshima University HospitalHiroshimaJapan
| | - Yuichi Hiyama
- Department of Center for Integrated Medical ResearchHiroshima University HospitalHiroshimaJapan
| | - Takahiro Kotachi
- Department of EndoscopyHiroshima University HospitalHiroshimaJapan
| | - Ryo Yuge
- Department of EndoscopyHiroshima University HospitalHiroshimaJapan
| | - Ryohei Hayashi
- Department of EndoscopyHiroshima University HospitalHiroshimaJapan
| | - Yuji Urabe
- Division of Regeneration and Medicine Center for Translational and Clinical ResearchHiroshima University HospitalHiroshimaJapan
| | - Shinji Tanaka
- Department of EndoscopyHiroshima University HospitalHiroshimaJapan
| |
Collapse
|
46
|
Lafeuille P, Lambin T, Yzet C, Latif EH, Ordoqui N, Rivory J, Pioche M. Flat colorectal sessile serrated polyp: an example of what artificial intelligence does not easily detect. Endoscopy 2022; 54:520-521. [PMID: 33979854 DOI: 10.1055/a-1486-6220] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Affiliation(s)
- Pierre Lafeuille
- Department of Endoscopy and Hepato-Gastroenterology, Edouard Herriot Hospital, Lyon, France
| | - Thomas Lambin
- Department of Endoscopy and Hepato-Gastroenterology, Edouard Herriot Hospital, Lyon, France
| | - Clara Yzet
- Department of Endoscopy and Hepato-Gastroenterology, Amiens University Hospital, Amiens, France
| | | | | | - Jérôme Rivory
- Department of Endoscopy and Hepato-Gastroenterology, Edouard Herriot Hospital, Lyon, France
| | - Mathieu Pioche
- Department of Endoscopy and Hepato-Gastroenterology, Edouard Herriot Hospital, Lyon, France
| |
Collapse
|
47
|
Chetcuti Zammit S, Sidhu R. Artificial intelligence within the small bowel: are we lagging behind? Curr Opin Gastroenterol 2022; 38:307-317. [PMID: 35645023 DOI: 10.1097/mog.0000000000000827] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
PURPOSE OF REVIEW The use of artificial intelligence in small bowel capsule endoscopy is expanding. This review focuses on the use of artificial intelligence for small bowel pathology compared with human readers, and on developments to date. RECENT FINDINGS The diagnosis and management of small bowel disease have been revolutionized by the advent of capsule endoscopy. Reading capsule endoscopy videos, however, is time-consuming, with an average reading time of 40 minutes. Furthermore, the fatigued human eye may miss subtle lesions, including indiscreet mucosal bulges. In recent years, artificial intelligence has made significant progress in the field of medicine, including gastroenterology. Machine learning has enabled feature extraction and, in combination with deep neural networks, image classification has now materialized for routine endoscopy. SUMMARY Artificial intelligence is built into the Navicam-Ankon capsule endoscopy reading system. This development will no doubt expand to other capsule endoscopy platforms and to capsules used to visualize other parts of the gastrointestinal tract as standard. This wireless, patient-friendly technique, combined with rapid reading platforms aided by artificial intelligence, will become an attractive and viable option that will change how patients are investigated in the future.
Collapse
Affiliation(s)
| | - Reena Sidhu
- Academic Department of Gastroenterology, Royal Hallamshire Hospital
- Academic Unit of Gastroenterology, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
| |
Collapse
|
48
|
Abstract
Artificial intelligence (AI) is rapidly developing in various medical fields, and there is an increase in research performed in the field of gastrointestinal (GI) endoscopy. In particular, the advent of convolutional neural network, which is a class of deep learning method, has the potential to revolutionize the field of GI endoscopy, including esophagogastroduodenoscopy (EGD), capsule endoscopy (CE), and colonoscopy. A total of 149 original articles pertaining to AI (27 articles in esophagus, 30 articles in stomach, 29 articles in CE, and 63 articles in colon) were identified in this review. The main focuses of AI in EGD are cancer detection, identifying the depth of cancer invasion, prediction of pathological diagnosis, and prediction of Helicobacter pylori infection. In the field of CE, automated detection of bleeding sites, ulcers, tumors, and various small bowel diseases is being investigated. AI in colonoscopy has advanced with several patient-based prospective studies being conducted on the automated detection and classification of colon polyps. Furthermore, research on inflammatory bowel disease has also been recently reported. Most studies of AI in the field of GI endoscopy are still in the preclinical stages because of the retrospective design using still images. Video-based prospective studies are needed to advance the field. However, AI will continue to develop and be used in daily clinical practice in the near future. In this review, we have highlighted the published literature along with providing current status and insights into the future of AI in GI endoscopy.
Collapse
Affiliation(s)
- Yutaka Okagawa
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan.,Department of Gastroenterology, Tonan Hospital, Sapporo, Japan
| | - Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan.
| | - Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Ichiro Oda
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| |
Collapse
|
49
|
Afonso J, Mascarenhas M, Ribeiro T, Cardoso H, Andrade P, Ferreira JP, Saraiva MM, Macedo G. Deep Learning for Automatic Identification and Characterization of the Bleeding Potential of Enteric Protruding Lesions in Capsule Endoscopy. GASTRO HEP ADVANCES 2022; 1:835-843. [PMID: 39131843 PMCID: PMC11307543 DOI: 10.1016/j.gastha.2022.04.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Accepted: 04/12/2022] [Indexed: 08/13/2024]
Abstract
Background and Aims Capsule endoscopy (CE) revolutionized the study of the small intestine, overcoming the limitations of conventional endoscopy. Nevertheless, reviewing CE images is time-consuming. Convolutional Neural Networks (CNNs) are an artificial intelligence architecture with high performance levels for image analysis. Protruding lesions of the small intestine exhibit enormous morphologic diversity in CE images. We aimed to develop a CNN-based algorithm for automatic detection of varied small-bowel protruding lesions. Methods A CNN was developed using a pool of CE images containing protruding lesions or normal mucosa/other findings. A total of 2565 patients were included. These images were used to train a CNN model with transfer learning. We evaluated the performance of the network by calculating its sensitivity, specificity, accuracy, positive predictive value, and negative predictive value. Results A CNN was developed based on a total of 21,320 CE images. Training and validation data sets comprising 80% and 20% of the total pool of images, respectively, were constructed for development and testing of the network. The algorithm automatically detected small-bowel protruding lesions with an accuracy of 97.1%. Our CNN had a sensitivity, specificity, positive predictive value, and negative predictive value of 95.9%, 97.1%, 83.0%, and 95.7%, respectively. The CNN operated at a rate of approximately 355 frames per second. Conclusion We developed an accurate CNN for automatic detection of enteric protruding lesions with a wide range of morphologies. The development of these tools may enhance the diagnostic efficiency of CE.
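Transfer learning of the kind described here typically means reusing an ImageNet-pretrained backbone and retraining only a new classification head. A generic torchvision sketch follows; the backbone choice and freezing strategy are illustrative, not the study's exact setup.

```python
# Minimal sketch: transfer learning for binary capsule-endoscopy classification.
import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes=2):
    # Reuse an ImageNet-pretrained backbone (weights are downloaded on first use).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for param in model.parameters():            # freeze the pretrained layers
        param.requires_grad = False
    # Replace the final fully connected layer with a trainable 2-class head
    # (e.g., protruding lesion vs normal mucosa/other findings).
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_transfer_model()
dummy_batch = torch.zeros(4, 3, 224, 224)       # four RGB frames resized to 224 x 224
print(model(dummy_batch).shape)                 # torch.Size([4, 2])
```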
Collapse
Affiliation(s)
- João Afonso
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
| | - Miguel Mascarenhas
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
| | - Tiago Ribeiro
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
| | - Hélder Cardoso
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
| | - Patrícia Andrade
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
| | - João P.S. Ferreira
- Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
| | | | - Guilherme Macedo
- Department of Gastroenterology, São João University Hospital, Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Faculty of Medicine of the University of Porto, Porto, Portugal
| |
Collapse
|
50
|
White E, Koulaouzidis A, Patience L, Wenzek H. How a managed service for colon capsule endoscopy works in an overstretched healthcare system. Scand J Gastroenterol 2022; 57:359-363. [PMID: 34854333 DOI: 10.1080/00365521.2021.2006299] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Lower gastrointestinal diagnostics have been facing significant capacity constraints, which the COVID-19 pandemic has exacerbated due to significant reductions in endoscopy procedures. Colon Capsule Endoscopy (CCE) provides a safe, viable solution to offset ongoing demand and could be a valuable tool for the recovery of endoscopy services post-COVID. NHS Scotland has already begun a country-wide rollout of CCE as a managed service, and NHS England have committed to a pilot scheme of 11,000 capsules via hospital-based delivery. Here, we outline a proven method of CCE delivery that ensures the CCE and results are delivered in an efficient, clinically robust manner with high patient acceptability levels through a managed service. Delivering CCE without a managed service is likely to be slower, more costly, and less effective, limiting the many benefits of CCE as an addition to the standard diagnostic pathway for bowel cancer.
Collapse
Affiliation(s)
| | - Anastasios Koulaouzidis
- Department of Social Medicine & Public Health, Pomeranian Medical University, Szczecin, Poland
| | | | - Hagen Wenzek
- CorporateHealth International ApS, Odense, Denmark
| |
Collapse
|