1. Antonelli G, Eelbode T, Elsaman T, Sharma M, Bisschops R, Hassan C. Building Machine Learning Models in Gastrointestinal Endoscopy. Gastrointest Endosc Clin N Am 2025;35:279-290. PMID: 40021229. DOI: 10.1016/j.giec.2024.07.008.
Abstract
The current landscape of machine learning models in GI endoscopy is fraught with considerable variability in methodologies and quality, posing challenges for validation and generalization. To ensure the effective integration of AI in clinical practice, it is crucial to develop and validate models rigorously across diverse and representative datasets. This involves standardizing reference standards, ensuring thorough external validation, using representative patient populations, and incorporating a range of image qualities. Addressing these methodological discrepancies will enhance the reliability and robustness of AI models, thereby facilitating their adoption and improving patient care in GI endoscopy.
Affiliation(s)
- Giulio Antonelli: Gastroenterology and Digestive Endoscopy Unit, Ospedale dei Castelli, Via Nettunense Km 11.5, 00040 Ariccia, Rome, Italy
- Tom Eelbode: Department of Electrical Engineering (ESAT/PSI), Catholic University Leuven, Leuven, Belgium; Medical Imaging Research Center (MIRC), University Hospitals Leuven, UZ Herestraat 49 - box 7003, 3000 Leuven, Belgium
- Touka Elsaman: Department of Biomedical Sciences, Humanitas Research Hospital and University, Via Manzoni 56, Rozzano, Milano 20089, Italy
- Mrigya Sharma: Medical Intern, GMERS Medical College, Vadodara, India
- Raf Bisschops: Department of Electrical Engineering (ESAT/PSI), Catholic University Leuven, Leuven, Belgium; Medical Imaging Research Center (MIRC), University Hospitals Leuven, UZ Herestraat 49 - box 7003, 3000 Leuven, Belgium
- Cesare Hassan: Department of Biomedical Sciences, Humanitas Research Hospital and University, Via Manzoni 56, Rozzano, Milano 20089, Italy; Endoscopy Unit, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
2. Chiang AL, Hong H. The Role of Industry to Grow Clinical Artificial Intelligence Applications in Gastroenterology and Endoscopy. Gastrointest Endosc Clin N Am 2025;35:485-501. PMID: 40021243. DOI: 10.1016/j.giec.2024.12.002.
Abstract
The integration of artificial intelligence (AI) in health care has the potential to enhance diagnostics and disease management. In the field of gastroenterology, AI has shown promise in improving diagnostic accuracy and streamlining clinical workflows. Despite its potential, many AI innovations remain in the research phase, with significant hurdles to overcome in the transition from laboratory research to practical clinical tools. The industry is positioned to be a key player in this transition by fostering tight collaboration with clinicians and researchers, advocating for the responsible application of AI, and committing to the extensive process of medical device development and market introduction.
Affiliation(s)
- Austin L Chiang: Division of Gastroenterology and Hepatology, Thomas Jefferson University Hospital, 132 South 10th Street, Suite 585, Philadelphia, PA 19102, USA
- Ha Hong: Endoscopy Operating Unit, Medtronic Plc, Minneapolis, MN, USA
3. Maan S, Agrawal R, Singh S, Thakkar S. Artificial Intelligence in Endoscopy Quality Measures. Gastrointest Endosc Clin N Am 2025;35:431-444. PMID: 40021239. DOI: 10.1016/j.giec.2024.10.001.
Abstract
Quality of gastrointestinal endoscopy is a major determinant of its effectiveness. Artificial intelligence (AI) has the potential to enhance quality monitoring and improve endoscopy outcomes. This article reviews the current literature on AI algorithms that have been developed for endoscopy quality assessment.
Affiliation(s)
- Soban Maan, Rohit Agrawal, Shailendra Singh, Shyam Thakkar: Division of Gastroenterology & Hepatology, Department of Medicine, West Virginia University, Morgantown, WV, USA
4. Kumar A, Aravind N, Gillani T, Kumar D. Artificial intelligence breakthrough in diagnosis, treatment, and prevention of colorectal cancer – A comprehensive review. Biomed Signal Process Control 2025;101:107205. DOI: 10.1016/j.bspc.2024.107205.
5. Bazerbachi F, Murad F, Kubiliun N, Adams MA, Shahidi N, Visrodia K, Essex E, Raju G, Greenberg C, Day LW, Elmunzer BJ. Video recording in GI endoscopy. VideoGIE 2025;10:67-80. PMID: 40012896. PMCID: PMC11852952. DOI: 10.1016/j.vgie.2024.09.013.
Abstract
The current approach to procedure reporting in endoscopy aims to capture essential findings and interventions but inherently sacrifices the rich detail and nuance of the entire endoscopic experience. Endoscopic video recording (EVR) provides a complete archive of the procedure, extending the utility of the encounter beyond diagnosis and intervention, and potentially adding significant value to the care of the patient and the field in general. This white paper outlines the potential of EVR in clinical care, quality improvement, education, and artificial intelligence-driven innovation, and addresses critical considerations surrounding technology, regulation, ethics, and privacy. As with other medical imaging modalities, growing adoption of EVR is inevitable, and proactive engagement of professional societies and practitioners is essential to harness the full potential of this technology toward improving clinical care, education, and research.
Affiliation(s)
- Fateh Bazerbachi: CentraCare, Interventional Endoscopy Program, St Cloud Hospital, St Cloud, Minnesota, USA; Division of Gastroenterology, Hepatology and Nutrition, University of Minnesota, Minneapolis, Minnesota, USA
- Faris Murad: Illinois Masonic Medical Center, Center for Advanced Care, Chicago, Illinois, USA
- Nisa Kubiliun: Division of Digestive and Liver Diseases, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Megan A Adams: Division of Gastroenterology, University of Michigan Medical School, Ann Arbor, Michigan, USA; Institute for Healthcare Policy and Innovation, Ann Arbor, Michigan, USA
- Neal Shahidi: Division of Gastroenterology, University of British Columbia, Vancouver, British Columbia, Canada
- Kavel Visrodia: Columbia University Irving Medical Center - New York Presbyterian Hospital, New York, New York, USA
- Eden Essex: American Society for GI Endoscopy, Downers Grove, Illinois, USA
- Gottumukkala Raju: Division of Internal Medicine, Department of Gastroenterology, Hepatology and Nutrition, MD Anderson Cancer Center, Houston, Texas, USA
- Caprice Greenberg: Department of Surgery, University of North Carolina, Chapel Hill, North Carolina, USA
- Lukejohn W Day: Division of Gastroenterology, Department of Medicine, University of California San Francisco, San Francisco, California, USA
- B Joseph Elmunzer: Division of Gastroenterology and Hepatology, Medical University of South Carolina, Charleston, South Carolina, USA
6. Antonelli G, Libanio D, De Groof AJ, van der Sommen F, Mascagni P, Sinonquel P, Abdelrahim M, Ahmad O, Berzin T, Bhandari P, Bretthauer M, Coimbra M, Dekker E, Ebigbo A, Eelbode T, Frazzoni L, Gross SA, Ishihara R, Kaminski MF, Messmann H, Mori Y, Padoy N, Parasa S, Pilonis ND, Renna F, Repici A, Simsek C, Spadaccini M, Bisschops R, Bergman JJGHM, Hassan C, Dinis Ribeiro M. QUAIDE - Quality assessment of AI preclinical studies in diagnostic endoscopy. Gut 2024;74:153-161. PMID: 39406471. DOI: 10.1136/gutjnl-2024-332820.
Abstract
Artificial intelligence (AI) holds significant potential for enhancing the quality of gastrointestinal (GI) endoscopy, but the adoption of AI in clinical practice is hampered by the lack of rigorous standardisation and development methodology ensuring generalisability. The aim of the Quality Assessment of pre-clinical AI studies in Diagnostic Endoscopy (QUAIDE) Explanation and Checklist was to develop recommendations for standardised design and reporting of preclinical AI studies in GI endoscopy. The recommendations were developed based on a formal consensus approach with an international multidisciplinary panel of 32 experts among endoscopists and computer scientists. The Delphi methodology was employed to achieve consensus on statements, with a predetermined threshold of 80% agreement. A maximum of three rounds of voting were permitted. Consensus was reached on 18 key recommendations, covering 6 key domains: data acquisition and annotation (6 statements), outcome reporting (3 statements), experimental setup and algorithm architecture (4 statements) and result presentation and interpretation (5 statements). QUAIDE provides recommendations on how to properly design (1. Methods, statements 1-14), present results (2. Results, statements 15-16) and integrate and interpret the obtained results (3. Discussion, statements 17-18). The QUAIDE framework offers practical guidance for authors, readers, editors and reviewers involved in AI preclinical studies in GI endoscopy, aiming to improve design and reporting and thereby promote research standardisation and accelerate the translation of AI innovations into clinical practice.
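The consensus rule described in this abstract (a predetermined 80% agreement threshold, with at most three voting rounds) can be expressed as a short sketch. The statement text and vote counts below are hypothetical illustrations, not data from the QUAIDE panel.

```python
# Minimal sketch of the Delphi-style rule described above: a statement is
# accepted once at least 80% of panellists agree, within at most three rounds.
AGREEMENT_THRESHOLD = 0.80
MAX_ROUNDS = 3

def delphi_outcome(statement, vote_rounds):
    """vote_rounds: one list of boolean votes (True = agree) per round."""
    for round_no, votes in enumerate(vote_rounds[:MAX_ROUNDS], start=1):
        agreement = sum(votes) / len(votes)
        if agreement >= AGREEMENT_THRESHOLD:
            return f"'{statement}' accepted in round {round_no} ({agreement:.0%} agreement)"
    return f"'{statement}' not accepted after {min(len(vote_rounds), MAX_ROUNDS)} rounds"

# Hypothetical 32-member panel voting on one statement over two rounds.
round_1 = [True] * 24 + [False] * 8   # 75% agreement, below threshold
round_2 = [True] * 27 + [False] * 5   # 84% agreement, accepted
print(delphi_outcome("Report the annotation reference standard", [round_1, round_2]))
```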
Affiliation(s)
- Giulio Antonelli: Gastroenterology and Digestive Endoscopy Unit, Ospedale dei Castelli, Ariccia, Rome, Italy
- Diogo Libanio: MEDCIDS, Faculty of Medicine, University of Porto, Porto, Portugal
- Albert Jeroen De Groof: Department of Gastroenterology and Hepatology, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Fons van der Sommen: Department of Electrical Engineering, VCA group, University of Technology, Eindhoven, The Netherlands
- Pietro Mascagni: IHU Strasbourg, Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Pieter Sinonquel: Department of Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium; Department of Translational Research for Gastrointestinal Disorders (TARGID), KU Leuven, Leuven, Belgium
- Tyler Berzin: Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Pradeep Bhandari: Endoscopy Department, Portsmouth Hospitals University NHS Trust, Portsmouth, UK
- Miguel Coimbra: INESC TEC, Faculdade de Ciências, University of Porto, Porto, Portugal
- Evelien Dekker: Department of Gastroenterology and Hepatology, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Alanna Ebigbo: III Medizinische Klinik, Universitätsklinikum Augsburg, Augsburg, Germany
- Tom Eelbode: Department of Electrical Engineering (ESAT/PSI), Medical Imaging Research Center, KU Leuven, Leuven, Belgium
- Leonardo Frazzoni: Gastroenterology and Endoscopy Unit, Forlì-Cesena Hospitals, AUSL Romagna, Forlì, Italy
- Seth A Gross: Division of Gastroenterology and Hepatology, New York University Langone Health, New York, New York, USA
- Ryu Ishihara: Osaka International Cancer Institute, Osaka, Japan
- Michal Filip Kaminski: Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Department of Gastroenterological Oncology, Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Warsaw, Poland; Medical Center for Postgraduate Education, Warsaw, Poland
- Helmut Messmann: III Medizinische Klinik, Universitätsklinikum Augsburg, Augsburg, Germany
- Yuichi Mori: Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Nastazja Dagny Pilonis: Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Department of Gastroenterological Oncology, Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Warsaw, Poland; Medical Center for Postgraduate Education, Warsaw, Poland
- Francesco Renna: INESC TEC, Faculdade de Ciências, University of Porto, Porto, Portugal
- Alessandro Repici: Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Endoscopy Unit, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Cem Simsek: Department of Gastroenterology, Hacettepe University, Ankara, Turkey
- Marco Spadaccini: Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Endoscopy Unit, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Raf Bisschops: Department of Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium; Department of Translational Research for Gastrointestinal Disorders (TARGID), KU Leuven, Leuven, Belgium
- Jacques J G H M Bergman: Department of Gastroenterology and Hepatology, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Cesare Hassan: Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Endoscopy Unit, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Mario Dinis Ribeiro: MEDCIDS, Faculty of Medicine, University of Porto, Porto, Portugal; RISE@CI-IPOP (Health Research Network), Porto Comprehensive Cancer Centre (Porto.CCC), Porto, Portugal
7. Labaki C, Uche-Anya EN, Berzin TM. Artificial Intelligence in Gastrointestinal Endoscopy. Gastroenterol Clin North Am 2024;53:773-786. PMID: 39489586. DOI: 10.1016/j.gtc.2024.08.005.
Abstract
Recent advancements in artificial intelligence (AI) have significantly impacted the field of gastrointestinal (GI) endoscopy, with applications spanning a wide range of clinical indications. The central goals for AI in GI endoscopy are to improve endoscopic procedural performance and quality assessment, optimize patient outcomes, and reduce administrative burden. Despite early progress, such as Food and Drug Administration approval of the first computer-aided polyp detection system in 2021, there are numerous important challenges to be faced on the path toward broader adoption of AI algorithms in clinical endoscopic practice.
Affiliation(s)
- Chris Labaki: Department of Internal Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, 300 Brookline Avenue, Boston, MA, USA
- Eugenia N Uche-Anya: Division of Gastroenterology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, Boston, MA, USA
- Tyler M Berzin: Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, USA
8. Biffi C, Antonelli G, Bernhofer S, Hassan C, Hirata D, Iwatate M, Maieron A, Salvagnini P, Cherubini A. REAL-Colon: A dataset for developing real-world AI applications in colonoscopy. Sci Data 2024;11:539. PMID: 38796533. PMCID: PMC11127922. DOI: 10.1038/s41597-024-03359-0.
Abstract
Detection and diagnosis of colon polyps are key to preventing colorectal cancer. Recent evidence suggests that AI-based computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems can enhance endoscopists' performance and boost colonoscopy effectiveness. However, most available public datasets primarily consist of still images or video clips, often at a down-sampled resolution, and do not accurately represent real-world colonoscopy procedures. We introduce the REAL-Colon (Real-world multi-center Endoscopy Annotated video Library) dataset: a compilation of 2.7 M native video frames from sixty full-resolution, real-world colonoscopy recordings across multiple centers. The dataset contains 350k bounding-box annotations, each created under the supervision of expert gastroenterologists. Comprehensive patient clinical data, colonoscopy acquisition information, and polyp histopathological information are also included in each video. With its unprecedented size, quality, and heterogeneity, the REAL-Colon dataset is a unique resource for researchers and developers aiming to advance AI research in colonoscopy. Its openness and transparency facilitate rigorous and reproducible research, fostering the development and benchmarking of more accurate and reliable colonoscopy-related algorithms and models.
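Frame-level bounding-box annotations such as those described above are typically consumed by grouping boxes per frame before training a detector. The sketch below assumes a hypothetical CSV layout; the file name and column names are illustrative, not the published REAL-Colon schema.

```python
import csv
from collections import defaultdict

def load_polyp_boxes(annotation_csv):
    """Group polyp bounding boxes by (video_id, frame_id) from a hypothetical
    CSV with columns: video_id, frame_id, x_min, y_min, x_max, y_max, polyp_id."""
    boxes = defaultdict(list)
    with open(annotation_csv, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["video_id"], int(row["frame_id"]))
            boxes[key].append((
                float(row["x_min"]), float(row["y_min"]),
                float(row["x_max"]), float(row["y_max"]),
                row["polyp_id"],
            ))
    return boxes

# Usage with an assumed file name:
# frame_boxes = load_polyp_boxes("real_colon_annotations.csv")
```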
Affiliation(s)
- Carlo Biffi: Cosmo Intelligent Medical Devices, Dublin, Ireland
- Giulio Antonelli: Gastroenterology and Digestive Endoscopy Unit, Ospedale dei Castelli (N.O.C.), Rome, Italy
- Sebastian Bernhofer: Karl Landsteiner University of Health Sciences, Krems, Austria; Department of Internal Medicine 2, University Hospital St. Pölten, St. Pölten, Austria
- Cesare Hassan: Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Endoscopy Unit, Humanitas Clinical and Research Center IRCCS, Rozzano, Italy
- Daizen Hirata: Gastrointestinal Center, Sano Hospital, Hyogo, Japan
- Mineo Iwatate: Gastrointestinal Center, Sano Hospital, Hyogo, Japan
- Andreas Maieron: Karl Landsteiner University of Health Sciences, Krems, Austria; Department of Internal Medicine 2, University Hospital St. Pölten, St. Pölten, Austria
- Andrea Cherubini: Cosmo Intelligent Medical Devices, Dublin, Ireland; Milan Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
9. Kim BS, Kim B, Cho M, Chung H, Ryu JK, Kim S. Enhanced multi-class pathology lesion detection in gastric neoplasms using deep learning-based approach and validation. Sci Rep 2024;14:11527. PMID: 38773274. PMCID: PMC11109266. DOI: 10.1038/s41598-024-62494-1.
Abstract
This study developed a new convolutional neural network model to detect and classify gastric lesions as malignant, premalignant, and benign. We used 10,181 white-light endoscopy images from 2,606 patients, divided into training, validation, and test sets in an 8:1:1 ratio. Lesions were categorized as early gastric cancer (EGC), advanced gastric cancer (AGC), gastric dysplasia, benign gastric ulcer (BGU), benign polyp, and benign erosion. We assessed the lesion detection and classification model using six-class, cancer versus non-cancer, and neoplasm versus non-neoplasm categories, as well as T-stage estimation in cancer lesions (T1, T2-T4). The lesion detection rate was 95.22% (219/230 patients) on a per-patient basis: 100% for EGC, 97.22% for AGC, 96.49% for dysplasia, 75.00% for BGU, 97.22% for benign polyps, and 80.49% for benign erosion. The six-class category exhibited an accuracy of 73.43%, sensitivity of 80.90%, specificity of 83.32%, positive predictive value (PPV) of 73.68%, and negative predictive value (NPV) of 88.53%. The sensitivity and NPV were 78.62% and 88.57% for the cancer versus non-cancer category, and 83.26% and 89.80% for the neoplasm versus non-neoplasm category, respectively. The T-stage estimation model achieved an accuracy of 85.17%, sensitivity of 88.68%, specificity of 79.81%, PPV of 87.04%, and NPV of 82.18%. The novel CNN-based model reliably detected and classified malignant, premalignant, and benign gastric lesions and accurately estimated gastric cancer T-stages.
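The per-class sensitivity, specificity, PPV, and NPV quoted above are one-vs-rest statistics that can be derived from a multi-class confusion matrix; a minimal sketch with made-up counts (not the study data):

```python
import numpy as np

def one_vs_rest_metrics(cm, class_idx):
    """Sensitivity, specificity, PPV, and NPV for one class of a confusion
    matrix cm, where cm[i, j] = number of true class i predicted as class j."""
    tp = cm[class_idx, class_idx]
    fn = cm[class_idx].sum() - tp
    fp = cm[:, class_idx].sum() - tp
    tn = cm.sum() - tp - fn - fp
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy three-class confusion matrix (illustrative counts only).
cm = np.array([[50, 5, 2],
               [4, 60, 6],
               [1, 3, 70]])
print(one_vs_rest_metrics(cm, class_idx=0))
```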
Affiliation(s)
- Byeong Soo Kim: Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Bokyung Kim: Division of Gastroenterology, Department of Internal Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, 07061, Korea
- Minwoo Cho: Transdisciplinary Department of Medicine, Seoul National University Hospital, Seoul, 03080, Korea
- Hyunsoo Chung: Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, 03080, Korea
- Ji Kon Ryu: Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, 03080, Korea
- Sungwan Kim: Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, Korea; Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea
10. Sharma N, Gupta S, Gupta D, Gupta P, Juneja S, Shah A, Shaikh A. UMobileNetV2 model for semantic segmentation of gastrointestinal tract in MRI scans. PLoS One 2024;19:e0302880. PMID: 38718092. PMCID: PMC11078421. DOI: 10.1371/journal.pone.0302880.
Abstract
Gastrointestinal (GI) cancer is a leading tumour of the gastrointestinal tract and the fourth most significant cause of tumour-related death in men and women. A common treatment for GI cancer is radiation therapy, which involves directing a high-energy X-ray beam onto the tumour while avoiding healthy organs. Delivering high X-ray doses therefore requires a system that accurately segments the GI tract organs. This study presents a UMobileNetV2 model for semantic segmentation of the small intestine, large intestine, and stomach in MRI images of the GI tract. The model uses MobileNetV2 as the encoder in the contraction path and UNet layers as the decoder in the expansion path. The UW-Madison database, which contains MRI scans from 85 patients and 38,496 images, is used for evaluation. This automated technique could accelerate cancer therapy by aiding the radiation oncologist in segmenting the organs of the GI tract. The UMobileNetV2 model is compared to three transfer learning models, Xception, ResNet 101, and NASNet mobile, which are used as encoders in the UNet architecture. The model is analyzed using three distinct optimizers, i.e., Adam, RMS, and SGD. The UMobileNetV2 model with the Adam optimizer outperforms all other transfer learning models. It obtains a dice coefficient of 0.8984, an IoU of 0.8697, and a validation loss of 0.1310, proving its ability to reliably segment the stomach and intestines in MRI images of gastrointestinal cancer patients.
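The Dice coefficient and IoU reported above are standard overlap measures between a predicted and a ground-truth segmentation mask; a minimal NumPy sketch with toy masks (values are illustrative only):

```python
import numpy as np

def dice_and_iou(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient and IoU for binary segmentation masks (0/1 arrays)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    dice = (2 * intersection + eps) / (pred.sum() + true.sum() + eps)
    iou = (intersection + eps) / (np.logical_or(pred, true).sum() + eps)
    return dice, iou

# Toy 4x4 masks: Dice ≈ 0.889, IoU = 0.8.
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
true = np.array([[1, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_and_iou(pred, true))
```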
Affiliation(s)
- Neha Sharma, Sheifali Gupta, Deepali Gupta: Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Punit Gupta: University College Dublin, Dublin, Ireland; Manipal University Jaipur, Jaipur, India
- Sapna Juneja, Asadullah Shah: International Islamic University, Kuala Lumpur, Malaysia
11. Gimeno-García AZ, Alayón-Miranda S, Benítez-Zafra F, Hernández-Negrín D, Nicolás-Pérez D, Pérez Cabañas C, Delgado R, Del-Castillo R, Romero A, Adrián Z, Cubas A, González-Méndez Y, Jiménez A, Navarro-Dávila MA, Hernández-Guerra M. Design and validation of an artificial intelligence system to detect the quality of colon cleansing before colonoscopy. Gastroenterol Hepatol 2024;47:481-490. PMID: 38154552. DOI: 10.1016/j.gastrohep.2023.12.009.
Abstract
BACKGROUND AND AIMS Patients' perception of their bowel cleansing quality may guide rescue cleansing strategies before colonoscopy. The main aim of this study was to train and validate a convolutional neural network (CNN) for classifying rectal effluent during bowel preparation intake as "adequate" or "inadequate" cleansing before colonoscopy. PATIENTS AND METHODS Patients referred for outpatient colonoscopy were asked to provide images of their rectal effluent during the bowel preparation process. The images were categorized as adequate or inadequate cleansing based on a predefined 4-picture quality scale. A total of 1203 images were collected from 660 patients. The initial dataset (799 images), was split into a training set (80%) and a validation set (20%). The second dataset (404 images) was used to develop a second test of the CNN accuracy. Afterward, CNN prediction was prospectively compared with the Boston Bowel Preparation Scale (BBPS) in 200 additional patients who provided a picture of their last rectal effluent. RESULTS On the initial dataset, a global accuracy of 97.49%, a sensitivity of 98.17% and a specificity of 96.66% were obtained using the CNN model. On the second dataset, an accuracy of 95%, a sensitivity of 99.60% and a specificity of 87.41% were obtained. The results from the CNN model were significantly associated with those from the BBPS (P<0.001), and 77.78% of the patients with poor bowel preparation were correctly classified. CONCLUSION The designed CNN is capable of classifying "adequate cleansing" and "inadequate cleansing" images with high accuracy.
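The abstract does not describe the network architecture, so the sketch below is only a generic transfer-learning baseline for a binary "adequate vs inadequate" image classifier; the backbone, input size, and hyperparameters are assumptions, not the authors' model.

```python
import tensorflow as tf

# Generic transfer-learning baseline: frozen ImageNet backbone + sigmoid head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # train only the classification head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # adequate vs inadequate cleansing
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # e.g. the 80/20 split above
```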
Affiliation(s)
- Antonio Z Gimeno-García, Federica Benítez-Zafra, Domingo Hernández-Negrín, David Nicolás-Pérez, Claudia Pérez Cabañas, Rosa Delgado, Rocío Del-Castillo, Ana Romero, Zaida Adrián, Ana Cubas, Yanira González-Méndez, Manuel Hernández-Guerra: Gastroenterology Department, Hospital Universitario de Canarias, Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, Tenerife, Spain
- Silvia Alayón-Miranda: Department of Computer Science and Systems Engineering, Universidad de La Laguna, Tenerife, Spain
12. Gong EJ, Bang CS, Lee JJ. Computer-aided diagnosis in real-time endoscopy for all stages of gastric carcinogenesis: Development and validation study. United European Gastroenterol J 2024;12:487-495. PMID: 38400815. PMCID: PMC11091781. DOI: 10.1002/ueg2.12551.
Abstract
OBJECTIVE Using endoscopic images, we have previously developed computer-aided diagnosis models to predict the histopathology of gastric neoplasms. However, no model that categorizes every stage of gastric carcinogenesis has been published. In this study, a deep-learning-based diagnosis model was developed and validated to automatically classify all stages of gastric carcinogenesis, including atrophy and intestinal metaplasia, in endoscopy images. DESIGN A total of 18,701 endoscopic images were collected retrospectively and randomly divided into training, validation, and internal-test datasets in an 8:1:1 ratio. The primary outcome was lesion-classification accuracy in six categories: normal/atrophy/intestinal metaplasia/dysplasia/early/advanced gastric cancer. External validation of the established model's performance used 1,427 novel images from other institutions that were not used in training, validation, or internal testing. RESULTS The internal-test lesion-classification accuracy was 91.2% (95% confidence interval: 89.9%-92.5%). For the performance validation, the established model achieved an accuracy of 82.3% (80.3%-84.3%). The external-test per-class area under the receiver operating characteristic curve for the diagnosis of atrophy and intestinal metaplasia was 93.4% and 91.3%, respectively. CONCLUSIONS The established model demonstrated high performance in the diagnosis of preneoplastic lesions (atrophy and intestinal metaplasia) as well as gastric neoplasms.
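The accuracies above are reported with 95% confidence intervals; for a proportion such as accuracy this is commonly a normal-approximation (Wald) interval, sketched below with made-up counts rather than the study's raw numbers.

```python
import math

def accuracy_with_ci(correct, total, z=1.96):
    """Point estimate and normal-approximation (Wald) 95% CI for accuracy."""
    p = correct / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Illustrative counts only (not taken from the study).
acc, lo, hi = accuracy_with_ci(correct=903, total=1000)
print(f"accuracy {acc:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```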
Affiliation(s)
- Eun Jeong Gong: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea
- Chang Seok Bang: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Korea
- Jae Jun Lee: Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon, Korea; Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Korea
13. Leggett CL, Parasa S, Repici A, Berzin TM, Gross SA, Sharma P. Physician perceptions on the current and future impact of artificial intelligence to the field of gastroenterology. Gastrointest Endosc 2024;99:483-489.e2. PMID: 38416097. DOI: 10.1016/j.gie.2023.11.053.
Abstract
BACKGROUND AND AIMS The use of artificial intelligence (AI) has transformative implications to the practice of gastroenterology and endoscopy. The aims of this study were to understand the perceptions of the gastroenterology community toward AI and to identify potential barriers for adoption. METHODS A 16-question online survey exploring perceptions on the current and future implications of AI to the field of gastroenterology was developed by the American Society for Gastrointestinal Endoscopy AI Task Force and distributed to national and international society members. Participant demographic information including age, sex, experience level, and practice setting was collected. Descriptive statistics were used to summarize survey findings, and a Pearson χ2 analysis was performed to determine the association between participant demographic information and perceptions of AI. RESULTS Of 10,162 invited gastroenterologists, 374 completed the survey. The mean age of participants was 46 years (standard deviation, 12), and 299 participants (80.0%) were men. One hundred seventy-nine participants (47.9%) had >10 years of practice experience, with nearly half working in the community setting. Only 25 participants (6.7%) reported the current use of AI in their clinical practice. Most participants (95.5%) believed that AI solutions will have a positive impact in their practice. One hundred seventy-six participants (47.1%) believed that AI will make clinical duties more technical but will also ease the burden of the electronic medical record (54.0%). The top 3 areas where AI was predicted to be most influential were endoscopic lesion detection (65.3%), endoscopic lesion characterization (65.8%), and quality metrics (32.6%). Participants voiced a desire for education on topics such as the clinical use of AI applications (64.4%), the advantages and limitations of AI applications (57.0%), and the technical methodology of AI (44.7%). Most participants (42.8%) expressed that the cost of AI implementation should be covered by their hospital. Demographic characteristics significantly associated with this perception included participants' years in practice and practice setting. CONCLUSIONS Gastroenterologists have an overall positive perception regarding the use of AI in clinical practice but voiced concerns regarding its technical aspects and coverage of costs associated with implementation. Further education on the clinical use of AI applications with understanding of the advantages and limitations appears to be valuable in promoting adoption.
Affiliation(s)
- Cadman L Leggett: Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota, USA
- Sravanthi Parasa: Department of Gastroenterology, Swedish Medical Center, Seattle, Washington, USA
- Alessandro Repici: Department of Gastroenterology, IRCCS Humanitas Clinical and Research Center and Humanitas University, Rozzano, Italy
- Tyler M Berzin: Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Seth A Gross: Division of Gastroenterology and Hepatology, NYU Langone Health, New York, New York, USA
- Prateek Sharma: Department of Gastroenterology, Kansas City VA Medical Center, Kansas City, Missouri, USA; Division of Gastroenterology, University of Kansas School of Medicine, Kansas City, Kansas, USA
14. Mascarenhas Saraiva M, Spindler L, Fathallah N, Beaussier H, Mamma C, Ribeiro T, Afonso J, Carvalho M, Moura R, Cardoso P, Mendes F, Martins M, Adam J, Ferreira J, Macedo G, de Parades V. Deep Learning in High-Resolution Anoscopy: Assessing the Impact of Staining and Therapeutic Manipulation on Automated Detection of Anal Cancer Precursors. Clin Transl Gastroenterol 2024;15:e00681. PMID: 38270249. DOI: 10.14309/ctg.0000000000000681.
Abstract
INTRODUCTION High-resolution anoscopy (HRA) is the gold standard for detecting anal squamous cell carcinoma (ASCC) precursors. Preliminary studies on the application of artificial intelligence (AI) models to this modality have revealed promising results. However, the impact of staining techniques and anal manipulation on the effectiveness of these algorithms has not been evaluated. We aimed to develop a deep learning system for automatic differentiation of high-grade squamous intraepithelial lesion vs low-grade squamous intraepithelial lesion in HRA images in different subsets of patients (nonstained, acetic acid, lugol, and after manipulation). METHODS A convolutional neural network was developed to detect and differentiate high-grade and low-grade anal squamous intraepithelial lesions based on 27,770 images from 103 HRA examinations performed in 88 patients. Subanalyses were performed to evaluate the algorithm's performance in subsets of images without staining, acetic acid, lugol, and after manipulation of the anal canal. The sensitivity, specificity, accuracy, positive and negative predictive values, and area under the curve were calculated. RESULTS The convolutional neural network achieved an overall accuracy of 98.3%. The algorithm had a sensitivity and specificity of 97.4% and 99.2%, respectively. The accuracy of the algorithm for differentiating high-grade squamous intraepithelial lesion vs low-grade squamous intraepithelial lesion varied between 91.5% (postmanipulation) and 100% (lugol) for the categories at subanalysis. The area under the curve ranged between 0.95 and 1.00. DISCUSSION The introduction of AI to HRA may provide an accurate detection and differentiation of ASCC precursors. Our algorithm showed excellent performance at different staining settings. This is extremely important because real-time AI models during HRA examinations can help guide local treatment or detect relapsing disease.
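The AUC values above summarise how well the network's per-image scores separate high-grade from low-grade lesions; a minimal scikit-learn sketch on made-up labels and scores:

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Made-up labels (1 = HSIL, 0 = LSIL) and model scores, for illustration only.
y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
y_score = [0.95, 0.88, 0.70, 0.30, 0.42, 0.81, 0.15, 0.72, 0.77, 0.05]

auc = roc_auc_score(y_true, y_score)            # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc:.2f}")
```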
Affiliation(s)
- Miguel Mascarenhas Saraiva: Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal; Faculty of Medicine of the University of Porto, Alameda Professor Hernâni Monteiro, Porto, Portugal
- Lucas Spindler: Department of Proctology, GH Paris Saint-Joseph, Paris, France
- Nadia Fathallah: Department of Proctology, GH Paris Saint-Joseph, Paris, France
- Hélene Beaussier: Department of Clinical Research, GH Paris Saint-Joseph, Paris, France
- Célia Mamma: Department of Clinical Research, GH Paris Saint-Joseph, Paris, France
- Tiago Ribeiro, João Afonso: Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Mariana Carvalho, Rita Moura: Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal; INEGI - Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal
- Pedro Cardoso, Francisco Mendes, Miguel Martins: Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
- Julien Adam: Department of Pathology, GH Paris Saint-Joseph, Paris, France
- João Ferreira: Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal; INEGI - Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal
- Guilherme Macedo: Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, Porto, Portugal; Faculty of Medicine of the University of Porto, Alameda Professor Hernâni Monteiro, Porto, Portugal
15. Campion JR, O'Connor DB, Lahiff C. Human-artificial intelligence interaction in gastrointestinal endoscopy. World J Gastrointest Endosc 2024;16:126-135. PMID: 38577646. PMCID: PMC10989254. DOI: 10.4253/wjge.v16.i3.126.
Abstract
The number and variety of applications of artificial intelligence (AI) in gastrointestinal (GI) endoscopy is growing rapidly. New technologies based on machine learning (ML) and convolutional neural networks (CNNs) are at various stages of development and deployment to assist patients and endoscopists in preparing for endoscopic procedures, in detection, diagnosis and classification of pathology during endoscopy and in confirmation of key performance indicators. Platforms based on ML and CNNs require regulatory approval as medical devices. Interactions between humans and the technologies we use are complex and are influenced by design, behavioural and psychological elements. Due to the substantial differences between AI and prior technologies, important differences may be expected in how we interact with advice from AI technologies. Human–AI interaction (HAII) may be optimised by developing AI algorithms to minimise false positives and designing platform interfaces to maximise usability. Human factors influencing HAII may include automation bias, alarm fatigue, algorithm aversion, learning effect and deskilling. Each of these areas merits further study in the specific setting of AI applications in GI endoscopy and professional societies should engage to ensure that sufficient emphasis is placed on human-centred design in development of new AI technologies.
Affiliation(s)
- John R Campion: Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland; School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
- Donal B O'Connor: Department of Surgery, Trinity College Dublin, Dublin D02 R590, Ireland
- Conor Lahiff: Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland; School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
16. Tian S, Shi H, Chen W, Li S, Han C, Du F, Wang W, Wen H, Lei Y, Deng L, Tang J, Zhang J, Lin J, Shi L, Ning B, Zhao K, Miao J, Wang G, Hou H, Huang X, Kong W, Jin X, Ding Z, Lin R. Artificial intelligence-based diagnosis of standard endoscopic ultrasonography scanning sites in the biliopancreatic system: a multicenter retrospective study. Int J Surg 2024;110:1637-1644. PMID: 38079604. PMCID: PMC10942157. DOI: 10.1097/js9.0000000000000995.
Abstract
BACKGROUND There are challenges for beginners in identifying standard biliopancreatic system anatomical sites on endoscopic ultrasonography (EUS) images. Therefore, the authors aimed to develop a convolutional neural network (CNN)-based model to identify standard biliopancreatic system anatomical sites on EUS images. METHODS The standard anatomical structures of the gastric and duodenal regions observed by EUS were divided into 14 sites. The authors used 6,230 EUS images with standard anatomical sites, selected from 1,812 patients, to train the CNN model, and then tested its diagnostic performance in both internal and external validations. Internal validation tests were performed on 1,569 EUS images of 47 patients from two centers. Externally validated datasets were retrospectively collected from 16 centers, and 131 patients with 85,322 EUS images were finally included. In the external validation, all EUS images were read by the CNN model, beginners, and experts, respectively. The final decision made by the experts was considered the gold standard, and the diagnostic performance of the CNN model and the beginners was compared. RESULTS In the internal test cohort, the accuracy of the CNN model was 92.1-100.0% for the 14 standard anatomical sites. In the external test cohort, the sensitivity and specificity of the CNN model were 89.45-99.92% and 93.35-99.79%, respectively. Compared with beginners, the CNN model had higher sensitivity and specificity for 11 sites and was in good agreement with the experts (kappa values 0.84-0.98). CONCLUSIONS The authors developed a CNN-based model to automatically identify standard anatomical sites on EUS images with excellent diagnostic performance, which may serve as a potentially powerful auxiliary tool in future clinical practice.
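Agreement with the expert gold standard is summarised above with Cohen's kappa; a short sketch computing kappa from two raters' labels (the site names and votes are toy data, not the study's):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two raters labelling the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: model vs expert labels for ten EUS stations.
model_labels  = ["papilla", "CBD", "pancreatic_head", "CBD", "papilla",
                 "pancreatic_tail", "CBD", "papilla", "pancreatic_head", "CBD"]
expert_labels = ["papilla", "CBD", "pancreatic_head", "CBD", "papilla",
                 "pancreatic_tail", "papilla", "papilla", "pancreatic_head", "CBD"]
print(f"kappa = {cohens_kappa(model_labels, expert_labels):.2f}")  # ≈ 0.86
```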
Affiliation(s)
- Shuxin Tian: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan; Department of Gastroenterology, The First Affiliated Hospital of Medical College, Shihezi University, Shihezi; National Health Commission Key Laboratory of Central Asia High Incidence Disease Prevention and Control, Shihezi
- Huiying Shi: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan
- Weigang Chen: Department of Gastroenterology, The First Affiliated Hospital of Medical College, Shihezi University, Shihezi; National Health Commission Key Laboratory of Central Asia High Incidence Disease Prevention and Control, Shihezi
- Shijie Li: National Health Commission Key Laboratory of Central Asia High Incidence Disease Prevention and Control, Shihezi; Department of Endoscopy Center, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital and Institute, Beijing
- Chaoqun Han, Fan Du, Weijun Wang: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan
- Hongxu Wen: Department of Gastroenterology, Lanzhou Second People's Hospital, Lanzhou
- Yali Lei: Department of Gastroenterology, Weinan Central Hospital, Weinan
- Liang Deng: Department of Gastroenterology, The First Affiliated Hospital of Chongqing Medical University, Chongqing
- Jing Tang: Department of Gastroenterology, Fuling Hospital Affiliated to Chongqing University, Chongqing
- Jinjie Zhang: Department of Gastroenterology, The Second Affiliated Hospital of Baotou Medical College, Baotou
- Jianjiao Lin: Department of Gastroenterology, Longgang District People's Hospital, Shenzhen
- Lei Shi: Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou
- Bo Ning: Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing
- Kui Zhao: Department of Gastroenterology, The First Affiliated Hospital of Chengdu Medical College, Chengdu
- Jiarong Miao: Department of Gastroenterology, The First Affiliated Hospital of Kunming Medical University, Kunming; Yunnan Province Clinical Research Center for Digestive Diseases, Kunming
- Guobao Wang: Department of Endoscopy, Sun Yat-sen University Cancer Center, Guangzhou
- Hui Hou: Department of Gastroenterology, The Fifth Affiliated Hospital of Xinjiang Medical University, Urumqi
- Xiaoxi Huang: Department of Gastroenterology, Haikou People's Hospital, Haikou
- Wenjie Kong: Department of Gastroenterology, People's Hospital of Xinjiang Autonomous Region, Urumqi
- Xiaojuan Jin: Department of Gastroenterology, Suining Central Hospital, Suining, People's Republic of China
- Zhen Ding: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan; Department of Endoscopy Center, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou
- Rong Lin: Department of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan
17. Grover SC, Walsh CM. Integrating artificial intelligence into endoscopy training: opportunities, challenges, and strategies. Lancet Gastroenterol Hepatol 2024;9:11-13. PMID: 37832570. DOI: 10.1016/s2468-1253(23)00309-6.
Affiliation(s)
- Samir C Grover: Division of Gastroenterology and Hepatology and Li Ka Shing Knowledge Institute, St. Michael's Hospital, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5G 1X8, Canada
- Catharine M Walsh: Division of Gastroenterology, Hepatology and Nutrition, SickKids Research Institute, and SickKids Learning Institute, The Hospital for Sick Children, Department of Paediatrics and the Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5G 1X8, Canada
18. Bordbar M, Helfroush MS, Danyali H, Ejtehadi F. Wireless capsule endoscopy multiclass classification using three-dimensional deep convolutional neural network model. Biomed Eng Online 2023;22:124. PMID: 38098015. PMCID: PMC10722702. DOI: 10.1186/s12938-023-01186-9.
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is a patient-friendly and non-invasive technology that scans the whole of the gastrointestinal tract, including difficult-to-access regions like the small bowel. A major drawback of this technology is that the visual inspection of the large number of video frames produced during each examination makes the physician's diagnostic process tedious and prone to error. Several computer-aided diagnosis (CAD) systems, such as deep network models, have been developed for the automatic recognition of abnormalities in WCE frames. Nevertheless, most of these studies have focused only on spatial information within individual WCE frames, missing the crucial temporal information across consecutive frames. METHODS In this article, an automatic multiclass classification system based on a three-dimensional deep convolutional neural network (3D-CNN) is proposed, which utilizes spatiotemporal information to facilitate the WCE diagnosis process. The 3D-CNN model is fed a series of sequential WCE frames, in contrast to the two-dimensional (2D) model, which treats frames as independent. Moreover, the proposed 3D deep model is compared with several pre-trained networks. The proposed models are trained and evaluated on WCE videos from 29 subjects (14,691 frames before augmentation). The performance advantages of the 3D-CNN over the 2D-CNN and the pre-trained networks are verified in terms of sensitivity, specificity, and accuracy. RESULTS The 3D-CNN outperforms the 2D technique in all evaluation metrics (sensitivity: 98.92 vs. 98.05, specificity: 99.50 vs. 86.94, accuracy: 99.20 vs. 92.60). CONCLUSION A novel 3D-CNN model for lesion detection in WCE frames is proposed in this study. The results indicate the superior performance of the 3D-CNN over the 2D-CNN and several well-known pre-trained classifier networks. The proposed 3D-CNN model uses the rich temporal information in adjacent frames, as well as spatial data, to develop an accurate and efficient model.
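The contrast drawn above between the 2D and 3D approaches comes down to giving the convolutional kernels a temporal dimension; below is a minimal PyTorch sketch of a small 3D-CNN over a clip of consecutive WCE frames (layer sizes, clip length, and class count are illustrative, not the authors' architecture).

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Toy 3D-CNN: input is a clip of consecutive frames, shape (N, C, T, H, W)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # spatiotemporal kernel
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),          # pool space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

clip = torch.randn(2, 3, 8, 128, 128)   # batch of 2 clips, 8 frames each
logits = Tiny3DCNN()(clip)              # shape: (2, 3)
```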
Affiliation(s)
- Mehrdokht Bordbar: Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Habibollah Danyali: Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Fardad Ejtehadi: Department of Internal Medicine, Gastroenterohepatology Research Center, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
19. Goetz N, Hanigan K, Cheng RKY. Artificial intelligence fails to improve colonoscopy quality: A single centre retrospective cohort study. Artif Intell Gastrointest Endosc 2023;4:18-26. DOI: 10.37126/aige.v4.i2.18.
Abstract
BACKGROUND Limited data currently exists on the clinical utility of Artificial Intelligence Assisted Colonoscopy (AIAC) outside of clinical trials.
AIM To evaluate the impact of AIAC on key markers of colonoscopy quality compared to conventional colonoscopy (CC).
METHODS This single-centre retrospective observational cohort study included all patients undergoing colonoscopy at a secondary centre in Brisbane, Australia. CC outcomes between October 2021 and October 2022 were compared with AIAC outcomes after the introduction of the Olympus Endo-AID module from October 2022 to January 2023. Endoscopists who conducted over 50 procedures before and after AIAC introduction were included. Procedures for surveillance of inflammatory bowel disease were excluded. Patient demographics, proceduralist specialisation, indication for colonoscopy, and colonoscopy quality metrics were collected. Adenoma detection rate (ADR) and sessile serrated lesion detection rate (SSLDR) were calculated for both AIAC and CC.
RESULTS The study included 746 AIAC procedures and 2162 CC procedures performed by seven endoscopists. Baseline patient demographics were similar, with a median age of 60 years and a slight female predominance (52.1%). Procedure indications, bowel preparation quality, and caecal intubation rates were comparable between groups. AIAC had a slightly longer withdrawal time than CC, but the difference was not statistically significant. The introduction of AIAC did not significantly change ADR (52.1% for AIAC vs 52.6% for CC, P = 0.91) or SSLDR (17.4% for AIAC vs 18.1% for CC, P = 0.44).
CONCLUSION The implementation of AIAC failed to improve key markers of colonoscopy quality, including ADR, SSLDR and withdrawal time. Further research is required to assess the utility and cost-efficiency of AIAC for high performing endoscopists.
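For readers who want to reproduce this kind of before/after comparison on their own unit's data, a minimal sketch of a two-proportion z-test on detection rates is shown below; the counts are illustrative rather than the study's raw data, and the original authors may have used a different test.

```python
# Minimal sketch: compare adenoma detection rates between two colonoscopy
# groups with a two-proportion z-test. All counts are illustrative only.
from math import sqrt
from scipy.stats import norm

def adr_z_test(detected_a, total_a, detected_b, total_b):
    p_a, p_b = detected_a / total_a, detected_b / total_b
    p_pool = (detected_a + detected_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided
    return p_a, p_b, z, p_value

p_ai, p_cc, z, p = adr_z_test(110, 200, 95, 200)
print(f"ADR with AI = {p_ai:.1%}, without AI = {p_cc:.1%}, z = {z:.2f}, p = {p:.3f}")
```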
Affiliation(s)
- Naeman Goetz
- Department of Gastroenterology, Redcliffe Hospital, Redcliffe 4020, Australia
- Katherine Hanigan
- Department of Gastroenterology, Redcliffe Hospital, Redcliffe 4020, Australia
20
Gong EJ, Bang CS, Lee JJ, Jeong HM, Baik GH, Jeong JH, Dick S, Lee GH. Clinical Decision Support System for All Stages of Gastric Carcinogenesis in Real-Time Endoscopy: Model Establishment and Validation Study. J Med Internet Res 2023; 25:e50448. [PMID: 37902818 PMCID: PMC10644184 DOI: 10.2196/50448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 07/27/2023] [Accepted: 10/12/2023] [Indexed: 10/31/2023] Open
Abstract
BACKGROUND Our research group previously established a deep-learning-based clinical decision support system (CDSS) for real-time endoscopy-based detection and classification of gastric neoplasms. However, preneoplastic conditions, such as atrophy and intestinal metaplasia (IM), were not taken into account, and there is no established model that classifies all stages of gastric carcinogenesis. OBJECTIVE This study aims to build and validate a CDSS for real-time endoscopy for all stages of gastric carcinogenesis, including atrophy and IM. METHODS A total of 11,868 endoscopic images were used for training and internal testing. The primary outcomes were lesion classification accuracy (6 classes: advanced gastric cancer, early gastric cancer, dysplasia, atrophy, IM, and normal) and atrophy and IM lesion segmentation rates for the segmentation model. The following tests were carried out to validate lesion classification performance: (1) external testing using 1282 images from another institution and (2) prospective evaluation of the classification accuracy of atrophy and IM in real-world procedures. To estimate the clinical utility, 2 experienced endoscopists were invited to perform a blind test with the same data set. A CDSS was constructed by combining the established 6-class lesion classification model and the preneoplastic lesion segmentation model with the previously established lesion detection model. RESULTS The overall lesion classification accuracy (95% CI) was 90.3% (89%-91.6%) in the internal test. For the performance validation, the CDSS achieved 85.3% (83.4%-97.2%) overall accuracy. The per-class external test accuracies for atrophy and IM were 95.3% (92.6%-98%) and 89.3% (85.4%-93.2%), respectively. CDSS-assisted endoscopy showed an accuracy of 92.1% (88.8%-95.4%) for atrophy and 95.5% (92%-99%) for IM in the real-world application of 522 consecutive screening endoscopies. There was no significant difference in the overall accuracy between the invited endoscopists and the established CDSS in the prospective real-clinic evaluation (P=.23). The CDSS demonstrated a segmentation rate of 93.4% (95% CI 92.4%-94.4%) for atrophy or IM lesion segmentation in the internal testing. CONCLUSIONS The CDSS achieved high performance in terms of computer-aided diagnosis of all stages of gastric carcinogenesis and demonstrated real-world application potential.
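The accuracies above are reported with 95% CIs; a minimal sketch of one standard way to compute such a binomial interval (the Wilson score interval, with illustrative counts) is shown below. The study does not state which interval method was used, so this is only an assumption about the general approach.

```python
# Minimal sketch: 95% Wilson score interval for a classification accuracy,
# the kind of binomial CI typically reported alongside per-class accuracy.
# The counts are illustrative, not the study's raw data.
from math import sqrt

def wilson_ci(correct: int, total: int, z: float = 1.96):
    p_hat = correct / total
    denom = 1 + z**2 / total
    centre = (p_hat + z**2 / (2 * total)) / denom
    half = (z * sqrt(p_hat * (1 - p_hat) / total + z**2 / (4 * total**2))) / denom
    return p_hat, centre - half, centre + half

acc, lo, hi = wilson_ci(correct=902, total=1000)
print(f"accuracy {acc:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```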
Affiliation(s)
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Department of Anesthesiology, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Hae Min Jeong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Republic of Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Republic of Korea
21
Gimeno-García AZ, Benítez-Zafra F, Nicolás-Pérez D, Hernández-Guerra M. Colon Bowel Preparation in the Era of Artificial Intelligence: Is There Potential for Enhancing Colon Bowel Cleansing? MEDICINA (KAUNAS, LITHUANIA) 2023; 59:1834. [PMID: 37893552 PMCID: PMC10608636 DOI: 10.3390/medicina59101834] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 10/10/2023] [Accepted: 10/13/2023] [Indexed: 10/29/2023]
Abstract
BACKGROUND AND OBJECTIVES Proper bowel preparation is of paramount importance for enhancing adenoma detection rates and reducing postcolonoscopic colorectal cancer risk. Despite recommendations from gastroenterology societies regarding the optimal rates of successful bowel preparation, these guidelines are frequently unmet. Various approaches have been employed to enhance the rates of successful bowel preparation, yet the quality of cleansing remains suboptimal. Intensive bowel preparation techniques, supplementary administration of bowel solutions, and educational interventions aimed at improving patient adherence to instructions have been commonly utilized, particularly among patients at a high risk of inadequate bowel preparation. Expedited strategies conducted on the same day as the procedure have also been endorsed by scientific organizations. More recently, the utilization of artificial intelligence (AI) has emerged for the preprocedural detection of inadequate bowel preparation, holding the potential to guide the preparation process immediately preceding colonoscopy. This manuscript comprehensively reviews the current strategies employed to optimize bowel cleansing, with a specific focus on patients with elevated risks for inadequate bowel preparation. Additionally, the prospective role of AI in this context is thoroughly examined. CONCLUSIONS While a majority of outpatients may achieve cleanliness with standard cleansing protocols, dealing with hard-to-prepare patients remains a challenge. Rescue strategies based on AI are promising, but such evidence remains limited. To ensure proper bowel cleansing, a combination of strategies should be performed.
22
Brunori A, Daca-Alvarez M, Pellisé M. pT1 colorectal cancer: A treatment dilemma. Best Pract Res Clin Gastroenterol 2023; 66:101854. [PMID: 37852711 DOI: 10.1016/j.bpg.2023.101854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/17/2023] [Revised: 07/04/2023] [Accepted: 07/30/2023] [Indexed: 10/20/2023]
Abstract
The implementation of population screening programs for colorectal cancer (CRC) has led to a considerable increase in the prevalence of pT1 CRC originating in polyps amenable to local treatment. However, a high proportion of patients are referred for unnecessary oncological surgery without a clear survival benefit. Selecting the appropriate endoscopic resection technique at the moment of diagnosis is therefore crucial to providing the best treatment alternative for each individual polyp and patient. To this end, it is imperative to improve optical diagnostic skills for differentiating pT1 CRCs and to decide on the appropriate initial therapy. En bloc resection is crucial to obtain an adequate histological specimen that may allow organ-preserving therapeutic management. In this review, we address key challenges in pT1 CRC management, explore the efficacy and safety of the available diagnostic and therapeutic approaches, and shed light on upcoming advances in the field.
Affiliation(s)
- Angelo Brunori
- Gastroenterology and Digestive Endoscopy, Università degli Studi di Perugia, Italy
- Maria Daca-Alvarez
- Department of Gastroenterology, Hospital Clinic de Barcelona, Institut d'Investigacions Biomediques August Pi I Sunyer (IDIBAPS), Hospital Clinic of Barcelona, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBERehd), Spain
- Maria Pellisé
- Department of Gastroenterology, Hospital Clinic de Barcelona, Institut d'Investigacions Biomediques August Pi I Sunyer (IDIBAPS), Hospital Clinic of Barcelona, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBERehd), Universitat de Barcelona, Barcelona, Spain.
23
Tee CHN, Ravi R, Ang TL, Li JW. Role of artificial intelligence in Barrett’s esophagus. Artif Intell Gastroenterol 2023; 4:28-35. [DOI: 10.35712/aig.v4.i2.28] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 05/17/2023] [Accepted: 06/12/2023] [Indexed: 09/07/2023] Open
Abstract
The application of artificial intelligence (AI) in gastrointestinal endoscopy has gained significant traction over the last decade. One of the more recent applications of AI in this field is the detection of dysplasia and cancer in Barrett’s esophagus (BE). AI using deep learning methods has shown promise as an adjunct to the endoscopist in detecting dysplasia and cancer. Apart from visual detection and diagnosis, AI may also aid in reducing the considerable interobserver variability in identifying and distinguishing dysplasia on whole-slide images from digitized BE histology slides. This review aims to provide a comprehensive summary of the key studies thus far, as well as an insight into the future role of AI in Barrett’s esophagus.
Affiliation(s)
- Chin Hock Nicholas Tee
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore 529889, Singapore
- Rajesh Ravi
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore 529889, Singapore
- Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore 529889, Singapore
- James Weiquan Li
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore 529889, Singapore
24
Ahmad HA, East JE, Panaccione R, Travis S, Canavan JB, Usiskin K, Byrne MF. Artificial Intelligence in Inflammatory Bowel Disease Endoscopy: Implications for Clinical Trials. J Crohns Colitis 2023; 17:1342-1353. [PMID: 36812142 PMCID: PMC10441563 DOI: 10.1093/ecco-jcc/jjad029] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Indexed: 02/24/2023]
Abstract
Artificial intelligence shows promise for clinical research in inflammatory bowel disease endoscopy. Accurate assessment of endoscopic activity is important in clinical practice and in inflammatory bowel disease clinical trials. Emerging artificial intelligence technologies can increase the efficiency and accuracy of assessing both the baseline endoscopic appearance in patients with inflammatory bowel disease and the impact that therapeutic interventions have on mucosal healing. In this review, state-of-the-art endoscopic assessment of mucosal disease activity in inflammatory bowel disease clinical trials is described, covering the potential for artificial intelligence to transform the current paradigm, its limitations, and suggested next steps. Site-based artificial intelligence quality evaluation and inclusion of patients in clinical trials without the need for a central reader are proposed; for following patient progress, a second reading using artificial intelligence alongside a central reader with expedited reading is proposed. Artificial intelligence will support precision endoscopy in inflammatory bowel disease and is on the threshold of advancing inflammatory bowel disease clinical trial recruitment.
Affiliation(s)
- James E East
- Translational Gastroenterology Unit, Oxford NIHR Biomedical Research Centre, University of Oxford, Oxford, UK
- Remo Panaccione
- Inflammatory Bowel Disease Clinic, University of Calgary, Calgary, AB, Canada
- Simon Travis
- Translational Gastroenterology Unit, Oxford NIHR Biomedical Research Centre, University of Oxford, Oxford, UK
- Michael F Byrne
- University of British Columbia, Division of Gastroenterology, Department of Medicine, Vancouver, BC, Canada
- Satisfai Health, Vancouver, BC, Canada
25
Gong EJ, Bang CS, Lee JJ, Baik GH, Lim H, Jeong JH, Choi SW, Cho J, Kim DY, Lee KB, Shin SI, Sigmund D, Moon BI, Park SC, Lee SH, Bang KB, Son DS. Deep learning-based clinical decision support system for gastric neoplasms in real-time endoscopy: development and validation study. Endoscopy 2023; 55:701-708. [PMID: 36754065 DOI: 10.1055/a-2031-0691] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/10/2023]
Abstract
BACKGROUND : Deep learning models have previously been established to predict the histopathology and invasion depth of gastric lesions using endoscopic images. This study aimed to establish and validate a deep learning-based clinical decision support system (CDSS) for the automated detection and classification (diagnosis and invasion depth prediction) of gastric neoplasms in real-time endoscopy. METHODS : The same 5017 endoscopic images that were employed to establish previous models were used for the training data. The primary outcomes were: (i) the lesion detection rate for the detection model, and (ii) the lesion classification accuracy for the classification model. For performance validation of the lesion detection model, 2524 real-time procedures were tested in a randomized pilot study. Consecutive patients were allocated either to CDSS-assisted or conventional screening endoscopy. The lesion detection rate was compared between the groups. For performance validation of the lesion classification model, a prospective multicenter external test was conducted using 3976 novel images from five institutions. RESULTS : The lesion detection rate was 95.6 % (internal test). On performance validation, CDSS-assisted endoscopy showed a higher lesion detection rate than conventional screening endoscopy, although statistically not significant (2.0 % vs. 1.3 %; P = 0.21) (randomized study). The lesion classification rate was 89.7 % in the four-class classification (advanced gastric cancer, early gastric cancer, dysplasia, and non-neoplastic) and 89.2 % in the invasion depth prediction (mucosa confined or submucosa invaded; internal test). On performance validation, the CDSS reached 81.5 % accuracy in the four-class classification and 86.4 % accuracy in the binary classification (prospective multicenter external test). CONCLUSIONS : The CDSS demonstrated its potential for real-life clinical application and high performance in terms of lesion detection and classification of detected lesions in the stomach.
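A minimal sketch of the detect-then-classify structure described here is shown below, assuming lesion crops have already been returned by a separate detection model; the backbone, layer sizes, and head dimensions are illustrative assumptions, not the published architecture.

```python
# Minimal sketch of a detect-then-classify CDSS pipeline: detected lesion
# crops are passed to a shared CNN backbone with two heads, one for the
# four-class diagnosis and one for binary invasion-depth prediction.
# Sizes are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.diagnosis_head = nn.Linear(64, 4)  # AGC / EGC / dysplasia / non-neoplastic
        self.depth_head = nn.Linear(64, 2)      # mucosa-confined vs submucosa-invaded

    def forward(self, crops: torch.Tensor):
        feats = self.backbone(crops)
        return self.diagnosis_head(feats), self.depth_head(feats)

# crops: lesion regions already returned by a separate detection model
crops = torch.randn(3, 3, 224, 224)
diagnosis_logits, depth_logits = LesionClassifier()(crops)
print(diagnosis_logits.shape, depth_logits.shape)  # torch.Size([3, 4]) torch.Size([3, 2])
```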
Affiliation(s)
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, South Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, South Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Hyun Lim
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Sung Chul Park
- Department of Internal Medicine, School of Medicine, Kangwon National University, Chuncheon, South Korea
- Sang Hoon Lee
- Department of Internal Medicine, School of Medicine, Kangwon National University, Chuncheon, South Korea
- Ki Bae Bang
- Department of Internal Medicine, Dankook University College of Medicine, Cheonan, South Korea
- Dae-Soon Son
- Division of Data Science, Data Science Convergence Research Center, Hallym University, Chuncheon, South Korea
26
Gunasekaran H, Ramalakshmi K, Swaminathan DK, J A, Mazzara M. GIT-Net: An Ensemble Deep Learning-Based GI Tract Classification of Endoscopic Images. Bioengineering (Basel) 2023; 10:809. [PMID: 37508836 PMCID: PMC10376874 DOI: 10.3390/bioengineering10070809] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 06/14/2023] [Accepted: 07/02/2023] [Indexed: 07/30/2023] Open
Abstract
This paper presents an ensemble of pre-trained models for the accurate classification of endoscopic images associated with gastrointestinal (GI) diseases and illnesses. We propose a weighted average ensemble model called GIT-NET to classify GI-tract diseases and evaluate it on the KVASIR v2 dataset with eight classes. When individual models are used for classification, they are often prone to misclassification because they may not learn the characteristics of all classes adequately; each model may learn the characteristics of some classes more efficiently than others. We propose an ensemble model that leverages the predictions of three pre-trained models, DenseNet201, InceptionV3, and ResNet50, with accuracies of 94.54%, 88.38%, and 90.58%, respectively. The predictions of the base learners are combined using two methods: model averaging and weighted averaging. The model averaging ensemble achieves an accuracy of 92.96%, whereas the weighted average ensemble achieves an accuracy of 95.00%. The weighted average ensemble outperforms the model averaging ensemble and all individual models. These results demonstrate that an ensemble of base learners can successfully classify features that were incorrectly learned by individual base learners.
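The weighted-averaging step described in this abstract can be sketched in a few lines; the weights, class count, and probabilities below are illustrative stand-ins rather than GIT-NET's published values.

```python
# Minimal sketch of the weighted-average ensembling idea: per-class
# probabilities from several base classifiers are combined with fixed
# weights before taking the argmax. Weights and probabilities are
# illustrative, not the GIT-NET values.
import numpy as np

def weighted_average_ensemble(probabilities, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalise weights
    stacked = np.stack(probabilities)        # (n_models, n_samples, n_classes)
    return np.tensordot(w, stacked, axes=1)  # (n_samples, n_classes)

# Softmax outputs of three base models on two samples, eight classes each
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(8), size=2) for _ in range(3)]
ensemble_probs = weighted_average_ensemble(probs, weights=[0.5, 0.2, 0.3])
print(ensemble_probs.argmax(axis=1))         # predicted class per sample
```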
Affiliation(s)
- Hemalatha Gunasekaran
- Information Technology, University of Technology and Applied Sciences, Ibri 516, Oman
- Krishnamoorthi Ramalakshmi
- Information Technology, Alliance College of Engineering and Design, Alliance University, Bengaluru 562106, India
- Andrew J
- Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Manuel Mazzara
- Institute of Software Development and Engineering, Innopolis University, 420500 Innopolis, Russia
27
Lo CM, Yang YW, Lin JK, Lin TC, Chen WS, Yang SH, Chang SC, Wang HS, Lan YT, Lin HH, Huang SC, Cheng HH, Jiang JK, Lin CC. Modeling the survival of colorectal cancer patients based on colonoscopic features in a feature ensemble vision transformer. Comput Med Imaging Graph 2023; 107:102242. [PMID: 37172354 DOI: 10.1016/j.compmedimag.2023.102242] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 05/05/2023] [Accepted: 05/07/2023] [Indexed: 05/14/2023]
Abstract
The prognosis of patients with colorectal cancer (CRC) mostly relies on the classic tumor node metastasis (TNM) staging classification; a more accurate and convenient prediction model would improve prognostication and assist in treatment planning. From May 2014 to December 2017, patients who underwent an operation for CRC were enrolled. The proposed feature ensemble vision transformer (FEViT) uses ensemble classifiers to combine relevant colonoscopy features from a pretrained vision transformer with clinical features, including sex, age, family history of CRC, and tumor location, to establish the prognostic model. A total of 1729 colonoscopy images were enrolled in this retrospective study. For the prediction of patient survival, FEViT achieved an accuracy of 94% with an area under the receiver operating characteristic curve of 0.93, outperforming the TNM staging classification (90%, 0.83) in the experiment. FEViT mitigates the limited receptive field and vanishing-gradient problems of conventional convolutional neural networks and is a relatively effective and efficient procedure. The promising accuracy of FEViT in modeling survival makes the prognosis of CRC patients more predictable and practical.
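A minimal sketch of the fusion idea behind FEViT is shown below: an image embedding standing in for the pretrained vision transformer output is concatenated with the four clinical variables and passed to an ensemble classifier. The feature values, embedding size, and choice of a random forest are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of image-plus-clinical feature fusion: a colonoscopy image
# embedding from a pretrained vision transformer is concatenated with
# tabular clinical features (sex, age, family history, tumour location)
# and fed to an ensemble classifier. Feature values are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_patients = 200
vit_embeddings = rng.normal(size=(n_patients, 768))  # stand-in for ViT [CLS] features
clinical = np.column_stack([
    rng.integers(0, 2, n_patients),                   # sex
    rng.integers(30, 90, n_patients),                 # age
    rng.integers(0, 2, n_patients),                   # family history of CRC
    rng.integers(0, 4, n_patients),                   # tumour location (coded)
])
X = np.hstack([vit_embeddings, clinical])
y = rng.integers(0, 2, n_patients)                    # survival label (illustrative)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```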
Affiliation(s)
- Chung-Ming Lo
- Graduate Institute of Library, Information and Archival Studies, National Chengchi University, Taipei, Taiwan
- Yi-Wen Yang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jen-Kou Lin
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Tzu-Chen Lin
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Shone Chen
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Shung-Haur Yang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Surgery, National Yang Ming Chiao Tung University Hospital, Yilan, Taiwan
- Shih-Ching Chang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Huann-Sheng Wang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yuan-Tzu Lan
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Hung-Hsin Lin
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Sheng-Chieh Huang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Hou-Hsuan Cheng
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jeng-Kai Jiang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chun-Chi Lin
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan.
28
Shimizu T, Sasaki Y, Ito K, Matsuzaka M, Sakuraba H, Fukuda S. A trial deep learning-based model for four-class histologic classification of colonic tumor from narrow band imaging. Sci Rep 2023; 13:7510. [PMID: 37161081 PMCID: PMC10169849 DOI: 10.1038/s41598-023-34750-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 05/06/2023] [Indexed: 05/11/2023] Open
Abstract
Narrow band imaging (NBI) has been extensively utilized as a diagnostic tool for colorectal neoplastic lesions. This study aimed to develop a trial deep learning (DL) based four-class classification model for low-grade dysplasia (LGD); high-grade dysplasia or mucosal carcinoma (HGD); superficially invasive submucosal carcinoma (SMs); and deeply invasive submucosal carcinoma (SMd), and to evaluate its potential as a diagnostic tool. We collected a total of 1,390 NBI images as the dataset, including 53 LGD, 120 HGD, 20 SMs, and 17 SMd. A total of 598,801 patches were trimmed from lesion and background regions. A patch-based classification model was built by employing a residual convolutional neural network (CNN) and validated by three-fold cross-validation. The patch-based validation accuracy was 0.876, 0.957, 0.907, and 0.929 for LGD, HGD, SMs, and SMd, respectively. The image-level classification algorithm was derived from the patch-based mapping across the entire image domain, attaining accuracies of 0.983, 0.990, 0.964, and 0.992 for LGD, HGD, SMs, and SMd, respectively. Our CNN-based model demonstrated high performance for categorizing the histological grade of dysplasia as well as the depth of invasion in routine colonoscopy, suggesting a potential diagnostic tool requiring minimal human input.
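The patch-to-image step can be sketched as follows, assuming patch-level softmax outputs from the residual CNN; the mean-probability aggregation rule is an illustrative simplification of the patch-based mapping described above, not the paper's exact algorithm.

```python
# Minimal sketch of patch-to-image aggregation: patch-level class
# probabilities over a lesion are averaged and the image-level label is
# the argmax. The aggregation rule and the values are illustrative.
import numpy as np

CLASSES = ["LGD", "HGD", "SMs", "SMd"]

def image_level_prediction(patch_probs: np.ndarray) -> str:
    # patch_probs: (n_patches, n_classes) softmax outputs of the patch CNN
    mean_probs = patch_probs.mean(axis=0)
    return CLASSES[int(mean_probs.argmax())]

rng = np.random.default_rng(1)
patch_probs = rng.dirichlet(np.ones(4), size=50)  # 50 patches from one lesion
print(image_level_prediction(patch_probs))
```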
Affiliation(s)
- Takeshi Shimizu
- Department of Gastroenterology, Sendai City Medical Center Sendai Open Hospital, 5-22-1 Tsurugaya, Miyagino-ku, Sendai, 983-0824, Japan
- Yoshihiro Sasaki
- Department of Medical Informatics, Hirosaki University Hospital, 53 Hon-cho, Hirosaki, 036-8563, Japan.
- Kei Ito
- Department of Gastroenterology, Sendai City Medical Center Sendai Open Hospital, 5-22-1 Tsurugaya, Miyagino-ku, Sendai, 983-0824, Japan
- Masashi Matsuzaka
- Department of Medical Informatics, Hirosaki University Hospital, 53 Hon-cho, Hirosaki, 036-8563, Japan
- Hirotake Sakuraba
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
- Shinsaku Fukuda
- Department of Community Medical Support, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
29
Gimeno-García AZ, Hernández-Pérez A, Nicolás-Pérez D, Hernández-Guerra M. Artificial Intelligence Applied to Colonoscopy: Is It Time to Take a Step Forward? Cancers (Basel) 2023; 15:cancers15082193. [PMID: 37190122 DOI: 10.3390/cancers15082193] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 04/04/2023] [Accepted: 04/05/2023] [Indexed: 05/17/2023] Open
Abstract
Growing evidence indicates that artificial intelligence (AI) applied to medicine is here to stay. In gastroenterology, AI computer vision applications have been stated as a research priority. The two main AI system categories are computer-aided polyp detection (CADe) and computer-assisted diagnosis (CADx). However, other fields of expansion are those related to colonoscopy quality, such as methods to objectively assess colon cleansing during the colonoscopy, as well as devices to automatically predict and improve bowel cleansing before the examination, predict deep submucosal invasion, obtain a reliable measurement of colorectal polyps and accurately locate colorectal lesions in the colon. Although growing evidence indicates that AI systems could improve some of these quality metrics, there are concerns regarding cost-effectiveness, and large and multicentric randomized studies with strong outcomes, such as post-colonoscopy colorectal cancer incidence and mortality, are lacking. The integration of all these tasks into one quality-improvement device could facilitate the incorporation of AI systems in clinical practice. In this manuscript, the current status of the role of AI in colonoscopy is reviewed, as well as its current applications, drawbacks and areas for improvement.
Affiliation(s)
- Antonio Z Gimeno-García
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Anjara Hernández-Pérez
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- David Nicolás-Pérez
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Manuel Hernández-Guerra
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
30
Cherubini A, Dinh NN. A Review of the Technology, Training, and Assessment Methods for the First Real-Time AI-Enhanced Medical Device for Endoscopy. Bioengineering (Basel) 2023; 10:404. [PMID: 37106592 PMCID: PMC10136070 DOI: 10.3390/bioengineering10040404] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 02/25/2023] [Accepted: 03/22/2023] [Indexed: 04/29/2023] Open
Abstract
Artificial intelligence (AI) has the potential to assist in endoscopy and improve decision making, particularly in situations where humans may make inconsistent judgments. The performance assessment of the medical devices operating in this context is a complex combination of bench tests, randomized controlled trials, and studies on the interaction between physicians and AI. We review the scientific evidence published about GI Genius, the first AI-powered medical device for colonoscopy to enter the market, and the device that is most widely tested by the scientific community. We provide an overview of its technical architecture, AI training and testing strategies, and regulatory path. In addition, we discuss the strengths and limitations of the current platform and its potential impact on clinical practice. The details of the algorithm architecture and the data that were used to train the AI device have been disclosed to the scientific community in the pursuit of a transparent AI. Overall, the first AI-enabled medical device for real-time video analysis represents a significant advancement in the use of AI for endoscopies and has the potential to improve the accuracy and efficiency of colonoscopy procedures.
Affiliation(s)
- Andrea Cherubini
- Cosmo Intelligent Medical Devices, D02KV60 Dublin, Ireland
- Milan Center for Neuroscience, University of Milano–Bicocca, 20126 Milano, Italy
- Nhan Ngo Dinh
- Cosmo Intelligent Medical Devices, D02KV60 Dublin, Ireland
31
Usefulness of a novel computer-aided detection system for colorectal neoplasia: a randomized controlled trial. Gastrointest Endosc 2023; 97:528-536.e1. [PMID: 36228695 DOI: 10.1016/j.gie.2022.09.029] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Revised: 09/11/2022] [Accepted: 09/26/2022] [Indexed: 01/23/2023]
Abstract
BACKGROUND AND AIMS Artificial intelligence-based computer-aided detection (CADe) devices have recently been tested in colonoscopy and shown to increase the adenoma detection rate (ADR), mainly in Asian populations. However, evidence for the benefit of these devices in Western populations is still limited. We tested a new CADe device, ENDO-AID (OIP-1) (Olympus, Tokyo, Japan), in clinical practice. METHODS This randomized controlled trial included 370 consecutive patients who were randomized 1:1 to CADe (n = 185) versus standard exploration (n = 185) from November 2021 to January 2022. The primary endpoint was the ADR. Advanced adenoma was defined as ≥10 mm, harboring high-grade dysplasia, or with a villous pattern; otherwise, the adenoma was considered nonadvanced. ADR was assessed in both groups stratified by endoscopist ADR and colon cleansing. RESULTS In the intention-to-treat analysis, the ADR was 55.1% (102/185) in the CADe group and 43.8% (81/185) in the control group (P = .029). Nonadvanced ADR (54.8% vs 40.8%, P = .01), flat ADR (39.4% vs 24.8%, P = .006), polyp detection rate (67.1% vs 51%, P = .004), and the number of adenomas per colonoscopy were significantly higher in the CADe group than in the control group (median [25th-75th percentile], 1 [0-2] vs 0 [0-1.5], respectively; P = .014). No significant differences were found in serrated ADR. After stratification by endoscopist and bowel cleansing, no statistically significant differences in ADR were found. CONCLUSIONS Colonoscopy assisted by ENDO-AID (OIP-1) increases ADR and the number of adenomas per colonoscopy, suggesting it may aid in the detection of colorectal neoplastic lesions, especially diminutive and flat adenomas. (Clinical trial registration number: NCT04945044.)
32
2022 American Gastroenterological Association-Center for Gastrointestinal Innovation and Technology Tech Summit. Clin Gastroenterol Hepatol 2023; 21:245-249. [PMID: 36108950 DOI: 10.1016/j.cgh.2022.08.045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 08/26/2022] [Indexed: 02/07/2023]
33
Liang J, Jiang Y, Abboud Y, Gaddam S. Role of Endoscopy in Management of Upper Gastrointestinal Cancers. Diseases 2022; 11:diseases11010003. [PMID: 36648868 PMCID: PMC9844461 DOI: 10.3390/diseases11010003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/21/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022] Open
Abstract
Upper gastrointestinal (GI) malignancy is a leading cause of cancer-related morbidity and mortality. Upper endoscopy has an established role in diagnosing and staging upper GI cancers, screening for pre-malignant lesions, and providing palliation in cases of advanced malignancy. New advances in endoscopic techniques and technology have improved diagnostic accuracy and increased the therapeutic potential of upper endoscopy. We aim to describe the different types of endoscopic technology used in cancer diagnosis, summarize the current guidelines for endoscopic diagnosis and treatment of malignant and pre-malignant lesions, and explore new potential roles for endoscopy in cancer therapy.
34
Video-Based Deep Learning to Detect Dyssynergic Defecation with 3D High-Definition Anorectal Manometry. Dig Dis Sci 2022; 68:2015-2022. [PMID: 36401758 DOI: 10.1007/s10620-022-07759-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 11/03/2022] [Indexed: 11/20/2022]
Abstract
BACKGROUND We developed a deep learning algorithm to evaluate defecatory patterns and identify dyssynergic defecation using three-dimensional high-definition anorectal manometry (3D-HDAM). AIMS We developed a 3D-HDAM deep learning algorithm to evaluate for dyssynergia. METHODS Spatial-temporal data were extracted from consecutive 3D-HDAM studies performed between 2018 and 2020 at Dartmouth-Hitchcock Health. The technical procedure and gold-standard definition of dyssynergia were based on the London consensus, adapted to the needs of 3D-HDAM technology. Three machine learning models were generated: (1) traditional machine learning informed by conventional anorectal function metrics, (2) deep learning, and (3) a hybrid approach. Diagnostic accuracy was evaluated using bootstrap sampling to calculate the area under the curve (AUC). To evaluate overfitting, models were validated by adding 502 simulated defecation maneuvers with diagnostic ambiguity. RESULTS 302 3D-HDAM studies representing 1208 simulated defecation maneuvers were included (average age 55.2 years; 80.5% women). The deep learning model had diagnostic accuracy [AUC 0.91 (95% confidence interval 0.89-0.93)] comparable to the traditional [AUC 0.93 (0.92-0.95)] and hybrid [AUC 0.96 (0.94-0.97)] predictive models in training cohorts. However, the deep learning model handled ambiguous tests more cautiously than the other models; it was more likely to designate an ambiguous test as inconclusive [odds ratio 4.21 (2.78-6.38)] than the traditional/hybrid approaches. CONCLUSIONS Deep learning is capable of considering complex spatial-temporal information in 3D-HDAM technology. Future studies are needed to evaluate the clinical context of these preliminary findings.
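The bootstrap used above to attach confidence intervals to the AUC can be sketched as follows, with synthetic labels and scores standing in for the manometry models' outputs; the number of resamples and the percentile interval are illustrative choices.

```python
# Minimal sketch of bootstrap AUC estimation, the kind of resampling used
# to report diagnostic accuracy with a confidence interval. Labels and
# scores below are synthetic stand-ins, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 300)                     # dyssynergia yes/no (synthetic)
y_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.25, 300), 0, 1)

aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))  # resample cases with replacement
    if len(np.unique(y_true[idx])) < 2:              # need both classes in the resample
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC {roc_auc_score(y_true, y_score):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```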
35
Park J, Hwang Y, Kim HG, Lee JS, Kim JO, Lee TH, Jeon SR, Hong SJ, Ko BM, Kim S. Reduced detection rate of artificial intelligence in images obtained from untrained endoscope models and improvement using domain adaptation algorithm. Front Med (Lausanne) 2022; 9:1036974. [PMID: 36438041 PMCID: PMC9684642 DOI: 10.3389/fmed.2022.1036974] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 10/27/2022] [Indexed: 11/11/2022] Open
Abstract
A training dataset that is limited to a specific endoscope model can overfit artificial intelligence (AI) to its unique image characteristics, and the performance of the AI may degrade on images from a different endoscope model. A domain adaptation algorithm, the cycle-consistent adversarial network (cycleGAN), can transform image characteristics into AI-friendly styles. We attempted to confirm the performance degradation of AI models on images from various endoscope models and aimed to improve performance using cycleGAN transformation. Two AI models were developed from esophagogastroduodenoscopy data collected retrospectively over 5 years: one for identifying the endoscope model (Olympus CV-260SL, CV-290 [Olympus, Tokyo, Japan], and PENTAX EPK-i [PENTAX Medical, Tokyo, Japan]) and the other for recognizing the esophagogastric junction (EGJ). The AIs were trained using 45,683 standardized images from 1,498 cases and validated on 624 separate cases. Between the two endoscope manufacturers, there was a difference in image characteristics that could be distinguished by the AI without error. The accuracy of the AI in recognizing the EGJ was >0.979 when the validation dataset came from the same endoscope model as the training dataset, but it deteriorated on datasets from different endoscopes. CycleGAN successfully converted image characteristics and improved AI performance. The improvements were statistically significant and greater for datasets from different endoscope manufacturers [original → AI-trained style, increase in area under the receiver operating characteristic (ROC) curve, P-value: CV-260SL → CV-290, 0.0056, P = 0.0106; CV-260SL → EPK-i, 0.0182, P = 0.0158; CV-290 → CV-260SL, 0.0134, P < 0.0001; CV-290 → EPK-i, 0.0299, P = 0.0001; EPK-i → CV-260SL, 0.0215, P = 0.0024; and EPK-i → CV-290, 0.0616, P < 0.0001]. In conclusion, cycleGAN can transform the diverse image characteristics of endoscope models into an AI-trained style to improve the detection performance of AI.
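At inference time, the adaptation step amounts to passing each frame through the trained generator before the classifier scores it. The sketch below illustrates that flow with untrained placeholder networks; the real cycleGAN generator and EGJ recognizer are trained models and are not reproduced here.

```python
# Minimal sketch of inference-time domain adaptation: frames from an
# untrained endoscope model are translated into the AI-trained style by a
# generator before being scored by the EGJ classifier. Both networks here
# are untrained placeholders that only illustrate the data flow.
import torch
import torch.nn as nn

generator = nn.Sequential(            # stand-in for a trained cycleGAN generator
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)
classifier = nn.Sequential(           # stand-in for the trained EGJ recogniser
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)

frames = torch.rand(4, 3, 256, 256) * 2 - 1   # frames from an unseen endoscope model
with torch.no_grad():
    adapted = generator(frames)               # translate into the AI-trained style
    logits = classifier(adapted)
print(logits.softmax(dim=1))                  # EGJ vs non-EGJ probabilities
```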
Affiliation(s)
- Junseok Park
- Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, South Korea
- Youngbae Hwang
- Department of Intelligent Systems and Robotics, Chungbuk National University, Cheongju, South Korea
- Hyun Gun Kim
- Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, South Korea
- *Correspondence: Hyun Gun Kim
- Joon Seong Lee
- Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, South Korea
- Jin-Oh Kim
- Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, South Korea
- Tae Hee Lee
- Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, South Korea
- Seong Ran Jeon
- Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, South Korea
- Su Jin Hong
- Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, South Korea
- Bong Min Ko
- Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, South Korea
- Seokmin Kim
- Department of Intelligent Systems and Robotics, Chungbuk National University, Cheongju, South Korea
36
Reverberi C, Rigon T, Solari A, Hassan C, Cherubini P, Cherubini A. Experimental evidence of effective human-AI collaboration in medical decision-making. Sci Rep 2022; 12:14952. [PMID: 36056152 PMCID: PMC9440124 DOI: 10.1038/s41598-022-18751-2] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 08/18/2022] [Indexed: 11/25/2022] Open
Abstract
Artificial Intelligence (AI) systems are valuable support for decision-making, with many applications in the medical domain. The interaction between MDs and AI has enjoyed renewed interest following the increased possibilities of deep learning devices. However, we still have limited evidence-based knowledge of the context, design, and psychological mechanisms that craft an optimal human-AI collaboration. In this multicentric study, 21 endoscopists reviewed 504 videos of lesions prospectively acquired from real colonoscopies. They were asked to provide an optical diagnosis with and without the assistance of an AI support system. Endoscopists were influenced by AI ([Formula: see text]), but not erratically: they followed the AI advice more when it was correct ([Formula: see text]) than when it was incorrect ([Formula: see text]). Endoscopists achieved this outcome through a weighted integration of their own and the AI's opinions, considering case-by-case estimations of the two reliabilities. This Bayesian-like rational behavior allowed the human-AI hybrid team to outperform both agents taken alone. We discuss the features of the human-AI interaction that determined this favorable outcome.
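One way to formalize the reliability-weighted integration described here is log-odds pooling, in which each agent's opinion is weighted by its estimated reliability. The sketch below is an illustrative formalization under that assumption, not the paper's fitted model, and the probabilities and reliabilities are invented.

```python
# Illustrative formalisation of reliability-weighted opinion pooling:
# the human and AI probabilities that a lesion is an adenoma are combined
# on the log-odds scale with weights proportional to each agent's
# estimated reliability. A sketch, not the paper's fitted model.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def pooled_probability(p_human: float, p_ai: float,
                       reliability_human: float, reliability_ai: float) -> float:
    w_h = reliability_human / (reliability_human + reliability_ai)
    w_a = 1 - w_h
    pooled_logit = w_h * logit(p_human) + w_a * logit(p_ai)
    return 1 / (1 + math.exp(-pooled_logit))

# Endoscopist mildly suspects adenoma; the AI is confident and judged more reliable
print(round(pooled_probability(p_human=0.6, p_ai=0.9,
                               reliability_human=0.7, reliability_ai=0.85), 3))
```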
Affiliation(s)
- Carlo Reverberi
- Department of Psychology, University of Milano-Bicocca, 20126, Milan, Italy
- Milan Center for Neuroscience, University of Milano-Bicocca, 20126, Milan, Italy
- Tommaso Rigon
- Department of Economics, Management and Statistics, University of Milano-Bicocca, 20126, Milan, Italy
- Aldo Solari
- Milan Center for Neuroscience, University of Milano-Bicocca, 20126, Milan, Italy
- Department of Economics, Management and Statistics, University of Milano-Bicocca, 20126, Milan, Italy
- Cesare Hassan
- Department of Biomedical Sciences, Humanitas University, 20072, Pieve Emanuele, Italy
- Endoscopy Unit, Humanitas Clinical and Research Center IRCCS, Rozzano, Italy
- Paolo Cherubini
- Department of Psychology, University of Milano-Bicocca, 20126, Milan, Italy
- Milan Center for Neuroscience, University of Milano-Bicocca, 20126, Milan, Italy
- Department of Neural and Behavioral Sciences, University of Pavia, Pavia, Italy
- Andrea Cherubini
- Milan Center for Neuroscience, University of Milano-Bicocca, 20126, Milan, Italy
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate, 20045, Milan, Italy
37
Gubbiotti A, Spadaccini M, Badalamenti M, Hassan C, Repici A. Key factors for improving adenoma detection rate. Expert Rev Gastroenterol Hepatol 2022; 16:819-833. [PMID: 36151898 DOI: 10.1080/17474124.2022.2128761] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
INTRODUCTION Colonoscopy is a fundamental tool in colorectal cancer (CRC) prevention. Nevertheless, one-fourth of colorectal neoplasms are still missed during colonoscopy, which may be the main reason for post-colonoscopy colorectal cancer (PCCRC). The adenoma detection rate (ADR) is currently the quality indicator that correlates best with PCCRC incidence. AREAS COVERED We performed a literature review to summarize the evidence on key factors affecting ADR: endoscopist education and training, patient management, endoscopic techniques, improved navigation (exposure defects), and enhanced lesion recognition (vision defects). EXPERT OPINION 'Traditional' factors, such as split-dose bowel preparation, adequate withdrawal time, and a second view of the right colon, have a significant impact on ADR. Several devices and technologies have been developed to promote high-quality colonoscopy; however, artificial intelligence may be considered the most promising tool for ADR improvement, provided that endoscopist education and procedure recording are guaranteed.
Affiliation(s)
- Alessandro Gubbiotti
- Humanitas University, Department of Biomedical Sciences, Pieve Emanuele, Italy; IRCCS Humanitas Research Hospital, Digestive Endoscopy Unit, Division of Gastroenterology, Rozzano, Italy
- Marco Spadaccini
- Humanitas University, Department of Biomedical Sciences, Pieve Emanuele, Italy; IRCCS Humanitas Research Hospital, Digestive Endoscopy Unit, Division of Gastroenterology, Rozzano, Italy
- Matteo Badalamenti
- Humanitas University, Department of Biomedical Sciences, Pieve Emanuele, Italy; IRCCS Humanitas Research Hospital, Digestive Endoscopy Unit, Division of Gastroenterology, Rozzano, Italy
- Cesare Hassan
- Humanitas University, Department of Biomedical Sciences, Pieve Emanuele, Italy; IRCCS Humanitas Research Hospital, Digestive Endoscopy Unit, Division of Gastroenterology, Rozzano, Italy
- Alessandro Repici
- Humanitas University, Department of Biomedical Sciences, Pieve Emanuele, Italy; IRCCS Humanitas Research Hospital, Digestive Endoscopy Unit, Division of Gastroenterology, Rozzano, Italy
38
Biffi C, Salvagnini P, Dinh NN, Hassan C, Sharma P, Cherubini A. A novel AI device for real-time optical characterization of colorectal polyps. NPJ Digit Med 2022; 5:84. [PMID: 35773468 PMCID: PMC9247164 DOI: 10.1038/s41746-022-00633-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 06/16/2022] [Indexed: 01/03/2023] Open
Abstract
Accurate in-vivo optical characterization of colorectal polyps is key to select the optimal treatment regimen during colonoscopy. However, reported accuracies vary widely among endoscopists. We developed a novel intelligent medical device able to seamlessly operate in real-time using conventional white light (WL) endoscopy video stream without virtual chromoendoscopy (blue light, BL). In this work, we evaluated the standalone performance of this computer-aided diagnosis device (CADx) on a prospectively acquired dataset of unaltered colonoscopy videos. An international group of endoscopists performed optical characterization of each polyp acquired in a prospective study, blinded to both histology and CADx result, by means of an online platform enabling careful video assessment. Colorectal polyps were categorized by reviewers, subdivided into 10 experts and 11 non-experts endoscopists, and by the CADx as either “adenoma” or “non-adenoma”. A total of 513 polyps from 165 patients were assessed. CADx accuracy in WL was found comparable to the accuracy of expert endoscopists (CADxWL/Exp; OR 1.211 [0.766–1.915]) using histopathology as the reference standard. Moreover, CADx accuracy in WL was found superior to the accuracy of non-expert endoscopists (CADxWL/NonExp; OR 1.875 [1.191–2.953]), and CADx accuracy in BL was found comparable to it (CADxBL/CADxWL; OR 0.886 [0.612–1.282]). The proposed intelligent device shows the potential to support non-expert endoscopists in systematically reaching the performances of expert endoscopists in optical characterization.
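The comparisons above are expressed as odds ratios with 95% CIs; a minimal sketch of computing an odds ratio and a Wald interval from accuracy counts is shown below, with illustrative counts rather than the study data, and without assuming the study's exact statistical model.

```python
# Minimal sketch: odds ratio with a 95% Wald confidence interval comparing
# the accuracy of two readers (e.g., CADx vs. non-expert endoscopists).
# Counts are illustrative, not the study data.
from math import exp, log, sqrt

def accuracy_odds_ratio(correct_a, wrong_a, correct_b, wrong_b, z=1.96):
    odds_ratio = (correct_a / wrong_a) / (correct_b / wrong_b)
    se = sqrt(1 / correct_a + 1 / wrong_a + 1 / correct_b + 1 / wrong_b)
    return odds_ratio, exp(log(odds_ratio) - z * se), exp(log(odds_ratio) + z * se)

or_, lo, hi = accuracy_odds_ratio(correct_a=180, wrong_a=20, correct_b=160, wrong_b=40)
print(f"OR {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```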
Affiliation(s)
- Carlo Biffi
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate/Rome, Italy
- Pietro Salvagnini
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate/Rome, Italy
- Nhan Ngo Dinh
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate/Rome, Italy
- Cesare Hassan
- Gastroenterology Unit, Nuovo Regina Margherita Hospital, Rome, Italy; Endoscopy Unit, Humanitas Clinical and Research Center IRCCS, Rozzano, Italy
- Prateek Sharma
- VA Medical Center, Kansas City, MO, USA; University of Kansas School of Medicine, Kansas City, MO, USA
- Andrea Cherubini
- Artificial Intelligence Group, Cosmo AI/Linkverse, Lainate/Rome, Italy; Milan Center for Neuroscience, University of Milano-Bicocca, 20126, Milano, Italy
39
Yoo BS, Houston KV, D'Souza SM, Elmahdi A, Davis I, Vilela A, Parekh PJ, Johnson DA. Advances and horizons for artificial intelligence of endoscopic screening and surveillance of gastric and esophageal disease. Artif Intell Med Imaging 2022; 3:70-86. [DOI: 10.35711/aimi.v3.i3.70] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Revised: 05/18/2022] [Accepted: 06/20/2022] [Indexed: 02/06/2023] Open
Abstract
The development of artificial intelligence in endoscopic assessment of the gastrointestinal tract has shown progressive enhancement in diagnostic acuity. This review discusses the expanding applications for gastric and esophageal diseases. The gastric section covers the utility of AI in detecting and characterizing gastric polyps and further explores prevention, detection, and classification of gastric cancer. The esophageal discussion highlights applications for use in screening and surveillance in Barrett's esophagus and in high-risk conditions for esophageal squamous cell carcinoma. Additionally, these discussions highlight applications for use in assessing eosinophilic esophagitis and future potential in assessing esophageal microbiome changes.
Affiliation(s)
- Byung Soo Yoo
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Kevin V Houston
- Department of Internal Medicine, Virginia Commonwealth University, Richmond, VA 23298, United States
- Steve M D'Souza
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Alsiddig Elmahdi
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Isaac Davis
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Ana Vilela
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Parth J Parekh
- Division of Gastroenterology, Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- David A Johnson
- Division of Gastroenterology, Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
40
No-Code Platform-Based Deep-Learning Models for Prediction of Colorectal Polyp Histology from White-Light Endoscopy Images: Development and Performance Verification. J Pers Med 2022; 12:jpm12060963. [PMID: 35743748 PMCID: PMC9225479 DOI: 10.3390/jpm12060963] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 05/27/2022] [Accepted: 06/10/2022] [Indexed: 12/17/2022] Open
Abstract
Background: The authors previously developed deep-learning models for the prediction of colorectal polyp histology (advanced colorectal cancer, early cancer/high-grade dysplasia, tubular adenoma with or without low-grade dysplasia, or non-neoplasm) from endoscopic images. While the model achieved 67.3% internal-test accuracy and 79.2% external-test accuracy, model development was labour-intensive and required specialised programming expertise. Moreover, the 240-image external-test dataset included only three advanced and eight early cancers, so it was difficult to generalise model performance. These limitations may be mitigated by deep-learning models developed using no-code platforms. Objective: To establish no-code platform-based deep-learning models for the prediction of colorectal polyp histology from white-light endoscopy images and compare their diagnostic performance with traditional models. Methods: The same 3828 endoscopic images used to establish previous models were used to establish new models based on no-code platforms Neuro-T, VLAD, and Create ML-Image Classifier. A prospective multicentre validation study was then conducted using 3818 novel images. The primary outcome was the accuracy of four-category prediction. Results: The model established using Neuro-T achieved the highest internal-test accuracy (75.3%, 95% confidence interval: 71.0–79.6%) and external-test accuracy (80.2%, 76.9–83.5%) but required the longest training time. In contrast, the model established using Create ML-Image Classifier required only 3 min for training and still achieved 72.7% (70.8–74.6%) external-test accuracy. Attention map analysis revealed that the imaging features used by the no-code deep-learning models were similar to those used by endoscopists during visual inspection. Conclusion: No-code deep-learning tools allow for the rapid development of models with high accuracy for predicting colorectal polyp histology.
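For orientation, the sketch below shows the kind of four-category image classifier that such no-code platforms assemble automatically, written out here explicitly as transfer learning on an ImageNet-pretrained ResNet. The dataset path, class labels, backbone, and hyperparameters are illustrative assumptions, not the configuration used in the study.

```python
# Hypothetical sketch of a 4-class white-light image classifier of the kind
# no-code platforms build automatically; paths, labels, and settings are
# illustrative assumptions, not the published setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

CLASSES = ["advanced_cancer", "early_cancer_hgd", "tubular_adenoma", "non_neoplasm"]

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Expects one sub-folder per histology category, e.g. images/train/non_neoplasm/
train_set = datasets.ImageFolder("images/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Internal- and external-test accuracy would then be computed on images held out from training, which is the comparison the study reports.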
Collapse
|
41
|
Kim SY, Park JM. Quality indicators in esophagogastroduodenoscopy. Clin Endosc 2022; 55:319-331. [PMID: 35656624 PMCID: PMC9178133 DOI: 10.5946/ce.2022.094] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Accepted: 04/22/2022] [Indexed: 11/25/2022] Open
Abstract
Esophagogastroduodenoscopy (EGD) has been used to diagnose a wide variety of upper gastrointestinal diseases. In particular, EGD is used to screen subjects at high risk of gastric cancer. Quality control of EGD is important because the diagnostic rate is examiner-dependent. However, there is still no representative quality indicator that can be uniformly applied to EGD. There has been growing awareness of the importance of quality control in improving EGD performance. Therefore, we aimed to review the available and emerging quality indicators for diagnostic EGD.
Collapse
Affiliation(s)
- Sang Yoon Kim
- Department of Internal Medicine, Myongji Hospital, Hanyang University College of Medicine, Goyang, Korea
| | - Jae Myung Park
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Catholic Photomedicine Research Institute, The Catholic University of Korea, Seoul, Korea
- Correspondence: Jae Myung Park, Division of Gastroenterology and Hepatology, Department of Internal Medicine, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Korea. E-mail:
| |
Collapse
|
42
|
Kim HJ, Gong EJ, Bang CS, Lee JJ, Suk KT, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Protruded Lesions Using Wireless Capsule Endoscopy: A Systematic Review and Diagnostic Test Accuracy Meta-Analysis. J Pers Med 2022; 12:jpm12040644. [PMID: 35455760 PMCID: PMC9029411 DOI: 10.3390/jpm12040644] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 04/14/2022] [Accepted: 04/14/2022] [Indexed: 12/13/2022] Open
Abstract
Background: Wireless capsule endoscopy allows the identification of small intestinal protruded lesions, such as polyps, tumors, or venous structures. However, reading wireless capsule endoscopy images or movies is time-consuming, and minute lesions are easy to miss. Computer-aided diagnosis (CAD) has been applied to improve the efficacy of the reading process of wireless capsule endoscopy images or movies. However, no studies have systematically determined the performance of CAD models in diagnosing gastrointestinal protruded lesions. Objective: The aim of this study was to evaluate the diagnostic performance of CAD models for gastrointestinal protruded lesions using wireless capsule endoscopic images. Methods: Core databases were searched for studies on CAD models for the diagnosis of gastrointestinal protruded lesions using wireless capsule endoscopy that presented data on diagnostic performance. A systematic review and diagnostic test accuracy meta-analysis were performed. Results: Twelve studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of protruded lesions were 0.95 (95% confidence interval, 0.93–0.97), 0.89 (0.84–0.92), 0.91 (0.86–0.94), and 74 (43–126), respectively. Subgroup analyses showed robust results. Meta-regression found no source of heterogeneity. Publication bias was not detected. Conclusion: CAD models showed high performance for the optical diagnosis of gastrointestinal protruded lesions based on wireless capsule endoscopy.
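As a reminder of the quantities being pooled here, the short sketch below computes sensitivity, specificity, and the diagnostic odds ratio with a log-scale 95% confidence interval from a single hypothetical 2×2 table; the counts are invented for illustration only.

```python
# Per-study quantities pooled in a diagnostic test accuracy meta-analysis:
# sensitivity, specificity, and the diagnostic odds ratio (DOR) with a
# Wald-type 95% CI on the log scale. The 2x2 counts below are hypothetical.
import math

tp, fp, fn, tn = 90, 12, 10, 108  # hypothetical true/false positives and negatives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
dor = (tp * tn) / (fp * fn)

se_log_dor = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
ci_low = math.exp(math.log(dor) - 1.96 * se_log_dor)
ci_high = math.exp(math.log(dor) + 1.96 * se_log_dor)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"DOR={dor:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```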
Collapse
Affiliation(s)
- Hye Jin Kim
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; (H.J.K.); (E.J.G.); (K.T.S.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea;
| | - Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; (H.J.K.); (E.J.G.); (K.T.S.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
| | - Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; (H.J.K.); (E.J.G.); (K.T.S.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea;
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea
- Correspondence: ; Tel.: +82-33-240-5821; Fax: +82-33-241-8064
| | - Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea;
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
| | - Ki Tae Suk
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; (H.J.K.); (E.J.G.); (K.T.S.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
| | - Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; (H.J.K.); (E.J.G.); (K.T.S.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
| |
Collapse
|
43
|
Schmitz R, Werner R, Repici A, Bisschops R, Meining A, Zornow M, Messmann H, Hassan C, Sharma P, Rösch T. Artificial intelligence in GI endoscopy: stumbling blocks, gold standards and the role of endoscopy societies. Gut 2022; 71:451-454. [PMID: 33479051 DOI: 10.1136/gutjnl-2020-323115] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 01/04/2021] [Accepted: 01/05/2021] [Indexed: 02/06/2023]
Affiliation(s)
- Rüdiger Schmitz
- Interdisciplinary Endoscopy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Rene Werner
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Alessandro Repici
- Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy; Humanitas University, Department of Biomedical Sciences, Milan, Italy
| | - Raf Bisschops
- Gastroenterology, University Hospital Gasthuisberg, Leuven, Belgium
| | - Alexander Meining
- Department of Gastroenterology, University of Würzburg, Würzburg, Germany
| | - Michael Zornow
- Chair for Public and European Law, University of Göttingen, Göttingen, Germany
| | - Helmut Messmann
- Department of Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
| | - Cesare Hassan
- Gastroenterology Unit, Nuovo Regina Margherita Hospital, Rome, Italy
| | - Prateek Sharma
- Division of Gastroenterology and Hepatology, Veterans Affairs Medical Center and University of Kansas, Lawrence, Kansas, USA
| | - Thomas Rösch
- Interdisciplinary Endoscopy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| |
Collapse
|
44
|
Li JW, Wang LM, Ang TL. Artificial intelligence-assisted colonoscopy: a narrative review of current data and clinical applications. Singapore Med J 2022; 63:118-124. [PMID: 35509251 PMCID: PMC9251247 DOI: 10.11622/smedj.2022044] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/22/2023]
Abstract
Colonoscopy is the reference standard procedure for the prevention and diagnosis of colorectal cancer, which is a leading cause of cancer-related deaths in Singapore. Artificial intelligence systems are automated, objective and reproducible. Artificial intelligence-assisted colonoscopy has recently been introduced into clinical practice as a clinical decision support tool. This review article provides a summary of the current published data and discusses ongoing research and current clinical applications of artificial intelligence-assisted colonoscopy.
Collapse
Affiliation(s)
- James Weiquan Li
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- SingHealth Duke-NUS Medicine Academic Clinical Programme, Singapore
| | - Lai Mun Wang
- Pathology Section, Department of Laboratory Medicine, Changi General Hospital, Singapore
- SingHealth Duke-NUS Pathology Academic Clinical Programme, Singapore
| | - Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- SingHealth Duke-NUS Medicine Academic Clinical Programme, Singapore
| |
Collapse
|
45
|
Glissen Brown JR, Waljee AK, Mori Y, Sharma P, Berzin TM. Charting a path forward for clinical research in artificial intelligence and gastroenterology. Dig Endosc 2022; 34:4-12. [PMID: 33715244 DOI: 10.1111/den.13974] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 03/02/2021] [Accepted: 03/11/2021] [Indexed: 12/12/2022]
Abstract
Gastroenterology has been an early leader in bridging the gap between artificial intelligence (AI) model development and clinical trial validation, and in recent years we have seen the publication of several randomized clinical trials examining the role of AI in gastroenterology. As AI applications for clinical medicine advance rapidly, there is a clear need for guidance surrounding AI-specific study design, evaluation, comparison, analysis, and reporting of results. Several initiatives are in the publication or pre-publication phase, including AI-specific amendments to minimum reporting guidelines for clinical trials, society task force initiatives aimed at priority use cases and research priorities, and minimum reporting guidelines for clinical prediction models. In this paper, we examine applications of AI in clinical trials and discuss elements of the newly published AI-specific extensions to the Consolidated Standards of Reporting Trials and Standard Protocol Items: Recommendations for Interventional Trials statements that guide clinical trial reporting and development. We then review AI applications at the pre-trial level in both endoscopy and other subfields of gastroenterology, and explore areas where further guidance is needed to supplement what is currently available at this stage.
Collapse
Affiliation(s)
- Jeremy R Glissen Brown
- Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, USA
| | - Akbar K Waljee
- Division of Gastroenterology, University of Michigan Health System, University of Michigan, Ann Arbor, USA
| | - Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan; Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, Oslo, Norway
| | - Prateek Sharma
- Department of Gastroenterology and Hepatology, University of Kansas Medical Center, Kansas City, KS, USA; Department of Gastroenterology, Kansas City VA Medical Center, Kansas City, USA
| | - Tyler M Berzin
- Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, USA
| |
Collapse
|
46
|
Kandel P, Wallace MB. Advanced Imaging Techniques and In vivo Histology: Current Status and Future Perspectives (Lower G.I.). GASTROINTESTINAL AND PANCREATICO-BILIARY DISEASES: ADVANCED DIAGNOSTIC AND THERAPEUTIC ENDOSCOPY 2022:291-310. [DOI: 10.1007/978-3-030-56993-8_110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
|
47
|
Okamoto Y, Yoshida S, Izakura S, Katayama D, Michida R, Koide T, Tamaki T, Kamigaichi Y, Tamari H, Shimohara Y, Nishimura T, Inagaki K, Tanaka H, Yamashita K, Sumimoto K, Oka S, Tanaka S. Development of multi-class computer-aided diagnostic systems using the NICE/JNET classifications for colorectal lesions. J Gastroenterol Hepatol 2022; 37:104-110. [PMID: 34478167 DOI: 10.1111/jgh.15682] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 07/22/2021] [Accepted: 08/30/2021] [Indexed: 12/15/2022]
Abstract
BACKGROUND AND AIM Diagnostic support using artificial intelligence may help standardize the endoscopic diagnosis of colorectal lesions across endoscopists. We developed computer-aided diagnosis (CADx) support systems for diagnosing colorectal lesions using the NBI International Colorectal Endoscopic (NICE) classification and the Japan NBI Expert Team (JNET) classification. METHODS Using a Residual Network as the classifier and NBI images as training images, we developed a CADx based on the NICE classification (CADx-N) and a CADx based on the JNET classification (CADx-J). For validation, 480 non-magnifying and magnifying NBI images were used for the CADx-N and 320 magnifying NBI images were used for the CADx-J. The diagnostic performance of the CADx-N was evaluated according to the magnification rate. RESULTS The accuracy of the CADx-N for Types 1, 2, and 3 was 97.5%, 91.2%, and 93.8%, respectively. The diagnostic performance at each magnification level was good, with no statistically significant difference between levels. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the CADx-J were 100%, 96.3%, 82.8%, 100%, and 96.9% for Type 1; 80.3%, 93.7%, 94.1%, 79.2%, and 86.3% for Type 2A; 80.4%, 84.7%, 46.8%, 96.3%, and 84.1% for Type 2B; and 62.5%, 99.6%, 96.8%, 93.8%, and 94.1% for Type 3, respectively. CONCLUSIONS The multi-class CADx systems showed good diagnostic performance with both the NICE and JNET classifications and may aid in educating non-expert endoscopists and in diagnosing colorectal lesions.
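The per-class figures quoted above follow from treating each NICE/JNET category one-versus-rest against the confusion matrix of predictions. A minimal sketch of that computation, using hypothetical counts rather than the study data, is shown below.

```python
# Derive per-class sensitivity, specificity, PPV, NPV, and accuracy from a
# multi-class confusion matrix, treating each class one-vs-rest.
# The matrix values are hypothetical, not the study data.
import numpy as np

classes = ["Type 1", "Type 2A", "Type 2B", "Type 3"]
# rows = true class, columns = predicted class (hypothetical counts)
cm = np.array([
    [50,  2,  1,  0],
    [ 3, 80, 10,  1],
    [ 0,  9, 40,  2],
    [ 0,  1,  3, 18],
])

total = cm.sum()
for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = total - tp - fn - fp
    print(
        f"{name}: sens={tp/(tp+fn):.2f} spec={tn/(tn+fp):.2f} "
        f"ppv={tp/(tp+fp):.2f} npv={tn/(tn+fn):.2f} acc={(tp+tn)/total:.2f}"
    )
```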
Collapse
Affiliation(s)
- Yuki Okamoto
- Department of Gastroenterology and Metabolism, Hiroshima University Hospital, Hiroshima, Japan
| | - Shigeto Yoshida
- Department of Gastroenterology, JR Hiroshima Hospital, Hiroshima, Japan
| | - Seiji Izakura
- Research Institute for Nanodevice and Bio Systems, Hiroshima University, Hiroshima, Japan
| | - Daisuke Katayama
- Research Institute for Nanodevice and Bio Systems, Hiroshima University, Hiroshima, Japan
| | - Ryuichi Michida
- Research Institute for Nanodevice and Bio Systems, Hiroshima University, Hiroshima, Japan
| | - Tetsushi Koide
- Research Institute for Nanodevice and Bio Systems, Hiroshima University, Hiroshima, Japan
| | - Toru Tamaki
- Department of Computer Science, Nagoya Institute of Technology, Nagoya, Japan
| | - Yuki Kamigaichi
- Department of Gastroenterology and Metabolism, Hiroshima University Hospital, Hiroshima, Japan
| | - Hirosato Tamari
- Department of Gastroenterology and Metabolism, Hiroshima University Hospital, Hiroshima, Japan
| | - Yasutsugu Shimohara
- Department of Gastroenterology and Metabolism, Hiroshima University Hospital, Hiroshima, Japan
| | - Tomoyuki Nishimura
- Department of Gastroenterology and Metabolism, Hiroshima University Hospital, Hiroshima, Japan
| | - Katsuaki Inagaki
- Department of Gastroenterology and Metabolism, Hiroshima University Hospital, Hiroshima, Japan
| | - Hidenori Tanaka
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Ken Yamashita
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Kyoku Sumimoto
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| | - Shiro Oka
- Department of Gastroenterology and Metabolism, Hiroshima University Hospital, Hiroshima, Japan
| | - Shinji Tanaka
- Department of Endoscopy, Hiroshima University Hospital, Hiroshima, Japan
| |
Collapse
|
48
|
Sánchez-Peralta LF, Pagador JB, Sánchez-Margallo FM. Artificial Intelligence for Colorectal Polyps in Colonoscopy. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
49
|
Bang CS. Artificial Intelligence in the Analysis of Upper Gastrointestinal Disorders. THE KOREAN JOURNAL OF HELICOBACTER AND UPPER GASTROINTESTINAL RESEARCH 2021. [DOI: 10.7704/kjhugr.2021.0030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
In the past, conventional machine learning was applied to analyze tabulated medical data, while deep learning was applied to study conditions such as gastrointestinal disorders. Neural networks were used to detect, classify, and delineate lesions in various types of images because the local feature selection and optimization of deep-learning models enable accurate image analysis. With the accumulation of medical records, the evolution of computational power and graphics processing units, and the widespread use of open-source libraries in large-scale machine learning processes, medical artificial intelligence (AI) is overcoming its limitations. While early studies prioritized the automatic diagnosis of cancer or pre-cancerous lesions, the current expanded scope of AI includes benign lesions, quality control, and machine learning analysis of big data. However, the limited commercialization of medical AI and the need to justify its application in each field of research remain restricting factors. Modeling assumes that observations follow certain statistical rules, and external validation checks whether this assumption is correct and generalizable. Therefore, data not used in the training or internal testing process are essential for validating the performance of established AI models. This article summarizes studies on the application of AI models in upper gastrointestinal disorders. Current limitations and perspectives on future development are also discussed.
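The point about held-out data can be made concrete with a small sketch: split at the patient level so the internal test set is never seen during training, and keep a genuinely external dataset for the final check. Everything below (feature matrix, labels, model choice, group sizes) is an illustrative assumption, not a recipe from the article.

```python
# Sketch of the validation principle: evaluate only on data unseen during
# model development, grouped by patient so the same patient never appears in
# both training and test sets. All data here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))            # stand-in image/clinical features
y = rng.integers(0, 2, size=500)          # stand-in binary labels
patient_id = rng.integers(0, 120, size=500)  # several samples per patient

# Patient-level split: no patient contributes to both training and testing.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_id))

model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
internal_auc = roc_auc_score(y[test_idx], model.predict_proba(X[test_idx])[:, 1])

# External validation would repeat the same evaluation on data from a
# different institution or time period, never touched during development.
print(f"internal-test AUROC: {internal_auc:.2f}")
```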
Collapse
|
50
|
The emerging role of artificial intelligence in gastrointestinal endoscopy: A review. GASTROENTEROLOGIA Y HEPATOLOGIA 2021; 45:492-497. [PMID: 34793895 DOI: 10.1016/j.gastrohep.2021.11.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 10/15/2021] [Accepted: 11/07/2021] [Indexed: 11/19/2022]
|