Review Open Access
Copyright ©The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Gastroenterol. Apr 28, 2021; 27(16): 1664-1690
Published online Apr 28, 2021. doi: 10.3748/wjg.v27.i16.1664
Artificial intelligence in gastroenterology and hepatology: Status and challenges
Jia-Sheng Cao, Ming-Yu Chen, Bin Zhang, Jia-Hao Hu, Shi-Jie Li, Xu Feng, Ji-Liang Shen, Xiu-Jun Cai, Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, Hangzhou 310016, Zhejiang Province, China
Zi-Yi Lu, Sarun Juengpanich, Win Topatana, Zhejiang University School of Medicine, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
Xue-Yin Zhou, School of Medicine, Wenzhou Medical University, Wenzhou 325035, Zhejiang Province, China
Yu Liu, College of Life Sciences, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
ORCID number: Jia-Sheng Cao (0000-0002-4047-8899); Zi-Yi Lu (0000-0002-8209-3188); Ming-Yu Chen (0000-0001-5113-754X); Bin Zhang (0000-0002-6888-811X); Sarun Juengpanich (0000-0002-1449-5564); Jia-Hao Hu (0000-0001-5835-1012); Shi-Jie Li (0000-0002-7583-4523); Win Topatana (0000-0001-8580-1920); Xue-Yin Zhou (0000-0002-0209-5248); Xu Feng (0000-0002-4445-8174); Ji-Liang Shen (0000-0001-9702-4735); Yu Liu (0000-0001-9439-0107); Xiu-Jun Cai (0000-0002-6457-0577).
Author contributions: Cao JS, Lu ZY, Chen MY, and Cai XJ designed the study and collected the data; Zhang B, Juengpanich S, Hu JH, Li SJ, Topatana W, and Zhou XY analyzed and interpreted the data; Cao JS, Lu ZY, and Chen MY wrote the manuscript; Cai XJ revised the manuscript; all authors approved the final version of the manuscript.
Supported by Zhejiang Medical and Health Science and Technology Project, No. 2019321842; National Natural Science Foundation of China, No. 81827804; and Zhejiang Clinical Research Center of Minimally Invasive Diagnosis and Treatment of Abdominal Diseases, No. 2018E50003.
Conflict-of-interest statement: The authors deny any conflict of interest.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Xiu-Jun Cai, FACS, FRCS, MD, PhD, Chief Doctor, Professor, Surgeon, Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University, No. 3 Qingchun East Road, Hangzhou 310016, Zhejiang Province, China. srrsh_cxj@zju.edu.cn
Received: January 15, 2021
Peer-review started: January 15, 2021
First decision: February 9, 2021
Revised: February 11, 2021
Accepted: March 17, 2021
Article in press: March 17, 2021
Published online: April 28, 2021
Processing time: 95 Days and 13 Hours

Abstract

Originally proposed by John McCarthy in 1955, artificial intelligence (AI) has achieved breakthroughs and revolutionized how clinical medicine handles the growing workload of medical records and digital images. Doctors are paying increasing attention to AI technologies for various diseases in the fields of gastroenterology and hepatology. This review will illustrate the AI workflow for medical image analysis, including data processing, model establishment, and model validation. Furthermore, we will summarize AI applications in endoscopy, radiology, and pathology, such as detecting and evaluating lesions, facilitating treatment, and predicting treatment response and prognosis, with excellent model performance. The current challenges for AI in clinical application include the potential inherent bias of retrospective studies, which requires validation in larger samples, ethical and legal concerns, and the limited interpretability of model outputs. Therefore, doctors and researchers should cooperate to address these challenges and carry out further investigations to develop more accurate AI tools for improved clinical applications.

Key Words: Artificial intelligence; Gastroenterology; Hepatology; Status; Challenges

Core Tip: Artificial intelligence (AI) technologies are widely used for medical image analysis in the gastroenterology and hepatology fields. Several AI models have been developed for accurate diagnosis, treatment, and prognosis based on endoscopy, radiology, and pathology images, achieving performance comparable to that of experts. However, we should be aware of the constraints that limit the acceptance and utilization of AI tools in clinical practice. To use AI wisely, doctors and researchers should work together to address the current challenges and develop more accurate AI tools to improve patient care.



INTRODUCTION

Originally proposed by John McCarthy in 1955, artificial intelligence (AI), which involves machine learning (ML) and problem solving, has achieved breakthroughs and revolutionized how clinical medicine handles the growing workload of medical records and digital images. In clinical practice, AI consists of several overlapping technologies such as ML, artificial neural networks (ANNs), deep learning (DL), convolutional neural networks (CNNs), and recurrent neural networks[1,2] (Figure 1). Since the 1980s, ML has been used to construct mathematical models and predict outcomes based on input data, and it is roughly divided into supervised (labeled data), unsupervised (unlabeled data), and semi-supervised (both labeled and unlabeled data) learning techniques[3]. Recently, as a subset of ML, ANNs have received increased interest because they can identify and learn from input data by themselves instead of relying on labeling by experts[4]. In the last decade, DL, a newer branch of ML, has shown great promise in clinical medicine. DL is particularly suitable for analyzing large, complex, or high-dimensional medical images and for predictive modeling tasks, using multilayer ANNs including CNNs and recurrent neural networks[5,6]. Notably, given that convolutional and pooling layers extract distinct features and fully connected layers make the final classification, CNNs have demonstrated excellent performance in image recognition in endoscopy, radiology, and pathology[7,8].

Figure 1
Figure 1 Timeline and related technologies of artificial intelligence. AI: Artificial intelligence; ANN: Artificial neural network; CNN: Convolutional neural network; RNN: Recurrent neural network.
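
To make the CNN structure described above concrete (convolutional and pooling layers for feature extraction, fully connected layers for the final classification), the following minimal sketch builds a small binary lesion classifier. This is a hypothetical example rather than a model from any cited study; the 224 × 224 input size, layer widths, and task are illustrative assumptions, and TensorFlow/Keras is assumed to be available.

```python
# Minimal illustrative CNN for binary lesion classification (hypothetical task).
# Assumes TensorFlow/Keras is installed; input size and layer widths are arbitrary.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    # Convolution + pooling blocks extract increasingly abstract image features
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    # Fully connected layers turn the extracted features into the final classification
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability that the frame contains a lesion
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
```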

In the fields of gastroenterology and hepatology, doctors are paying attention to AI technologies for the diagnosis, treatment, and prognosis of various diseases because of the heterogeneous expertise levels of doctors (specializing in endoscopy, radiology, or pathology), time-consuming procedures, and increasing workloads. Specifically, doctors usually assess medical images visually to detect and diagnose diseases based on personal expertise and experience. As digitalization matures, quantitative assessment of imaging information is replacing relatively inaccurate qualitative reasoning[9,10]. Traditional image review is time-consuming, yet it yields limited information. For example, AI-based processing of pathology images can assess histopathological classification and predict gene mutations in liver cancer[11], whereas conventional pathological assessment can identify only the nature of the mass. As a populous country, China generates a rapidly growing volume of medical records, which results in high workloads[12]. Despite the progress of AI, gastroenterologists and hepatologists should always be aware of its limitations, such as the retrospective design of most included studies and the use of databases that are not particularly suitable. In addition, doctors must prepare for the effects of AI on real-world clinical practice.

In this review, we aim to (1) introduce how AI technologies process input data, learn from these data, and validate the established model; (2) summarize AI applications in endoscopy, radiology, and pathology for accurate diagnosis, treatment, and prognosis; and (3) discuss the current limitations and future considerations of AI applications in the fields of gastroenterology and hepatology (Figure 2).

Figure 2
Figure 2 Artificial intelligence-assisted endoscopy, radiology, and pathology applications for medical image analysis in the fields of gastroenterology and hepatology, including detecting and evaluating lesions, facilitating treatment, predicting treatment response and prognosis, and other potential uses, based on several deep learning models. CT: Computed tomography; MRI: Magnetic resonance imaging.

METHODS IN DEEP LEARNING

As one of the most suitable approaches for medical image analysis, DL does not require predefined regions of interest on images to complete feature selection and extraction, relying instead on its neural network structure[13,14]. After data collection and processing, an appropriate neural network is chosen to establish a model, followed by model validation to assess its true generalizability.

Data processing

Raw data are collected and analyzed, and corrupt data are identified and cleaned during the processing phase. Data selection methods are provided in Scikit-Learn[15], a Python machine learning library, and include univariate selection, feature importance, correlation matrices, and recursive feature elimination or addition. Other programming environments, such as R (http://www.r-project.org) or MATLAB (MathWorks, Natick, MA, United States), also support AI development and provide similar approaches for specific tasks. Useful data and relevant variables from multiple data sources, which are used to predict outcomes, are selected and divided into an initial training set and a testing set that allow training and internal validation of the model. Data in the training set should be distinct from and nonredundant with data in the testing set. Notably, for small datasets, a higher proportion of data should be included in the testing set, and the performance of the trained model should be measured accurately through cross-validation or a bootstrapping procedure.
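
As a minimal sketch of the data-processing steps described above (feature selection followed by a non-overlapping training/testing split), the snippet below uses Scikit-Learn on a synthetic tabular dataset; the synthetic data, the choice of 10 features, and the 70/30 split are illustrative assumptions rather than settings from the cited studies.

```python
# Sketch of data processing with Scikit-Learn: feature selection, then a
# non-overlapping training/testing split. The synthetic dataset stands in for
# cleaned clinical or imaging features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)

# Univariate selection: keep the 10 features most strongly associated with the outcome
X_uni = SelectKBest(score_func=f_classif, k=10).fit_transform(X, y)

# Recursive feature elimination: repeatedly drop the weakest feature of a base model
rfe = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=10)
X_sel = rfe.fit_transform(X, y)

# Non-redundant training and testing sets (stratified to preserve class balance)
X_train, X_test, y_train, y_test = train_test_split(
    X_sel, y, test_size=0.3, stratify=y, random_state=0)
print(X_train.shape, X_test.shape)
```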

Modeling

After transforming the data into an appropriate format, different tools can be used to implement ML. Although programming tools such as Python, R, and MATLAB differ, they provide similar options and algorithms whose parameters can be adjusted for specific tasks. The major classification algorithms to test are Naive Bayes, decision trees, support vector machines, K-nearest neighbors, and ensemble classifiers. Oversampling or undersampling of imbalanced training data can be used to improve the representation of classes and prevent model bias during the modeling stage. Currently, because the computational workload of batch learning is heavy, mini-batch learning over repeated epochs has become more popular and usually decreases errors in both the training and testing phases. However, an early stopping technique is adopted to address overfitting when additional epochs no longer reduce the error. Based on evaluations of model performance, developers conduct further feature engineering to manipulate the features and improve the predictive value of the model. Model selection is then based primarily on trial and error and on the best performance for the specific problem. Finally, model optimization is performed by adjusting parameters and testing different configurations.
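
The following sketch illustrates the modeling stage on synthetic data: several of the classifiers named above are compared, and a small neural network is trained with mini-batches and early stopping to curb overfitting. All hyperparameters are illustrative assumptions; class imbalance is handled here only through Scikit-Learn's class_weight option, with dedicated resampling libraries as an alternative.

```python
# Sketch of the modeling stage: compare several candidate classifiers, and train a
# small neural network with mini-batches plus early stopping. Settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

candidates = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(class_weight="balanced", random_state=0),  # class_weight helps with imbalance
    "K-Nearest Neighbor": KNeighborsClassifier(),
    "Ensemble (random forest)": RandomForestClassifier(class_weight="balanced", random_state=0),
    # Mini-batch training; early stopping halts training when the validation score stalls
    "Neural network": MLPClassifier(hidden_layer_sizes=(64,), batch_size=32,
                                    early_stopping=True, n_iter_no_change=10,
                                    max_iter=500, random_state=0),
}

for name, clf in candidates.items():
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.3f}")
```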

Model validation

To evaluate AI approaches, one of the most important requirements is external validation, often referred to as a blind test. A model developed within a single dataset may merely reflect its idiosyncrasies and perform poorly in new settings. In addition, models can also be validated by internal data testing (e.g., k-fold cross-validation). In k-fold cross-validation, the dataset is separated into k subsets, with one subset used for testing and the remaining (k-1) subsets used for training the model. The process is repeated k times so that all data are used in both the training and testing sets, and the model performance is calculated as the average over all k iterations. The value of k varies with the size of the dataset; for example, leave-one-out validation, in which k equals the dataset size, may be used for a small training set (< 200 data points). An appropriate and robust predictive model should show consistent performance between the training and testing sets, without evidence of overfitting.
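
Below is a brief sketch of internal validation with k-fold cross-validation and leave-one-out cross-validation using Scikit-Learn; the synthetic data, the choice of k = 5, and the random forest base model are illustrative assumptions.

```python
# Sketch of internal validation: k-fold cross-validation and leave-one-out.
# The synthetic data, k = 5, and the random forest base model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = make_classification(n_samples=150, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0)

# 5-fold cross-validation: every subset serves once as the testing fold
kfold_scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(f"5-fold accuracy: {kfold_scores.mean():.3f} +/- {kfold_scores.std():.3f}")

# Leave-one-out (k equals the dataset size), typically reserved for small datasets
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {loo_scores.mean():.3f}")
```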

ARTIFICIAL INTELLIGENCE IN ENDOSCOPY

With the advent and continuous improvement of fiberoptics, endoscopy has played a significant role in the diagnosis and treatment of gastrointestinal diseases. Nevertheless, gastrointestinal diseases remain an enormous economic burden and lead to high mortality worldwide. AI is applicable to endoscopy in the gastroenterology fields[16-18], for example in the identification of esophageal and gastric neoplasia on esophagogastroduodenoscopy (EGD), the detection of gastrointestinal bleeding on wireless capsule endoscopy (WCE), and polyp detection and characterization on colonoscopy[19-63] (Table 1).

Table 1 Summary of key studies on artificial intelligence-assisted endoscopy in gastroenterology fields.
Ref. | Country | Disease studied | Design of study | Application | Number of cases | Type of machine learning algorithm | Outcomes (%) [Accuracy; Sensitivity/Specificity]
Esophagogastroduodenoscopy
Takiyama et al[19], 2018JapanAnatomical location of upper gastrointestinal tractRetrospectiveRecognition of the anatomical location of upper gastrointestinal tractTraining: 27335 images: 663 larynx, 3252 esophagus, 5479 upper stomach, 7184 middle stomach, 7539 lower stomach, and 3218 duodenum; Testing: 17081 images: 363 larynx, 2142 esophagus, 3532 upper stomach, 6379 middle stomach, 3137 lower stomach, and 1528 duodenumCNNsLarynx: 100; Esophagus: 100; Stomach: 99; Duodenum: 99Larynx: 93.9/100; Esophagus: 95.8/99.7; Stomach: 98.9/93; Duodenum: 87/99.2
Wu et al[20], 2019ChinaDiseases of upper gastrointestinal tractProspectiveMonitor blind spots of upper gastrointestinal tractTraining: 1.28 million images from 1000 object classes; Testing: 3000 images for DCNN1, and 2160 images for DCNN2CNNs90.487.57/95.02
van der Sommen et al[21], 2016NetherlandsEN-BERetrospectiveDetection of EN in BE21 patients with EN-BE (60 images), 23 patients without EN-BE (40 images)SVMNA86/87
Swager et al[22], 2017NetherlandsEN-BERetrospectiveDetection of EN in BE60 images: 40 with EN-BE and 30 without EN-BESVM9590/93
Hashimoto et al[23], 2020United States EN-BERetrospectiveDetection of EN in BETraining: 916 images with EN-BE; Testing: 458 images: 225 dysplasia and 233 non-dysplasiaCNNs95.496.4/94.2
Ebigbo et al[24], 2020GermanyEAC-BERetrospectiveDetection of EAC in BETraining: 129 images; Testing: 62 images: 36 EAC and 26 normal BECNNs89.983.7/100
Horie et al[25], 2019JapanEAC and ESCCRetrospectiveDetection of EAC and ESCCTraining: 384 patients with 32 EAC and 397 ESCC (8428 images); Testing: 47 patients with 8 EAC and 41 ESCC (1118 images)CNNs9898/79
Kumagai et al[26], 2019JapanESCCRetrospectiveDetection of ESCCTraining: 240 patients (4715 images: 1141 ESCC and 3574 benign lesions); Testing: 55 patients (1520 images: 467 ESCC and 1053 benign)CNNs90.992.6/89.3
Zhao et al[27], 2019ChinaESCC RetrospectiveDetection of ESCC165 patients with ESCC and 54 patients without ESCC (1383 images)CNNs89.287.0/84.1
Cai et al[28], 2019ChinaESCCRetrospectiveDetection of ESCCTraining: 746 patients (2438 images: 1332 abnormal and 1096 normal); Testing: 52 patients (187 images)CNNs91.497.8/85.4
Nakagawa et al[29], 2019JapanESCCRetrospectiveDetermination of invasion depthTraining: 804 patients with ESCC (14338 images: 8660 non-ME and 5678 ME); Testing: 155 patients with ESCC (914 images: 405 non-ME and 509 ME)CNNsSM1/SM2, 3: 91.0; Invasion depth: 89.6SM1/SM2, 3: 90.1/95.8; Invasion depth: 89.8/88.3
Tokai et al[30], 2020JapanESCCRetrospectiveDetermination of invasion depth Training: 1751 images with ESCC; Testing: 42 patients with ESCC (293 images)CNNs80.984.1/80.9
Ali et al[31], 2018PakistanEGCRetrospectiveDetection of EGC56 patients with EGC, 120 patients without EGCSVM8791.0/82.0
Sakai et al[32], 2018JapanEGCRetrospectiveDetection of EGCTraining: 58 patients (348943 images: 172555 EGC and 176388 normal); Testing: 58 patients (9650 images: 4653 EGC and 4997 normal)CNNs87.680.0/94.8
Kanesaka et al[33], 2018JapanEGCRetrospectiveDetection of EGCTraining: 126 images: 66 EGC and 60 normal; Testing: 81 images: 61 EGC and 20 normalSVM96.396.7/95.0
Wu et al[34], 2019ChinaEGCRetrospectiveDetection of EGCTraining: 9691 images: 3710 EGC and 5981 normal; Testing: 100 patients: 50 EGC and 50 normalCNNs92.594.0/91.0
Horiuchi et al[35], 2020JapanEGCRetrospectiveDetection of EGCTraining: 2570 images: 1492 EGC and 1078 gastritis; Testing: 285 images: 151 EGC and 107 gastritisCNNs85.395.4/71.0
Zhu et al[36], 2019ChinaInvasive GCRetrospectiveDetermination of invasion depthTraining: 245 patients with GC and 545 patients without GC (5056 images); Testing: 203 images: 68 GC and 135 normalCNNs89.276.5/95.6
Luo et al[37], 2019ChinaEAC, ESCC, and GCProspectiveDetection of upper gastrointestinal cancersTraining: 15040 individuals (125898 images: 31633 cancer and 94265 control); Testing: 1886 individuals (15637 images: 3931 cancer and 11706 control)CNNs91.5-97.794.2/85.8
Nagao et al[38], 2020JapanGCRetrospectiveDetermination of invasion depth1084 patients with GC (16557 images); Training: Testing = 4:1CNNs94.584.4/99.4
Wireless capsule endoscopy
Ayaru et al[39], 2015United KingdomSmall bowel bleedingRetrospectivePrediction of outcomesTraining: 170 patients with small bowel bleeding; Testing: 130 patients with small bowel bleedingANNsRecurrent bleeding 88; Therapeutic intervention: 88; Severe bleeding: 78Recurrent bleeding: 67/91; Therapeutic intervention: 80/89; Severe bleeding: 73/80
Xiao et al[40], 2016ChinaSmall bowel bleedingRetrospectiveDetection of bleeding in GI tractTraining: 8200 images: 2050 bleeding and 6150 non-bleeding; Testing: 1800 images: 800 bleeding and 1000 non-bleedingCNNs99.699.2/99.9
Usman et al[41], 2016South KoreaSmall bowel bleedingRetrospectiveDetection of bleeding in GI tractTraining: 75000 pixels: 25000 bleeding and 50000 non-bleeding; Testing: 8000 pixels: 3000 bleeding and 5000 non-bleedingSVM91.893.7/90.7
Sengupta et al[42], 2017United States Small bowel bleedingRetrospectivePrediction of 30-d mortalityTraining: 4044 patients with small bowel bleeding; Testing: 2060 patients with small bowel bleedingANNs8187.8/90.9
Leenhardt et al[43], 2019FranceSmall bowel bleedingRetrospectiveDetection of GIATraining: 600 images: 300 hemorrhagic GIA and 300 non-hemorrhagic GIA; Testing: 600 images: 300 hemorrhagic GIA and 300 non-hemorrhagic GIACNNs98100.0/96.0
Aoki et al[44], 2020JapanSmall bowel bleedingRetrospectiveDetection of small bowel bleedingTraining: 41 patients (27847 images: 6503 bleeding and 21344 normal); Testing: 25 patients (10208 images: 208 bleeding and 10000 non-bleeding)CNNs99.8996.63/99.96
Yang et al[45], 2020ChinaSmall bowel polypsRetrospectiveDetection of small bowel polyps1000 images: 500 polyps and 500 non-polypsSVM96.0095.80/96.20
Vieira et al[46], 2020PortugalSmall bowel tumorsRetrospectiveDetection of small bowel tumors39 patients (3936 images: 936 tumors and 3000 normal)SVM97.696.1/98.3
Colonoscopy
Fernández-Esparrach et al[47], 2016SpainColorectal polypsRetrospectiveDetection of polyps24 videos containing 31 different polypsEnergy maps7970.4/72.4
Komeda et al[48], 2017JapanColorectal polyps RetrospectiveDetection of polypsTraining: 1800 images: 1200 adenoma and 600 non-adenoma; Testing: 10 casesCNNs70.083.3/50.0
Misawa et al[49], 2017JapanColorectal polypsRetrospectiveDetection of polypsTraining: 1661 images: 1213 neoplasm and 448 non-neoplasm; Testing: 173 images: 124 neoplasm and 49 non-neoplasmSVM87.894.3/71.4
Misawa et al[50], 2018JapanColorectal polypsRetrospectiveDetection of polyps196631 frames: 63135 polyps and 133496 non-polypsCNNs76.590.0/63.3
Chen et al[51], 2018ChinaColorectal polypsRetrospectiveDetection of diminutive colorectal polypsTraining: 2157 images: 681 hyperplastic and 1476 adenomas; Testing: 284 images: 96 hyperplastic and 188 adenomasDNNs90.196.3/78.1
Urban et al[52], 2018United StatesColorectal polypsRetrospectiveDetection of polypsTraining: 8561 images: 4008 polyps and 4553 non-polyps; Testing: 1330 images: 672 polyps and 658 non-polypsCNNs96.496.9/95.0
Renner et al[53], 2018GermanyColorectal polypsRetrospectiveDifferentiation of neoplastic from non-neoplastic polypsTraining: 788 images: 602 adenomas and 186 non-adenomatous polyps; Testing: 186 images: 52 adenomas and 48 hyperplastic lesionsDNNs78.092.3/62.5
Wang et al[54], 2018United StatesColorectal polypsRetrospectiveDetection of polypsTraining: 5545 images: 3634 polyps and 1911 non-polyps; Testing: 27113 images: 5541 polyps and 21572 non-polypsCNNs9894.4/95.9
Mori et al[55], 2018JapanColorectal polypsProspectiveA diagnose-and-leave strategy for diminutive, non-neoplastic rectosigmoid polypsTraining: 61925 images; Testing: 466 cases (287 neoplastic polyps, 175 nonneoplastic polyps, and 4 missing specimens)SVM96.593.8/91.0
Byrne et al[56], 2019CanadaColorectal polypsRetrospectiveDetection and classification of polypsTraining: 60089 frames of 223 videos (29% NICE type 1, 53% NICE type 2 and 18% of normal mucosa with no polyp); Testing: 125 videos: 51 hyperplastic polyps and 74 adenomaCNNs94.098.0/83.0
Blanes-Vidal et al[57], 2019DenmarkColorectal polypsRetrospectiveDetection of polyps131 patients with polyps and 124 patients without polypsCNNs96.497.1/93.3
Lee et al[58], 2020South KoreaColorectal polypsRetrospectiveDetection of polypsTraining: 306 patients (8593 images: 8495 polyp and 98 normal); Testing: 15 patients (15 polyps videos)CNNs93.489.9/93.7
Gohari et al[59], 2011IranCRCRetrospectiveDetermination of prognostic factors of CRC1219 patients with CRCANNsColon cancer: 89; Rectum cancer: 82.7NA/NA
Biglarian et al[60], 2012IranCRCRetrospectivePrediction of distant metastasis in CRC1219 patients with CRCANNs82NA/NA
Takeda et al[61], 2017JapanCRCRetrospectiveDiagnosis of invasive CRCTraining: 5543 images: 2506 non-neoplasms, 2667 adenomas, and 370 invasive cancers; Testing: 200 images: 100 adenomas and 100 invasive cancersSVM94.189.4/98.9
Ito et al[62], 2019JapanCRCRetrospectiveDiagnosis of cT1b CRCTraining: 9942 images: 5124 cTis + cT1a and 4818 cT1b; Testing: 5022 images: 2604 cTis + cT1a and 2418 cT1bCNNs81.267.5/89.0
Zhou et al[63], 2020ChinaCRCRetrospectiveDiagnosis of CRCTraining: 3176 patients with CRC and 9003 patients without CRC (464105 images: 28071 CRC and 436034 non-CRC); Testing: 307 patients with CRC and 1956 patients without CRC (84615 images: 11675 CRC and 72940 non-CRC)CNNs96.391.4/98.0
EGD

Inadequate examination of the upper gastrointestinal tract is one of the reasons that diseases are missed during EGD. With AI-assisted EGD, the upper gastrointestinal tract can be classified into the larynx, esophagus, stomach (upper, middle, and lower parts), and duodenum with high values of the area under the curve (AUC)[19]. Furthermore, several AI technologies have classified images of the stomach during EGD to monitor blind spots, and their accuracy has reached that of experienced endoscopists[20,34].

Barrett's esophagus (BE), which requires endoscopic surveillance, is a potential risk factor for esophageal adenocarcinoma (EAC), whose prognosis is related to disease staging[64,65]. However, accurate detection of esophageal neoplasia and early EAC remains difficult even for experienced endoscopists[66]. An AI system developed by Ebigbo et al[24] enabled early detection of EAC with high sensitivity and specificity, and they subsequently designed a real-time system for neoplasia classification under magnification. Accurate detection of early EAC in BE images is important, and novel systems with high diagnostic accuracy also deserve clinical attention[23]. Esophageal squamous cell carcinoma (ESCC) is often diagnosed at advanced stages, while the detection of early ESCC depends largely on endoscopists' experience because such lesions are almost impossible to visualize with white-light endoscopy. Fortunately, AI technologies can recognize small esophageal lesions (< 10 mm), and one AI system showed a diagnostic accuracy of 91.4%, higher than that of high-level (experience of > 15 years, 88.8%), mid-level (5-15 years, 81.6%), and junior-level (< 5 years, 77.2%) endoscopists[28]. In addition, the prognosis of ESCC can be assessed by differentiating tumor invasion depth[29,30].

The prognosis of gastric cancer (GC) mainly depends on early detection and the invasion depth of the disease. It is extremely difficult for endoscopists to recognize early gastric cancer (EGC), which is often accompanied by gastric mucosal inflammation, and the false-negative rate of EGC on EGD has reached nearly 25.0%[67,68]. AI-assisted EGD has the potential to address these challenging cases. However, the first reported CNNs-based AI system for the detection of EGC had a low positive predictive value of 30.6%, leading to misdiagnosis of gastritis and misinterpretation of the gastric angle as GC[69]. In 2019, Wu et al[34] examined AI-based detection of GC and validated it on 200 endoscopic images, with improved accuracy, sensitivity, and specificity (92.5%, 94%, and 91%, respectively). Furthermore, an AI system named GRAIDS achieved diagnostic sensitivity close to that of expert endoscopists (94.2% vs 94.5%) and demonstrated robust performance with high diagnostic accuracy in a multicenter study[37]. Besides detection, one of the most important criteria for curative resection is invasion depth. Prediction of GC invasion depth by AI was first developed by Kubota et al[70], whose model showed accuracies by T stage of 77% for T1, 49% for T2, 51% for T3, and 55% for T4. Considering that endoscopic mucosal resection is appropriate for intramucosal cancers (M) and submucosal cancers with invasion < 500 μm (SM1), a more detailed classification is urgently needed. Therefore, an AI system was developed to differentiate M/SM1 from SM2 (submucosal invasion ≥ 500 μm) invasion in GC, with significantly higher sensitivity, specificity, and accuracy than those of skilled endoscopists[38].

WCE

AI-assisted WCE enables endoscopists to highlight suspicious regions during noninvasive examination of the digestive tract, including the detection of small bowel bleeding, ulcers, polyps, and celiac disease. Based on specific AI classifiers and validation techniques (mainly k-fold cross-validation), these models use still frames, pixels, or real-time videos to identify patients with small bowel bleeding, with accuracy above 90% in most studies[40,41,43,44]. A CNNs-based algorithm, established in a retrospective analysis of 10000 WCE images (8200 and 1800 in the training and testing sets, respectively) and validated by 10-fold cross-validation, was proposed for the automatic detection of small bowel bleeding. The model achieved a high F1 score of 99.6% and a precision of 99.9% for both active and inactive bleeding frames[40]. Besides detection, several emerging AI tools have been developed to stratify patients by the likelihood of recurrent bleeding, the need for treatment, and estimated mortality, in order to prevent repeated endoscopies in a significant proportion of patients with potential recurrent upper or lower gastrointestinal bleeding[39,42].

Colonoscopy

Colorectal polyp detection and appropriate polypectomy during colonoscopy are the standard way to prevent colorectal cancer (CRC). Since missed colorectal polyps can potentially progress into CRC, AI-assisted colonoscopy has been developed for polyp detection and characterization and for predicting the prognosis of CRC. In terms of polyp detection, an automated AI system using an energy map was developed in 2016, but it showed barely satisfactory performance[47]. Urban et al[52] used 8641 labeled images and 20 colonoscopy videos as the training and testing sets to establish a CNNs model for identifying colonic polyps, and the model had an accuracy of 96.4%. Notably, models should be validated to confirm their accuracy. After validation with 27113 newly collected images from 1138 patients, the model developed by Wang et al[54] showed acceptable performance (sensitivity = 94.38%, specificity = 95.2%, and AUC = 0.984). In addition, polyp characterization with magnifying endoscopic images is useful for identifying pit or vascular patterns to improve performance. AI tools with narrow-band imaging[51] or endoscopic videos[56] can be used to differentiate diminutive hyperplastic polyps from adenomas with high accuracy. Specifically, diminutive polyps (≤ 5 mm) may also be identified during colonoscopy[55]. Moreover, AI may assist doctors in predicting the prognosis of CRC. An ANNs model developed from a dataset of 1219 CRC patients may predict patient survival and influential factors more accurately than a Cox regression model[59], and it also enables doctors to predict the risk of distant metastases[60].

ARTIFICIAL INTELLIGENCE IN RADIOLOGY

Radiological imaging data are growing at a rate disproportionate to the number of trained radiologists, forcing radiologists to compensate by increasing productivity[71]. The emergence of AI technologies has eased this dilemma and dramatically advanced radiological image analysis, including ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI), in the fields of gastroenterology and hepatology. In addition, radiomics, an emerging technology in radiology and oncology, can extract abundant, quantifiable, objective data to evaluate surgical resection and predict treatment response[72-112] (Table 2).

Table 2 Summary of key studies on artificial intelligence-assisted radiology in hepatology fields.
Ref. | Country | Disease studied | Design of study | Application | Number of cases | Type of machine learning algorithm | Outcomes (%) [Accuracy; Sensitivity/Specificity]
Ultrasound-based medical image recognition
Gatos et al[72], 2016United StatesHepatic fibrosisRetrospectiveClassification of CLD85 images: 54 healthy and 31 CLDSVM8783.3/89.1
Gatos et al[73], 2017United StatesHepatic fibrosisRetrospectiveClassification of CLD124 images: 54 healthy and 70 CLDSVM87.393.5/81.2
Chen et al[74], 2017ChinaHepatic fibrosisRetrospectiveClassification of the stages of hepatic fibrosis in HBV patients513 HBV patients with different hepatic fibrosis (119 S0, 164 S1, 88 S2, 72 S3, and 70 S4)SVM, Naive Bayes, RF, KNN82.8792.97/82.50
Li et al[75], 2019ChinaHepatic fibrosisProspectiveClassification of the stages of hepatic fibrosis in HBV patients144 HBV patientsAdaptive boosting, decision tree, RF, SVM8593.8/76.9
Gatos et al[76], 2019United StatesHepatic fibrosisRetrospectiveClassification of CLD88 healthy individuals (88 F0 fibrosis stage images) and 112 CLD patients (112 images: 46 F1, 16 F2, 22 F3, and 28 F4)CNNs82.5NA/NA
Wang et al[77], 2019ChinaHepatic fibrosisProspectiveClassification of the stages of hepatic fibrosis in HBV patientsTraining: 266 HBV patients (1330 images); Testing: 132 HBV patients (660 images)CNNsF4: 100; ≥ F3: 99; ≥ F2: 99F4: 100.0/100.0; ≥ F3: 97.4/95.7; ≥ F2: 100.0/97.7
Kuppili et al[78], 2017United StatesMAFLDRetrospectiveDetection and characterization of FLD63 patients: 27 healthy and 36 MAFLDELM, SVMELM: 96.75; SVM: 89.01NA/NA
Byra et al[79], 2018PolandMAFLDRetrospectiveDiagnosis of the amount of fat in the liver55 severely obese patientsCNNs, SVM96.3100/88.2
Biswas et al[80], 2018United StatesMAFLDRetrospectiveDetection and risk stratification of FLD63 patients: 27 healthy and 36 MAFLDCNNs, SVM, ELMCNNs: 100; SVM: 82; ELM: 92NA/NA
Cao et al[81], 2020ChinaMAFLDRetrospectiveDetection and classification of MAFLD240 patients: 106 healthy, 57 mild MAFLD, 67 moderate MAFLD, and 10 severe MAFLDCNNs95.8NA/NA
Guo et al[82], 2018ChinaLiver tumorsRetrospectiveDiagnosis of liver tumors93 patients with liver tumors: 47 malignant lesions (22 HCC, 5 CC, and 10 RCLM), and 46 benign lesionsDNNs90.4193.56/86.89
Schmauch et al[83], 2019FranceFLLRetrospectiveDetection and characterization of FLLTraining: 367 patients (367 images); Testing: 177 patientsCNNsDetection: 93.5; Characterization: 91.6NA/NA
Yang et al[84], 2020ChinaFLLRetrospectiveDetection of FLLTraining: 1815 patients with FLL (18000 images); Testing: 328 patients with FLL (3718 images)CNNs84.786.5/85.5
CT/MRI-based medical image recognition
Choi et al[85], 2018South KoreaHepatic fibrosisRetrospectiveStaging liver fibrosis by using CT imagesTraining: 7461 patients: 3357 F0, 113 F1, 284 F2, 460 F3, 3247 F4; Testing: 891 patients: 118 F0, 109 F1, 161 F2, 173 F3, 330 F4CNNs92.1–95.0 84.6–95.5/89.9–96.6
He et al[86], 2019United StatesHepatic fibrosisRetrospectiveStaging liver fibrosis by using MRI imagesTraining: 225 CLD patients; Testing: 84 patientsSVM81.872.2/87.0
Ahmed et al[87], 2020EgyptHepatic fibrosisRetrospectiveDetection and staging of liver fibrosis by using MRI images37 patients: 15 healthy and 22 CLDSVM83.781.8/86.6
Hectors et al[88], 2020United StatesLiver fibrosisRetrospectiveStaging liver fibrosis by using MRI imagesTraining: 178 patients with liver fibrosis; Testing: 54 patients with liver fibrosisCNNsF1-F4: 85; F2-F4: 89; F3-F4: 91; F4: 83F1-F4: 84/90; F2-F4: 87/93; F3-F4: 97/83; F4: 68/94
Vivanti et al[89], 2017IsraelLiver tumorsRetrospectiveDetection and segmentation of new tumors in follow-up by using CT images246 liver tumors (97 new tumors)CNNs8670/NA
Yasaka et al[90], 2018JapanLiver massesRetrospectiveDetection and differentiation of liver masses by using CT imagesTraining: 460 patients with liver masses (1068 images: 240 Category A, 121 Category B, 320 Category C, 207 Category D, 180 Category E); Testing: 100 images with liver masses: 21 Category A, 9 Category B, 35 Category C, 20 Category D, 15 Category ECNNs84Category A: 71/NA; Category B: 33/NA; Category C: 94/NA; Category D: 90/NA; Category E: 100/NA
Ibragimov et al[91], 2018United StatesLiver diseases requiring SBRTRetrospectivePrediction of hepatotoxicity after liver SBRT by using CT images125 patients undergone liver SBRT: 58 liver metastases, 36 HCC, 27 cholangiocarcinoma, and 4 other histopathologiesCNNs85NA/NA
Abajian et al[92], 2018United StatesHCCRetrospectivePrediction of HCC response to TACE by using MRI images36 HCC patients treated with TACERF7862.5/82.1
Zhang et al[93], 2018United StatesHCCRetrospectiveClassification of HCC by using MRI images20 patients with HCCCNNs80NA/NA
Morshid et al[94], 2019United StatesHCCRetrospectivePrediction of HCC response to TACE by using CT images105 HCC patients received first-line treatment with TACECNNs74.2NA/NA
Nayak et al[95], 2019IndiaCirrhosis; HCCRetrospectiveDetection of cirrhosis and HCC by using CT images40 patients: 14 healthy, 12 cirrhosis, 14 cirrhosis with HCCSVM86.9100/95
Hamm et al[96], 2019United StatesCommon hepatic lesionsRetrospectiveClassification of common hepatic lesions by using MRI imagesTraining: 434 patients with common hepatic lesions; Testing: 60 patients with common hepatic lesionsCNNs9292/98
Wang et al[97], 2019United StatesCommon hepatic lesionsRetrospectiveDemonstration of a proof-of-concept interpretable DL system by using MRI images60 common hepatic lesions patientsCNNsNA82.9/NA
Jansen et al[98], 2019NetherlandsFLLRetrospectiveClassification of FLL by using MRI images95 patients with FLL (125 benign lesions: 40 adenomas, 29 cysts, and 56 hemangiomas; and 88 malignant lesions: 30 HCC and 58 metastases)RF77Adenoma: 80/78; Cyst: 93/93; Hemangioma: 84/82; HCC: 73/56; Metastasis: 62/77
Mokrane et al[99], 2020FranceHCCRetrospectiveDiagnosis of HCC in patients with cirrhosis by using CT imagesTraining: 106 patients: 85 HCC and 21 non-HCC; Testing: 36 patients: 23 HCC and 13 non-HCCSVM, KNN, RF7070/54
Shi et al[100], 2020ChinaHCCRetrospectiveDetection of HCC from FLL by using CT imagesTraining: 359 lesions: 155 HCC and 204 non-HCC; Testing: 90 lesions: 39 HCC and 51 non-HCCCNNs85.674.4/94.1
Alirr et al[101], 2020KuwaitLiver tumorsRetrospectiveSegmentation of liver tumorsTraining: 100 images with liver tumors;Testing: 31 images with liver tumorsCNNs95.2NA/NA
Zheng et al[102], 2020ChinaPancreatic cancerRetrospectivePancreas segmentation by using MRI images20 patients with PDACCNNs99.86NA/NA
Radiomics
Liang et al[103], 2014ChinaHCCRetrospectivePrediction of recurrence for HCC patients who received RFA83 patients with HCC receiving RFA as first treatment (18 recurrence and 65 non-recurrence)SVM8267/86
Zhou et al[104], 2017ChinaHCCRetrospectiveCharacterization of HCC46 patients with HCC: 21 low-grade (Edmondson grades I and II) and 25 high-grade (Edmondson grades III and IV)Free-form curve-fitting86.9576.00/100.00
Abajian et al[105], 2018United StatesHCCRetrospectivePrediction of response to intra-arterial treatment36 patients undergone trans-arterial treatmentRF7862.5/82.1
Ibragimov et al[91], 2018United StatesLiver tumorsRetrospectivePrediction of hepatobiliary toxicity of SBRT125 patients undergone liver SBRT: 58 liver metastases, 36 HCC, 27 cholangiocarcinoma, and 4 other primary liver tumor histopathologiesCNNs85NA/NA
Morshid et al[94], 2019United StatesHCCRetrospectivePrediction of HCC response to TACE105 patients with HCC: 11 BCLC stage A, 24 BCLC stage B, 67 BCLC stage C, and 3 BCLC stage DCNNs74.2NA/NA
Ma et al[106], 2019ChinaHCCRetrospectivePrediction of MVI in HCCTraining: 110 patients with HCC: 37 with MVI and 73 without MVI; Testing: 47 patients with HCC: 18 with MVI and 29 without MVISVM76.665.6/94.4
Dong et al[107], 2020ChinaHCCRetrospectivePrediction and differentiation of MVI in HCC Prediction: 322 patients with HCC: 144 with MVI and 178 without MVI; Differentiation: 144 patients with HCC and MVIRF, mRMRPrediction: 63.4; Differentiation: 73.0 Prediction: 89.2/48.4; Differentiation: 33.3/80.0
He et al[108], 2020ChinaHCCProspectivePrediction of MVI in HCCTraining: 101 patients with HCC; Testing: 18 patients with HCCLASSO84.4NA/NA
Schoenberg et al[109], 2020GermanyHCCProspectivePrediction of disease-free survival after HCC resectionTraining: 127 patients with HCC; Testing: 53 patients with HCCRF78.8NA/NA
Zhao et al[110], 2020ChinaHCCRetrospectivePrediction of ER of HCC after partial hepatectomyTraining: 78 patients with HCC: 40 with ER and 38 without ER; Testing: 35 patients with HCC: 18 with ER and 17 without ERLASSO80.880.0/81.6
Liu et al[111], 2020ChinaHCCRetrospectivePrediction of progression-free survival of HCC patients after RFA and SRRFA: Training: 149 HCC patients undergone RFA Testing: 65 HCC patients undergone RFA; SR: Training: 144 HCC patients undergone SR Testing: 61 HCC patients undergone SRCox-CNNsRFA: 82.0; SR: 86.3NA/NA
Chen et al[112], 2021ChinaHCCRetrospectivePrediction of HCC response to first TACE by using CT imagesTraining: 355 patients with HCC; Testing: 118 patients with HCCLASSO8185.2/77.2
Abdominal ultrasound

AI technologies have been applied to abdominal ultrasound-based medical images for the assessment of liver diseases, such as hepatic fibrosis and mass lesions. A support vector machine-derived approach was developed by Gatos et al[72] to detect and classify chronic liver disease (CLD) on abdominal ultrasound. After quantifying 85 ultrasound images (54 healthy and 31 with CLD), the proposed model showed superior results (accuracy = 87.0%, sensitivity = 83.3%, and specificity = 89.1%), greatly improving the diagnostic and classification accuracy of CLD. Furthermore, CNNs have been employed to identify and isolate regions of differing temporal stiffness stability on ultrasound and to explore their impact on CLD diagnosis; the updated detection algorithm increased the accuracy to 95.5% after excluding unreliable areas and reducing interobserver variability[76]. Detecting and classifying hepatic mass lesions as benign or malignant is equally important. Schmauch et al[83] performed supervised training (367 ultrasound images together with the radiological reports) to build a DL model, and the resulting algorithm achieved high areas under the receiver operating characteristic curve of 0.935 and 0.916 for lesion detection and characterization, respectively. Although the model could increase diagnostic accuracy and detect potentially malignant mass lesions, it should be further validated. In addition, combining AI technologies with contrast-enhanced ultrasound may improve the identification and characterization of liver cancer. For example, after AI-assisted contrast-enhanced ultrasound was applied to detect liver lesions in the arterial, portal, and late phases, the accuracy, sensitivity, and specificity of the examination were markedly increased[82]. Because gastrointestinal structures are poorly visualized on ultrasound, AI-assisted ultrasound tools in gastroenterology remain limited.

CT/MRI

Liver diseases often show indeterminate behavior on abdominal CT, and biopsy is recommended according to the European Association for the Study of the Liver guidelines[113]. Based on a large dataset of CT images (7461 patients diagnosed with liver fibrosis), a CNNs model was developed that outperformed the radiologists' interpretation[85]. Furthermore, using neural network analysis of contrast-enhanced CT images from 460 patients, Yasaka et al[90] conducted a retrospective study to classify liver masses into five categories with high accuracy, including (1) primary hepatocellular carcinoma (HCC); (2) malignant tumors apart from HCC; (3) early HCC, indeterminate masses, or dysplastic nodules; (4) hemangiomas; and (5) cysts. For patients diagnosed with liver tumors or pancreatic cancer, it is crucial to complete liver or pancreas segmentation to assess the lesions and formulate the ideal treatment plan. Instead of conventional manual segmentation, a CNNs model was proposed to segment liver tumors on CT images with an accuracy of more than 80.0%, supporting suitable decision-making[101]. Additionally, a CNNs model was also developed for pancreas localization and segmentation using CT images[102]. Furthermore, monitoring tumor recurrence plays an important role in follow-up CT. Vivanti et al[89] collected and integrated the initial appearance of tumors, their CT behavior, and quantification of the tumor burden throughout the disease course, and then designed an automated model for detecting tumor recurrence with an accuracy of 86%.

Besides CT images, a DL approach for pancreas segmentation can also be designed from MRI images. Several AI-assisted studies have shown promising results in classifying liver lesions on MRI with or without risk factors and patients' clinical data, improving accuracy and yield over reference models[93,96,98,102].

Radiomics

Currently, radiomics has received great interest from doctors because this AI-assisted technology can extract quantifiable, objective data from radiological images that are not discoverable by visual inspection and can reveal associations with underlying biological processes[114,115]. Preoperative stratification of patients by risk of recurrence and prediction of survival after resection are fundamental to improving prognosis. Microvascular invasion (MVI), an independent risk factor for recurrence, cannot be assessed by conventional radiological techniques[116]. Several studies have used radiomic algorithms based on ultrasound, CT, or MRI to develop radiomic signatures for the preoperative prediction of MVI[106-108]. Besides the prediction of recurrence, radiomics may also be used to predict survival after surgical resection. However, compared with the excellent AI models based on pathological images, radiomics-based predictive models attain only a modest value of approximately 0.78[109].
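
As an illustration of how a radiomic signature for a binary endpoint such as MVI is typically constructed, the sketch below fits a LASSO-penalized logistic regression on a synthetic feature matrix. It assumes that radiomic features have already been extracted from the images with a dedicated radiomics toolkit; all data, feature counts, and penalty settings are placeholders rather than the models of the cited studies.

```python
# Illustrative construction of a radiomic signature for a binary endpoint (e.g., MVI),
# assuming radiomic features have already been extracted into a tabular matrix.
# All data and settings are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=100, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# The L1 (LASSO) penalty shrinks most coefficients to zero, selecting a sparse feature
# subset whose weighted sum acts as the radiomic signature
signature = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
signature.fit(X_train, y_train)

n_selected = int(np.sum(signature.named_steps["logisticregression"].coef_ != 0))
auc = roc_auc_score(y_test, signature.predict_proba(X_test)[:, 1])
print(f"Selected features: {n_selected}, test AUC: {auc:.2f}")
```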

Beyond recurrence and survival prediction, radiomics can also be used to predict patients' response to transarterial chemoembolization (TACE) and radiofrequency ablation (RFA), as well as post-radiotherapy hepatotoxicity. A CNNs model developed from the CT images of 105 HCC patients had higher accuracy in predicting response to TACE than the Barcelona Clinic Liver Cancer stages[94]. In addition, Chen et al[112] designed an excellent clinical-radiomic model to predict objective response to a first TACE based on the CT images of 595 HCC patients, which could assist in selecting HCC patients for TACE. Another study used radiomics of MRI images combined with clinical data to predict TACE response[105]. For HCC at early stages, RFA is a recommended option. Based on radiomics, Liang et al[103] designed a model to predict RFA response and HCC recurrence after RFA, obtaining high AUC, sensitivity, and specificity. Additionally, post-radiotherapy hepatotoxicity should be monitored to adjust the position and dose of radiotherapy. A CNNs model not only identified that irradiation of the proximal portal vein was associated with poor prognosis but also predicted post-radiotherapy hepatotoxicity with an AUC of 0.85[91]. Ibragimov et al[91] applied a CNNs model to determine consistent patterns in toxicity-related dose plans, and the AUC of the model for dose-plan analysis increased from 0.79 to 0.85 after combination with pre-treatment clinical features, showing that the combined framework can indicate the accurate position and dose of radiotherapy.

ARTIFICIAL INTELLIGENCE IN PATHOLOGY

Pathological analysis is considered the gold standard for the diagnosis of diseases in the fields of gastroenterology and hepatology. Currently, there is a worldwide shortage of pathologists, which has become an obstacle to maintaining the accuracy of pathological analysis[117]. With the development of whole-slide imaging (WSI) scanners and AI technologies, combining the two can ease the medical burden, improve diagnostic accuracy, and even predict gene mutations and prognosis[118-147] (Table 3).

Table 3 Summary of key studies on artificial intelligence-assisted pathology in the gastroenterology and hepatology fields.
Ref. | Country | Disease studied | Design of study | Application | Number of cases | Type of machine learning algorithm | Outcomes (%) [Accuracy; Sensitivity/Specificity]
Basic AI-based pathology: diagnosis
Tomita et al[118], 2019United StatesBE and EACRetrospectiveDetection and classification of cancerous and precancerous esophagus tissueTraining: 379 images with 4 classes: normal, BE-no-dysplasia, BE-with-dysplasia, and adenocarcinoma; Testing: 123 images with 4 classes: normal, BE-no-dysplasia, BE-with-dysplasia, and adenocarcinomaCNNsMean: 83; BE-no-dysplasia: 85; BE-with-dysplasia: 89; Adenocarcinoma: 88Normal: 69/71 BE-no-dysplasia: 77/88; BE-with-dysplasia: 21/97; Adenocarcinoma: 71/91
Sharma et al[119], 2017GermanyGCRetrospectiveClassification and necrosis detection of GC454 patients (6810 WSIs: 4994 for cancer classification and 1816 for necrosis detection) (HER2 immunohistochemical stain and HE stained)CNNsCancer classification: 69.90; Necrosis detection: 81.44NA/NA
Li et al[120], 2018ChinaGCRetrospectiveDetection of GC700 images: 560 GC and 140 normal (HE stained)CNNs100NA/NA
Leon et al[121], 2019ColombiaGCRetrospectiveDetection of GC40 images: 20 benign and 20 malignantCNNs89.72NA/NA
Sun et al[122], 2019ChinaGCRetrospectiveDiagnosis of GC500 WSIs of gastric areas with typical cancerous regionsDNNs91.6NA/NA
Ma et al[123], 2020ChinaGCRetrospectiveClassification of lesions in the gastric mucosaTraining: 534 WSIs (1616713 images: 544925 normal, 544624 chronic gastritis, and 527164 cancer) (HE stained) Testing: 153 WSIs (399240 images: 135446 normal, 125783 chronic gastritis, and 138011 cancer) (HE stained)CNNs, RFBenign and cancer: 98.4; Normal, chronic gastritis, and GC: 94.5Benign and cancer: 98.0/98.9; Normal, chronic gastritis, and GC: NA/NA
Yoshida et al[124], 2018JapanGastric lesionsRetrospectiveClassification of gastric biopsy specimens3062 gastric biopsy specimens (HE stained)CNNs55.689.5/50.7
Qu et al[125], 2018JapanGastric lesionsRetrospectiveClassification of gastric pathology imagesTraining: 1080 patches: 540 benign and 540 malignant; Testing: 5400 patches: 2700 benign and 2700 malignantCNNs96.5NA/NA
Iizuka et al[126], 2020JapanGastric and colonic epithelial tumorsRetrospectiveClassification of gastric and colonic epithelial tumors4128 cases of human gastric epithelial lesions and 4036 of colonic epithelial lesions (HE stained)CNNs, RNNsGastric adenocarcinoma: 97; Gastric adenoma: 99; Colonic adenocarcinoma: 96; Colonic adenoma: 99NA/NA
Korbar et al[127], 2017United StatesColorectal polypsRetrospectiveClassification of different types of colorectal polyps on WSIsTraining: 458 WSIs; Testing: 239 WSIsA modified version of a residual network9388.3/NA
Wei et al[128], 2020United StatesColorectal polypsRetrospectiveClassification of colorectal polyps on WSIsTraining: 326 slides with colorectal polyps: 37 tubular, 30 tubulovillous or villous, 111 hyperplastic, 140 sessile serrated, and 8 normal; Testing: 238 slides with colorectal polyps: 95 tubular, 78 tubulovillous or villous, 41 hyperplastic, and 24 sessile serratedCNNsTubular: 84.5; Tubulovillous or villous: 89.5; Hyperplastic: 85.3; Sessile serrated: 88.7Tubular: 73.7/91.6; Tubulovillous or villous: 97.6/87.8; Hyperplastic: 60.3/97.5; Sessile serrated: 79.2/89.7
Shapcott et al[129], 2018UnitedKingdomCRCRetrospectiveDiagnosis of CRC853 hand-marked imagesCNNs84NA/NA
Geessink et al[130], 2019NetherlandsCRCRetrospectiveQuantification of intratumoral stroma in CRC129 patients with CRCCNNs94.691.1/99.4
Song et al[131], 2020ChinaCRCRetrospectiveDiagnosis of CRCTraining: 177 slides: 156 adenoma and 21 non-neoplasm; Testing: 362 slides: 167 adenoma and 195 non-neoplasmCNNs90.489.3/79.0
Wang et al[132], 2015ChinaHepatic fibrosisRetrospectiveAssessment of HBV-related liver fibrosis and detection of liver cirrhosisTraining: 105 HBV patients; Testing: 70 HBV patientsSVM82NA/NA
Forlano et al[133], 2020UnitedKingdomMAFLDRetrospectiveDetection and quantification of histological features of MAFLDTraining: 100 MAFLD patients; Testing: 146 MAFLD patientsK-meansSteatosis: 97; Inflammation: 96; Ballooning: 94; Fibrosis: 92NA/NA
Li et al[134], 2017ChinaHCCRetrospectiveNuclei grading of HCC4017 HCC nuclei patchesCNNs96.7G1: 94.3/97.5; G2: 96.0/97.0;G3: 97.1/96.6; G4: 99.5/95.8
Kiani et al[135], 2020United StatesLiver cancer (HCC and CC)RetrospectiveHistopathologic classification of liver cancerTraining: 70 WSIs: 35 HCC and 35 CC Testing: 80 WSIs: 40 HCC and 40 CCSVM84.272/95
Advanced AI-based pathology: prediction of gene mutations and prognosis
Steinbuss et al[136], 2020GermanyGastritisRetrospectiveIdentification of gastritis subtypesTraining: 92 patients (825 images: 398 low inflammation, 305 severe inflammation, and 122 A gastritis) (HE stained) Testing: 22 patients (209 images: 122 low inflammation, 38 severe inflammation, and 49 A gastritis) (HE stained)CNNs84A gastritis: 88/89; B gastritis: 100/93; C gastritis: 83/100
Liu et al[137], 2020ChinaGastrointestinal neuroendocrine tumorRetrospectivePrediction of Ki-67 positive cells12 patients (18762 images: 5900 positive cells, 6086 positive cells, and 6776 background from ROIs) (HE and IHC stained)CNNs97.897.8/NA
Kather et al[138], 2019GermanyGC and CRCRetrospectivePrediction of MSI in GC and CRCTraining: 360 patients (93408 tiles); Testing: 378 patients (896530 tiles)CNNs84NA/NA
Bychkov et al[139], 2018 FinlandCRCRetrospectivePrediction of CRC outcome420 CRC tumor tissue microarray samplesCNNs, RNNs69NA/NA
Kather et al[140], 2019GermanyCRCRetrospectivePrediction of survival from CRC histology slidesTraining: 86 CRC tissue slides (> 100000 HE image patches); Testing: 25 CRC patients (7180 images)CNNs98.7NA/NA
Echle et al[141], 2020GermanyCRCRetrospectiveDetection of dMMR or MSI in CRCTraining: 5500 patients; Testing: 906 patientsA modified shufflenet DL system9298/52
Skrede et al[142], 2020NorwayCRCRetrospectivePrediction of CRC outcome after resectionTraining: 828 patients (> 12000000 image tiles); Testing: 920 patientsCNNs7652/78
Sirinukunwattana et al[143], 2020UnitedKingdomCRCRetrospectiveIdentification of consensus molecular subtypes of CRCTraining: 278 patients with CRC; Testing: 574 patients with CRC: 144 biopsies and 430 TCGANeural networks with domain-adversarial learningBiopsies: 85; TCGA: 84NA/NA
Jang et al[144], 2020South KoreaCRCRetrospectivePrediction of gene mutations in CRCTraining: 629 WSIs with CRC (HE stained) Testing: 142 WSIs with CRC (HE stained)CNNs64.8-88.0NA/NA
Chaudhary et al[145], 2018United StatesHCCRetrospectiveIdentification of survival subgroups of HCCTraining: 360 HCC patients’ data using RNA-seq, miRNA-seq and methylation data from TCGA; Testing: 684 HCC patients’ data (LIRI-JP cohort: 230; NCI cohort: 221; Chinese cohort: 166, E-TABM-36 cohort: 40, and Hawaiian cohort: 27)DLLIRI-JP cohort: 75; NCI cohort: 67; Chinese cohort: 69; E-TABM-36 cohort: 77; Hawaiian cohort: 82NA/NA
Saillard et al[146], 2020FranceHCCRetrospectivePrediction of the survival of HCC patients treated by surgical resectionTraining: 206 HCC (390 WSIs); Testing: 328 HCC (342 WSIs)CNNs (SCHMOWDER and CHOWDER)SCHMOWDER: 78; CHOWDER: 75NA/NA
Chen et al[11], 2020ChinaHCCRetrospectiveClassification and gene mutation prediction of HCCTraining: 472 WSIs: 383 HCC and 89 normal liver tissue; Testing: 101 WSIs: 67 HCC and 34 normal liver tissue CNNsClassification: 96.0; Tumor differentiation: 89.6; Gene mutation: 71-89NA/NA
Fu et al[147], 2020UnitedKingdomEAC, GC, CRC, and liver cancersRetrospectivePrediction of mutations, tumor composition and prognosis17335 HE-stained images of 28 cancer typesCNNsVariable across tumors/gene alterationsNA/NA
Basic AI-assisted pathology: diagnosis

The basic role of pathology is disease diagnosis. In the field of gastroenterology, there is an increasing need for automatic pathological analysis and diagnosis of GC. Based on digitized pathological slides, several studies have identified and classified GC automatically with high AUCs[120-122,126]. For example, a CNNs model was developed to distinguish gastric mass lesions, including gastric adenocarcinoma, adenoma, and non-neoplastic lesions, and it achieved the highest AUC of 0.97 for the identification of gastric adenocarcinoma[126]. With regard to colorectal lesions, Wei et al[128] trained an AI-assisted model to classify colorectal polyps on WSIs; notably, the performance of the model was similar to that of local pathologists, both at the original institution and at other institutions. In addition, a model based on more than 400 WSIs was developed to differentiate five common subtypes of colorectal polyps with an accuracy of 93%[127]. In CRC, Shapcott et al[129] performed a retrospective study to develop a CNNs model for diagnosis based on 853 hand-marked images, achieving an accuracy of 84%.

In the field of hepatology, AI-assisted pathology has been applied to patients with hepatitis B virus (HBV) infection, metabolic associated fatty liver disease (MAFLD), HCC, and other conditions. An automated, stain-free AI system can quantify the amount of fibrillar collagen to evaluate the degree of HBV-related fibrosis, with an AUC > 0.82[132]. For patients with MAFLD, AI-assisted pathology tools have been used to identify and quantify pathological changes including steatosis, macrosteatosis, lobular inflammation, ballooning, and fibrosis[133], and the algorithm's output scores showed good agreement with the assessments of experienced pathologists. However, only a limited number of AI-assisted pathology tools have been built for HCC diagnosis. Notably, the MFC-CNN-ELM program was designed for nuclei grading of biopsy specimens from HCC patients and showed high performance in classifying tumor cells at different stages of differentiation[134].

Advanced AI-assisted pathology: prediction of gene mutations and prognosis

Apart from AI-assisted pathology tools for diagnosis, it is no surprise that many tools have been developed for the prediction of gene mutations and prognosis in the fields of gastroenterology and hepatology. In CRC, AI tools have shown great effectiveness in predicting prognosis across all tumor stages based on WSIs[139,140], and multicenter studies have further validated this high prognostic performance[142]. Notably, a subset of the genetic defects occurring in gastrointestinal cancers is related to morphological features detectable on WSIs. Among screened genetic defects, microsatellite instability and mismatch-repair deficiency are associated with the survival of gastrointestinal and colorectal cancer patients receiving immunotherapy. Therefore, AI tools were designed to predict microsatellite instability and mismatch-repair deficiency directly from pathology slides and ultimately showed reasonably good performance that could assist immunotherapy decisions[138,141]. Notably, Kather et al[140] further validated their model's performance in predicting overall survival (OS) from CRC pathology slides, with a hazard ratio of 2.29 for CRC-specific OS and a hazard ratio of 1.63 for OS. However, in contrast to these CRC-focused studies, few studies have been designed to predict gene mutations and prognosis in gastric diseases, owing to their more complicated and heterogeneous histomorphology compared with that of the colon[136,137].

In the field of hepatology, AI tools are mainly used to predict gene mutations and prognosis in HCC. For example, one model predicted postoperative survival in HCC more accurately than a composite score of clinical and pathological factors, and it generalized well when validated on an external dataset with different staining and scanning methods[146]. Chen et al[11] investigated a CNN (Inception V3) for automatic classification (benign/malignant classification with 96.0% accuracy, and differentiation degree with 89.6% accuracy) and gene mutation prediction from WSIs after resection of HCC; CTNNB1, FMN2, TP53, and ZFX4 could be predicted from WSIs with external AUCs ranging from 0.71 to 0.89. Models that integrate clinical, biological, genetic, and pathological data are another promising approach. The first multi-omics model combined ribonucleic acid (RNA) sequencing, miRNA sequencing, and methylation data from The Cancer Genome Atlas and then employed AI technologies to predict and stratify the survival of HCC patients[145]. Other attempts have been made to develop models that predict gene mutations directly from WSIs of HCC. Using AI-assisted pathology, some approaches can predict gene expression and RNA-sequencing profiles, which may have potential for clinical translation[147]. Interestingly, certain markers, such as PD-1 and PD-L1 expression, inflammatory gene signatures, and biomarkers of inflammation, trended with improved survival and treatment response in HCC patients[148].
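
As an illustration of the transfer-learning setup behind such models, the sketch below fine-tunes an ImageNet-pretrained Inception V3 on a hypothetical folder of H&E patches for a two-class task (torchvision ≥ 0.13 assumed); the directory layout, hyper-parameters, and labels are illustrative assumptions rather than the published pipeline of Chen et al[11].

```python
# Minimal transfer-learning sketch: fine-tune Inception V3 for benign vs malignant
# classification of H&E patches. All paths and settings are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Inception V3 expects 299 x 299 RGB inputs normalized with ImageNet statistics.
tfm = transforms.Compose([transforms.Resize((299, 299)),
                          transforms.ToTensor(),
                          transforms.Normalize([0.485, 0.456, 0.406],
                                               [0.229, 0.224, 0.225])])
# Hypothetical folder of patches: patches/train/<benign|malignant>/*.png
train_ds = datasets.ImageFolder("patches/train", transform=tfm)
loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

model = models.inception_v3(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)                    # two-class head
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    out = model(images)          # in train mode returns (logits, aux_logits)
    loss = criterion(out.logits, labels) + 0.4 * criterion(out.aux_logits, labels)
    loss.backward()
    optimizer.step()
```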

LIMITATIONS AND FUTURE CONSIDERATIONS

This review retrospectively summarized key and representative articles and may have missed some publications in AI-related journals. Although various studies have shown promising results in AI-assisted gastroenterology and hepatology, several limitations remain to be discussed and resolved. One of the major criticisms is the lack of high-quality training, testing, and validation datasets for the development and validation of AI models. Because most studies are retrospective, selection bias must be considered at the training stage; meanwhile, overfitting and spectrum bias may lead to overestimation of model accuracy and generalizability. Following the rigorous “six-step” translation pipeline[149], doctors and AI researchers should join the calls to build interconnected worldwide networks that collect raw acquisition data, rather than processed medical images, and to train AI at scale to obtain robust and generalizable models. Furthermore, the black-box nature of AI technologies has become a barrier to clinical practice, because developers and users do not know how the model arrives at its output. Explainable AI for reliable healthcare is therefore worth investigating to achieve clinical interpretability and transparency. In addition, from the perspective of ethics and legal liability, AI models may cause errors and challenge the patient-doctor relationship, even though they improve the clinical workflow with enhanced precision. Especially in gastroenterology and hepatology, the discrimination of cancer from benign disease may determine a completely different treatment. If misdiagnosis occurs during AI application, who should take responsibility: the doctor, the programmer, the company providing the system, or the patient? Issues of ethics and legal liability should be addressed at an early stage to maintain the balance between minimal error rates and maximal patient benefit[150,151].
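
The generalization concern can be made concrete by always reporting performance on an external cohort alongside the internal test split, as in the hedged sketch below; the cohort files, feature columns, and classifier choice are hypothetical.

```python
# Hedged sketch: compare internal-test AUC with external-cohort AUC to expose
# overfitting or spectrum bias. File names and columns are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

internal = pd.read_csv("centre_A_features.csv")   # development cohort
external = pd.read_csv("centre_B_features.csv")   # independent cohort, different scanners

X, y = internal.drop(columns="label"), internal["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

auc_internal = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
auc_external = roc_auc_score(external["label"],
                             clf.predict_proba(external.drop(columns="label"))[:, 1])
print(f"internal AUC {auc_internal:.2f} vs external AUC {auc_external:.2f}")
# A large gap between the two suggests overfitting or spectrum bias rather than a
# genuinely generalizable model.
```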

The past decade has seen an increasing number of studies applying AI to gastroenterology and hepatology. This trend will continue, and larger studies will be carried out to compare the performance of medical professionals with AI vs professionals without AI, highlighting the importance of AI assistance. AI technologies will be utilized to develop more accurate models to predict and monitor disease progression and potential complications, and these models may alleviate the shortage of medical resources in remote, underserved, or developing regions. In addition, AI-assisted personalized imaging protocols and immediate three-dimensional reconstruction may further improve diagnostic efficiency and accuracy. Researchers will be able to elucidate the mechanisms of disease progression and treatment response through the combination of multi-modality images or multi-omics data. There is also an emerging trend of applying AI to drug development, such as the prediction of compound toxicity, physical properties, and biological activities, which may assist chemotherapy for digestive system malignancies. Furthermore, AI could be used to process data generated from tissue-on-a-chip platforms, which better recapitulate the tumor microenvironment, thereby enabling precise and individualized chemotherapy in gastroenterology and hepatology. As synthetic lethality becomes a promising genetically targeted cancer therapy[152,153], AI could also be used to identify synthetic lethal partners of overexpressed or mutated genes in tumor cells to kill cancers. Finally, AI tools will not replace endoscopists, radiologists, and pathologists in the near or even distant future. Computers will make predictions and doctors will make the final decisions; in other words, they will always work together to benefit patients.

CONCLUSION

AI is rapidly developing and becoming a promising tool in medical image analysis of endoscopy, radiology, and pathology to improve disease diagnosis and treatment in the fields of gastroenterology and hepatology. Nevertheless, we should be aware of the constraints that limit the acceptance and utilization of AI tools in clinical practice. To use AI wisely, doctors and researchers should cooperate to address the current challenges and develop more accurate AI tools to improve patient care.

ACKNOWLEDGEMENTS

We thank Yun Cai for polishing our manuscript. We are grateful to our colleagues for their assistance in checking the data of the studies.

Footnotes

Manuscript source: Invited manuscript

Specialty type: Gastroenterology and hepatology

Country/Territory of origin: China

Peer-review report’s scientific quality classification

Grade A (Excellent): A

Grade B (Very good): 0

Grade C (Good): 0

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Santos-García G S-Editor: Zhang L L-Editor: Webster JR P-Editor: Wang LL

References
1.  Laskaris R. Artificial Intelligence: A Modern Approach, 3rd edition. Library Journal, 2015; 140: 45-45.  [PubMed]  [DOI]  [Cited in This Article: ]
2.  Colom R, Karama S, Jung RE, Haier RJ. Human intelligence and brain networks. Dialogues Clin Neurosci. 2010;12:489-501.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 154]  [Cited by in F6Publishing: 145]  [Article Influence: 11.2]  [Reference Citation Analysis (0)]
3.  Darcy AM, Louie AK, Roberts LW. Machine Learning and the Profession of Medicine. JAMA. 2016;315:551-552.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 244]  [Cited by in F6Publishing: 225]  [Article Influence: 28.1]  [Reference Citation Analysis (0)]
4.  Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, Cui C, Corrado G, Thrun S, Dean J. A guide to deep learning in healthcare. Nat Med. 2019;25:24-29.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1123]  [Cited by in F6Publishing: 1219]  [Article Influence: 243.8]  [Reference Citation Analysis (0)]
5.  Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol. 2019;25:1666-1683.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in CrossRef: 166]  [Cited by in F6Publishing: 145]  [Article Influence: 29.0]  [Reference Citation Analysis (4)]
6.  Le Berre C, Sandborn WJ, Aridhi S, Devignes MD, Fournier L, Smaïl-Tabbone M, Danese S, Peyrin-Biroulet L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology 2020; 158: 76-94. e2.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 230]  [Cited by in F6Publishing: 291]  [Article Influence: 72.8]  [Reference Citation Analysis (0)]
7.  Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35:1798-1828.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 6542]  [Cited by in F6Publishing: 2416]  [Article Influence: 219.6]  [Reference Citation Analysis (0)]
8.  Kumar A, Kim J, Lyndon D, Fulham M, Feng D. An Ensemble of Fine-Tuned Convolutional Neural Networks for Medical Image Classification. IEEE J Biomed Health Inform. 2017;21:31-40.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 264]  [Cited by in F6Publishing: 156]  [Article Influence: 19.5]  [Reference Citation Analysis (0)]
9.  Ambinder EP. A history of the shift toward full computerization of medicine. J Oncol Pract. 2005;1:54-56.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 17]  [Cited by in F6Publishing: 19]  [Article Influence: 1.5]  [Reference Citation Analysis (0)]
10.  Chen H, Sung JJY. Potentials of AI in medical image analysis in Gastroenterology and Hepatology. J Gastroenterol Hepatol. 2021;36:31-38.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 14]  [Cited by in F6Publishing: 19]  [Article Influence: 6.3]  [Reference Citation Analysis (0)]
11.  Chen M, Zhang B, Topatana W, Cao J, Zhu H, Juengpanich S, Mao Q, Yu H, Cai X. Classification and mutation prediction based on histopathology H&E images in liver cancer using deep learning. NPJ Precis Oncol. 2020;4:14.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 55]  [Cited by in F6Publishing: 86]  [Article Influence: 21.5]  [Reference Citation Analysis (0)]
12.  Fu Y, Schwebel DC, Hu G. Physicians' Workloads in China: 1998⁻2016. Int J Environ Res Public Health. 2018;15.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 25]  [Cited by in F6Publishing: 31]  [Article Influence: 5.2]  [Reference Citation Analysis (0)]
13.  Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform. 2018;19:1236-1246.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1189]  [Cited by in F6Publishing: 785]  [Article Influence: 130.8]  [Reference Citation Analysis (0)]
14.  Shen D, Wu G, Suk HI. Deep Learning in Medical Image Analysis. Annu Rev Biomed Eng. 2017;19:221-248.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2581]  [Cited by in F6Publishing: 1773]  [Article Influence: 253.3]  [Reference Citation Analysis (0)]
15.  Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E. Scikit-learn: Machine Learning in Python. J Mach Learn Res. 2011;12:2825-2830.  [PubMed]  [DOI]  [Cited in This Article: ]
16.  Yu H, Singh R, Shin SH, Ho KY. Artificial intelligence in upper GI endoscopy - current status, challenges and future promise. J Gastroenterol Hepatol. 2021;36:20-24.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 8]  [Cited by in F6Publishing: 7]  [Article Influence: 2.3]  [Reference Citation Analysis (0)]
17.  Mori Y, Neumann H, Misawa M, Kudo SE, Bretthauer M. Artificial intelligence in colonoscopy - Now on the market. What's next? J Gastroenterol Hepatol. 2021;36:7-11.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 28]  [Cited by in F6Publishing: 40]  [Article Influence: 13.3]  [Reference Citation Analysis (0)]
18.  Wu J, Chen J, Cai J. Application of Artificial Intelligence in Gastrointestinal Endoscopy. J Clin Gastroenterol. 2021;55:110-120.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 5]  [Cited by in F6Publishing: 7]  [Article Influence: 2.3]  [Reference Citation Analysis (0)]
19.  Takiyama H, Ozawa T, Ishihara S, Fujishiro M, Shichijo S, Nomura S, Miura M, Tada T. Automatic anatomical classification of esophagogastroduodenoscopy images using deep convolutional neural networks. Sci Rep. 2018;8:7497.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 78]  [Cited by in F6Publishing: 74]  [Article Influence: 12.3]  [Reference Citation Analysis (0)]
20.  Wu L, Zhang J, Zhou W, An P, Shen L, Liu J, Jiang X, Huang X, Mu G, Wan X, Lv X, Gao J, Cui N, Hu S, Chen Y, Hu X, Li J, Chen D, Gong D, He X, Ding Q, Zhu X, Li S, Wei X, Li X, Wang X, Zhou J, Zhang M, Yu HG. Randomised controlled trial of WISENSE, a real-time quality improving system for monitoring blind spots during esophagogastroduodenoscopy. Gut. 2019;68:2161-2169.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 169]  [Cited by in F6Publishing: 187]  [Article Influence: 37.4]  [Reference Citation Analysis (0)]
21.  van der Sommen F, Zinger S, Curvers WL, Bisschops R, Pech O, Weusten BL, Bergman JJ, de With PH, Schoon EJ. Computer-aided detection of early neoplastic lesions in Barrett's esophagus. Endoscopy. 2016;48:617-624.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 111]  [Cited by in F6Publishing: 113]  [Article Influence: 14.1]  [Reference Citation Analysis (1)]
22.  Swager AF, van der Sommen F, Klomp SR, Zinger S, Meijer SL, Schoon EJ, Bergman JJGHM, de With PH, Curvers WL. Computer-aided detection of early Barrett's neoplasia using volumetric laser endomicroscopy. Gastrointest Endosc. 2017;86:839-846.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 103]  [Cited by in F6Publishing: 97]  [Article Influence: 13.9]  [Reference Citation Analysis (0)]
23.  Hashimoto R, Requa J, Dao T, Ninh A, Tran E, Mai D, Lugo M, El-Hage Chehade N, Chang KJ, Karnes WE, Samarasena JB. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett's esophagus (with video). Gastrointest Endosc 2020; 91: 1264-1271. e1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 102]  [Cited by in F6Publishing: 116]  [Article Influence: 29.0]  [Reference Citation Analysis (0)]
24.  Ebigbo A, Mendel R, Probst A, Manzeneder J, Prinz F, de Souza LA Jr, Papa J, Palm C, Messmann H. Real-time use of artificial intelligence in the evaluation of cancer in Barrett's oesophagus. Gut. 2020;69:615-616.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 84]  [Cited by in F6Publishing: 101]  [Article Influence: 25.3]  [Reference Citation Analysis (0)]
25.  Horie Y, Yoshio T, Aoyama K, Yoshimizu S, Horiuchi Y, Ishiyama A, Hirasawa T, Tsuchida T, Ozawa T, Ishihara S, Kumagai Y, Fujishiro M, Maetani I, Fujisaki J, Tada T. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest Endosc. 2019;89:25-32.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 240]  [Cited by in F6Publishing: 235]  [Article Influence: 47.0]  [Reference Citation Analysis (0)]
26.  Kumagai Y, Takubo K, Kawada K, Aoyama K, Endo Y, Ozawa T, Hirasawa T, Yoshio T, Ishihara S, Fujishiro M, Tamaru JI, Mochiki E, Ishida H, Tada T. Diagnosis using deep-learning artificial intelligence based on the endocytoscopic observation of the esophagus. Esophagus. 2019;16:180-187.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 65]  [Cited by in F6Publishing: 56]  [Article Influence: 11.2]  [Reference Citation Analysis (0)]
27.  Zhao YY, Xue DX, Wang YL, Zhang R, Sun B, Cai YP, Feng H, Cai Y, Xu JM. Computer-assisted diagnosis of early esophageal squamous cell carcinoma using narrow-band imaging magnifying endoscopy. Endoscopy. 2019;51:333-341.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 66]  [Cited by in F6Publishing: 74]  [Article Influence: 14.8]  [Reference Citation Analysis (0)]
28.  Cai SL, Li B, Tan WM, Niu XJ, Yu HH, Yao LQ, Zhou PH, Yan B, Zhong YS. Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video). Gastrointest Endosc 2019; 90: 745-753. e2.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 74]  [Cited by in F6Publishing: 92]  [Article Influence: 18.4]  [Reference Citation Analysis (0)]
29.  Nakagawa K, Ishihara R, Aoyama K, Ohmori M, Nakahira H, Matsuura N, Shichijo S, Nishida T, Yamada T, Yamaguchi S, Ogiyama H, Egawa S, Kishida O, Tada T. Classification for invasion depth of esophageal squamous cell carcinoma using a deep neural network compared with experienced endoscopists. Gastrointest Endosc. 2019;90:407-414.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 82]  [Cited by in F6Publishing: 85]  [Article Influence: 17.0]  [Reference Citation Analysis (0)]
30.  Tokai Y, Yoshio T, Aoyama K, Horie Y, Yoshimizu S, Horiuchi Y, Ishiyama A, Tsuchida T, Hirasawa T, Sakakibara Y, Yamada T, Yamaguchi S, Fujisaki J, Tada T. Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma. Esophagus. 2020;17:250-256.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 64]  [Cited by in F6Publishing: 63]  [Article Influence: 15.8]  [Reference Citation Analysis (0)]
31.  Ali H, Yasmin M, Sharif M, Rehmani MH. Computer assisted gastric abnormalities detection using hybrid texture descriptors for chromoendoscopy images. Comput Methods Programs Biomed. 2018;157:39-47.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 31]  [Cited by in F6Publishing: 25]  [Article Influence: 4.2]  [Reference Citation Analysis (0)]
32.  Sakai Y, Takemoto S, Hori K, Nishimura M, Ikematsu H, Yano T, Yokota H. Automatic detection of early gastric cancer in endoscopic images using a transferring convolutional neural network. Annu Int Conf IEEE Eng Med Biol Soc. 2018;2018:4138-4141.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 39]  [Cited by in F6Publishing: 46]  [Article Influence: 9.2]  [Reference Citation Analysis (0)]
33.  Kanesaka T, Lee TC, Uedo N, Lin KP, Chen HZ, Lee JY, Wang HP, Chang HT. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018;87:1339-1344.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 108]  [Cited by in F6Publishing: 114]  [Article Influence: 19.0]  [Reference Citation Analysis (0)]
34.  Wu L, Zhou W, Wan X, Zhang J, Shen L, Hu S, Ding Q, Mu G, Yin A, Huang X, Liu J, Jiang X, Wang Z, Deng Y, Liu M, Lin R, Ling T, Li P, Wu Q, Jin P, Chen J, Yu H. A deep neural network improves endoscopic detection of early gastric cancer without blind spots. Endoscopy. 2019;51:522-531.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 124]  [Cited by in F6Publishing: 133]  [Article Influence: 26.6]  [Reference Citation Analysis (0)]
35.  Horiuchi Y, Aoyama K, Tokai Y, Hirasawa T, Yoshimizu S, Ishiyama A, Yoshio T, Tsuchida T, Fujisaki J, Tada T. Convolutional Neural Network for Differentiating Gastric Cancer from Gastritis Using Magnified Endoscopy with Narrow Band Imaging. Dig Dis Sci. 2020;65:1355-1363.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 70]  [Cited by in F6Publishing: 83]  [Article Influence: 20.8]  [Reference Citation Analysis (1)]
36.  Zhu Y, Wang QC, Xu MD, Zhang Z, Cheng J, Zhong YS, Zhang YQ, Chen WF, Yao LQ, Zhou PH, Li QL. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest Endosc 2019; 89: 806-815. e1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 201]  [Cited by in F6Publishing: 195]  [Article Influence: 39.0]  [Reference Citation Analysis (0)]
37.  Luo H, Xu G, Li C, He L, Luo L, Wang Z, Jing B, Deng Y, Jin Y, Li Y, Li B, Tan W, He C, Seeruttun SR, Wu Q, Huang J, Huang DW, Chen B, Lin SB, Chen QM, Yuan CM, Chen HX, Pu HY, Zhou F, He Y, Xu RH. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: a multicentre, case-control, diagnostic study. Lancet Oncol. 2019;20:1645-1654.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 155]  [Cited by in F6Publishing: 222]  [Article Influence: 44.4]  [Reference Citation Analysis (0)]
38.  Nagao S, Tsuji Y, Sakaguchi Y, Takahashi Y, Minatsuki C, Niimi K, Yamashita H, Yamamichi N, Seto Y, Tada T, Koike K. Highly accurate artificial intelligence systems to predict the invasion depth of gastric cancer: efficacy of conventional white-light imaging, nonmagnifying narrow-band imaging, and indigo-carmine dye contrast imaging. Gastrointest Endosc 2020; 92: 866-873. e1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 45]  [Cited by in F6Publishing: 63]  [Article Influence: 15.8]  [Reference Citation Analysis (0)]
39.  Ayaru L, Ypsilantis PP, Nanapragasam A, Choi RC, Thillanathan A, Min-Ho L, Montana G. Prediction of Outcome in Acute Lower Gastrointestinal Bleeding Using Gradient Boosting. PLoS One. 2015;10:e0132485.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 45]  [Cited by in F6Publishing: 38]  [Article Influence: 4.2]  [Reference Citation Analysis (0)]
40.  Xiao Jia, Meng MQ. A deep convolutional neural network for bleeding detection in Wireless Capsule Endoscopy images. Annu Int Conf IEEE Eng Med Biol Soc. 2016;2016:639-642.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 71]  [Cited by in F6Publishing: 55]  [Article Influence: 7.9]  [Reference Citation Analysis (0)]
41.  Usman MA, Satrya GB, Usman MR, Shin SY. Detection of small colon bleeding in wireless capsule endoscopy videos. Comput Med Imaging Graph. 2016;54:16-26.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 27]  [Cited by in F6Publishing: 28]  [Article Influence: 3.5]  [Reference Citation Analysis (0)]
42.  Sengupta N, Tapper EB. Derivation and Internal Validation of a Clinical Prediction Tool for 30-Day Mortality in Lower Gastrointestinal Bleeding. Am J Med 2017; 130: 601.e1-601. e8.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 36]  [Cited by in F6Publishing: 36]  [Article Influence: 5.1]  [Reference Citation Analysis (0)]
43.  Leenhardt R, Vasseur P, Li C, Saurin JC, Rahmi G, Cholet F, Becq A, Marteau P, Histace A, Dray X;  CAD-CAP Database Working Group. A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy. Gastrointest Endosc. 2019;89:189-194.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 129]  [Cited by in F6Publishing: 130]  [Article Influence: 26.0]  [Reference Citation Analysis (0)]
44.  Aoki T, Yamada A, Kato Y, Saito H, Tsuboi A, Nakada A, Niikura R, Fujishiro M, Oka S, Ishihara S, Matsuda T, Nakahori M, Tanaka S, Koike K, Tada T. Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network. J Gastroenterol Hepatol. 2020;35:1196-1200.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 46]  [Cited by in F6Publishing: 59]  [Article Influence: 14.8]  [Reference Citation Analysis (0)]
45.  Yang J, Chang L, Li S, He X, Zhu T. WCE polyp detection based on novel feature descriptor with normalized variance locality-constrained linear coding. Int J Comput Assist Radiol Surg. 2020;15:1291-1302.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 7]  [Cited by in F6Publishing: 9]  [Article Influence: 2.3]  [Reference Citation Analysis (0)]
46.  Vieira PM, Freitas NR, Valente J, Vaz IF, Rolanda C, Lima CS. Automatic detection of small bowel tumors in wireless capsule endoscopy images using ensemble learning. Med Phys. 2020;47:52-63.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 10]  [Cited by in F6Publishing: 10]  [Article Influence: 2.0]  [Reference Citation Analysis (0)]
47.  Fernández-Esparrach G, Bernal J, López-Cerón M, Córdova H, Sánchez-Montes C, Rodríguez de Miguel C, Sánchez FJ. Exploring the clinical potential of an automatic colonic polyp detection method based on the creation of energy maps. Endoscopy. 2016;48:837-842.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 84]  [Cited by in F6Publishing: 84]  [Article Influence: 10.5]  [Reference Citation Analysis (0)]
48.  Komeda Y, Handa H, Watanabe T, Nomura T, Kitahashi M, Sakurai T, Okamoto A, Minami T, Kono M, Arizumi T, Takenaka M, Hagiwara S, Matsui S, Nishida N, Kashida H, Kudo M. Computer-Aided Diagnosis Based on Convolutional Neural Network System for Colorectal Polyp Classification: Preliminary Experience. Oncology. 2017;93 Suppl 1:30-34.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 127]  [Cited by in F6Publishing: 120]  [Article Influence: 17.1]  [Reference Citation Analysis (0)]
49.  Misawa M, Kudo SE, Mori Y, Takeda K, Maeda Y, Kataoka S, Nakamura H, Kudo T, Wakamura K, Hayashi T, Katagiri A, Baba T, Ishida F, Inoue H, Nimura Y, Oda M, Mori K. Accuracy of computer-aided diagnosis based on narrow-band imaging endocytoscopy for diagnosing colorectal lesions: comparison with experts. Int J Comput Assist Radiol Surg. 2017;12:757-766.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 47]  [Cited by in F6Publishing: 46]  [Article Influence: 6.6]  [Reference Citation Analysis (0)]
50.  Misawa M, Kudo SE, Mori Y, Cho T, Kataoka S, Yamauchi A, Ogawa Y, Maeda Y, Takeda K, Ichimasa K, Nakamura H, Yagawa Y, Toyoshima N, Ogata N, Kudo T, Hisayuki T, Hayashi T, Wakamura K, Baba T, Ishida F, Itoh H, Roth H, Oda M, Mori K. Artificial Intelligence-Assisted Polyp Detection for Colonoscopy: Initial Experience. Gastroenterology 2018; 154: 2027-2029. e3.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 229]  [Cited by in F6Publishing: 239]  [Article Influence: 39.8]  [Reference Citation Analysis (0)]
51.  Chen PJ, Lin MC, Lai MJ, Lin JC, Lu HH, Tseng VS. Accurate Classification of Diminutive Colorectal Polyps Using Computer-Aided Analysis. Gastroenterology. 2018;154:568-575.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 250]  [Cited by in F6Publishing: 245]  [Article Influence: 40.8]  [Reference Citation Analysis (0)]
52.  Urban G, Tripathi P, Alkayali T, Mittal M, Jalali F, Karnes W, Baldi P. Deep Learning Localizes and Identifies Polyps in Real Time With 96% Accuracy in Screening Colonoscopy. Gastroenterology 2018; 155: 1069-1078. e8.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 398]  [Cited by in F6Publishing: 391]  [Article Influence: 65.2]  [Reference Citation Analysis (1)]
53.  Renner J, Phlipsen H, Haller B, Navarro-Avila F, Saint-Hill-Febles Y, Mateus D, Ponchon T, Poszler A, Abdelhafez M, Schmid RM, von Delius S, Klare P. Optical classification of neoplastic colorectal polyps - a computer-assisted approach (the COACH study). Scand J Gastroenterol. 2018;53:1100-1106.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 28]  [Cited by in F6Publishing: 32]  [Article Influence: 5.3]  [Reference Citation Analysis (0)]
54.  Wang P, Xiao X, Glissen Brown JR, Berzin TM, Tu M, Xiong F, Hu X, Liu P, Song Y, Zhang D, Yang X, Li L, He J, Yi X, Liu J, Liu X. Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy. Nat Biomed Eng. 2018;2:741-748.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 248]  [Cited by in F6Publishing: 247]  [Article Influence: 41.2]  [Reference Citation Analysis (0)]
55.  Mori Y, Kudo SE, Misawa M, Saito Y, Ikematsu H, Hotta K, Ohtsuka K, Urushibara F, Kataoka S, Ogawa Y, Maeda Y, Takeda K, Nakamura H, Ichimasa K, Kudo T, Hayashi T, Wakamura K, Ishida F, Inoue H, Itoh H, Oda M, Mori K. Real-Time Use of Artificial Intelligence in Identification of Diminutive Polyps During Colonoscopy: A Prospective Study. Ann Intern Med. 2018;169:357-366.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 299]  [Cited by in F6Publishing: 308]  [Article Influence: 51.3]  [Reference Citation Analysis (1)]
56.  Byrne MF, Chapados N, Soudan F, Oertel C, Linares Pérez M, Kelly R, Iqbal N, Chandelier F, Rex DK. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut. 2019;68:94-100.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 363]  [Cited by in F6Publishing: 376]  [Article Influence: 75.2]  [Reference Citation Analysis (0)]
57.  Blanes-Vidal V, Baatrup G, Nadimi ES. Addressing priority challenges in the detection and assessment of colorectal polyps from capsule endoscopy and colonoscopy in colorectal cancer screening using machine learning. Acta Oncol. 2019;58:S29-S36.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 38]  [Cited by in F6Publishing: 54]  [Article Influence: 10.8]  [Reference Citation Analysis (0)]
58.  Lee JY, Jeong J, Song EM, Ha C, Lee HJ, Koo JE, Yang DH, Kim N, Byeon JS. Real-time detection of colon polyps during colonoscopy using deep learning: systematic validation with four independent datasets. Sci Rep. 2020;10:8379.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 31]  [Cited by in F6Publishing: 33]  [Article Influence: 8.3]  [Reference Citation Analysis (0)]
59.  Gohari MR, Biglarian A, Bakhshi E, Pourhoseingholi MA. Use of an artificial neural network to determine prognostic factors in colorectal cancer patients. Asian Pac J Cancer Prev. 2011;12:1469-1472.  [PubMed]  [DOI]  [Cited in This Article: ]
60.  Biglarian A, Bakhshi E, Gohari MR, Khodabakhshi R. Artificial neural network for prediction of distant metastasis in colorectal cancer. Asian Pac J Cancer Prev. 2012;13:927-930.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 20]  [Cited by in F6Publishing: 21]  [Article Influence: 1.8]  [Reference Citation Analysis (0)]
61.  Takeda K, Kudo SE, Mori Y, Misawa M, Kudo T, Wakamura K, Katagiri A, Baba T, Hidaka E, Ishida F, Inoue H, Oda M, Mori K. Accuracy of diagnosing invasive colorectal cancer using computer-aided endocytoscopy. Endoscopy. 2017;49:798-802.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 80]  [Cited by in F6Publishing: 92]  [Article Influence: 13.1]  [Reference Citation Analysis (0)]
62.  Ito N, Kawahira H, Nakashima H, Uesato M, Miyauchi H, Matsubara H. Endoscopic Diagnostic Support System for cT1b Colorectal Cancer Using Deep Learning. Oncology. 2019;96:44-50.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 40]  [Cited by in F6Publishing: 47]  [Article Influence: 7.8]  [Reference Citation Analysis (0)]
63.  Zhou D, Tian F, Tian X, Sun L, Huang X, Zhao F, Zhou N, Chen Z, Zhang Q, Yang M, Yang Y, Guo X, Li Z, Liu J, Wang J, Wang B, Zhang G, Sun B, Zhang W, Kong D, Chen K, Li X. Diagnostic evaluation of a deep learning model for optical diagnosis of colorectal cancer. Nat Commun. 2020;11:2961.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 39]  [Cited by in F6Publishing: 35]  [Article Influence: 8.8]  [Reference Citation Analysis (0)]
64.  Shaheen NJ, Falk GW, Iyer PG, Gerson LB;  American College of Gastroenterology. ACG Clinical Guideline: Diagnosis and Management of Barrett's Esophagus. Am J Gastroenterol. 2016;111:30-50; quiz 51.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 990]  [Cited by in F6Publishing: 1013]  [Article Influence: 126.6]  [Reference Citation Analysis (0)]
65.  Fitzgerald RC, di Pietro M, Ragunath K, Ang Y, Kang JY, Watson P, Trudgill N, Patel P, Kaye PV, Sanders S, O'Donovan M, Bird-Lieberman E, Bhandari P, Jankowski JA, Attwood S, Parsons SL, Loft D, Lagergren J, Moayyedi P, Lyratzopoulos G, de Caestecker J;  British Society of Gastroenterology. British Society of Gastroenterology guidelines on the diagnosis and management of Barrett's oesophagus. Gut. 2014;63:7-42.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 858]  [Cited by in F6Publishing: 836]  [Article Influence: 83.6]  [Reference Citation Analysis (0)]
66.  Schölvinck DW, van der Meulen K, Bergman JJGHM, Weusten BLAM. Detection of lesions in dysplastic Barrett's esophagus by community and expert endoscopists. Endoscopy. 2017;49:113-120.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 13]  [Cited by in F6Publishing: 21]  [Article Influence: 3.0]  [Reference Citation Analysis (0)]
67.  Menon S, Trudgill N. How commonly is upper gastrointestinal cancer missed at endoscopy? Endosc Int Open. 2014;2:E46-E50.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 176]  [Cited by in F6Publishing: 207]  [Article Influence: 20.7]  [Reference Citation Analysis (0)]
68.  Hosokawa O, Hattori M, Douden K, Hayashi H, Ohta K, Kaizaki Y. Difference in accuracy between gastroscopy and colonoscopy for detection of cancer. Hepatogastroenterology. 2007;54:442-444.  [PubMed]  [DOI]  [Cited in This Article: ]
69.  Hirasawa T, Aoyama K, Tanimoto T, Ishihara S, Shichijo S, Ozawa T, Ohnishi T, Fujishiro M, Matsuo K, Fujisaki J, Tada T. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer. 2018;21:653-660.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 389]  [Cited by in F6Publishing: 396]  [Article Influence: 66.0]  [Reference Citation Analysis (0)]
70.  Kubota K, Kuroda J, Yoshida M, Ohta K, Kitajima M. Medical image analysis: computer-aided diagnosis of gastric cancer invasion on endoscopic images. Surg Endosc. 2012;26:1485-1489.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 50]  [Cited by in F6Publishing: 51]  [Article Influence: 3.9]  [Reference Citation Analysis (1)]
71.  Boland GW, Guimaraes AS, Mueller PR. The radiologist's conundrum: benefits and costs of increasing CT capacity and utilization. Eur Radiol. 2009;19:9-11; discussion 12.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 28]  [Cited by in F6Publishing: 31]  [Article Influence: 1.9]  [Reference Citation Analysis (0)]
72.  Gatos I, Tsantis S, Spiliopoulos S, Karnabatidis D, Theotokas I, Zoumpoulis P, Loupas T, Hazle JD, Kagadis GC. A new computer aided diagnosis system for evaluation of chronic liver disease with ultrasound shear wave elastography imaging. Med Phys. 2016;43:1428-1436.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 20]  [Cited by in F6Publishing: 14]  [Article Influence: 1.8]  [Reference Citation Analysis (0)]
73.  Gatos I, Tsantis S, Spiliopoulos S, Karnabatidis D, Theotokas I, Zoumpoulis P, Loupas T, Hazle JD, Kagadis GC. A Machine-Learning Algorithm Toward Color Analysis for Chronic Liver Disease Classification, Employing Ultrasound Shear Wave Elastography. Ultrasound Med Biol. 2017;43:1797-1810.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 61]  [Cited by in F6Publishing: 42]  [Article Influence: 6.0]  [Reference Citation Analysis (0)]
74.  Chen Y, Luo Y, Huang W, Hu D, Zheng RQ, Cong SZ, Meng FK, Yang H, Lin HJ, Sun Y, Wang XY, Wu T, Ren J, Pei SF, Zheng Y, He Y, Hu Y, Yang N, Yan H. Machine-learning-based classification of real-time tissue elastography for hepatic fibrosis in patients with chronic hepatitis B. Comput Biol Med. 2017;89:18-23.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 65]  [Cited by in F6Publishing: 53]  [Article Influence: 7.6]  [Reference Citation Analysis (0)]
75.  Li W, Huang Y, Zhuang BW, Liu GJ, Hu HT, Li X, Liang JY, Wang Z, Huang XW, Zhang CQ, Ruan SM, Xie XY, Kuang M, Lu MD, Chen LD, Wang W. Multiparametric ultrasomics of significant liver fibrosis: A machine learning-based analysis. Eur Radiol. 2019;29:1496-1506.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 62]  [Cited by in F6Publishing: 80]  [Article Influence: 13.3]  [Reference Citation Analysis (1)]
76.  Gatos I, Tsantis S, Spiliopoulos S, Karnabatidis D, Theotokas I, Zoumpoulis P, Loupas T, Hazle JD, Kagadis GC. Temporal stability assessment in shear wave elasticity images validated by deep learning neural network for chronic liver disease fibrosis stage assessment. Med Phys. 2019;46:2298-2309.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 33]  [Cited by in F6Publishing: 23]  [Article Influence: 4.6]  [Reference Citation Analysis (0)]
77.  Wang K, Lu X, Zhou H, Gao Y, Zheng J, Tong M, Wu C, Liu C, Huang L, Jiang T, Meng F, Lu Y, Ai H, Xie XY, Yin LP, Liang P, Tian J, Zheng R. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: a prospective multicentre study. Gut. 2019;68:729-741.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 226]  [Cited by in F6Publishing: 298]  [Article Influence: 59.6]  [Reference Citation Analysis (1)]
78.  Kuppili V, Biswas M, Sreekumar A, Suri HS, Saba L, Edla DR, Marinho RT, Sanches JM, Suri JS. Extreme Learning Machine Framework for Risk Stratification of Fatty Liver Disease Using Ultrasound Tissue Characterization. J Med Syst. 2017;41:152.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 62]  [Cited by in F6Publishing: 70]  [Article Influence: 10.0]  [Reference Citation Analysis (0)]
79.  Byra M, Styczynski G, Szmigielski C, Kalinowski P, Michałowski Ł, Paluszkiewicz R, Ziarkiewicz-Wróblewska B, Zieniewicz K, Sobieraj P, Nowicki A. Transfer learning with deep convolutional neural network for liver steatosis assessment in ultrasound images. Int J Comput Assist Radiol Surg. 2018;13:1895-1903.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 117]  [Cited by in F6Publishing: 114]  [Article Influence: 19.0]  [Reference Citation Analysis (0)]
80.  Biswas M, Kuppili V, Edla DR, Suri HS, Saba L, Marinhoe RT, Sanches JM, Suri JS. Symtosis: A liver ultrasound tissue characterization and risk stratification in optimized deep learning paradigm. Comput Methods Programs Biomed. 2018;155:165-177.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 100]  [Cited by in F6Publishing: 103]  [Article Influence: 17.2]  [Reference Citation Analysis (0)]
81.  Cao W, An X, Cong L, Lyu C, Zhou Q, Guo R. Application of Deep Learning in Quantitative Analysis of 2-Dimensional Ultrasound Imaging of Nonalcoholic Fatty Liver Disease. J Ultrasound Med. 2020;39:51-59.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 37]  [Cited by in F6Publishing: 36]  [Article Influence: 9.0]  [Reference Citation Analysis (0)]
82.  Guo LH, Wang D, Qian YY, Zheng X, Zhao CK, Li XL, Bo XW, Yue WW, Zhang Q, Shi J, Xu HX. A two-stage multi-view learning framework based computer-aided diagnosis of liver tumors with contrast enhanced ultrasound images. Clin Hemorheol Microcirc. 2018;69:343-354.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 42]  [Cited by in F6Publishing: 54]  [Article Influence: 9.0]  [Reference Citation Analysis (0)]
83.  Schmauch B, Herent P, Jehanno P, Dehaene O, Saillard C, Aubé C, Luciani A, Lassau N, Jégou S. Diagnosis of focal liver lesions from ultrasound using deep learning. Diagn Interv Imaging. 2019;100:227-233.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 61]  [Cited by in F6Publishing: 67]  [Article Influence: 13.4]  [Reference Citation Analysis (0)]
84.  Yang Q, Wei J, Hao X, Kong D, Yu X, Jiang T, Xi J, Cai W, Luo Y, Jing X, Yang Y, Cheng Z, Wu J, Zhang H, Liao J, Zhou P, Song Y, Zhang Y, Han Z, Cheng W, Tang L, Liu F, Dou J, Zheng R, Yu J, Tian J, Liang P. Improving B-mode ultrasound diagnostic performance for focal liver lesions using deep learning: A multicentre study. EBioMedicine. 2020;56:102777.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 20]  [Cited by in F6Publishing: 49]  [Article Influence: 12.3]  [Reference Citation Analysis (0)]
85.  Choi KJ, Jang JK, Lee SS, Sung YS, Shim WH, Kim HS, Yun J, Choi JY, Lee Y, Kang BK, Kim JH, Kim SY, Yu ES. Development and Validation of a Deep Learning System for Staging Liver Fibrosis by Using Contrast Agent-enhanced CT Images in the Liver. Radiology. 2018;289:688-697.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 108]  [Cited by in F6Publishing: 125]  [Article Influence: 20.8]  [Reference Citation Analysis (0)]
86.  He L, Li H, Dudley JA, Maloney TC, Brady SL, Somasundaram E, Trout AT, Dillman JR. Machine Learning Prediction of Liver Stiffness Using Clinical and T2-Weighted MRI Radiomic Data. AJR Am J Roentgenol. 2019;213:592-601.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 26]  [Cited by in F6Publishing: 35]  [Article Influence: 7.0]  [Reference Citation Analysis (0)]
87.  Ahmed Y, Hussein RS, Basha TA, Khalifa AM, Ibrahim AS, Abdelmoaty AS, Abdella HM, Fahmy AS. Detecting liver fibrosis using a machine learning-based approach to the quantification of the heart-induced deformation in tagged MR images. NMR Biomed. 2020;33:e4215.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 10]  [Cited by in F6Publishing: 13]  [Article Influence: 3.3]  [Reference Citation Analysis (0)]
88.  Hectors SJ, Kennedy P, Huang KH, Stocker D, Carbonell G, Greenspan H, Friedman S, Taouli B. Fully automated prediction of liver fibrosis using deep learning analysis of gadoxetic acid-enhanced MRI. Eur Radiol. 2020;.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 13]  [Cited by in F6Publishing: 31]  [Article Influence: 7.8]  [Reference Citation Analysis (0)]
89.  Vivanti R, Szeskin A, Lev-Cohain N, Sosna J, Joskowicz L. Automatic detection of new tumors and tumor burden evaluation in longitudinal liver CT scan studies. Int J Comput Assist Radiol Surg. 2017;12:1945-1957.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 35]  [Cited by in F6Publishing: 45]  [Article Influence: 6.4]  [Reference Citation Analysis (0)]
90.  Yasaka K, Akai H, Abe O, Kiryu S. Deep Learning with Convolutional Neural Network for Differentiation of Liver Masses at Dynamic Contrast-enhanced CT: A Preliminary Study. Radiology. 2018;286:887-896.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 293]  [Cited by in F6Publishing: 357]  [Article Influence: 51.0]  [Reference Citation Analysis (0)]
91.  Ibragimov B, Toesca D, Chang D, Yuan Y, Koong A, Xing L. Development of deep neural network for individualized hepatobiliary toxicity prediction after liver SBRT. Med Phys. 2018;45:4763-4774.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 69]  [Cited by in F6Publishing: 77]  [Article Influence: 12.8]  [Reference Citation Analysis (0)]
92.  Abajian A, Murali N, Savic LJ, Laage-Gaupp FM, Nezami N, Duncan JS, Schlachter T, Lin M, Geschwind JF, Chapiro J. Predicting Treatment Response to Intra-arterial Therapies for Hepatocellular Carcinoma with the Use of Supervised Machine Learning-An Artificial Intelligence Concept. J Vasc Interv Radiol 2018; 29: 850-857. e1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 79]  [Cited by in F6Publishing: 111]  [Article Influence: 18.5]  [Reference Citation Analysis (0)]
93.  Zhang F, Yang J, Nezami N, Laage-Gaupp F, Chapiro J, De Lin M, Duncan J. Liver Tissue Classification Using an Auto-context-based Deep Neural Network with a Multi-phase Training Framework. Patch Based Tech Med Imaging (2018). 2018;11075:59-66.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 14]  [Cited by in F6Publishing: 15]  [Article Influence: 2.5]  [Reference Citation Analysis (0)]
94.  Morshid A, Elsayes KM, Khalaf AM, Elmohr MM, Yu J, Kaseb AO, Hassan M, Mahvash A, Wang Z, Hazle JD, Fuentes D. A machine learning model to predict hepatocellular carcinoma response to transcatheter arterial chemoembolization. Radiol Artif Intell. 2019;1.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 71]  [Cited by in F6Publishing: 66]  [Article Influence: 13.2]  [Reference Citation Analysis (0)]
95.  Nayak A, Baidya Kayal E, Arya M, Culli J, Krishan S, Agarwal S, Mehndiratta A. Computer-aided diagnosis of cirrhosis and hepatocellular carcinoma using multi-phase abdomen CT. Int J Comput Assist Radiol Surg. 2019;14:1341-1352.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 25]  [Cited by in F6Publishing: 35]  [Article Influence: 7.0]  [Reference Citation Analysis (0)]
96.  Hamm CA, Wang CJ, Savic LJ, Ferrante M, Schobert I, Schlachter T, Lin M, Duncan JS, Weinreb JC, Chapiro J, Letzen B. Deep learning for liver tumor diagnosis part I: development of a convolutional neural network classifier for multi-phasic MRI. Eur Radiol. 2019;29:3338-3347.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 117]  [Cited by in F6Publishing: 166]  [Article Influence: 33.2]  [Reference Citation Analysis (0)]
97.  Wang CJ, Hamm CA, Savic LJ, Ferrante M, Schobert I, Schlachter T, Lin M, Weinreb JC, Duncan JS, Chapiro J, Letzen B. Deep learning for liver tumor diagnosis part II: convolutional neural network interpretation using radiologic imaging features. Eur Radiol. 2019;29:3348-3357.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 55]  [Cited by in F6Publishing: 82]  [Article Influence: 16.4]  [Reference Citation Analysis (0)]
98.  Jansen MJA, Kuijf HJ, Veldhuis WB, Wessels FJ, Viergever MA, Pluim JPW. Automatic classification of focal liver lesions based on MRI and risk factors. PLoS One. 2019;14:e0217053.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 23]  [Cited by in F6Publishing: 38]  [Article Influence: 7.6]  [Reference Citation Analysis (0)]
99.  Mokrane FZ, Lu L, Vavasseur A, Otal P, Peron JM, Luk L, Yang H, Ammari S, Saenger Y, Rousseau H, Zhao B, Schwartz LH, Dercle L. Radiomics machine-learning signature for diagnosis of hepatocellular carcinoma in cirrhotic patients with indeterminate liver nodules. Eur Radiol. 2020;30:558-570.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 80]  [Cited by in F6Publishing: 102]  [Article Influence: 25.5]  [Reference Citation Analysis (0)]
100.  Shi W, Kuang S, Cao S, Hu B, Xie S, Chen S, Chen Y, Gao D, Zhu Y, Zhang H, Liu H, Ye M, Sirlin CB, Wang J. Deep learning assisted differentiation of hepatocellular carcinoma from focal liver lesions: choice of four-phase and three-phase CT imaging protocol. Abdom Radiol (NY). 2020;45:2688-2697.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 17]  [Cited by in F6Publishing: 30]  [Article Influence: 7.5]  [Reference Citation Analysis (0)]
101.  Alirr OI. Deep learning and level set approach for liver and tumor segmentation from CT scans. J Appl Clin Med Phys. 2020;21:200-209.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 37]  [Cited by in F6Publishing: 20]  [Article Influence: 5.0]  [Reference Citation Analysis (0)]
102.  Zheng H, Chen Y, Yue X, Ma C, Liu X, Yang P, Lu J. Deep pancreas segmentation with uncertain regions of shadowed sets. Magn Reson Imaging. 2020;68:45-52.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 25]  [Cited by in F6Publishing: 29]  [Article Influence: 7.3]  [Reference Citation Analysis (0)]
103.  Liang JD, Ping XO, Tseng YJ, Huang GT, Lai F, Yang PM. Recurrence predictive models for patients with hepatocellular carcinoma after radiofrequency ablation using support vector machines with feature selection methods. Comput Methods Programs Biomed. 2014;117:425-434.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 28]  [Cited by in F6Publishing: 27]  [Article Influence: 2.7]  [Reference Citation Analysis (0)]
104.  Zhou W, Zhang L, Wang K, Chen S, Wang G, Liu Z, Liang C. Malignancy characterization of hepatocellular carcinomas based on texture analysis of contrast-enhanced MR images. J Magn Reson Imaging. 2017;45:1476-1484.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 77]  [Cited by in F6Publishing: 80]  [Article Influence: 10.0]  [Reference Citation Analysis (0)]
105.  Abajian A, Murali N, Savic LJ, Laage-Gaupp FM, Nezami N, Duncan JS, Schlachter T, Lin M, Geschwind JF, Chapiro J. Predicting Treatment Response to Image-Guided Therapies Using Machine Learning: An Example for Trans-Arterial Treatment of Hepatocellular Carcinoma. J Vis Exp. 2018;.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 5]  [Cited by in F6Publishing: 5]  [Article Influence: 0.8]  [Reference Citation Analysis (0)]
106.  Ma X, Wei J, Gu D, Zhu Y, Feng B, Liang M, Wang S, Zhao X, Tian J. Preoperative radiomics nomogram for microvascular invasion prediction in hepatocellular carcinoma using contrast-enhanced CT. Eur Radiol. 2019;29:3595-3605.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 100]  [Cited by in F6Publishing: 148]  [Article Influence: 29.6]  [Reference Citation Analysis (0)]
107.  Dong Y, Zhou L, Xia W, Zhao XY, Zhang Q, Jian JM, Gao X, Wang WP. Preoperative Prediction of Microvascular Invasion in Hepatocellular Carcinoma: Initial Application of a Radiomic Algorithm Based on Grayscale Ultrasound Images. Front Oncol. 2020;10:353.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 43]  [Cited by in F6Publishing: 28]  [Article Influence: 7.0]  [Reference Citation Analysis (0)]
108.  He M, Zhang P, Ma X, He B, Fang C, Jia F. Radiomic Feature-Based Predictive Model for Microvascular Invasion in Patients With Hepatocellular Carcinoma. Front Oncol. 2020;10:574228.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 11]  [Cited by in F6Publishing: 22]  [Article Influence: 5.5]  [Reference Citation Analysis (0)]
109.  Schoenberg MB, Bucher JN, Koch D, Börner N, Hesse S, De Toni EN, Seidensticker M, Angele MK, Klein C, Bazhin AV, Werner J, Guba MO. A novel machine learning algorithm to predict disease free survival after resection of hepatocellular carcinoma. Ann Transl Med. 2020;8:434.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 15]  [Cited by in F6Publishing: 15]  [Article Influence: 3.8]  [Reference Citation Analysis (0)]
110.  Zhao Y, Wu J, Zhang Q, Hua Z, Qi W, Wang N, Lin T, Sheng L, Cui D, Liu J, Song Q, Li X, Wu T, Guo Y, Cui J, Liu A. Radiomics Analysis Based on Multiparametric MRI for Predicting Early Recurrence in Hepatocellular Carcinoma After Partial Hepatectomy. J Magn Reson Imaging. 2020;.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 20]  [Cited by in F6Publishing: 44]  [Article Influence: 11.0]  [Reference Citation Analysis (0)]
111.  Liu F, Liu D, Wang K, Xie X, Su L, Kuang M, Huang G, Peng B, Wang Y, Lin M, Tian J. Deep Learning Radiomics Based on Contrast-Enhanced Ultrasound Might Optimize Curative Treatments for Very-Early or Early-Stage Hepatocellular Carcinoma Patients. Liver Cancer. 2020;9:397-413.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 39]  [Cited by in F6Publishing: 61]  [Article Influence: 15.3]  [Reference Citation Analysis (0)]
112.  Chen M, Cao J, Hu J, Topatana W, Li S, Juengpanich S, Lin J, Tong C, Shen J, Zhang B, Wu J, Pocha C, Kudo M, Amedei A, Trevisani F, Sung PS, Zaydfudim VM, Kanda T, Cai X. Clinical-Radiomic Analysis for Pretreatment Prediction of Objective Response to First Transarterial Chemoembolization in Hepatocellular Carcinoma. Liver Cancer. 2021;10:38-51.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 29]  [Cited by in F6Publishing: 46]  [Article Influence: 15.3]  [Reference Citation Analysis (0)]
113.  European Association for the Study of the Liver. EASL Clinical Practice Guidelines: Management of hepatocellular carcinoma. J Hepatol. 2018;69:182-236.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 3934]  [Cited by in F6Publishing: 5390]  [Article Influence: 898.3]  [Reference Citation Analysis (0)]
114.  Gillies RJ, Kinahan PE, Hricak H. Radiomics: Images Are More than Pictures, They Are Data. Radiology. 2016;278:563-577.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 4541]  [Cited by in F6Publishing: 4926]  [Article Influence: 615.8]  [Reference Citation Analysis (2)]
115.  Lambin P, Leijenaar RTH, Deist TM, Peerlings J, de Jong EEC, van Timmeren J, Sanduleanu S, Larue RTHM, Even AJG, Jochems A, van Wijk Y, Woodruff H, van Soest J, Lustberg T, Roelofs E, van Elmpt W, Dekker A, Mottaghy FM, Wildberger JE, Walsh S. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol. 2017;14:749-762.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1825]  [Cited by in F6Publishing: 3051]  [Article Influence: 435.9]  [Reference Citation Analysis (0)]
116.  Erstad DJ, Tanabe KK. Prognostic and Therapeutic Implications of Microvascular Invasion in Hepatocellular Carcinoma. Ann Surg Oncol. 2019;26:1474-1493.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 139]  [Cited by in F6Publishing: 259]  [Article Influence: 51.8]  [Reference Citation Analysis (0)]
117.  Metter DM, Colgan TJ, Leung ST, Timmons CF, Park JY. Trends in the US and Canadian Pathologist Workforces From 2007 to 2017. JAMA Netw Open. 2019;2:e194337.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 123]  [Cited by in F6Publishing: 154]  [Article Influence: 30.8]  [Reference Citation Analysis (0)]
118.  Tomita N, Abdollahi B, Wei J, Ren B, Suriawinata A, Hassanpour S. Attention-Based Deep Neural Networks for Detection of Cancerous and Precancerous Esophagus Tissue on Histopathological Slides. JAMA Netw Open. 2019;2:e1914645.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 133]  [Cited by in F6Publishing: 91]  [Article Influence: 18.2]  [Reference Citation Analysis (0)]
119.  Sharma H, Zerbe N, Klempert I, Hellwich O, Hufnagl P. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Comput Med Imaging Graph. 2017;61:2-13.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 176]  [Cited by in F6Publishing: 157]  [Article Influence: 22.4]  [Reference Citation Analysis (0)]
120.  Li Y, Li X, Xie X, Shen L.   Deep learning based gastric cancer identification. Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging; 2018: 182-185.  [PubMed]  [DOI]  [Cited in This Article: ]
121.  Leon F, Gelvez M, Jaimes Z, Gelvez T, Arguello H.   Supervised Classification of Histopathological Images Using Convolutional Neuronal Networks for Gastric Cancer Detection. Proceedings of the 2019 XXII Symposium on Image, Signal Processing and Artificial Vision; 2019: 1-5.  [PubMed]  [DOI]  [Cited in This Article: ]
122.  Sun M, Zhang G, Dang H, Qi X, Zhou X, Chang Q.   Accurate Gastric Cancer Segmentation in Digital Pathology Images Using Deformable Convolution and Multi-Scale Embedding Networks. IEEE Access 2019; 7: 75530-75541.  [PubMed]  [DOI]  [Cited in This Article: ]
123.  Ma B, Guo Y, Hu W, Yuan F, Zhu Z, Yu Y, Zou H. Artificial Intelligence-Based Multiclass Classification of Benign or Malignant Mucosal Lesions of the Stomach. Front Pharmacol. 2020;11:572372.
124.  Yoshida H, Shimazu T, Kiyuna T, Marugame A, Yamashita Y, Cosatto E, Taniguchi H, Sekine S, Ochiai A. Automated histological classification of whole-slide images of gastric biopsy specimens. Gastric Cancer. 2018;21:249-257.
125.  Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric Pathology Image Classification Using Stepwise Fine-Tuning for Deep Neural Networks. J Healthc Eng. 2018;2018:8961781.
126.  Iizuka O, Kanavati F, Kato K, Rambeau M, Arihiro K, Tsuneki M. Deep Learning Models for Histopathological Classification of Gastric and Colonic Epithelial Tumours. Sci Rep. 2020;10:1504.
127.  Korbar B, Olofson AM, Miraflor AP, Nicka CM, Suriawinata MA, Torresani L, Suriawinata AA, Hassanpour S. Deep Learning for Classification of Colorectal Polyps on Whole-slide Images. J Pathol Inform. 2017;8:30.
128.  Wei JW, Suriawinata AA, Vaickus LJ, Ren B, Liu X, Lisovsky M, Tomita N, Abdollahi B, Kim AS, Snover DC, Baron JA, Barry EL, Hassanpour S. Evaluation of a Deep Neural Network for Automated Classification of Colorectal Polyps on Histopathologic Slides. JAMA Netw Open. 2020;3:e203398.
129.  Shapcott M, Hewitt KJ, Rajpoot N. Deep Learning With Sampling in Colon Cancer Histology. Front Bioeng Biotechnol. 2019;7:52.
130.  Geessink OGF, Baidoshvili A, Klaase JM, Ehteshami Bejnordi B, Litjens GJS, van Pelt GW, Mesker WE, Nagtegaal ID, Ciompi F, van der Laak JAWM. Computer aided quantification of intratumoral stroma yields an independent prognosticator in rectal cancer. Cell Oncol (Dordr). 2019;42:331-341.
131.  Song Z, Yu C, Zou S, Wang W, Huang Y, Ding X, Liu J, Shao L, Yuan J, Gou X, Jin W, Wang Z, Chen X, Chen H, Liu C, Xu G, Sun Z, Ku C, Zhang Y, Dong X, Wang S, Xu W, Lv N, Shi H. Automatic deep learning-based colorectal adenoma detection system and its similarities with pathologists. BMJ Open. 2020;10:e036423.
132.  Wang TH, Chen TC, Teng X, Liang KH, Yeh CT. Automated biphasic morphological assessment of hepatitis B-related liver fibrosis using second harmonic generation microscopy. Sci Rep. 2015;5:12962.
133.  Forlano R, Mullish BH, Giannakeas N, Maurice JB, Angkathunyakul N, Lloyd J, Tzallas AT, Tsipouras M, Yee M, Thursz MR, Goldin RD, Manousou P. High-Throughput, Machine Learning-Based Quantification of Steatosis, Inflammation, Ballooning, and Fibrosis in Biopsies From Patients With Nonalcoholic Fatty Liver Disease. Clin Gastroenterol Hepatol. 2020;18:2081-2090.e9.
134.  Li S, Jiang H, Pang W. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading. Comput Biol Med. 2017;84:156-167.
135.  Kiani A, Uyumazturk B, Rajpurkar P, Wang A, Gao R, Jones E, Yu Y, Langlotz CP, Ball RL, Montine TJ, Martin BA, Berry GJ, Ozawa MG, Hazard FK, Brown RA, Chen SB, Wood M, Allard LS, Ylagan L, Ng AY, Shen J. Impact of a deep learning assistant on the histopathologic classification of liver cancer. NPJ Digit Med. 2020;3:23.
136.  Steinbuss G, Kriegsmann K, Kriegsmann M. Identification of Gastritis Subtypes by Convolutional Neuronal Networks on Histological Images of Antrum and Corpus Biopsies. Int J Mol Sci. 2020;21.
137.  Liu Y, Li X, Zheng A, Zhu X, Liu S, Hu M, Luo Q, Liao H, Liu M, He Y, Chen Y. Predict Ki-67 Positive Cells in H&E-Stained Images Using Deep Learning Independently From IHC-Stained Images. Front Mol Biosci. 2020;7:183.
138.  Kather JN, Pearson AT, Halama N, Jäger D, Krause J, Loosen SH, Marx A, Boor P, Tacke F, Neumann UP, Grabsch HI, Yoshikawa T, Brenner H, Chang-Claude J, Hoffmeister M, Trautwein C, Luedde T. Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer. Nat Med. 2019;25:1054-1056.
139.  Bychkov D, Linder N, Turkki R, Nordling S, Kovanen PE, Verrill C, Walliander M, Lundin M, Haglund C, Lundin J. Deep learning based tissue analysis predicts outcome in colorectal cancer. Sci Rep. 2018;8:3395.
140.  Kather JN, Krisam J, Charoentong P, Luedde T, Herpel E, Weis CA, Gaiser T, Marx A, Valous NA, Ferber D, Jansen L, Reyes-Aldasoro CC, Zörnig I, Jäger D, Brenner H, Chang-Claude J, Hoffmeister M, Halama N. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Med. 2019;16:e1002730.
141.  Echle A, Grabsch HI, Quirke P, van den Brandt PA, West NP, Hutchins GGA, Heij LR, Tan X, Richman SD, Krause J, Alwers E, Jenniskens J, Offermans K, Gray R, Brenner H, Chang-Claude J, Trautwein C, Pearson AT, Boor P, Luedde T, Gaisa NT, Hoffmeister M, Kather JN. Clinical-Grade Detection of Microsatellite Instability in Colorectal Tumors by Deep Learning. Gastroenterology. 2020;159:1406-1416.e11.
142.  Skrede OJ, De Raedt S, Kleppe A, Hveem TS, Liestøl K, Maddison J, Askautrud HA, Pradhan M, Nesheim JA, Albregtsen F, Farstad IN, Domingo E, Church DN, Nesbakken A, Shepherd NA, Tomlinson I, Kerr R, Novelli M, Kerr DJ, Danielsen HE. Deep learning for prediction of colorectal cancer outcome: a discovery and validation study. Lancet. 2020;395:350-360.
143.  Sirinukunwattana K, Domingo E, Richman SD, Redmond KL, Blake A, Verrill C, Leedham SJ, Chatzipli A, Hardy C, Whalley CM, Wu CH, Beggs AD, McDermott U, Dunne PD, Meade A, Walker SM, Murray GI, Samuel L, Seymour M, Tomlinson I, Quirke P, Maughan T, Rittscher J, Koelzer VH; S:CORT consortium. Image-based consensus molecular subtype (imCMS) classification of colorectal cancer using deep learning. Gut. 2021;70:544-554.
144.  Jang HJ, Lee A, Kang J, Song IH, Lee SH. Prediction of clinically actionable genetic alterations from colorectal cancer histopathology images using deep learning. World J Gastroenterol. 2020;26:6207-6223.
145.  Chaudhary K, Poirion OB, Lu L, Garmire LX. Deep Learning-Based Multi-Omics Integration Robustly Predicts Survival in Liver Cancer. Clin Cancer Res. 2018;24:1248-1259.
146.  Saillard C, Schmauch B, Laifa O, Moarii M, Toldo S, Zaslavskiy M, Pronier E, Laurent A, Amaddeo G, Regnault H, Sommacale D, Ziol M, Pawlotsky JM, Mulé S, Luciani A, Wainrib G, Clozel T, Courtiol P, Calderaro J. Predicting Survival After Hepatocellular Carcinoma Resection Using Deep Learning on Histological Slides. Hepatology. 2020;72:2000-2013.
147.  Fu Y, Jung AW, Torne RV, Gonzalez S, Vöhringer H, Shmatko A, Yates LR, Jimenez-Linan M, Moore L, Gerstung M. Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis. Nat Cancer. 2020;1:800-810.
148.  Sangro B, Melero I, Wadhawan S, Finn RS, Abou-Alfa GK, Cheng AL, Yau T, Furuse J, Park JW, Boyd Z, Tang HT, Shen Y, Tschaika M, Neely J, El-Khoueiry A. Association of inflammatory biomarkers with clinical outcomes in nivolumab-treated patients with advanced hepatocellular carcinoma. J Hepatol. 2020;73:1460-1469.
149.  Stead WW. Clinical Implications and Challenges of Artificial Intelligence and Deep Learning. JAMA. 2018;320:1107-1108.
150.  Sung JJ, Poon NC. Artificial intelligence in gastroenterology: where are we heading? Front Med. 2020;14:511-517.
151.  Poon NC, Sung JJ. Self-driving cars and AI-assisted endoscopy: Who should take the responsibility when things go wrong? J Gastroenterol Hepatol. 2019;34:625-626.
152.  Li S, Topatana W, Juengpanich S, Cao J, Hu J, Zhang B, Ma D, Cai X, Chen M. Development of synthetic lethality in cancer: molecular and cellular classification. Signal Transduct Target Ther. 2020;5:241.
153.  Topatana W, Juengpanich S, Li S, Cao J, Hu J, Lee J, Suliyanto K, Ma D, Zhang B, Chen M, Cai X. Advances in synthetic lethality for cancer therapy: cellular mechanism and clinical translation. J Hematol Oncol. 2020;13:118.