Minireviews Open Access
Copyright ©The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Clin Cases. Nov 6, 2021; 9(31): 9376-9385
Published online Nov 6, 2021. doi: 10.12998/wjcc.v9.i31.9376
Deep learning driven colorectal lesion detection in gastrointestinal endoscopic and pathological imaging
Yu-Wen Cai, Li-Yuan Lu, Chen Chen, Ping Lin, Yu-Shan Xue, Department of Clinical Medicine, Fujian Medical University, Fuzhou 350004, Fujian Province, China
Fang-Fen Dong, Department of Medical Technology and Engineering, Fujian Medical University, Fuzhou 350004, Fujian Province, China
Yu-Heng Shi, Computer Science and Engineering College, University of Alberta, Edmonton T6G 2R3, Canada
Jian-Hua Chen, Su-Yu Chen, Endoscopy Center, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou 350014, Fujian Province, China
Xiong-Biao Luo, Department of Computer Science, Xiamen University, Xiamen 361005, Fujian, China
ORCID number: Yu-Wen Cai (0000-0002-3022-8693); Fang-Fen Dong (0000-0003-1535-6555); Yu-Heng Shi (0000-0001-9483-4767); Li-Yuan Lu (0000-0001-8612-2620); Chen Chen (0000-0001-9809-4794); Ping Lin (0000-0002-8547-8198); Yu-Shan Xue (0000-0002-6464-1996); Jian-Hua Chen (0000-0003-0433-9346); Su-Yu Chen (0000-0002-0199-6925); Xiong-Biao Luo (0000-0001-7906-8857).
Author contributions: Cai YW and Dong FF performed the majority of the writing and prepared the figures and tables; they contributed equally to the work and should be regarded as co-first authors; Shi YH performed data acquisition and writing; Lu LY, Chen C, Lin P, Xue YS, and Chen JH provided input in writing the paper; Chen SY and Luo XB designed the outline and coordinated the writing of the paper.
Conflict-of-interest statement: The authors declare that there are no conflicts of interest regarding the publication of this paper.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Su-Yu Chen, MD, Endoscopy Center, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, No. 420 Fuma Road, Jin’an District, Fuzhou 350014, Fujian Province, China. endosuyuchen@163.com
Received: June 15, 2021
Peer-review started: June 15, 2021
First decision: July 15, 2021
Revised: July 26, 2021
Accepted: August 13, 2021
Article in press: August 13, 2021
Published online: November 6, 2021
Processing time: 136 Days and 0.4 Hours

Abstract

Colorectal cancer has the second highest incidence among malignant tumors and is the fourth leading cause of cancer death in China. Early diagnosis and treatment of colorectal cancer improve the 5-year survival rate and reduce medical costs. Current diagnostic methods for early colorectal cancer include excreta-based, blood-based, endoscopic, and computer-aided endoscopic examinations. In this paper, research on deep learning-based image analysis and prediction of colorectal cancer lesions is reviewed, with the goal of providing a reference for the early diagnosis of colorectal lesions by combining computer technology, 3D modeling, 5G remote technology, endoscopic robot technology, and surgical navigation technology. The findings supplement existing research and provide insights to improve the cure rate and reduce the mortality of colorectal cancer.

Key Words: Deep learning; Artificial intelligence; Image analysis; Endoscopic; Colorectal lesions; Colorectal cancer

Core Tip: The development of computer technology has promoted the progress of medical care, and artificial intelligence (AI) has gradually been applied in the medical field with good results. Detecting colorectal lesions during conventional gastrointestinal endoscopy is difficult and time-consuming, and missed diagnoses and misdiagnoses are common; AI is therefore a valuable aid for physicians. In this review, we summarize recent applications of AI in the detection of colorectal lesions to provide a reference for future development and research.



INTRODUCTION

Colorectal cancer (CRC) is one of the most common human cancers[1]. According to the latest cancer data survey in China, the incidence rates of CRC rank third and fourth and the mortality rates rank fifth and fourth among male and female cancers, respectively[2]. The cure rate of early CRC exceeds 90%[3,4]; thus, early detection, early diagnosis, and early treatment are very important for reducing the incidence rate and mortality of CRC. In real clinical practice, however, early discovery of CRC remains very limited. Huang et al[5] summarized recent progress in the early diagnosis of CRC based on approaches that include excreta, blood, computer-aided endoscopy, and enteroscopy evaluations. They indicated that artificial intelligence (AI)-assisted endoscopy, which adopts AI-based image recognition technology akin to facial recognition, can quickly identify abnormal conditions from images of the colorectal area, providing a timely warning to avoid nontumor polypectomy. This method also has high accuracy and sensitivity, which indicates the application prospects of computer-aided endoscopy in early CRC diagnosis. In recent years, the rapid development of AI technology has provided a new computer-based screening approach[6]. The hope is that AI can automatically detect, analyze, and classify colonic polyps through rapid, high-precision processing of endoscopic images, distinguishing tumorous polyps that need to be removed from nontumorous polyps that do not, and thereby improve the early detection rate of tumors. All these processes require image analysis technology based on deep learning.

In this minireview, we briefly introduce the principles of deep learning-based image analysis for predicting colorectal lesions, as well as recent progress and clinical applications of deep learning techniques. We also summarize the shortcomings of existing studies and future research directions to provide ideas for the next steps of research.

PRINCIPLES OF DEEP LEARNING
Origin and development of deep learning

Deep learning is derived from research on artificial neural networks (Figure 1) and is based on the combination of low-level features, the superposition of higher-level abstract feature attributes, and the classification of perceptual objects[7]. Its depth is mainly reflected in the multiple transformations of target recognition features from shallow to deep: abstract features are computed in a deep neural network by stacking multiple layers of nonlinear mappings to aid classification[8]. The development of deep learning proceeded as follows: (1) In 1982, Hopfield[9] proposed a neural network model with a complete theoretical basis; subsequent work solved the problem of training multilayer perceptrons and gradually formed the mature deep learning models of recent years; (2) In 2006, Hinton et al[10] pointed out that a multihidden-layer neural network has better feature learning ability and formally proposed the concept of deep learning; and (3) In recent years, deep learning has been widely used in the analysis and prediction of medical images. In March 2019, Wei et al[11] applied a deep learning network to histological slides of lung adenocarcinoma and performed pathological classification, obtaining a model that can help pathologists classify lung adenocarcinoma patterns more efficiently.
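The "stacking of multilayer nonlinear mappings" described above can be sketched in a few lines. This is a minimal illustration with hypothetical, untrained weights, not a model from any cited study:

```python
import math

def layer(x, w, b):
    """One nonlinear mapping: affine transform followed by tanh activation."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

# Two stacked layers: each transforms the previous layer's features into a
# more abstract representation (illustrative weights, not trained).
x = [0.5, -1.2, 0.3]                        # input features
w1 = [[0.2, -0.5, 0.1], [0.4, 0.3, -0.2]]   # maps 3 features -> 2
b1 = [0.0, 0.1]
w2 = [[0.7, -0.3]]                          # maps 2 features -> 1
b2 = [0.05]

h1 = layer(x, w1, b1)    # shallow features
h2 = layer(h1, w2, b2)   # deeper, more abstract feature used for classification
```

In a real network the weights would be learned by backpropagation; stacking more such layers is exactly what gives "deep" learning its depth.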

Figure 1
Figure 1 Phase description.
Origin and development of convolutional neural networks

Convolutional neural networks (CNNs), also called deep convolutional neural networks (DCNNs), are the most commonly used deep learning models for image processing and analysis. Another widely used family of deep learning models is the recurrent neural network (RNN), which has various architectures, e.g., long short-term memory (LSTM) networks, which can learn order dependence in sequence prediction; bidirectional LSTMs; and gated recurrent units, a gating mechanism for RNNs introduced by Kyunghyun Cho and colleagues in 2014. Different networks address different problems, such as object recognition, target detection and tracking, and medical image segmentation and classification.
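As a minimal illustration of the order dependence that recurrent models capture, the toy scalar RNN below (illustrative weights, not a trained LSTM or GRU) produces different final states when the same inputs are presented in a different order:

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0, b=0.0):
    """One recurrent update: the new hidden state mixes the current input
    with the previous hidden state, giving the network memory of order."""
    return math.tanh(w_h * h + w_x * x + b)

def run(seq):
    """Fold a whole input sequence into a single final hidden state."""
    h = 0.0
    for x in seq:
        h = rnn_step(h, x)
    return h

a = run([1.0, 0.0, -1.0])
b_rev = run([-1.0, 0.0, 1.0])  # same values, reversed order -> different state
```

A feedforward or convolutional network applied to the same bag of values could not distinguish the two orderings; the recurrence is what makes sequence modeling possible.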

A typical CNN is a deep learning architecture with excellent performance in image target recognition[12]. A CNN offers a set of methods to reduce the dimensionality of image recognition problems involving large amounts of data and to extract data features effectively, and it is widely used in image and video recognition, recommendation systems, and natural language processing[13]. The first CNN was proposed by Waibel et al[14] in 1987, and a CNN was first applied to handwritten font recognition by Lecun et al[15] in 1989. Convolutional layers based on deep learning have achieved great success in image recognition, speech recognition, and computer vision, including the prediction of colonic polyps. The classical pattern of CNNs applied to image classification was established by Lecun et al[16], who built a more complete CNN. Since then, further application research based on CNNs has been performed: in 2003, Microsoft developed optical character reading using a CNN[17]; in 2004, Garcia et al[18] applied a CNN to face detection; and in 2014, Abdel-Hamid et al[19] applied a CNN to speech recognition.
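The core operation that gives CNNs their feature extraction and dimensionality reduction properties is sliding a small kernel over an image. A minimal sketch in plain Python (the image, kernel, and values are illustrative only):

```python
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation): slide the kernel over the
    image and sum elementwise products, producing one local feature map
    that is smaller than the input."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)]
            for i in range(oh)]

# A vertical-edge kernel applied to a tiny 4x4 "image" whose right half is
# bright; the response peaks at the dark-to-bright transition.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 1],
          [-1, 1]]
fmap = conv2d(image, kernel)  # 3x3 map, strong response in the middle column
```

A real CNN learns many such kernels per layer and stacks them with pooling and nonlinearities; this single hand-written kernel only shows why the operation responds to local image structure.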

While CNNs have been widely introduced into various computer vision tasks, e.g., face recognition, text image extraction and recognition, and 3D reconstruction, they remain relatively new in the field of medical image analysis (Figure 2). Medical applications of CNNs are highly specific, and the path from initial results to clinical verification is generally long, so relatively few studies have focused on CNN-driven medical image analysis. Nevertheless, many researchers now apply CNNs to clinical tasks in medical image processing and analysis; applying CNNs to clinical data analysis can improve the diagnostic yield as well as clinical outcomes[20-22].

Figure 2
Figure 2 Origin and development of convolutional neural network.
DEEP LEARNING DRIVEN COLORECTAL LESION ANALYSIS

At present, deep learning CNNs are widely used in speech recognition[23], face recognition[24,25], and behavior recognition[26]. In clinical image analysis, these networks are also widely used for feature recognition, classification, and image segmentation model construction[27,28]. Deep learning-based image analysis plays a key role in improving the diagnostic accuracy of many clinical conditions[29]. First, in medical image segmentation, deep learning algorithms combined with other methods can analyze the heart, cervical cancer, and colonic polyps[30]. In deep learning-based image recognition, benign and malignant focal lesions can be classified with good diagnostic accuracy and sensitivity as well as superior specificity[31]. Image analysis has achieved good results in colonoscopy: optical biopsy can be used as a diagnostic method to accurately predict the histology of polyps 5 mm or smaller[32], and recent studies have successfully used automated image analysis techniques to accurately predict histopathology from images captured by endoscopy and magnifying endoscopy[33].

For the image-based diagnosis of clinical colorectal lesions, high-resolution endoscopy, fluorescence imaging, enhanced endoscopy, and other advanced technologies are available to improve the endoscopic detection rate of tumors; however, the skill level of the endoscopist determines how fully these technologies are exploited[34-36]. The application of deep learning in computer-aided detection improves the detection rate of early CRC and polyps and the quality of optical biopsy, reduces the influence of the physician's skill level on inspection results, lowers the rates of missed diagnosis and misdiagnosis, and improves the overall quality of endoscopy[37].

Endoscopic video colorectal lesion detection

The detection of colonic polyps has become one of the most important applications of AI deep learning in colorectal endoscopy. A large number of studies have shown that the polyp detection rate is related to cancer risk. Summers et al[38] showed that computer-aided detection, with its high sensitivity for detecting polyps, can help inexperienced clinicians, thereby narrowing the gap between endoscopists of different skill levels and improving diagnostic accuracy. Corley et al[39] assessed the association of polyp detection rates with the risk of CRC and cancer-related death diagnosed 6 mo to 10 years after colonoscopy and concluded that polyp detection rates were negatively correlated with the risks of interval CRC, advanced-stage interval cancer, and fatal interval cancer. Computer-aided detection therefore has great advantages in improving the polyp detection rate and reducing the cost of examination.

In this review, we survey important research on computer-aided detection of colonic diseases in recent years (Table 1). Computer-aided detection systems for colonic diseases in gastrointestinal endoscopy have been developed continuously since 2003, and numerous studies have been published. In 2003, Karkanis et al[40] extracted color wavelet features to test the performance of computer-aided colonic tumor detection and found a specificity as high as 97% and a sensitivity as high as 90%, a very significant breakthrough in this field. More recently, Misawa et al[41] developed an AI system based on deep learning. The system detected 94% of polyps with a false-positive rate of 60%, verifying the feasibility of the detection system. This retrospective analysis confirmed that computer-aided detection can indeed play a major role in the diagnosis of colonic diseases.

Table 1 Summary of important studies of computer-aided endoscopic colorectal lesion detection.
Ref. | Methods and data | Important results | Limitation and drawback
Karkanis et al[40], 2003 | Endoscopic video tumor detection by color wavelet covariance, supported by linear discriminant analysis; 66 patients with 95 polyps | Specificity 97% and sensitivity 90% | Not stable enough to classify different types of colorectal polyps
Misawa et al[41], 2018 | An AI-assisted CADe system using 3D CNNs; 155 polyp-positive videos with 391 polyp-negative videos | Sensitivity 90.0%, specificity 63.3%, and accuracy 76.5% | Further machine and deep learning and prospective evaluations are mandatory
Urban et al[42], 2018 | CNNs; 8641 hand-labeled images with 4088 unique polyps | AUC of 0.991 and accuracy of 96.4% | Unknown effects of CNNs on the inspection behavior of colonoscopists; anonymous and unidentified natural or endoscopic videos
Mori et al[43], 2018 | Retrospective analysis: an AI system by machine learning; 144 diminutive polyps (≤ 5 mm) | Sensitivity 98%, specificity 71%, accuracy 81%, positive predictive value 67%, and negative predictive value 98% | Insufficient endoscopic video image data
Yamada et al[44], 2020 | Retrospective analysis: a deep learning driven system using a Single Shot MultiBox Detector for capsule endoscopic colon lesion detection; 15933 training images and 4784 testing images | AUC 0.902; sensitivity 79.0%, specificity 87.0%, and accuracy 83.9% at a probability cutoff of 0.348 | A retrospective study using only selected images; pathological diagnoses were not considered, and the clinical utility of the AI model has not been evaluated
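The sensitivity, specificity, accuracy, and predictive values quoted throughout Table 1 all derive from a 2x2 confusion matrix. The sketch below shows the standard definitions; the counts are hypothetical and not taken from any cited study:

```python
def metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
    }

# Hypothetical counts for illustration only:
m = metrics(tp=90, fp=30, tn=70, fn=10)
# sensitivity 0.90, specificity 0.70, accuracy 0.80, ppv 0.75, npv 0.875
```

Note that sensitivity and specificity trade off against each other as the detection threshold moves, which is why studies such as Yamada et al[44] report their operating point (a probability cutoff) alongside the metrics.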

Previous clinical data showed that in a group of 8641 colonoscopy images containing 4088 unique polyps, deep learning could locate and identify polyps in real time, with an accuracy of approximately 96% in colonoscopy screening[42]. Japan has developed EndoBrain[43], which uses AI to analyze the blood vessels and cell structure of the lesion site and instantly displays the tumor probability; it has been used to distinguish tumorous from nontumorous polyps. To better assess the invasion depth of lesions and reduce the misdiagnosis rate, various dyes must be applied to the mucosa, which limits the routine application of this technology. Yamada et al[44] developed an AI system that uses deep learning to automatically detect such lesions in colon capsule endoscopy (CCE) images; after training on 15933 CCE images and evaluation on 4784 images, the sensitivity, specificity, and accuracy of the system were 79.0%, 87.0%, and 83.9%, respectively. The effectiveness of AI technology was also demonstrated in a randomized controlled trial of 324 patients by Wu et al[45], in which the blind spot rate was reduced compared with the control group (5.68% vs 22.46%, P < 0.001).

Computer-aided endoscopic detection has important potential in the field of colonic diseases and is under continuous research and development. Its high specificity and sensitivity can improve the detection rate of various diseases and help doctors judge the condition. However, existing studies still have shortcomings, such as insufficient system stability, incomplete classification schemes, dependence on the operator and the subject, and the need for more clinical validation. Future studies should therefore identify a more robust classification scheme, and the developed systems can be enhanced with classifier fusion to identify different types of colorectal polyps. More importantly, future randomized studies could directly address the overall value (quality vs cost) of CNNs by examining their impact on colonoscopy time, pathology cost, adenoma detection rate, polyps per procedure, surveillance-associated polyps per procedure, and surveillance-unassociated polyps per procedure (e.g., normal tissue and lymphoid aggregates).

Pathological image colorectal lesion detection

Among colonic mucosal diseases, optical biopsy can accurately identify the activity of ulcerative colitis and the nature of ulcerative colitis-related intraepithelial neoplasia and colonic polyps. Optical biopsy is a new endoscopic diagnostic technology, represented by confocal laser endomicroscopy, that combines advanced imaging with existing classification systems; its principle is similar to that of confocal microscopy[46]. However, the precision of optical biopsy depends on the professional expertise of the operator.

Although optical biopsy depends on the professional expertise of the operator, the development of computer-aided technology in recent years can support a more accurate diagnosis (Table 2). At present, AI can realize automatic optical biopsy, which is mainly based on extracting image features of colonic lesions, feeding them into the input layer of a computer system for deep learning, sorting the results in the output layer, and finally outputting the diagnosis (Figure 3). Although outstanding achievements have not yet been made in deep learning-based image analysis and prediction of colorectal lesions in China, Tamaki et al[47] proposed a new combination of local features and sampling schemes and tested it on 908 narrow band imaging (NBI) images. The system achieved a recognition rate of 96% under 10-fold cross validation on a real dataset of 908 NBI images collected during actual colonoscopies and 93% on a separate test dataset. Ito et al[48] then published results for a deep learning system supporting the endoscopic diagnosis of cT1b CRC. The accuracy of the CNN in that study was 81.2%, showing that CNN examination is comparable to clinicians' judgment in endoscopic diagnosis; such deep learning techniques are expected to assist clinicians in endoscopic examinations. In another study[49], the authors created an autonomous computational system to classify endoscopy findings and showed that autonomous classification of endoscopic images with AI is possible: the overall accuracy of the benign classifier was 80.8%, and the binary classifier correctly identified 92.0% of malignant-premalignant lesions, with an overall accuracy of 93.0%. However, better network implementations and larger datasets are needed to improve the classifiers' accuracy.
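The 10-fold cross validation used by Tamaki et al[47] partitions the dataset into 10 disjoint folds, training on 9 and testing on the held-out fold in turn, so every image is tested exactly once. A minimal sketch of the index bookkeeping (illustrative implementation, not the authors' code):

```python
def k_fold_indices(n, k):
    """Partition n sample indices into k disjoint folds; each fold serves
    once as the test set while the remaining k-1 folds form the training set.
    Returns a list of (train_indices, test_indices) pairs."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for held_out in range(k):
        test = folds[held_out]
        train = [i for f in range(k) if f != held_out for i in folds[f]]
        splits.append((train, test))
    return splits

# e.g., 10-fold cross validation over a dataset of 908 images
splits = k_fold_indices(908, 10)
```

The reported 96% recognition rate is then the performance averaged over the 10 held-out folds, which uses every sample for testing without ever testing on training data.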
Zachariah et al[50] demonstrated the feasibility of in situ diagnosis of colorectal polyps using a CNN. Their model exceeded the PIVI thresholds for both "resect and discard" and "diagnose and leave" strategies, independent of NBI use; point-of-care adenoma detection rates and surveillance recommendations are potential added benefits. The study by Shahidi et al[51] provided the first description of a potential future application of AI in arbitrating between endoscopists and pathologists when discordant diagnoses occur. The AI results were consistent with the endoscopic diagnosis in 577 (89.6%) lesions; among discordant endoscopic and pathologic diagnoses, the results were consistent with the endoscopic diagnosis in 168 (90.3%) lesions, and of the lesions identified on pathology as normal mucosa, 90 (90.9%) were consistent with the endoscopic diagnosis.

Figure 3
Figure 3 Computer-aided diagnosis of colonic lesions.
Table 2 Important studies of computer-aided pathological prediction of colorectal lesions.
Ref. | Methods and data | Important results | Limitation and drawback
Tamaki et al[47], 2013 | A new combination of local features and sampling schemes tested on 908 NBI images | A recognition rate of 96% for 10-fold cross validation and 93% for separate test data | No investigation of robustness to motion blur, focus, window size, color bleeding, and highlight areas
Ito et al[48], 2018 | AlexNet used to diagnose cT1b; 190 colorectal lesion images from 41 patient cases | Sensitivity 67.5%, specificity 89.0%, accuracy 81.2%, and AUC 0.871 | Insufficient pathological images to build CNNs
Zachariah et al[50], 2020 | A CNN model using TensorFlow and ImageNet; 6223 images with an 80% train and 20% test split, processing at 77 frames per second | NPV 97% among diminutive rectum/rectosigmoid polyps and surveillance interval concordance 93%; in fresh validation, NPV 97% and surveillance interval concordance 94% | Retrospective study using offline images
Shahidi et al[51], 2020 | An established real-time AI clinical decision support solution (CDSS) to resolve endoscopic and pathologic discrepancies; 644 images of colorectal lesions ≤ 3 mm | CDSS was consistent with the endoscopic diagnosis in 577 (89.6%) lesions | Inevitable need for CDSS optimization, given the increasing use of deep learning in current AI platforms and AI's ability to adapt with increasing data exposure

Based on these studies, we can see the application value and development prospects of AI in optical biopsy. Of course, there are also limitations, such as insufficient image data, numerous confounding factors, inconsistent image sizes, and color differences. In the future, new computer hardware, new algorithms, and multicenter model cross-validation are needed to improve diagnostic accuracy for clinical use. In addition, the combined application of colonic disease big data and AI computer systems is essential. AI requires analysis, classification, and deep learning based on large amounts of data: the larger the dataset, the higher the accuracy of the resulting system. At present, we still lack such large-scale data; if it can be collected, the application of AI in optical biopsy will make great progress.

CONCLUSION

With the development of science and technology and the improvement of deep learning algorithms, AI will be applied in many fields in the future[52]. At the same time, with the improvement of computer technology and the increase in image data, the application of image analysis and prediction based on deep learning for clinical colorectal lesions will be gradually increased and the accuracy of diagnosis will be significantly improved. The application of AI in clinical work will greatly reduce the workload of clinicians.

At present, deep learning algorithms have shown clear benefits for histopathological diagnosis in the context of tumor risk stratification[53]. Recent applications have focused on the most common cancers, such as breast, prostate, and lung cancer. Research applying deep learning-based image analysis to colorectal lesions can already distinguish the pathological types of colorectal polyps and improve the polyp detection rate. We hope to collect colorectal disease images (including endoscopic ultrasound images) and, by combining deep learning-based computer-aided technology with the advantages of clinical big data, develop an image processing method that can determine the invasion depth of colorectal disease and construct three-dimensional images during colonoscopy to guide further diagnosis and treatment. Computer-aided diagnosis (CADx) has been used for cancer staging and invasion depth estimation[54], and Kubota et al[55] developed a CADx system for the automatic diagnosis of gastric cancer invasion depth. AI systems can also help determine whether additional surgery is needed after endoscopic resection of T1 CRC by predicting lymph node metastasis[56]. Building on the bronchoscope navigation system developed by Luo and colleagues[57], a navigation system for digestive tract surgery could be developed to localize and treat lesions. An image-based navigation strategy has been proposed by van der Stap et al[58] to automate the flexible endoscope, and a framework combining robotic steering and lumen centering has been proposed to automate the colonoscope[59]. Perhaps in the future, endoscopic surgery will resemble operations performed with a surgical Da Vinci robot; for example, a soft endoscopic robot could be developed to replace rigid endoscopic surgery for transanal resection of tumors.
In addition, with the development of new materials and computer technology, printing digestive tract models with 3D materials and using 5G remote technology to improve the efficiency of diagnosis and follow-up could increase the detection rate of early CRC[60,61] (Figure 4).

Figure 4
Figure 4 Future development directions. AI: Artificial intelligence.

Visual processing of computer images and videos has achieved excellent results, showing its superiority in clinical diagnosis and examination and narrowing the gap between doctors of different skill levels. Further research and popularization will be of great significance for the diagnosis and treatment of colorectal lesions and could significantly reduce the incidence rate of CRC and improve patient survival.

Footnotes

Manuscript source: Unsolicited manuscript

Specialty type: Oncology

Country/Territory of origin: China

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): 0

Grade C (Good): C

Grade D (Fair): D

Grade E (Poor): 0

P-Reviewer: Muneer A, Shafqat S S-Editor: Wang JL L-Editor: Wang TQ P-Editor: Yuan YY

References
1.  Beilmann-Lehtonen I, Böckelman C, Mustonen H, Koskensalo S, Hagström J, Haglund C. The prognostic role of tissue TLR2 and TLR4 in colorectal cancer. Virchows Arch. 2020;477:705-715.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 6]  [Cited by in F6Publishing: 20]  [Article Influence: 5.0]  [Reference Citation Analysis (0)]
2.  Chen W, Zheng R, Baade PD, Zhang S, Zeng H, Bray F, Jemal A, Yu XQ, He J. Cancer statistics in China, 2015. CA Cancer J Clin. 2016;66:115-132.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 11444]  [Cited by in F6Publishing: 12924]  [Article Influence: 1615.5]  [Reference Citation Analysis (2)]
3.  Courtney RJ, Paul CL, Carey ML, Sanson-Fisher RW, Macrae FA, D'Este C, Hill D, Barker D, Simmons J. A population-based cross-sectional study of colorectal cancer screening practices of first-degree relatives of colorectal cancer patients. BMC Cancer. 2013;13:13.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 41]  [Cited by in F6Publishing: 42]  [Article Influence: 3.8]  [Reference Citation Analysis (0)]
4.  Siegel RL, Miller KD, Fedewa SA, Ahnen DJ, Meester RGS, Barzi A, Jemal A. Colorectal cancer statistics, 2017. CA Cancer J Clin. 2017;67:177-193.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2526]  [Cited by in F6Publishing: 2837]  [Article Influence: 405.3]  [Reference Citation Analysis (3)]
5.  Huang YL, Huang B, Yan J. Progress in early diagnosis of colorectal cancer. Fenzi Yingxiangxue Zazhi. 2019;42:83-86.  [PubMed]  [DOI]  [Cited in This Article: ]
6.  Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2:719-731.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 732]  [Cited by in F6Publishing: 945]  [Article Influence: 157.5]  [Reference Citation Analysis (0)]
7.  Sun zj, Xue L, Xu YM, Wang Z. Review of deep learning research. Jisuanji Yingyong Yanjiu. 2012;29:2806-2810.  [PubMed]  [DOI]  [Cited in This Article: ]
8.  Hu Y, Luo DY, Hua K, Lu HM, Zhang XG. Review and Discussion on deep learning. Zhineng Xitong Xuebao. 2019;14:5-23.  [PubMed]  [DOI]  [Cited in This Article: ]
9.  Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A. 1982;79:2554-2558.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 11760]  [Cited by in F6Publishing: 5004]  [Article Influence: 119.1]  [Reference Citation Analysis (0)]
10.  Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. 2006;18:1527-1554.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 9547]  [Cited by in F6Publishing: 3117]  [Article Influence: 173.2]  [Reference Citation Analysis (0)]
11.  Wei JW, Tafe LJ, Linnik YA, Vaickus LJ, Tomita N, Hassanpour S. Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks. Sci Rep. 2019;9:3358.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 157]  [Cited by in F6Publishing: 152]  [Article Influence: 30.4]  [Reference Citation Analysis (0)]
12.  Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans Med Imaging. 2016;35:1285-1298.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 3594]  [Cited by in F6Publishing: 1832]  [Article Influence: 229.0]  [Reference Citation Analysis (0)]
13.  Lin JC, Pang Y, Xu LM, Huang ZW. Research progress of medical image processing based on deep learning. Shengming Kexue Yiqi. 2018;16:47-56.  [PubMed]  [DOI]  [Cited in This Article: ]
14.  Waibel A, Hanazawa T, Hinton GE, Shikano K, Lang KJ. Phoneme recognition using time-delay neural networks. IEEE Trans Acoust. 1989;37:328-339.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1393]  [Cited by in F6Publishing: 1304]  [Article Influence: 37.3]  [Reference Citation Analysis (0)]
15.  LeCun Y, Boser B, Denker J, Henderson D, Howard R, Hubbard W. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989;1:541-551.
16.  LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86:2278-2324.
17.  Simard P, Steinkraus D, Platt JC. Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis. In: Proceedings of the Seventh International Conference on Document Analysis and Recognition. IEEE Computer Society, 2003.
18.  Garcia C, Delakis M. Convolutional face finder: a neural architecture for fast and robust face detection. IEEE Trans Pattern Anal Mach Intell. 2004;26:1408-1423.
19.  Abdel-Hamid O, Mohamed AR, Jiang H, Deng L, Penn G, Yu D. Convolutional neural networks for speech recognition. IEEE/ACM Trans Audio Speech Lang Process. 2014;22:1533-1545.
20.  Yasaka K, Akai H, Kunimatsu A, Kiryu S, Abe O. Deep learning with convolutional neural network in radiology. Jpn J Radiol. 2018;36:257-272.
21.  Chen MC, Ball RL, Yang L, Moradzadeh N, Chapman BE, Larson DB, Langlotz CP, Amrhein TJ, Lungren MP. Deep Learning to Classify Radiology Free-Text Reports. Radiology. 2018;286:845-852.
22.  Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: an overview and application in radiology. Insights Imaging. 2018;9:611-629.
23.  Mustaqeem, Kwon S. A CNN-Assisted Enhanced Audio Signal Processing for Speech Emotion Recognition. Sensors (Basel). 2019;20.
24.  Yang YX, Wen C, Xie K, Wen FQ, Sheng GQ, Tang XG. Face Recognition Using the SR-CNN Model. Sensors (Basel). 2018;18.
25.  Li J, Qiu T, Wen C, Xie K, Wen FQ. Robust Face Recognition Using the Deep C2D-CNN Model Based on Decision-Level Fusion. Sensors (Basel). 2018;18.
26.  Hu Y, Wong Y, Wei W, Du Y, Kankanhalli M, Geng W. A novel attention-based hybrid CNN-RNN architecture for sEMG-based gesture recognition. PLoS One. 2018;13:e0206049.
27.  Laukamp KR, Thiele F, Shakirin G, Zopfs D, Faymonville A, Timmer M, Maintz D, Perkuhn M, Borggrefe J. Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI. Eur Radiol. 2019;29:124-132.
28.  Spuhler KD, Ding J, Liu C, Sun J, Serrano-Sosa M, Moriarty M, Huang C. Task-based assessment of a convolutional neural network for segmenting breast lesions for radiomic analysis. Magn Reson Med. 2019;82:786-795.
29.  Azuaje F. Artificial intelligence for precision oncology: beyond patient stratification. NPJ Precis Oncol. 2019;3:6.
30.  Sánchez-González A, García-Zapirain B, Sierra-Sosa D, Elmaghraby A. Automatized colon polyp segmentation via contour region analysis. Comput Biol Med. 2018;100:152-164.
31.  Haygood TM, Liu MA, Galvan E, Bassett R, Murphy WA Jr, Ng CS, Matamoros A, Marom EM. Consistency of response and image recognition, pulmonary nodules. Br J Radiol. 2014;87:20130767.
32.  ASGE Technology Committee; Abu Dayyeh BK, Thosani N, Konda V, Wallace MB, Rex DK, Chauhan SS, Hwang JH, Komanduri S, Manfredi M, Maple JT, Murad FM, Siddiqui UD, Banerjee S. ASGE Technology Committee systematic review and meta-analysis assessing the ASGE PIVI thresholds for adopting real-time endoscopic assessment of the histology of diminutive colorectal polyps. Gastrointest Endosc. 2015;81:502.e1-502.e16.
33.  Tajbakhsh N, Gurudu SR, Liang J. A Comprehensive Computer-Aided Polyp Detection System for Colonoscopy Videos. Inf Process Med Imaging. 2015;24:327-338.
34.  Tan T, Qu YW, Shu J, Liu ML, Zhang L, Liu HF. Diagnostic value of high-resolution micro-endoscopy for the classification of colon polyps. World J Gastroenterol. 2016;22:1869-1876.
35.  Mizrahi I, Wexner SD. Clinical role of fluorescence imaging in colorectal surgery - a review. Expert Rev Med Devices. 2017;14:75-82.
36.  Ho SH, Uedo N, Aso A, Shimizu S, Saito Y, Yao K, Goh KL. Development of Image-enhanced Endoscopy of the Gastrointestinal Tract: A Review of History and Current Evidences. J Clin Gastroenterol. 2018;52:295-306.
37.  Song X, Sun J. Application and prospect of artificial intelligence in diagnosis and treatment of digestive system diseases. Weichang Bing Xue. 2018;23:552-556.
38.  Summers RM. Improving the accuracy of CTC interpretation: computer-aided detection. Gastrointest Endosc Clin N Am. 2010;20:245-257.
39.  Corley DA, Levin TR, Doubeni CA. Adenoma detection rate and risk of colorectal cancer and death. N Engl J Med. 2014;370:2541.
40.  Karkanis SA, Iakovidis DK, Maroulis DE, Karras DA, Tzivras M. Computer-aided tumor detection in endoscopic video using color wavelet features. IEEE Trans Inf Technol Biomed. 2003;7:141-152.
41.  Misawa M, Kudo SE, Mori Y, Cho T, Kataoka S, Yamauchi A, Ogawa Y, Maeda Y, Takeda K, Ichimasa K, Nakamura H, Yagawa Y, Toyoshima N, Ogata N, Kudo T, Hisayuki T, Hayashi T, Wakamura K, Baba T, Ishida F, Itoh H, Roth H, Oda M, Mori K. Artificial Intelligence-Assisted Polyp Detection for Colonoscopy: Initial Experience. Gastroenterology. 2018;154:2027-2029.e3.
42.  Urban G, Tripathi P, Alkayali T, Mittal M, Jalali F, Karnes W, Baldi P. Deep Learning Localizes and Identifies Polyps in Real Time With 96% Accuracy in Screening Colonoscopy. Gastroenterology. 2018;155:1069-1078.e8.
43.  Mori Y, Kudo SE, Mori K. Potential of artificial intelligence-assisted colonoscopy using an endocytoscope (with video). Dig Endosc. 2018;30 Suppl 1:52-53.
44.  Yamada A, Niikura R, Otani K, Aoki T, Koike K. Automatic detection of colorectal neoplasia in wireless colon capsule endoscopic images using a deep convolutional neural network. Endoscopy. 2020.
45.  Wu L, Zhang J, Zhou W, An P, Shen L, Liu J, Jiang X, Huang X, Mu G, Wan X, Lv X, Gao J, Cui N, Hu S, Chen Y, Hu X, Li J, Chen D, Gong D, He X, Ding Q, Zhu X, Li S, Wei X, Li X, Wang X, Zhou J, Zhang M, Yu HG. Randomised controlled trial of WISENSE, a real-time quality improving system for monitoring blind spots during esophagogastroduodenoscopy. Gut. 2019;68:2161-2169.
46.  Li YQ. Clinical application of optical biopsy. In: 2012 China Digestive Diseases Academic Conference; 2012 Sep 20; Shanghai, China. Beijing: Chinese Medical Association, 2012: 40-41.
47.  Tamaki T, Yoshimuta J, Kawakami M, Raytchev B, Kaneda K, Yoshida S, Takemura Y, Onji K, Miyaki R, Tanaka S. Computer-aided colorectal tumor classification in NBI endoscopy using local features. Med Image Anal. 2013;17:78-100.
48.  Ito N, Kawahira H, Nakashima H, Uesato M, Miyauchi H, Matsubara H. Endoscopic Diagnostic Support System for cT1b Colorectal Cancer Using Deep Learning. Oncology. 2019;96:44-50.
49.  Dunham ME, Kong KA, McWhorter AJ, Adkins LK. Optical Biopsy: Automated Classification of Airway Endoscopic Findings Using a Convolutional Neural Network. Laryngoscope. 2020.
50.  Zachariah R, Samarasena J, Luba D, Duh E, Dao T, Requa J, Ninh A, Karnes W. Prediction of Polyp Pathology Using Convolutional Neural Networks Achieves "Resect and Discard" Thresholds. Am J Gastroenterol. 2020;115:138-144.
51.  Shahidi N, Rex DK, Kaltenbach T, Rastogi A, Ghalehjegh SH, Byrne MF. Use of Endoscopic Impression, Artificial Intelligence, and Pathologist Interpretation to Resolve Discrepancies Between Endoscopy and Pathology Analyses of Diminutive Colorectal Polyps. Gastroenterology. 2020;158:783-785.e1.
52.  Luo H, Xu G, Li C, He L, Luo L, Wang Z, Jing B, Deng Y, Jin Y, Li Y, Li B, Tan W, He C, Seeruttun SR, Wu Q, Huang J, Huang DW, Chen B, Lin SB, Chen QM, Yuan CM, Chen HX, Pu HY, Zhou F, He Y, Xu RH. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: a multicentre, case-control, diagnostic study. Lancet Oncol. 2019;20:1645-1654.
53.  Robertson S, Azizpour H, Smith K, Hartman J. Digital image analysis in breast pathology-from image processing techniques to artificial intelligence. Transl Res. 2018;194:19-35.
54.  Yoon JH, Kim S, Kim JH, Keum JS, Jo J, Cha JH, Jung DH, Park JJ, Youn YH, Park H. Sa1235 Application of artificial intelligence for prediction of invasion depth in early gastric cancer: preliminary study. Gastrointest Endosc. 2018;87:AB176.
55.  Kubota K, Kuroda J, Yoshida M, Ohta K, Kitajima M. Medical image analysis: computer-aided diagnosis of gastric cancer invasion on endoscopic images. Surg Endosc. 2012;26:1485-1489.
56.  Ichimasa K, Kudo SE, Mori Y, Misawa M, Matsudaira S, Kouyama Y, Baba T, Hidaka E, Wakamura K, Hayashi T, Kudo T, Ishigaki T, Yagawa Y, Nakamura H, Takeda K, Haji A, Hamatani S, Mori K, Ishida F, Miyachi H. Artificial intelligence may help in predicting the need for additional surgery after endoscopic resection of T1 colorectal cancer. Endoscopy. 2018;50:230-240.
57.  Luo X, Kitasaka T, Mori K. Bronchoscopy Navigation beyond Electromagnetic Tracking Systems: A Novel Bronchoscope Tracking Prototype. In: Fichtinger G, Martel A, Peters T, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2011. Lecture Notes in Computer Science. Heidelberg: Springer, 2011.
58.  van der Stap N, Rozeboom ED, Pullens HJ, van der Heijden F, Broeders IA. Feasibility of automated target centralization in colonoscopy. Int J Comput Assist Radiol Surg. 2016;11:457-465.
59.  Pullens HJ, van der Stap N, Rozeboom ED, Schwartz MP, van der Heijden F, van Oijen MG, Siersema PD, Broeders IA. Colonoscopy with robotic steering and automated lumen centralization: a feasibility study in a colon model. Endoscopy. 2016;48:286-290.
60.  Mishra S. Application of 3D printing in medicine. Indian Heart J. 2016;68:108-109.
61.  Li D. 5G and intelligence medicine-how the next generation of wireless technology will reconstruct healthcare? Precis Clin Med. 2019;2:205-208.