Review Open Access
Copyright ©The Author(s) 2022. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Diabetes. Oct 15, 2022; 13(10): 822-834
Published online Oct 15, 2022. doi: 10.4239/wjd.v13.i10.822
Everything real about unreal artificial intelligence in diabetic retinopathy and in ocular pathologies
Arvind Kumar Morya, Siddharam S Janti, Antervedi Tejaswini, Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
Priya Sisodiya, Department of Ophthalmology, Sadguru Netra Chikitsalaya, Chitrakoot 485001, Madhya Pradesh, India
Rajendra Prasad, Department of Ophthalmology, R P Eye Institute, New Delhi 110001, New Delhi, India
Kalpana R Mali, Department of Pharmacology, All India Institute of Medical Sciences, Bibinagar, Hyderabad 508126, Telangana, India
Bharat Gurnani, Department of Ophthalmology, Aravind Eye Hospital and Post Graduate Institute of Ophthalmology, Pondicherry 605007, Pondicherry, India
ORCID number: Arvind Kumar Morya (0000-0003-0462-119X); Siddharam S Janti (0000-0001-5903-4297); Priya Sisodiya (0000-0002-7284-1269); Antarvedi Tejaswini (0000-0003-3777-3966); Kalpana R Mali (0000-0002-0378-2779); Bharat Gurnani (0000-0003-0848-5172).
Author contributions: Morya AK and Janti SS contributed equally to this work; Sisodiya P, Tejaswini A, Gurnani B, Prasad R and Mali KR designed the research study; Morya AK and Janti SS performed the research; Sisodiya P, Tejaswini A and Gurnani B contributed new search and analytic tools; Morya AK, Janti SS, Tejaswini A and Gurnani B analyzed the data and wrote the manuscript; all authors have read and approved the final manuscript.
Conflict-of-interest statement: All authors report no relevant conflict of interest for this article.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Arvind Kumar Morya, MBBS, MNAMS, MS, Additional Professor, Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Warangal Road, Hyderabad 508126, Telangana, India. bulbul.morya@gmail.com
Received: May 25, 2022
Peer-review started: May 25, 2022
First decision: August 1, 2022
Revised: August 11, 2022
Accepted: September 9, 2022
Article in press: September 9, 2022
Published online: October 15, 2022
Processing time: 141 Days and 22.4 Hours

Abstract

Artificial intelligence (AI) is a multidisciplinary field that aims to build platforms enabling machines to act, perceive and reason intelligently, with the goal of automating activities that currently require human intelligence. From the cornea to the retina, AI is expected to help ophthalmologists diagnose and treat ocular diseases. In ophthalmology, computerized analytics are viewed as efficient and more objective ways to interpret series of images and reach a conclusion. AI can be used to diagnose and grade diabetic retinopathy, glaucoma, age-related macular degeneration, cataracts, retinopathy of prematurity and keratoconus, and to assist intraocular lens power calculation. This review discusses various aspects of artificial intelligence in ophthalmology.

Key Words: Artificial intelligence; Diabetic retinopathy; Deep learning; Machine learning; Ophthalmology

Core Tip: It is said that necessity is the mother of invention, and converging global trends are making multiplied eye care efficiency an increasingly urgent necessity. Artificial intelligence refers to the artificial creation of human-like intelligence in computer machines that can learn, reason, plan, perceive or process natural language.



INTRODUCTION

Artificial intelligence (AI) refers to a machine’s ability to mimic human cognitive functions, such as learning, reasoning, problem-solving, knowledge representation, social intelligence and general intelligence. It represents a significant advance in computer science, enabling a computer to perform tasks with little human involvement once it has been trained by humans. AI originated in the 1940s, but major advances ensued during the 1990s with significant improvements in machine learning, multi-agent planning, case-based reasoning, scheduling, data mining, natural language understanding and translation, vision, virtual reality, games and related areas. Researchers have created an algorithm that predicts, based on a patient’s condition, whether a patient with cardiovascular disease will survive the following year. The algorithm predicted patient survival in 85% of cases based on data obtained by measuring the heart’s electrical activity using electrocardiography. The rapid development of AI technology requires physicians and computer scientists to have a good mutual understanding of both the technology and medical practice to improve medical care. This review presents the role of AI in various fields of ophthalmology.

METHODOLOGY

We searched for highly cited articles in the PubMed, Scopus, Google Scholar, Web of Science, Cochrane Library and Embase databases on artificial intelligence in diabetic retinopathy, age-related macular degeneration, glaucoma, keratoconus, cataract, dry eye and other common ocular diseases published between 2000 and 2021. We also used the Reference Citation Analysis (RCA) tool to search the keywords, and articles were ranked based on the “Impact Index Per Article.” The latest highlighted articles were selected for review. Only articles published in English were considered; the rest were excluded.

ARTIFICIAL INTELLIGENCE BASICS
Machine learning

Machine learning (ML) is a core AI branch that aims to provide computers with the ability to learn without being explicitly programmed[1]. ML focuses on developing algorithms that can analyze data and make predictions[2] (Figure 1).

Figure 1
Figure 1 Steps of a machine learning algorithm used to analyze data and make predictions.
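
To make the workflow in Figure 1 concrete, the following minimal Python sketch (using scikit-learn, with entirely synthetic features and labels) walks through the learn-from-data and predict steps; the feature names suggested in the comments are hypothetical and not taken from any study discussed here.

```python
# Minimal sketch of the machine learning workflow summarized in Figure 1:
# gather labeled examples, fit a model, and use it to predict unseen cases.
# The feature values and labels below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical tabular features (e.g., age, HbA1c, years of diabetes)
# with a binary label (0 = no retinopathy, 1 = retinopathy).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)        # "learning" step
print("held-out accuracy:", model.score(X_test, y_test))  # "prediction" step
```
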
Deep learning

Deep learning (DL) differs from ML in that DL uses neural networks to make predictions and decisions. These neural networks were inspired by the biological neural networks of animal brains. They use statistical principles learned from large volumes of data to improve their accuracy, making DL a valuable tool for aiding physicians in clinical practice.
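
As an illustration of the idea only, the sketch below (PyTorch assumed installed; this is not any of the published models discussed later) defines a tiny convolutional neural network that maps an image tensor to a single disease probability.

```python
# A minimal deep learning sketch (PyTorch): a small convolutional network that
# maps a fundus-like image tensor to a disease probability. Purely illustrative.
import torch
import torch.nn as nn

class TinyFundusCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x)))

model = TinyFundusCNN()
dummy_batch = torch.randn(4, 3, 224, 224)   # 4 synthetic RGB images
print(model(dummy_batch).shape)             # -> torch.Size([4, 1]) probabilities
```
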

Generative adversarial network

Generative adversarial networks (GANs) are paired neural networks used for unsupervised ML. A generative network produces images or other data, and a discriminative network evaluates them and provides feedback to aid the learning process[3].
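
The following toy PyTorch sketch illustrates the adversarial training loop described above; the data are random placeholders rather than retinal images, and the network sizes are arbitrary assumptions.

```python
# Minimal generative adversarial network sketch (PyTorch): a generator produces
# fake samples from noise, a discriminator scores real vs fake, and the two are
# trained against each other. Shapes and data are toy placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(32, data_dim)   # stand-in for real image features

for step in range(100):
    # Discriminator step: push real scores toward 1 and fake scores toward 0.
    noise = torch.randn(32, latent_dim)
    fake = generator(noise).detach()
    loss_d = bce(discriminator(real_data), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake = generator(torch.randn(32, latent_dim))
    loss_g = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```
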

ARTIFICIAL INTELLIGENCE PLATFORMS

Algorithms are analogous to AI software, whereas platforms are analogous to the computer hardware on which the algorithms are installed and run to make predictions and decisions. AI platforms simulate the cognitive functions of the human mind, including learning, reasoning, problem-solving, social intelligence and general intelligence[4].

Top artificial intelligence platforms

The top AI platforms include Google, Microsoft Azure, TensorFlow, Rainbird, Infosys Nia, Wipro HOLMES, Premonition, Dialogflow, Ayasdi, MindMeld, Meya, KAI and Vital A.I. Following the initial learning steps, the system or machine is taught to refine its initial learning for greater accuracy and efficiency. This learning is further compounded by using complex mathematical equations to capture nonlinear relationships between different variables through an information flow called a “neural network.” This form of “higher training” enables AI to judge and weigh different outcome possibilities.

USE OF ARTIFICIAL INTELLIGENCE IN OPHTHALMOLOGY

From the back of the eye to the front, AI is expected to provide ophthalmologists with novel automated tools to diagnose and treat ocular diseases. Recently, the application of AI in medicine has garnered much attention from big players in the digital world, such as Google and IBM. This is expected to stimulate research and development for disease diagnosis and treatment. Researchers in the field of AI ophthalmology view computerized analytics as the path toward more efficient and objective ways of image interpretation compared with modern eye care practice.

Diabetic retinopathy

Patients with diabetes require regular and repetitive screening to detect and treat diabetic retinopathy (DR)[4,5]. Conventionally, this screening is performed by dilated fundus examination or color fundus photography using conventional fundus cameras (mydriatic or nonmydriatic). The primary issue in this screening is retinal image grading by retinal specialists or trained personnel, who are few compared with the patient load requiring screening. Another problem is that most patients reside in rural areas. Finally, constant follow-ups are needed for several years[4].

DR, a complication of chronic diabetes, is a vasculopathy affecting one-third of patients with diabetes and possibly leading to irreversible blindness[6,7]. Most AIs have been evaluated for their application in DR detection with the primary goal of assisting the development of a mass and rapid screening tool with high sensitivity and specificity. Considering the huge diversity in the clinical presentation of DR, it is essential for an AI neural network to be multilayered and extensively trained. This requires the use of multiple images evaluated against the ground truth.

Most studies have used the International Clinical Diabetic Retinopathy Disease Severity Scale, a 5-point scale [no apparent retinopathy, mild non-proliferative DR (NPDR), moderate NPDR, severe NPDR and proliferative diabetic retinopathy (PDR)]. Referable DR is defined as moderate NPDR or worse, because disease management often changes from yearly screening to closer follow-up at moderate disease severity. A recent study by Shah et al[8] used an AI algorithm with a deep convolutional neural network (DCNN) and assessed its sensitivity and specificity with double validation, i.e. external and internal validation. External validation was performed using the Methods to Evaluate Segmentation and Indexing Techniques in the Retinal Ophthalmology dataset, i.e. the MESSIDOR dataset, whereas internal validation was performed by two retinal specialists. The main advantage of this study was that 112489 images, acquired from various fundus cameras photographing both mydriatic and nonmydriatic eyes, were fed into the AI, giving the dataset a multiethnic advantage. The agreement between the AI and internal/external validation was high for any DR and referable DR, with a sensitivity of > 95%. The agreement for sight-threatening DR was high between the AI and external validation but moderate between the AI and internal validation. However, this did not affect the conclusion that the AI proved to be a useful screening tool and detected referable DR cases with high specificity.

Valverde et al[9] reviewed the available algorithms and detailed the methods for segmenting exudates and red lesions and for screening systems. These segmentation methods were used to develop computer-aided diagnosis systems for automated DR detection, such as Retmarker DR, the Retinalyze System, IDx-DR (the first FDA-approved system), iGradingM and the Telemedical Retinal Image Analysis and Diagnosis Network. Overall, all these systems achieved high sensitivity and specificity, provided that segmentation of exudates rather than segmentation of red lesions was used to screen for DR. Medios, an offline AI, was developed and studied by Sosale et al[10]. This offline algorithm was created because of Internet access limitations and the high computational power required by cloud-based AIs in a developing country. Fundus photographs were captured using the Remidio Non-Mydriatic Fundus on Phone 10 (NM FOP 10), and image processing was performed directly on the smartphone graphics processing unit. The sensitivity and specificity of the AI algorithm for detecting referable DR were 98% and 86%, respectively. For any DR, its sensitivity and specificity were 86% and 95%, respectively. Compared with online cloud-based software, such as EyeArt and IDx-DR, Medios had better sensitivity and equivalent specificity (Figure 2). The specific abnormalities that can be detected using continuous machine learning (CML) are macular edema[11,12], exudates[13], cotton-wool spots[14], microaneurysms and optic disc neovascularization[15]. Commercially available DR detection and analysis technologies include the Retinalyze System, IDx-DR, iGradingM and RetmarkerDR. The difference is that only a few modalities use lesion-based grading, whereas the others use image-based grading. The sensitivity of these systems has reached around 80%, but their specificity remains below 90%.

Figure 2
Figure 2 Artificial intelligence software classifies diabetic retinopathy as referable or non-referable.
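
For readers unfamiliar with how the sensitivities and specificities quoted above are derived, the short sketch below computes them from a confusion matrix for a hypothetical referable versus non-referable screening output; the labels are invented for illustration.

```python
# Sketch of how screening metrics such as those quoted above are computed:
# sensitivity = TP / (TP + FN), specificity = TN / (TN + FP), here for a
# binary referable vs non-referable decision on synthetic labels.
from sklearn.metrics import confusion_matrix

# 1 = referable DR, 0 = non-referable (hypothetical grader vs AI outputs)
ground_truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
ai_output    = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(ground_truth, ai_output).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```
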

A DL GAN can be trained to map anatomical features from different image modalities, i.e. fundus photographs and fluorescein angiography (FA) images, onto a shared feature manifold to generate one image modality from another[16]. Using GAN, detailed retinal vascular structures can be produced without the requirement of FA to avoid its potential side effects. The inferred structural measurements of retinal vasculature may allow clinicians to identify the natural history of changes in the retinal vasculature and the clinical outcomes of retinal diseases, as previously reported by direct fundus image analysis, but with the accuracy of FA or even optical coherence tomography angiography image analysis[17].

Morya et al[18] evaluated the world’s first smartphone-based online annotation tool for rapid and accurate image labeling, using AI-based DL for DR. The accuracy of this DL model was evaluated with a binary referable DR classification, depending on whether or not a retinal image showed referable DR. A total of 32 ophthalmologists used the tool on over 55000 images. The data analysis demonstrated considerable flexibility and portability, with favorable intergrader variability in concurrence with image annotation. Table 1 presents the collective data of various studies on DR-related AI; this table has been reproduced from the article by Padhy et al[19].

Table 1 Comparative analysis of various studies done on artificial intelligence in diabetic retinopathy[19].
Ref. | Sensitivity, specificity or accuracy of the study | Total fundus images examined | Type of AI used | Main objective
Wong et al[20] | Area under the curve 0.97 and 0.92 for microaneurysms and hemorrhages, respectively | 143 images | Three-layer feed-forward neural network | Detection of microaneurysms and hemorrhages; Frangi filter used
Imani et al[57] | Sensitivity 75.02%-75.24%; specificity 97.45%-97.53% | 60 images | MCA | Detected exudation and blood vessels
Yazid et al[58] | Sensitivity 97.8%, specificity 99% and predictivity 83.3% for the STARE database; sensitivity 90.7%, specificity 99.4% and predictivity 74% for the custom database | 30 images | Inverse surface thresholding | Detected both hard and soft exudates
Akyol et al[59] | Accuracy of disc detection ranged from 90% to 94.38% across different datasets | 239 images | Key point detection, texture analysis and visual dictionary techniques | Detected the optic disc in fundus images
Niemeijer et al[13] | Accuracy of 99.9% in finding the disc | 1000 images | Combined k-nearest neighbor and cues | Fast detection of the optic disc
Rajalakshmi et al[60] (smartphone-based study) | Sensitivity 95.8% and specificity 80.2% for detecting any DR; sensitivity 99.1% and specificity 80.4% for detecting STDR | Retinal images of 296 patients | EyeArt AI DR screening software | Retinal photography with Remidio ‘Fundus on Phone’
EyeNuk study | Sensitivity 91.7%; specificity 91.5% | 40542 images | EyePACS telescreening system | Retinal images taken with traditional desktop fundus cameras
Ting et al[61] | Sensitivity 90.5% and specificity 91.6% for RDR; sensitivity 100% and specificity 91.1% for STDR | 494661 retinal images | Deep learning system | Multiple retinal images taken with conventional fundus cameras
IRIS | Sensitivity of the IRIS algorithm in detecting STDR 66.4% with a false-negative rate of 2%; specificity 72.8%; positive predictive value 10.8%; negative predictive value 97.8% | 15015 patients | Intelligent Retinal Imaging System (IRIS) | Retinal screening examination and nonmydriatic fundus photography
Age-related macular degeneration

Age-related macular degeneration (AMD) causes approximately 9% of cases of blindness globally[20]. The worldwide number of people with AMD was projected to be 196 million in 2020 and is expected to increase substantially to 288 million by 2040[20]. The Age-Related Eye Disease Study (AREDS) developed a simplified severity scale for AMD[22]. This scale combines risk factors from both eyes to generate an overall score for the individual based on the presence of one or more large drusen (diameter > 125 μm) or AMD pigmentary abnormalities in the macula of each eye. The simplified severity scale is also clinically useful because it allows ophthalmologists to predict an individual’s 5-year risk of developing late AMD[23]. AMD detection and prediction are essential for individualized treatment. Using AI in AMD could increase the detection rate of lesions such as drusen, fluid, reticular pseudodrusen and geographic atrophy.
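
The scoring logic of the simplified severity scale can be illustrated with a short sketch. The per-eye point assignments follow the description above; the 5-year risk figures are approximate values associated with the AREDS reports, included purely for illustration and not for clinical use.

```python
# Sketch of the AREDS simplified severity scale logic described above: one
# point per eye for large drusen and one point per eye for pigmentary
# abnormalities, summed across both eyes (0-4). The 5-year risk figures are
# approximate published values and are shown only for illustration; consult
# the original scale before any clinical use.
APPROX_FIVE_YEAR_RISK = {0: 0.005, 1: 0.03, 2: 0.12, 3: 0.25, 4: 0.50}

def simplified_severity_score(eyes):
    """eyes: list of two dicts with boolean 'large_drusen' and 'pigment_abnormality'."""
    score = 0
    for eye in eyes:
        score += int(eye["large_drusen"]) + int(eye["pigment_abnormality"])
    return score

patient = [
    {"large_drusen": True, "pigment_abnormality": False},   # right eye
    {"large_drusen": True, "pigment_abnormality": True},    # left eye
]
score = simplified_severity_score(patient)
print(score, APPROX_FIVE_YEAR_RISK[score])   # 3 -> roughly 25% 5-yr risk (approximate)
```
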

Several DL systems have been developed for classifying color fundus photographs according to AMD severity scales. These include binary classification into referable and non-referable AMD[22] and multiclass AMD classification systems (e.g., the 9-step AREDS severity scale and 4-class systems). Recent studies have shown the robust performance of automated AMD classification systems based on optical coherence tomography (OCT) scans[24].

DeepSeeNet is based on color fundus photography and uses three networks: Drusen-Net, Pigment-Net and Late AMD-Net (Figure 3). These three networks were designed as DCNNs, each with an Inception-v3 architecture, a state-of-the-art convolutional neural network (CNN) model for image classification. Similar to the study by De Fauw et al[25], DeepSeeNet includes two stages by design for improved performance and increased transparency. Images were obtained from the AREDS dataset, comprising approximately 60000 retinal images. DeepSeeNet operates by first detecting individual risk factors (drusen and pigmentary abnormalities) in each eye and then combining the values from both eyes to assign an AMD score for the patient. Therefore, DeepSeeNet closely matches the clinical decision-making process (Figure 3). The overall accuracy of fine-tuned DeepSeeNet (FT-DSN) was superior to that of human retinal specialists (67% vs 60%). Subgroup analysis showed that FT-DSN correctly classified participants with severity scale scores of 0-4 more often than the retinal specialists, whereas the retinal specialists correctly classified those with late AMD more often than FT-DSN. Lee et al[26] developed an AMD screening system to differentiate between normal and AMD OCT images. They trained their CML using 48312 normal and 52690 AMD images; it had a peak sensitivity and specificity of 92% and 93%, respectively. Treder et al[27] used 1112 OCT images to develop a CML that could differentiate a healthy macula from a macula showing exudative AMD, with a sensitivity of 100% and a specificity of 92%.

Figure 3
Figure 3 DeepSeeNet classifies age-related macular degeneration (AMD) into dry AMD and wet AMD based on fundus photographs.

Bogunovic et al[28] developed a data-driven interpretable predictive model to predict the progression risk in those with intermediate AMD. Drusen regression, an anatomic intermediate AMD endpoint, and advanced AMD onset can be predicted using this specifically designed, fully automated, ML-based classifier. Bogunovic et al[28] also fed OCT images of patients with low or high anti-vascular endothelial growth factor (VEGF) injection requirements into a random forest (RF) classifier to develop a predictive model; the treatment requirement prediction showed an area under the curve (AUC) of 70%-80%. Prahs et al[29] trained a DCNN on OCT images to facilitate decision-making regarding anti-VEGF injection, and the outcomes were better than those obtained using CML. These studies are an essential step toward image-guided prediction of treatment intervals in the management of neovascular AMD or PDR. In addition to screening, some studies have focused on grading AMD and predicting visual acuity from OCT images, which will aid clinicians in formulating a visual prognosis and support their decision-making. Aslam et al[30] and Schmidt-Erfurth et al[31] developed CMLs that could estimate visual acuity; Aslam et al[30] trained their CML on data from 847 OCT scans, whereas Schmidt-Erfurth et al[31] trained theirs on data from 2456 OCT scans (from 614 eyes).
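
The treatment-requirement prediction approach described above can be sketched, in a purely illustrative way, as a random forest trained on hypothetical OCT-derived features and evaluated with the ROC AUC; this is not the published model, and the data are synthetic.

```python
# Illustrative sketch (not the published model): a random forest trained on
# hypothetical OCT-derived features (e.g., fluid volumes, retinal thickness) to
# separate low vs high anti-VEGF injection requirement, evaluated with ROC AUC
# as in the 70%-80% range reported above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                      # synthetic OCT features
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=1.0, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC: {auc:.2f}")
```
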

AI systems can be trained to perform segmentation, classification and prediction using retinal OCT images. Several AI systems have demonstrated high accuracy for segmentation, which is essential for quantifying intraretinal fluid, subretinal fluid and pigment epithelial detachment. Compared with noncomputerized segmentation techniques, the DL algorithm developed by Lee et al[26] accurately differentiated fluid accumulation from other abnormal retinal findings. Further, De Fauw et al[25] confirmed the ability of DL to detect > 50 retinal conditions and the robustness of the AI system in triaging the urgency of referrals for patients with retinal diseases. Table 2 summarizes the AI algorithms used for AMD.

Table 2 Summary of artificial intelligence algorithms used in age-related macular degeneration.
Ref. | Sensitivity (%) | Specificity (%) | Diagnostic accuracy | Output
Grassman et al[62] | 84.20 | 94.30 | 63.3, kappa of 92% | Final probability value for referable vs not referable
Ting et al[61] | 93.20 | 88.70 | Area under the curve 0.932 | Identifying referable AMD and advanced AMD
Lee et al[26] | 84.60 | 91.50 | 87.60 | Prediction of binary segmentation map
Treder et al[27] | 100 | 92 | 96 | AMD testing score; a score of 0.98 or greater adequate for diagnosis of AMD
Glaucoma

Glaucoma, also known as the silent thief of sight, is the leading cause of preventable, irreversible blindness worldwide. The disease remains asymptomatic for a long time, and an estimated 50%-90% of individuals with glaucoma remain undiagnosed. Thus, glaucoma screening is recommended for early detection and treatment. The cup-to-disc ratio (CDR) can be calculated with AI models to assist early-stage glaucoma diagnosis[32]. After locating the coarse disc margin using a spatial correlation smoothness constraint, a support vector machine (SVM) model is trained to find patches on OCT images that identify a reference plane from which the CDR can be calculated[33].
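
A minimal sketch of the CDR computation is shown below, assuming the optic disc and cup have already been segmented (here as synthetic circular masks); the vertical CDR definition used is one common convention, not a specific device's method.

```python
# Sketch of a cup-to-disc ratio (CDR) calculation from segmentation masks.
# The masks here are synthetic circles; in practice they would come from an
# AI segmentation of the optic disc and cup on a fundus photograph or OCT scan.
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical CDR = vertical cup diameter / vertical disc diameter."""
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    return (cup_rows.max() - cup_rows.min() + 1) / (disc_rows.max() - disc_rows.min() + 1)

yy, xx = np.mgrid[:200, :200]
disc = (yy - 100) ** 2 + (xx - 100) ** 2 <= 80 ** 2    # synthetic disc
cup  = (yy - 100) ** 2 + (xx - 100) ** 2 <= 48 ** 2    # synthetic cup
print(f"vertical CDR = {vertical_cdr(cup, disc):.2f}")  # ~0.6 in this toy example
```
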

In 2013, Yousefi et al[16] published an AI study on the progression of primary open-angle glaucoma (POAG) in 180 patients using many different CMLs and independent features. They found that retinal nerve fiber layer features provided sufficient information for CMLs to differentiate stable POAG from progressing POAG at an early-to-moderate disease stage; the RF tree and lazy K star were the most sensitive CMLs. Chen et al[34] developed a CNN using two different datasets [ORIGA dataset: 650 images (99 for training and 551 for validation); SCES dataset: 1676 images (used entirely for validation because the images in the ORIGA set were used for training)] to detect POAG from optic disc images. They reported areas under the receiver operating characteristic curve of 0.831 and 0.887 for the ORIGA and SCES datasets, respectively. Kim et al[35] and Raghavendra et al[36] focused on distinguishing glaucomatous from normal fundus images and reported an accuracy of 87.9%, equivalent to that of human experts, demonstrating an efficient method for glaucoma screening. Raghavendra et al[36] tested their CML on 589 normal and 837 glaucoma images and obtained a score of 0.98 for sensitivity, specificity and accuracy.

DL performs better than CML in detecting pre-perimetric open-angle glaucoma[36]. Holistic and local features of the optic disc on fundus images have been used to mitigate the influence of optic disk misalignment for glaucoma diagnosis[37]. Li et al[38] demonstrated that DL could be used to identify referable glaucomatous optic neuropathy with high sensitivity and specificity. Table 3 is a summary of studies using AI to detect progression in eyes with glaucoma.

Table 3 Summary of studies using artificial intelligence to detect progression in glaucomatous eyes.
Ref. | No. of eyes | Instrument | Approach | Comments
Lin et al[63] | 80 | SAP | Supervised ML | Sensitivity 86%; specificity 88%
Goldbaum et al[64] | 478 suspects; 150 glaucoma; 55 stable glaucoma | SAP | Unsupervised ML | Specificity 98.4%, AROC not available; used a variational Bayesian independent component analysis mixture model to identify patterns of glaucomatous visual field defects and validate them
Wang et al[65] | 11817 (method-developing cohort) and 397 (clinical evaluation cohort) | SAP | Unsupervised ML | AROC of the archetype method 0.77
Yousefi et al[16] | 939 abnormal SAP and 1146 normal SAP in the cross-sectional database; 270 glaucoma in the longitudinal database | SAP | Unsupervised ML | Sensitivity 34.5%-63.4% at 87% specificity; ML analysis detected progression at 3.5 years, whereas other methods took over 3.5 years to detect progression in 25% of eyes
Belghith et al | 27 progressing; 26 stable glaucoma; 40 healthy controls | SD-OCT | Supervised ML | Sensitivity 78%; specificity 93% in normal eyes and 94% in non-progressing eyes
SAP: Standard automated perimetry; AROC: Area under the receiver operating characteristic curve; SD-OCT: Spectral-domain optical coherence tomography; ML: Machine learning.
Retinopathy of prematurity

Retinopathy of prematurity (ROP) is a leading cause of childhood blindness that is treatable, provided it is diagnosed in a timely manner[39]. The disease necessitates strict screening and follow-up, which are tedious and demanding, and repeated ROP screening and follow-up consume substantial manpower and energy. Therefore, the application of AI to ROP screening may improve the efficiency of care.

Wang et al[40] developed an automated ROP detection system called DeepROP using deep neural networks (DNNs). ROP detection was divided into ROP identification and grading tasks. Two specific DNN models–Id-Net and Gr-Net–were designed for the identification and grading tasks, respectively. Id-Net achieved a sensitivity of 96.62% and a specificity of 99.32% for ROP identification, whereas Gr-Net attained a sensitivity of 88.46% and a specificity of 92.31% for ROP grading. In another 552 cases, the developed DNNs outperformed some human experts[41].
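
The two-task design of DeepROP can be mimicked with a simple pipeline sketch in which one placeholder model identifies ROP and a second grades positive cases; the functions and fields below are hypothetical stand-ins, not the published networks.

```python
# Sketch of a two-stage screening pipeline in the spirit of DeepROP: one model
# decides whether ROP is present, and a second model grades positive cases.
# Both models are placeholders; any function with the same interface would do.
def identify_rop(image) -> bool:
    """Stage 1 (Id-Net analogue): return True if ROP is suspected."""
    return image.get("suspicious_vessels", False)      # stand-in logic

def grade_rop(image) -> str:
    """Stage 2 (Gr-Net analogue): return a coarse severity grade."""
    return "severe" if image.get("plus_disease", False) else "mild"

def screen(image):
    if not identify_rop(image):
        return "no ROP detected"
    return f"ROP detected, grade: {grade_rop(image)}"

print(screen({"suspicious_vessels": True, "plus_disease": True}))
print(screen({"suspicious_vessels": False}))
```
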

A similar AI developed by Tan et al[41] achieved comparable accuracy for detecting plus disease in ROP. They reported that this AI could distinguish plus disease with 95% accuracy, comparable to the diagnoses of experts and much more precise than those of non-experts. Various studies have reported promising results, most of which were based on two-level sorting (plus disease or no plus disease).

Keratoconus

There are significant obstacles in distinguishing patients with very early signs of keratoconus from the normal population, partly because of the limited availability of samples owing to the low prevalence of the disease. For this purpose, AI has been applied to corneal topography interpretation. The methods use discriminative classifiers that, given a set of independent machine-derived variables from corneal topography (e.g., simulated keratometry readings and topographic asymmetries), can be trained to differentiate between two or more classes of topography (e.g., normal, astigmatic and keratoconus).

AI has been used to detect keratoconus and forme fruste keratoconus[42] based on data from Placido topography, Scheimpflug tomography[43], anterior segment spectral domain OCT and biomechanical metrics (CorvisST and corneal hysteresis). Further, data from Pentacam[44], Sirius[45], Orbscan II, Galilei and TMS-1 have been studied using ML algorithms to detect early keratoconus.

The Pentacam RF index (PRFI) is an RF model built using data from the Pentacam HR (Oculus, Wetzlar, Germany). It was the only model trained using preoperative examination data from patients who later developed ectasia. The index already available on the device (BAD-D) had a sensitivity of 55.3%, whereas the PRFI correctly identified 80% of cases. In the external validation set, the model showed an accuracy of 85% for detecting the topographically normal eye of very asymmetric ectasia cases (VAE-NT), reaching a specificity of 96.6%[46].

A single decision tree method was proposed based on data obtained from the Galilei Dual Scheimpflug Analyzer (Ziemer Ophthalmic Systems AG, Port, Switzerland). This index showed a sensitivity of 90% and a specificity of 86% for detecting early forms of the disease[47]. Linear discriminant models have also been used successfully to analyze data obtained from Orbscan II (Technolas, Munich, Germany), with a sensitivity of 92% and a specificity of 96% in the first validation set and a sensitivity of 70.8% and a specificity of 98.1% in a population with a different ethnic background[48].

Ambrósio et al[48] evaluated the AI-based tomographic and biomechanical index (TBI), which combines Scheimpflug-based corneal tomography and biomechanics (Corvis ST) to improve ectasia detection. The KeratoDetect algorithm analyzes corneal topography using a CNN that can extract and learn the features of a keratoconic eye; it showed high performance, with an accuracy of 99.33% on the test dataset. Neural networks have also been used to evaluate the waveform signals of the Ocular Response Analyzer (Reichert Ophthalmic Instruments, Buffalo, United States), yielding high accuracy in a validation sample comprising early keratoconus forms (AUC, 0.978). The RF model underlying the TBI achieved a sensitivity of 90.3% and a specificity of 96% for detecting VAE-NT. The combination of tomographic and biomechanical parameters was superior to either method used alone.
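
The central idea that combining tomographic and biomechanical parameters outperforms either source alone can be illustrated with a hedged sketch on synthetic data; the feature names are hypothetical and the numbers carry no clinical meaning.

```python
# Illustrative sketch of combining tomographic and biomechanical indices into a
# single classifier, echoing the TBI idea that the combination outperforms
# either source alone. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 400
tomographic = rng.normal(size=(n, 3))      # e.g., elevation and pachymetry indices
biomechanical = rng.normal(size=(n, 2))    # e.g., Corvis-derived parameters
y = ((tomographic[:, 0] + biomechanical[:, 0]) / 2 +
     rng.normal(scale=0.8, size=n) > 0.5).astype(int)   # 1 = ectasia-prone (synthetic)

for name, X in {"tomography only": tomographic,
                "combined": np.hstack([tomographic, biomechanical])}.items():
    probs = cross_val_predict(RandomForestClassifier(random_state=2), X, y,
                              cv=5, method="predict_proba")[:, 1]
    print(name, f"AUC={roc_auc_score(y, probs):.2f}")
```
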

Sharif et al[49] showed that corneal images obtained via confocal microscopy can be assessed in detail using a committee machine built from artificial neural networks and adaptive neuro-fuzzy inference systems, which can detect abnormalities with high accuracy and enable three-dimensional visualization. Nevertheless, because research on these aspects is limited, the characteristics learned during AI training may not generalize to another clinical population. When using tomographic rather than Placido topographic data, researchers have found that combining biomechanical or additional imaging data is necessary to enhance performance in detecting early signs of keratoconus.

Corneal dystrophies and dysplasia

Eleiwa et al[50] used AI to differentiate early-stage Fuchs endothelial corneal dystrophy (FECD, without corneal edema) from late-stage FECD (with corneal edema) based on high-definition OCT images. Their model had a sensitivity of 99% and a specificity of 98% in differentiating the normal cornea from FECD (early or late).

Gu et al[51] reported an AUC of 0.939 for detecting corneal dystrophy or degeneration using a slit-lamp photograph-based DL model. They included ocular surface disorders such as limbal dermoid, papilloma, pterygium, conjunctival dermolipoma, conjunctival nevus and conjunctival melanocytic tumors to differentiate ocular surface neoplasms. However, given the limited existing evidence, the use of AI for detecting ocular surface neoplasms warrants further exploration. Kessel et al[52] trained DL algorithms to detect and quantify amyloid deposition in corneal sections from patients with familial amyloidosis undergoing full-thickness keratoplasty.

Dry eye

Dry eye disease is a common condition, affecting approximately 8% of the global population, and is caused by a reduced quantity or quality of tears. Left untreated, dry eye can result in pain, ulcers and even corneal scarring. Rapid diagnosis is therefore essential and is clinically based on measurement of tear production and evaluation of tear film stability.

In a recent study, researchers used infrared thermal images of the eye along with the ML methodologies Gabor transform and discrete wavelet transform (DWT) to detect dry eye[53]. These methodologies were used to extract features from specific image frames, which were further segmented into eye regions, and the data were analyzed accordingly. Principal component analysis features were ranked using a t-value and fed into an SVM classifier. Using the 1st, 5th and 10th frames after the first blink, they achieved classification accuracies of: (1) 82.3%, 89.2% and 88.2% for the left eye; and (2) 93.4%, 81.5% and 84.4% for the right eye, respectively. Similarly, using the 1st, 5th and 10th frames of the lower half of the ocular region, they achieved accuracies of: (1) 95.0%, 95.0% and 89.2% for the left eye; and (2) 91.2%, 97.0% and 92.2% for the right eye, respectively. This study showed that the lower half of the ocular region is superior to the upper half for this purpose.

This method offers several advantages: it is semiautomatic, which makes it less susceptible to interobserver variability; it is more accurate than standard clinical tools; it is more convenient for the patient; and it does not require a special dye. The Gabor transform and DWT are methodologies for automatic feature extraction from biomedical images.
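
A much-simplified sketch of such a pipeline is given below (it assumes NumPy, PyWavelets and scikit-learn are available): wavelet sub-band energies are extracted from synthetic "thermal" frames and classified with an SVM. The Gabor features and t-value ranking of the published study are omitted, so this is only a rough analogue.

```python
# Simplified sketch of the pipeline above: wavelet features are extracted from
# ocular thermal-image frames and fed to an SVM. Real thermography frames,
# Gabor features, and t-value ranking are omitted; images here are synthetic.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_features(frame):
    """Single-level 2-D discrete wavelet transform; use sub-band energies as features."""
    cA, (cH, cV, cD) = pywt.dwt2(frame, "haar")
    return [np.mean(np.abs(b)) for b in (cA, cH, cV, cD)]

rng = np.random.default_rng(3)
frames = rng.normal(size=(60, 64, 64))        # synthetic "thermal" frames
labels = np.array([0] * 30 + [1] * 30)        # 0 = normal, 1 = dry eye (synthetic)
frames[labels == 1] += 0.5                    # inject a crude group difference

X = np.array([dwt_features(f) for f in frames])
print("cross-validated accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())
```
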

Cataract

Cataract refers to clouding of the crystalline lens and is the leading cause of blindness worldwide. Automated detection and diagnosis of this disease would therefore be cost-effective.

Srivastava et al[54] proposed a system that automatically grades the severity of nuclear cataracts from slit-lamp images. First, the lens region of interest is identified, after which a CNN filters randomly selected image patches, generating local representations via an iterative process with random weights. They named the system ACASIA-NC_v0.10 (i.e., Automatic Cataract Screening from Image Analysis-Nuclear Cataract, version 0.10) and specifically used a “visibility cue” for nuclear cataract grading. The system uses visible features of the nucleus, such as sutures and demarcation lines, in greyscale, and the software analyzes the number of visible features. ACASIA-NC_v0.10 achieved > 70% similarity to clinical grading and reduced the grading error by > 8.5%. Other studies, such as that by Liu et al[55], mainly focused on identifying pediatric cataracts and reported exceptional accuracy and sensitivity for lens classification and density. In addition, cataract grading can also be performed automatically based on lens OCT findings.
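
The patch-based grading idea can be sketched as follows; this is not the published ACASIA-NC system, and the synthetic images and simple intensity features are stand-ins for the learned local representations described above.

```python
# Illustrative sketch (not the published ACASIA-NC system): random patches are
# sampled from a lens region of interest, simple intensity features are pooled,
# and a regressor maps them to a continuous nuclear-opacity grade.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)

def patch_features(lens_roi, n_patches=20, size=16):
    h, w = lens_roi.shape
    feats = []
    for _ in range(n_patches):
        y, x = rng.integers(0, h - size), rng.integers(0, w - size)
        patch = lens_roi[y:y + size, x:x + size]
        feats.append([patch.mean(), patch.std()])
    return np.array(feats).mean(axis=0)       # pooled local representation

# Synthetic slit-lamp lens regions whose brightness loosely tracks the grade.
grades = rng.uniform(1.0, 5.0, size=100)
images = [rng.normal(loc=g * 10, scale=5, size=(64, 64)) for g in grades]
X = np.array([patch_features(img) for img in images])

model = Ridge().fit(X, grades)
print("predicted grade for first lens:", round(model.predict(X[:1])[0], 1))
```
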

SMARTPHONE-BASED APPS USING AI IN OPHTHALMOLOGY

Smartphones offer many advantages, including built-in internal data storage and cloud storage capabilities[56]. Pegasus (Visulytix), an inexpensive smartphone clip-on optic nerve scanner that is expected to aid the diagnosis and treatment of chronic blinding diseases such as glaucoma using AI, is being adapted for this purpose. Pegasus detected glaucomatous optic neuropathy with an accuracy of 83.4%, comparable to the average accuracies of ophthalmologists (80.5%) and optometrists (80%) grading the same images.

CC-Cruiser was developed to study the application of AI in congenital cataracts (CCs). CCs cause irreversible vision loss, and breakthroughs in CC research have substantially contributed to the field of medicine. Researchers have developed a three-part AI system that includes identification networks for CC screening in populations, evaluation networks for risk stratification of patients with CC and strategist networks to assist ophthalmologists in making treatment decisions.

Shaw created the ComputeR Assisted Detector of LEukocoria (CRADLE) app, which uses AI to identify white eye reflexes indicative of several serious eye diseases. The sensitivity of CRADLE for detecting white eye in children aged ≤ 2 years surpassed 80%, substantially higher than the sensitivity of physical examination (8%). This smartphone app takes advantage of parents’ fondness for photographing their children to identify signs of a severe eye disease the child might be developing. On average, the app detected white eye in pictures collected 1.3 years before diagnosis.

FUTURE OF ARTIFICIAL INTELLIGENCE APPLICATION

At present, AI-based platforms provide intelligent diagnosis of eye diseases but focus mostly on binary classification problems, whereas patients in clinical settings present with multiple categories of retinal disorders. Multimodal clinical images, such as OCT angiography, visual field and fundus images, should be integrated to build a generalized AI system for more reliable diagnosis. The challenge is coordinating multicenter collaborations to build high-quality, extensive data collections with which to train and improve AI models. AI is an instrument for augmenting clinical decision-making, with many possible applications for ophthalmologists.

LIMITATIONS OF ARTIFICIAL INTELLIGENCE

No software design is perfect, and artificial intelligence is not bias-proof either. Five distinct types of machine learning bias need to be recognized and guarded against: (1) Sample bias, arising from poor data collection for training, for example, labeling other vascular retinopathies as DR; (2) Prejudice bias, which results from training data influenced by stereotypes, for example, assuming that a large cup always indicates glaucoma; (3) Measurement bias, for example, differences in fundus photograph color because different cameras give different color measurements; this is best avoided by using multiple or similar measuring devices and having trained humans compare the output of these devices when developing the algorithm; (4) Algorithm bias, i.e. choosing the wrong software algorithm for a specific disease; and (5) Bias from inadequate quality control of the images used for prediction.
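
One practical way to surface measurement bias of the kind described in point (3) is to evaluate the same model separately per camera type, as in the following sketch with hypothetical records; a large accuracy gap between cameras would warrant investigation.

```python
# Sketch of one practical bias check implied above: evaluate the same model
# separately for images from different camera types, so a measurement bias
# (systematic color/exposure differences) shows up as a performance gap.
from collections import defaultdict

# Hypothetical per-image records: (camera, ground_truth, model_prediction)
records = [
    ("camera_A", 1, 1), ("camera_A", 0, 0), ("camera_A", 1, 1), ("camera_A", 0, 0),
    ("camera_B", 1, 0), ("camera_B", 0, 0), ("camera_B", 1, 0), ("camera_B", 0, 1),
]

per_camera = defaultdict(lambda: [0, 0])              # [correct, total]
for camera, truth, pred in records:
    per_camera[camera][0] += int(truth == pred)
    per_camera[camera][1] += 1

for camera, (correct, total) in per_camera.items():
    print(f"{camera}: accuracy {correct / total:.2f}")
```
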

CONCLUSION

With the substantial advances in AI in the field of ophthalmology, it can be said that this is the dawn of AI in ophthalmology. With the advent of technologies based on different AI modules, such as DL, ML and GANs, AI has a promising role in the diagnosis of DR, AMD, dry eye, glaucoma, keratoconus and cataracts. These AI-based applications are particularly relevant during the present coronavirus disease 2019 era and for serving the remotest areas worldwide. Compared with conventional tests performed at tertiary ophthalmic centers, AI performs better in the screening and diagnosis of various eye diseases. After considering all the facts and overcoming the challenges in its application, it can be said that AI in ophthalmology is here to stay and will revolutionize eye care in the 21st century. Nonetheless, researchers in the field of ophthalmology need to develop more robust AI modules with better verification and validation. Further, we must not rely only on near-real AI, as no modality can possibly replace the level of affection, care and sensitivity provided by human caregivers.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Ophthalmology

Country/Territory of origin: India

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): B

Grade C (Good): C, C

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Hanada E, Japan; Jheng YC, Taiwan; S-Editor: Wu YXJ; L-Editor: Filipodia; P-Editor: Wu YXJ

References
1.  Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 2000;44:207-219.  [PubMed]  [DOI]  [Cited in This Article: ]
2.  Deng L, Yu D. Deep Learning: Methods and Applications. Found Trends Signal Process. 2014;7:197-387.  [PubMed]  [DOI]  [Cited in This Article: ]
3.  Fong DS, Aiello LP, Ferris FL 3rd, Klein R. Diabetic retinopathy. Diabetes Care. 2004;27:2540-2553.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 467]  [Cited by in F6Publishing: 449]  [Article Influence: 22.5]  [Reference Citation Analysis (0)]
4.  Namperumalsamy P, Nirmalan PK, Ramasamy K. Developing a screening program to detect sight-threatening diabetic retinopathy in South India. Diabetes Care. 2003;26:1831-1835.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 48]  [Cited by in F6Publishing: 48]  [Article Influence: 2.3]  [Reference Citation Analysis (0)]
5.  Ryan ME, Rajalakshmi R, Prathiba V, Anjana RM, Ranjani H, Narayan KM, Olsen TW, Mohan V, Ward LA, Lynn MJ, Hendrick AM. Comparison Among Methods of Retinopathy Assessment (CAMRA) Study: Smartphone, Nonmydriatic, and Mydriatic Photography. Ophthalmology. 2015;122:2038-2043.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 78]  [Cited by in F6Publishing: 74]  [Article Influence: 8.2]  [Reference Citation Analysis (0)]
6.  Oh E, Yoo TK, Park EC. Diabetic retinopathy risk prediction for fundus examination using sparse learning: a cross-sectional study. BMC Med Inform Decis Mak. 2013;13:106.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 33]  [Cited by in F6Publishing: 44]  [Article Influence: 4.0]  [Reference Citation Analysis (0)]
7.  Congdon NG, Friedman DS, Lietman T. Important causes of visual impairment in the world today. JAMA. 2003;290:2057-2060.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 479]  [Cited by in F6Publishing: 474]  [Article Influence: 22.6]  [Reference Citation Analysis (0)]
8.  Shah P, Mishra DK, Shanmugam MP, Doshi B, Jayaraj H, Ramanjulu R. Validation of Deep Convolutional Neural Network-based algorithm for detection of diabetic retinopathy - Artificial intelligence versus clinician for screening. Indian J Ophthalmol. 2020;68:398-405.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 23]  [Cited by in F6Publishing: 21]  [Article Influence: 5.3]  [Reference Citation Analysis (0)]
9.  Valverde C, Garcia M, Hornero R, Lopez-Galvez MI. Automated detection of diabetic retinopathy in retinal images. Indian J Ophthalmol. 2016;64:26-32.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 26]  [Cited by in F6Publishing: 21]  [Article Influence: 2.6]  [Reference Citation Analysis (0)]
10.  Sosale B, Sosale AR, Murthy H, Sengupta S, Naveenam M. Medios- An offline, smartphone-based artificial intelligence algorithm for the diagnosis of diabetic retinopathy. Indian J Ophthalmol. 2020;68:391-395.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 16]  [Cited by in F6Publishing: 32]  [Article Influence: 8.0]  [Reference Citation Analysis (0)]
11.  Hassan T, Akram MU, Hassan B, Syed AM, Bazaz SA. Automated segmentation of subretinal layers for the detection of macular edema. Appl Opt. 2016;55:454-461.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 42]  [Cited by in F6Publishing: 15]  [Article Influence: 1.9]  [Reference Citation Analysis (0)]
12.  Akram MU, Tariq A, Khan SA, Javed MY. Automated detection of exudates and macula for grading of diabetic macular edema. Comput Methods Programs Biomed. 2014;114:141-152.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 79]  [Cited by in F6Publishing: 38]  [Article Influence: 3.8]  [Reference Citation Analysis (0)]
13.  Niemeijer M, van Ginneken B, Russell SR, Suttorp-Schulten MS, Abràmoff MD. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis. Invest Ophthalmol Vis Sci. 2007;48:2260-2267.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 264]  [Cited by in F6Publishing: 153]  [Article Influence: 9.0]  [Reference Citation Analysis (0)]
14.  Wang S, Tang HL, Al Turk LI, Hu Y, Sanei S, Saleh GM, Peto T. Localizing Microaneurysms in Fundus Images Through Singular Spectrum Analysis. IEEE Trans Biomed Eng. 2017;64:990-1002.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 54]  [Cited by in F6Publishing: 29]  [Article Influence: 3.6]  [Reference Citation Analysis (0)]
15.  Niemeijer M, van Ginneken B, Staal J, Suttorp-Schulten MS, Abràmoff MD. Automatic detection of red lesions in digital color fundus photographs. IEEE Trans Med Imaging. 2005;24:584-592.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 344]  [Cited by in F6Publishing: 188]  [Article Influence: 9.9]  [Reference Citation Analysis (0)]
16.  Yousefi S, Goldbaum MH, Balasubramanian M, Jung TP, Weinreb RN, Medeiros FA, Zangwill LM, Liebmann JM, Girkin CA, Bowd C. Glaucoma progression detection using structural retinal nerve fiber layer measurements and functional visual field points. IEEE Trans Biomed Eng. 2014;61:1143-1154.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 63]  [Cited by in F6Publishing: 58]  [Article Influence: 5.8]  [Reference Citation Analysis (0)]
17.  Tavakkoli A, Kamran SA, Hossain KF, Zuckerbrod SL. A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci Rep. 2020;10:21580.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 53]  [Cited by in F6Publishing: 31]  [Article Influence: 7.8]  [Reference Citation Analysis (0)]
18.  Morya AK, Gowdar J, Kaushal A, Makwana N, Biswas S, Raj P, Singh S, Hegde S, Vaishnav R, Shetty S, S P V, Shah V, Paul S, Muralidhar S, Velis G, Padua W, Waghule T, Nazm N, Jeganathan S, Reddy Mallidi A, Susan John D, Sen S, Choudhary S, Parashar N, Sharma B, Raghav P, Udawat R, Ram S, Salodia UP. Evaluating the Viability of a Smartphone-Based Annotation Tool for Faster and Accurate Image Labelling for Artificial Intelligence in Diabetic Retinopathy. Clin Ophthalmol. 2021;15:1023-1039.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2]  [Cited by in F6Publishing: 3]  [Article Influence: 1.0]  [Reference Citation Analysis (0)]
19.  Padhy SK, Takkar B, Chawla R, Kumar A. Artificial intelligence in diabetic retinopathy: A natural step to the future. Indian J Ophthalmol. 2019;67:1004-1009.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 32]  [Cited by in F6Publishing: 38]  [Article Influence: 7.6]  [Reference Citation Analysis (0)]
20.  Wong WL, Su X, Li X, Cheung CM, Klein R, Cheng CY, Wong TY. Global prevalence of age-related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta-analysis. Lancet Glob Health. 2014;2:e106-e116.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2195]  [Cited by in F6Publishing: 3003]  [Article Influence: 300.3]  [Reference Citation Analysis (0)]
21.  Fritsche LG, Fariss RN, Stambolian D, Abecasis GR, Curcio CA, Swaroop A. Age-related macular degeneration: genetics and biology coming together. Annu Rev Genomics Hum Genet. 2014;15:151-171.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 288]  [Cited by in F6Publishing: 354]  [Article Influence: 35.4]  [Reference Citation Analysis (0)]
22.  Kermany DS, Goldbaum M, Cai W, Valentim CCS, Liang H, Baxter SL, McKeown A, Yang G, Wu X, Yan F, Dong J, Prasadha MK, Pei J, Ting MYL, Zhu J, Li C, Hewett S, Ziyar I, Shi A, Zhang R, Zheng L, Hou R, Shi W, Fu X, Duan Y, Huu VAN, Wen C, Zhang ED, Zhang CL, Li O, Wang X, Singer MA, Sun X, Xu J, Tafreshi A, Lewis MA, Xia H, Zhang K. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell. 2018;172:1122-1131.e9.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2132]  [Cited by in F6Publishing: 1546]  [Article Influence: 309.2]  [Reference Citation Analysis (0)]
23.  Matsuba S, Tabuchi H, Ohsugi H, Enno H, Ishitobi N, Masumoto H, Kiuchi Y. Accuracy of ultra-wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age-related macular degeneration. Int Ophthalmol. 2019;39:1269-1275.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 39]  [Cited by in F6Publishing: 33]  [Article Influence: 5.5]  [Reference Citation Analysis (0)]
24.  Srinivasan PP, Kim LA, Mettu PS, Cousins SW, Comer GM, Izatt JA, Farsiu S. Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images. Biomed Opt Express. 2014;5:3568-3577.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 295]  [Cited by in F6Publishing: 179]  [Article Influence: 17.9]  [Reference Citation Analysis (0)]
25.  De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, Askham H, Glorot X, O'Donoghue B, Visentin D, van den Driessche G, Lakshminarayanan B, Meyer C, Mackinder F, Bouton S, Ayoub K, Chopra R, King D, Karthikesalingam A, Hughes CO, Raine R, Hughes J, Sim DA, Egan C, Tufail A, Montgomery H, Hassabis D, Rees G, Back T, Khaw PT, Suleyman M, Cornebise J, Keane PA, Ronneberger O. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24:1342-1350.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1420]  [Cited by in F6Publishing: 1103]  [Article Influence: 183.8]  [Reference Citation Analysis (0)]
26.  Lee CS, Baughman DM, Lee AY. Deep learning is effective for the classification of OCT images of normal versus Age-related Macular Degeneration. Ophthalmol Retina. 2017;1:322-327.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 299]  [Cited by in F6Publishing: 302]  [Article Influence: 43.1]  [Reference Citation Analysis (0)]
27.  Treder M, Lauermann JL, Eter N. Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning. Graefes Arch Clin Exp Ophthalmol. 2018;256:259-265.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 125]  [Cited by in F6Publishing: 124]  [Article Influence: 17.7]  [Reference Citation Analysis (0)]
28.  Bogunovic H, Waldstein SM, Schlegl T, Langs G, Sadeghipour A, Liu X, Gerendas BS, Osborne A, Schmidt-Erfurth U. Prediction of Anti-VEGF Treatment Requirements in Neovascular AMD Using a Machine Learning Approach. Invest Ophthalmol Vis Sci. 2017;58:3240-3248.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 89]  [Cited by in F6Publishing: 101]  [Article Influence: 14.4]  [Reference Citation Analysis (0)]
29.  Prahs P, Radeck V, Mayer C, Cvetkov Y, Cvetkova N, Helbig H, Märker D. OCT-based deep learning algorithm for the evaluation of treatment indication with anti-vascular endothelial growth factor medications. Graefes Arch Clin Exp Ophthalmol. 2018;256:91-98.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 63]  [Cited by in F6Publishing: 68]  [Article Influence: 9.7]  [Reference Citation Analysis (0)]
30.  Aslam TM, Zaki HR, Mahmood S, Ali ZC, Ahmad NA, Thorell MR, Balaskas K. Use of a Neural Net to Model the Impact of Optical Coherence Tomography Abnormalities on Vision in Age-related Macular Degeneration. Am J Ophthalmol. 2018;185:94-100.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 23]  [Cited by in F6Publishing: 26]  [Article Influence: 4.3]  [Reference Citation Analysis (0)]
31.  Schmidt-Erfurth U, Bogunovic H, Sadeghipour A, Schlegl T, Langs G, Gerendas BS, Osborne A, Waldstein SM. Machine Learning to Analyze the Prognostic Value of Current Imaging Biomarkers in Neovascular Age-Related Macular Degeneration. Ophthalmol Retina. 2018;2:24-30.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 95]  [Cited by in F6Publishing: 98]  [Article Influence: 16.3]  [Reference Citation Analysis (0)]
32.  Raja C, Gangatharan N. A Hybrid Swarm Algorithm for optimizing glaucoma diagnosis. Comput Biol Med. 2015;63:196-207.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 24]  [Cited by in F6Publishing: 25]  [Article Influence: 2.8]  [Reference Citation Analysis (0)]
33.  Singh A, Dutta MK, ParthaSarathi M, Uher V, Burget R. Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image. Comput Methods Programs Biomed. 2016;124:108-120.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 144]  [Cited by in F6Publishing: 71]  [Article Influence: 8.9]  [Reference Citation Analysis (0)]
34.  Chen XY, Xu YW, Wong DWK, Wong TY, Liu J. Glaucoma detection based on deep convolutional neural network. Annu Int Conf IEEE Eng Med Biol Soc. 2015;2015:715-718.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 159]  [Cited by in F6Publishing: 74]  [Article Influence: 8.2]  [Reference Citation Analysis (0)]
35.  Kim M, Zuallaert J, de Neve W.   Few-shot learning using a small-sized dataset of high-resolution FUNDUS images for glaucoma diagnosis. MMHealth 2017 - Proceedings of the 2nd International Workshop on Multimedia for Personal Health and Health Care, co-located with MM 2017: 89-92.  [PubMed]  [DOI]  [Cited in This Article: ]
36.  Raghavendra U, Fujita H, Bhandary SV, Gudigar A, Tan JH, Acharya UR. Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images. Information Sciences. 2018;441:41-49.
37.  Li AN, Cheng J, Wong DWK, Liu J. Integrating holistic and local deep features for glaucoma classification. Annu Int Conf IEEE Eng Med Biol Soc. 2016;2016:1328-1331.
38.  Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology. 2018;125:1199-1206.
39.  Blencowe H, Vos T, Lee AC, Philips R, Lozano R, Alvarado MR, Cousens S, Lawn JE. Estimates of neonatal morbidities and disabilities at regional and global levels for 2010: introduction, methods overview, and relevant findings from the Global Burden of Disease study. Pediatr Res. 2013;74 Suppl 1:4-16.
40.  Wang J, Ju R, Chen Y, Zhang L, Hu J, Wu Y, Dong W, Zhong J, Yi Z. Automated retinopathy of prematurity screening using deep neural networks. EBioMedicine. 2018;35:361-368.
41.  Tan Z, Simkin S, Lai C, Dai S. Deep Learning Algorithm for Automated Diagnosis of Retinopathy of Prematurity Plus Disease. Transl Vis Sci Technol. 2019;8:23.
42.  Saad A, Gatinel D. Topographic and tomographic properties of forme fruste keratoconus corneas. Invest Ophthalmol Vis Sci. 2010;51:5546-5555.
43.  Kovács I, Miháltz K, Kránitz K, Juhász É, Takács Á, Dienes L, Gergely R, Nagy ZZ. Accuracy of machine learning classifiers using bilateral data from a Scheimpflug camera for identifying eyes with preclinical signs of keratoconus. J Cataract Refract Surg. 2016;42:275-283.
44.  Hwang ES, Perez-Straziota CE, Kim SW, Santhiago MR, Randleman JB. Distinguishing Highly Asymmetric Keratoconus Eyes Using Combined Scheimpflug and Spectral-Domain OCT Analysis. Ophthalmology. 2018;125:1862-1871.
45.  Vinciguerra R, Ambrósio R Jr, Roberts CJ, Azzolini C, Vinciguerra P. Biomechanical Characterization of Subclinical Keratoconus Without Topographic or Tomographic Abnormalities. J Refract Surg. 2017;33:399-407.
46.  Lopes BT, Ramos IC, Salomão MQ, Guerra FP, Schallhorn SC, Schallhorn JM, Vinciguerra R, Vinciguerra P, Price FW Jr, Price MO, Reinstein DZ, Archer TJ, Belin MW, Machado AP, Ambrósio R Jr. Enhanced Tomographic Assessment to Detect Corneal Ectasia Based on Artificial Intelligence. Am J Ophthalmol. 2018;195:223-232.
47.  Chan C, Ang M, Saad A, Chua D, Mejia M, Lim L, Gatinel D. Validation of an Objective Scoring System for Forme Fruste Keratoconus Detection and Post-LASIK Ectasia Risk Assessment in Asian Eyes. Cornea. 2015;34:996-1004.
48.  Ambrósio R Jr, Lopes BT, Faria-Correia F, Salomão MQ, Bühren J, Roberts CJ, Elsheikh A, Vinciguerra R, Vinciguerra P. Integration of Scheimpflug-Based Corneal Tomography and Biomechanical Assessments for Enhancing Ectasia Detection. J Refract Surg. 2017;33:434-443.
49.  Sharif MS, Qahwaji R, Ipson S, Brahma A. Medical image classification based on artificial intelligence approaches: A practical study on normal and abnormal confocal corneal images. Appl Soft Comput. 2015;36:269-282.
50.  Eleiwa T, Elsawy A, Özcan E, Abou Shousha M. Automated diagnosis and staging of Fuchs' endothelial cell corneal dystrophy using deep learning. Eye Vis (Lond). 2020;7:44.
51.  Gu H, Guo Y, Gu L, Wei A, Xie S, Ye Z, Xu J, Zhou X, Lu Y, Liu X, Hong J. Deep learning for identifying corneal diseases from ocular surface slit-lamp photographs. Sci Rep. 2020;10:17851.
52.  Kessel K, Mattila J, Linder N, Kivelä T, Lundin J. Deep Learning Algorithms for Corneal Amyloid Deposition Quantitation in Familial Amyloidosis. Ocul Oncol Pathol. 2020;6:58-65.
53.  Yedidya T, Hartley R, Guillon JP, Kanagasingam Y. Automatic dry eye detection. Med Image Comput Comput Assist Interv. 2007;10:792-799.
54.  Srivastava R, Gao X, Yin F, Wong DW, Liu J, Cheung CY, Wong TY. Automatic nuclear cataract grading using image gradients. J Med Imaging (Bellingham). 2014;1:014502.
55.  Liu X, Jiang J, Zhang K, Long E, Cui J, Zhu M, An Y, Zhang J, Liu Z, Lin Z, Li X, Chen J, Cao Q, Li J, Wu X, Wang D, Lin H. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network. PLoS One. 2017;12:e0168606.
56.  Akkara J, Kuriakose A. Review of recent innovations in ophthalmology. Kerala J Ophthalmol. 2018;30:54.
57.  Imani E, Pourreza HR, Banaee T. Fully automated diabetic retinopathy screening using morphological component analysis. Comput Med Imaging Graph. 2015;43:78-88.
58.  Yazid H, Arof H, Isa HM. Automated identification of exudates and optic disc based on inverse surface thresholding. J Med Syst. 2012;36:1997-2004.
59.  Akyol K, Şen B, Bayır Ş. Automatic Detection of Optic Disc in Retinal Image by Using Keypoint Detection, Texture Analysis, and Visual Dictionary Techniques. Comput Math Methods Med. 2016;2016:6814791.
60.  Rajalakshmi R, Subashini R, Anjana RM, Mohan V. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye (Lond). 2018;32:1138-1144.
61.  Ting DSW, Cheung CY, Lim G, Tan GSW, Quang ND, Gan A, Hamzah H, Garcia-Franco R, San Yeo IY, Lee SY, Wong EYM, Sabanayagam C, Baskaran M, Ibrahim F, Tan NC, Finkelstein EA, Lamoureux EL, Wong IY, Bressler NM, Sivaprasad S, Varma R, Jonas JB, He MG, Cheng CY, Cheung GCM, Aung T, Hsu W, Lee ML, Wong TY. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes. JAMA. 2017;318:2211-2223.
62.  Grassmann F, Mengelkamp J, Brandl C, Harsch S, Zimmermann ME, Linkohr B, Peters A, Heid IM, Palm C, Weber BHF. A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography. Ophthalmology. 2018;125:1410-1420.
63.  Lin A, Hoffman D, Gaasterland DE, Caprioli J. Neural networks to identify glaucomatous visual field progression. Am J Ophthalmol. 2003;135:49-54.
64.  Goldbaum MH, Sample PA, Zhang Z, Chan K, Hao J, Lee TW, Boden C, Bowd C, Bourne R, Zangwill L, Sejnowski T, Spinak D, Weinreb RN. Using unsupervised learning with independent component analysis to identify patterns of glaucomatous visual field defects. Invest Ophthalmol Vis Sci. 2005;46:3676-3683.
65.  Wang M, Shen LQ, Pasquale LR, Petrakos P, Formica S, Boland MV, Wellik SR, De Moraes CG, Myers JS, Saeedi O, Wang H, Baniasadi N, Li D, Tichelaar J, Bex PJ, Elze T. An Artificial Intelligence Approach to Detect Visual Field Progression in Glaucoma Based on Spatial Pattern Analysis. Invest Ophthalmol Vis Sci. 2019;60:365-375.