Editorial Open Access
Copyright ©The Author(s) 2022. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Psychiatry. Feb 19, 2022; 12(2): 204-211
Published online Feb 19, 2022. doi: 10.5498/wjp.v12.i2.204
Screening dementia and predicting high dementia risk groups using machine learning
Haewon Byeon, Department of Medical Big Data, Inje University, Gimhae 50834, South Korea
ORCID number: Haewon Byeon (0000-0002-3363-390X).
Author contributions: Byeon H designed the study, was involved in data interpretation, performed the statistical analysis, and assisted with writing the article.
Supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Nos. 2018R1D1A1B07041091 and 2021S1A5A8062526.
Conflict-of-interest statement: No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Haewon Byeon, DSc, PhD, Associate Professor, Director, Department of Medical Big Data, Inje University, 197 Inje-ro, Gimhae 50834, South Korea. bhwpuma@naver.com
Received: June 25, 2021
Peer-review started: June 25, 2021
First decision: September 5, 2021
Revised: September 6, 2021
Accepted: January 19, 2022
Article in press: January 19, 2022
Published online: February 19, 2022
Processing time: 236 Days and 23.8 Hours

Abstract

New technologies such as artificial intelligence, the internet of things, big data, and cloud computing have changed society and the economy as a whole, and the medical field in particular has tried to combine traditional examination methods with these new technologies. The most remarkable area of medical research is the technology for predicting high dementia risk groups using big data and artificial intelligence. This review introduces: (1) The definition, main concepts, and classification of machine learning, and how it differs overall from traditional statistical analysis models; and (2) The latest mental science studies on detecting dementia and predicting high-risk groups, in order to help researchers who are taking on medical artificial intelligence in the field of psychiatry. Reviewing the 4 studies that used machine learning to discriminate high dementia risk groups showed that various machine learning algorithms, such as boosting models, artificial neural networks, and random forests, were used to predict dementia. The development of machine learning algorithms will change primary care as advanced algorithms are applied to detect high dementia risk groups in the future.

Key Words: Dementia; Artificial intelligence; Clinical decision support system; Machine learning; Mild cognitive impairment

Core Tip: The predictive performance of machine learning techniques varies among studies because of differences in data imbalance (especially in the outcome, or Y, variable), in the characteristics of the features included in the model, and in how the outcome variables are measured. Therefore, further studies are continuously needed to check the predictive performance of each algorithm because, although some studies have shown that a specific machine learning algorithm performs excellently, those results cannot be generalized to all types of data.



INTRODUCTION

New technologies such as artificial intelligence, the internet of things, big data, and cloud computing have appeared with the advent of the Fourth Industrial Revolution. These new technologies have changed society and the economy as a whole, and the medical field in particular has tried to combine traditional examination methods with them. The most remarkable area of medical research is the technology for predicting high-risk groups using big data and artificial intelligence. Picture archiving and communication systems and electronic medical records have been implemented in hospitals over the past 20 years, and they have accumulated an enormous amount of medical data. However, there is a limit to analyzing the patterns or characteristics of these data using only traditional statistical methods because of the size (volume) and complexity of such medical big data.

Nevertheless, over the past 10 years, studies have persistently predicted dementia with machine learning[1-5] using cognitive measures such as neuropsychological tests, in addition to brain imaging and image analysis, which has opened new possibilities for screening dementia and predicting groups at high risk of dementia with medical artificial intelligence. It is expected that clinical decision support systems (CDSS) using artificial intelligence, including machine learning, will be widely introduced in medical research and will affect disease prediction and early detection. To create safe and meaningful medical artificial intelligence, it is critical to collect high-quality data and to analyze them with a machine learning technique suited to their properties. To develop a CDSS that is scientifically meaningful and performs well, with medical experts participating in this process, it is necessary to understand how the characteristics of machine learning algorithms differ from those of traditional statistical methods.

Machine learning has been widely used over the past 20 years mainly because of the emergence of big data[6]. This is because the performance of machine learning depends largely on the quantity and quality of data, and the required level of data has become available only recently. The amount of digital data produced worldwide has been skyrocketing and is forecast to reach 163 zettabytes per year in 2025[7]. Big data that can be used for medical research include electronic medical record and picture archiving and communication system data constructed by individual medical institutions, insurance claim data from the Health Insurance Corporation, and epidemiological data such as the National Health and Nutrition Examination Survey. A growing number of mental science studies[8,9] have tried to identify risk factors for mental disorders such as depression and cognitive disorders such as dementia using these epidemiological data.

Machine learning algorithms have been successfully applied in medical image processing fields such as neurology and neurosurgery. However, mental science, which mainly deals with structured clinical data such as cognition and emotion, has relatively few studies on disease prediction using machine learning. Furthermore, researchers in mental science often do not have a deep understanding of machine learning. This review introduces: (1) The definition, main concepts, and classification of machine learning, and how it differs overall from traditional statistical analysis models; and (2) The latest mental science studies on detecting dementia and predicting high-risk groups, in order to help researchers who are taking on medical artificial intelligence in the field of psychiatry.

DEFINITION OF MACHINE LEARNING

The machine learning technique is a representative method for exploring the risk factors or high-risk groups of a disease by analyzing medical big data (Figure 1). Many studies conflate the concepts of artificial intelligence, machine learning, and deep learning. Machine learning refers to algorithms for data classification and prediction, whereas deep learning refers to those machine learning algorithms composed of an input layer, multiple hidden layers, and an output layer, imitating human neurons. Artificial intelligence can be defined as the broadest concept, encompassing both machine learning and deep learning. Traditional statistical techniques such as analysis of variance and regression analysis can also be used to analyze big data. However, traditional statistical techniques cannot identify complex nonlinear relationships among variables well because big data contain many independent variables, and they are limited in analyzing data with many missing values.

Figure 1
Figure 1 Diagram for concepts of artificial intelligence, deep learning and machine learning. KNN: K-nearest neighbors; SVM: Support vector machine; RNN: Recurrent neural network; MLP: Multilayer perceptron; CNN: Convolutional neural network.

Machine learning refers to a method by which an algorithm improves its own performance by learning from data. Mitchell[10], a world-renowned machine learning scientist, defined machine learning in terms of a task, experience, and a performance measure: a computer program is said to learn if its performance on the task, as measured by the performance measure, improves as it accumulates experience. In other words, machine learning is a method that allows a computer to learn from data and to find an optimal solution as a result.

In general, machine learning analyses develop several machine learning models to predict disease risk factors and select the model showing the best performance as the final model. While traditional statistical techniques such as regression analysis use significance probabilities to evaluate the predictive performance of models, machine learning algorithms use a loss function. The mean squared error and mean absolute error are used as loss functions to evaluate machine learning performance for continuous variables, while cross entropy is used for categorical variables[11]. If there are many model parameters, or if biased parameters might distort the result, regularization, a method of adding a penalty to the loss function, is used. L1 (lasso) and L2 (ridge) regularization are the representative regularization methods used in machine learning, and the Akaike information criterion and Bayesian information criterion are also used[12].
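As a minimal illustration of the loss functions mentioned above (not from the article; it assumes the scikit-learn library and uses made-up toy numbers), the following sketch computes the mean squared error and mean absolute error for a continuous outcome and the cross entropy (log loss) for a categorical outcome:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, log_loss

# Continuous outcome: hypothetical observed vs predicted test scores
y_true = np.array([70.0, 85.0, 90.0])
y_pred = np.array([72.0, 80.0, 88.0])
print("MSE:", mean_squared_error(y_true, y_pred))   # mean squared error
print("MAE:", mean_absolute_error(y_true, y_pred))  # mean absolute error

# Categorical outcome: cross entropy on predicted class probabilities
y_class = [1, 0, 1]                                  # e.g., dementia vs non-dementia
p_class = [[0.2, 0.8], [0.7, 0.3], [0.4, 0.6]]       # predicted probabilities per class
print("cross entropy:", log_loss(y_class, p_class))
```

The L1 (lasso) penalty is illustrated further below in the section on regression algorithms.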

EVALUATING THE PREDICTIVE PERFORMANCE OF MACHINE LEARNING MODELS

Generally, hold-out validation and k-fold cross-validation are mainly used to evaluate the predictive performance of machine learning models. Hold-out validation assesses accuracy by separating the dataset into a training dataset and a test dataset (Figure 2A). For example, 80% of the dataset is used as a training dataset to train a model, and the remaining 20% is used as a test dataset to evaluate predictive performance (accuracy). However, if the dataset is not large enough, hold-out validation may suffer from overfitting. K-fold cross-validation can be used as an alternative to overcome this limitation: it divides the data into k groups, uses each group in turn as the validation set, and selects the model with the smallest mean error (Figure 2B).
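The two validation schemes can be sketched as follows; this is an illustrative example only, assuming scikit-learn and a small synthetic dataset rather than real dementia data:

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                          # hypothetical features
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # hypothetical binary outcome

# Hold-out validation: 80% training set, 20% test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# k-fold cross-validation (k = 5): each fold serves once as the validation set
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print("5-fold mean accuracy:", scores.mean())
```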

Figure 2
Figure 2 The concept of two validations. A: The concept of hold-out validation; B: The concept of k-fold validation.

THE STRENGTH OF MACHINE LEARNING IN PREDICTING HIGH DEMENTIA RISK GROUPS

Many previous studies[4,5] did not classify the high dementia risk group as a dementia group because, although their memory or cognitive function was lower than that of people of the same age and education level on standardized cognitive tests, their ability to perform daily activities (e.g., activities of daily living) was preserved. In other words, because this is the preclinical stage of dementia, it has been receiving attention in terms of early detection and prevention of dementia.

In general, the main goals of data analysis for predicting high dementia risk groups are inference and prediction. Inference is based on theories and previous studies: it assumes that the data were generated by a specific statistical model and tests hypotheses established by the researcher. Traditional statistical analyses emphasize inference, whereas prediction using machine learning often neither establishes nor tests hypotheses. Accordingly, traditional statistical analysis can be considered more advantageous than machine learning for analyzing social science (or mental science) data that emphasize relationships between variables. However, as convergence studies on disease prediction have become more active recently, this comparison is gradually losing its meaning; it has become more common not to strictly distinguish terminologies such as machine learning, statistical analysis, and predictive analysis. Nevertheless, the following are the strengths of machine learning over traditional statistical analyses. First, traditional statistical analyses emphasize building a predictive model and identifying the relationships between the key variables associated with the issue. Machine learning, on the other hand, focuses on identifying patterns and exploring predictive factors of dementia rather than testing a specific hypothesis. Therefore, machine learning techniques can be applied more flexibly to a wider range of data than traditional statistical analysis techniques.

Second, while traditional statistical analysis techniques focus on linear models, machine learning has the advantage of handling nonlinear models and complex interactions between variables[13].

Third, machine learning can analyze large amounts of data that are difficult to handle with traditional statistical methods. Data generally used in statistics are called "long data", in which the number of cases exceeds the number of variables, whereas "wide data" are data in which the number of variables is larger than the number of cases[14]. Although it is hard to analyze wide data with traditional statistical techniques, machine learning has the advantage that it can easily analyze wide data as well as long data. In other words, while traditional statistical techniques are optimized for analyzing data collected through a researcher's study design, machine learning can analyze large volumes of data collected without a specific intention.
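As a rough illustration of the "wide data" point (synthetic data and scikit-learn assumed; not taken from any cited study), the following sketch fits a random forest to data with far more variables than cases, a setting in which ordinary least squares regression has no unique solution:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 500))   # 100 cases, 500 variables: "wide" data
# Hypothetical binary outcome with a nonlinear signal in the first two variables
y = (X[:, 0] - X[:, 1] ** 2 + rng.normal(size=100) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
print("training accuracy:", forest.score(X, y))
```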

LIMITATIONS OF MACHINE LEARNING IN PREDICTING HIGH DEMENTIA RISK GROUP

The limitations of machine learning in detecting dementia or predicting high dementia risk groups are as follows. First, it is difficult to interpret the relationship between explanatory and response variables with black-box techniques (e.g., boosting models, artificial neural networks, and random forests). While traditional statistical analysis techniques aim to explain (interpret) the relationship between independent and dependent variables, the goal of machine learning techniques is to predict. For example, studies that aim at inference about high dementia risk groups develop a study model based on theories and previous studies and test hypotheses; the characteristics of these high-risk groups can then be explained through the model. On the other hand, studies that aim to predict usually do not have a clear study model and often do not test a hypothesis; nevertheless, it is possible to confirm which variables are critical for predicting dementia. In particular, when new data arrive, such a model can categorize elderly people in the community into high- and low-risk groups, and thereby direct help to the high dementia risk group even before dementia develops. In summary, traditional statistical analyses emphasize inference, and machine learning focuses on prediction. Machine learning models such as random forests and neural networks partially overcome the black-box issue by visually presenting the relative importance of variables through variable importance measures and partial dependence plots. However, they still have limitations in interpreting the relationships or causality between variables.
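A minimal sketch of the variable importance and partial dependence tools mentioned above (illustrative only; it assumes scikit-learn with matplotlib available and uses synthetic data rather than real dementia data):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                                  # hypothetical predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=2).fit(X, y)
print("variable importance:", forest.feature_importances_)     # relative importance of each variable

# Partial dependence of the predicted risk on the first two variables
PartialDependenceDisplay.from_estimator(forest, X, features=[0, 1])
plt.show()
```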

Second, it may be difficult for mental science researchers to understand machine learning techniques that emphasize predictive accuracy rather than explaining the relationships between variables and that do not focus on hypothesis testing. Among machine learning techniques, the penalized regression model, which is relatively close to the traditional statistical model, shows in which direction and by how much each explanatory variable is related to the response variable, but it generally does not provide the statistical significance of the explanatory variables as a linear regression model does.

Third, unlike the traditional statistical model, which models a small number of variables for a theoretical test, the machine learning technique is data-driven. Therefore, unless the data are unbiased and of good quality, it is highly likely that biased results will be derived.

TYPES OF MACHINE LEARNING
Regression algorithm

Regression models based on stepwise selection perform very poorly in high-dimensional settings. This is compensated for by using regularization, which imposes a penalty every time a variable is added; lasso regression is a representative method[15]. To reduce the effect of outliers or singularities in the data, a robust regression technique that repeatedly selects and trains on subsets of the data can also be used[16].
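A hedged sketch of the two approaches described here, using scikit-learn's LassoCV (L1-penalized regression) and RANSACRegressor (a robust method that repeatedly fits on subsets of the data); the data are synthetic and purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LassoCV, RANSACRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
y = 2.0 * X[:, 0] + rng.normal(size=100) * 0.5
y[:5] += 20                                   # a few artificial outliers

# Lasso: the L1 penalty shrinks uninformative coefficients toward zero
lasso = LassoCV(cv=5).fit(X, y)
print("non-zero coefficients:", int(np.sum(lasso.coef_ != 0)))

# Robust regression: repeatedly fits on subsets and down-weights outliers
ransac = RANSACRegressor(random_state=3).fit(X, y)
print("inlier fraction:", ransac.inlier_mask_.mean())
```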

Clustering algorithms

The clustering algorithm classifies data into a specified number of clusters according to the similarity of their attributes. Since the data have only attribute values and no labels, this is called unsupervised learning. The k-means algorithm is a representative clustering algorithm.
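For example, a k-means clustering of unlabeled synthetic data might look like the following sketch (scikit-learn assumed; illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Two hypothetical clusters of unlabeled cases
X = np.vstack([rng.normal(0, 1, size=(50, 2)),
               rng.normal(5, 1, size=(50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=4).fit(X)
print(kmeans.labels_[:10])        # cluster assignment of the first 10 cases
print(kmeans.cluster_centers_)    # learned cluster centers
```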

Classification algorithms

Classification algorithms include the decision tree (DT), support vector machine (SVM), k-nearest neighbor, multilayer perceptron (MLP), and ensemble learning. It is important to treat imbalance in the y-class when applying a classification algorithm: if the classes are imbalanced, the class with more observations is treated as more important and the predictive performance decreases. Undersampling, oversampling, and the synthetic minority over-sampling technique (SMOTE) are mainly used to deal with data imbalance[17], and it has been reported that SMOTE generally performs better than undersampling and oversampling[18].
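As an illustration of handling class imbalance, the sketch below oversamples a synthetic minority class with SMOTE; it assumes the third-party imbalanced-learn package and is not drawn from the cited studies:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE   # requires the imbalanced-learn package

# Synthetic imbalanced data: roughly 10% minority class (e.g., dementia cases)
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=5)
print("before:", Counter(y))

# SMOTE synthesizes new minority-class cases to balance the classes
X_res, y_res = SMOTE(random_state=5).fit_resample(X, y)
print("after SMOTE:", Counter(y_res))
```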

DT

DT is a classifier that repeats binary splits based on the threshold value of a specific variable down to a desired depth. The splitting variables and threshold values are learned automatically from the data. The classification and regression tree (CART) algorithm, rather than gradient descent, is used to learn a DT; it adds nodes step by step to minimize the Shannon entropy or Gini index. The advantage of DT is that the learned classification rules can be easily understood by people.
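A minimal sketch of a CART-style decision tree (scikit-learn assumed, synthetic data); the printed rules show how the learned splits can be read by people:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=300, n_features=5, random_state=6)

# Grow the tree to a fixed depth using the Gini index as the split criterion
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=6).fit(X, y)

# The learned split variables and thresholds can be read directly
print(export_text(tree, feature_names=[f"x{i}" for i in range(5)]))
```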

SVM

SVM is a machine learning algorithm that finds the optimal decision boundary, i.e., the hyperplane that separates the classes with the maximum margin. If the data have a nonlinear relationship, the same method is applied after transforming the input variables with a kernel function. SVM solves nonlinear problems in the input space (e.g., two dimensions) by transforming it into a high-dimensional feature space. For example, when A = (a, d) and B = (b, c) are not linearly separable in 2D, they can become linearly separable when mapped into 3D. Thus, with adequate nonlinear mapping into a sufficiently high dimension, data with two classes can always be separated by a maximum-margin hyperplane. The advantage of SVM is that it can model complex nonlinear decision domains.
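The effect of a kernel can be sketched on synthetic, non-linearly separable data (scikit-learn assumed; illustrative only):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two classes arranged as concentric circles: not linearly separable in 2D
X, y = make_circles(n_samples=300, noise=0.1, factor=0.3, random_state=7)

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf").fit(X, y)   # implicit mapping to a higher-dimensional space
print("linear kernel accuracy:", linear_svm.score(X, y))
print("RBF kernel accuracy:", rbf_svm.score(X, y))
```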

MLP

Until the late 20th century, studies using artificial neural networks used shallow networks with two or fewer hidden layers[19]. However, after the effectiveness of deep neural networks was confirmed in the early 21st century[19], the dropout technique and the rectified linear unit activation function were developed after 2010[20], and the era of deep learning began. The advantage of MLP is its excellent accuracy. Since the accuracy of deep neural networks is generally higher than that of shallow networks[21], applying deep neural networks is recommended to obtain more accurate classification or prediction from disease data. Although deep neural networks generally achieve slightly higher accuracy than other machine learning models, their learning time is longer[22]. Therefore, researchers need to select an algorithm suitable for their purpose when developing a machine learning model.
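A minimal sketch of an MLP with two hidden layers and ReLU activation, using scikit-learn's MLPClassifier on synthetic data (illustrative only; note that MLPClassifier itself does not implement dropout):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=8)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=8)

# Two hidden layers of 64 units each, with the rectified linear unit activation
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), activation="relu",
                    max_iter=500, random_state=8).fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```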

Ensemble learning methods

Ensemble learning refers to methods that learn many models, each using only some of the samples or some of the variables of the data, and combine these models, which usually provides better predictive performance than a single model. Bootstrap aggregating (bagging) and boosting are representative ensemble learning techniques. Bagging determines the final output by fitting the outcome variable several times using subsets of the samples or variables of the training dataset[23]. Bagging performs well because, as the number of classifiers increases, the variance of the mean prediction of the classifiers decreases. Boosting refers to a method of generating multiple classifiers sequentially. Bagged DTs and the random forest are typical examples of ensemble learning. Fernandez-Delgado et al[24] compared the performance of 179 classifiers on 121 datasets and reported that random forest impressively outperformed the rest.
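A hedged sketch comparing bagged trees, boosting, and a random forest on the same synthetic data (scikit-learn assumed; not a reproduction of the cited benchmark):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier, RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=9)

models = {
    "bagged trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=9),
    "boosting": GradientBoostingClassifier(random_state=9),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=9),
}
for name, model in models.items():
    # 5-fold cross-validated accuracy for each ensemble
    print(name, cross_val_score(model, X, y, cv=5).mean())
```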

STUDIES OF PREDICTING DEMENTIA BASED ON MACHINE LEARNING

Most previous studies[25,26] on the detection of dementia and the prediction of high-risk groups used traditional statistical methods such as regression analysis or structural equation models, but some studies[2-5] applied machine learning (Table 1). Previous studies that applied machine learning techniques to elderly populations predicted dementia, mild cognitive impairment, and very mild dementia using various features, including demographic information[2], medical records[2-5], dementia test scores[3,4], and normalized whole-brain volume[2]. These studies showed that machine learning models had different predictive performance. Bansal et al[2] reported that the DT model (J48) had the highest accuracy (99.52%) compared with other machine learning models (e.g., naïve Bayes, random forest, and MLP). Zhu et al[4], on the other hand, found that the accuracy (predictive performance) of MLP (87%), naïve Bayes (87%), and SVM (87%) was the best. Jammeh et al[5] confirmed that the area under the curve (AUC) of naïve Bayes (AUC = 0.869) was the best compared with other machine learning models. The predictive performance of machine learning techniques varies among studies because of differences in data imbalance (especially in the Y variable), in the characteristics of the features included in the model, and in how the outcome variables are measured. Therefore, further studies are continuously needed to check the predictive performance of each algorithm because, although some studies have shown that a specific machine learning algorithm performs excellently, those results cannot be generalized to all types of data.

Table 1 Summary of studies.
Ref. | Data | Features | Models/algorithms | Results
Bansal et al[2] | Total of 416 subjects in cross-sectional data and 373 records in longitudinal data | Age, sex, education, socioeconomic status, mini-mental state examination, clinical dementia rating, atlas scaling factor, estimated total intracranial volume, and normalized whole-brain volume | J48, naïve Bayes, random forest, multilayer perceptron | Classification accuracy: J48: 99.52%; Naïve Bayes: 99.28%; Random forest: 92.55%; Multilayer perceptron: 96.88%
Bhagyashree et al[3] | Total of 466 men and women, health and ageing, in South India | Consortium to Establish a Registry for Alzheimer's Disease, Community Screening Instrument for Dementia | JRip, naïve Bayes, random forest and J48, synthetic minority over-sampling technique | Sensitivity: Word list recall (WLR) score lower than the population mean: 92.5%; cog-score/verbal fluency/WLR score lower than 0.5 SD below the population mean: 85.1%
Zhu et al[4] | Total of 5272 patients analyzed: normal cognition, mild cognitive impairment, very mild dementia | History of cognitive status; objective assessments including the clinical dementia rating, cognitive abilities screening instrument, and Montreal cognitive assessment | Random forest, AdaBoost, LogitBoost, neural network (multilayer perceptron), naïve Bayes, and support vector machine (SVM) | Overall accuracy of the diagnostic models: Random forest: 0.86; AdaBoost: 0.83; LogitBoost: 0.81; Multilayer perceptron: 0.87; Naïve Bayes: 0.87; SVM: 0.87
Jammeh et al[5] | Total of 26483 patients aged > 65 yr (National Health Service data) | Total of 15469 Read codes, of which 4301 were diagnosis codes, 5028 process-of-care codes, and 6101 medication codes | SVM, naïve Bayes, random forest, logistic regression | Naïve Bayes classifier gave the best performance, with a sensitivity of 84.47% and a specificity of 86.67%; area under the curve for naïve Bayes: 0.869
CONCLUSION

This study introduced the definition and classification of machine learning techniques and case studies of predicting dementia based on machine learning. Various machine learning algorithms, such as boosting models, artificial neural networks, and random forests, have been used to predict dementia. Since the concept of deep learning was introduced, the multilayer perceptron has mainly been used to recognize disease patterns. The development of machine learning algorithms will change primary care as advanced algorithms are applied to detect high dementia risk groups in the future. If researchers pay attention to machine learning and make the effort to learn it while coping with these changes, artificial intelligence technology can become a powerful tool (method) for conducting mental science studies.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Psychiatry

Country/Territory of origin: South Korea

Peer-review report’s scientific quality classification

Grade A (Excellent): A

Grade B (Very good): 0

Grade C (Good): 0

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Bareeqa SB S-Editor: Zhang H L-Editor: A P-Editor: Zhang H

References
1. Aschwanden D, Aichele S, Ghisletta P, Terracciano A, Kliegel M, Sutin AR, Brown J, Allemand M. Predicting Cognitive Impairment and Dementia: A Machine Learning Approach. J Alzheimers Dis. 2020;75:717-728.
2. Bansal D, Chhikara R, Khanna K, Gupta P. Comparative analysis of various machine learning algorithms for detecting dementia. Procedia Comput Sci. 2018;132:1497-1502.
3. Bhagyashree SIR, Nagaraj K, Prince M, Fall CHD, Krishna M. Diagnosis of Dementia by Machine learning methods in Epidemiological studies: a pilot exploratory study from south India. Soc Psychiatry Psychiatr Epidemiol. 2018;53:77-86.
4. Zhu F, Li X, Tang H, He Z, Zhang C, Hung GU, Chiu PY, Zhou W. Machine learning for the preliminary diagnosis of dementia. Sci Program. 2020;2020:5629090.
5. Jammeh EA, Carroll CB, Pearson SW, Escudero J, Anastasiou A, Zhao P, Chenore T, Zajicek J, Ifeachor E. Machine-learning based identification of undiagnosed dementia in primary care: a feasibility study. BJGP Open. 2018;2:bjgpopen18X101589.
6. Zhou L, Pan S, Wang J, Vasilakos AV. Machine learning on big data: opportunities and challenges. Neurocomputing. 2017;237:350-361.
7. Reinsel D, Gantz J, Rydning J. Data age 2025: the evolution of data to life-critical. California: International Data Corporation, 2017.
8. Chung HK, Cho Y, Choi S, Shin MJ. The association between serum 25-hydroxyvitamin D concentrations and depressive symptoms in Korean adults: findings from the fifth Korea National Health and Nutrition Examination Survey 2010. PLoS One. 2014;9:e99185.
9. Byeon H. Development of a depression in Parkinson's disease prediction model using machine learning. World J Psychiatry. 2020;10:234-244.
10. Mitchell T. Machine learning. New York: McGraw Hill, 1997.
11. Lee HC, Cung CW. Anesthesia research in the artificial intelligence era. Anesthesia and Pain Medicine. 2018;13:248-255.
12. Diebold FX, Shin M. Machine learning for regularized survey forecast combination: partially-egalitarian LASSO and its derivatives. Int J Forecast. 2019;35:1679-1691.
13. Yoo JE, Rho M. TIMSS 2015 Korean student, teacher, and school predictor exploration and identification via random forests. The SNU Journal of Education Research. 2017;26:43-61.
14. Bzdok D, Altman N, Krzywinski M. Statistics versus machine learning. Nat Methods. 2018;15:233-234.
15. Hesterberg T, Choi NH, Meier L, Fraley C. Least angle and ℓ1 penalized regression: a review. Stat Surv. 2008;2:61-93.
16. Carroll RJ, Pederson S. On robustness in the logistic regression model. J R Stat Soc Ser B Methodol. 1993;55:693-706.
17. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res. 2002;16:321-357.
18. Byeon H. Predicting the depression of the South Korean elderly using SMOTE and an imbalanced binary dataset. Int J Adv Comput Sci Appl. 2021;12:74-79.
19. Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. 2006;18:1527-1554.
20. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15:1929-1958.
21. Bouwmans T, Javed S, Sultana M, Jung SK. Deep neural network concepts for background subtraction: A systematic review and comparative evaluation. Neural Netw. 2019;117:8-66.
22. Byeon H. Is deep learning better than machine learning to predict benign laryngeal disorders? Int J Adv Comput Sci Appl. 2021;12:112-117.
23. Breiman L. Bagging predictors. Mach Learn. 1996;24:123-140.
24. Fernandez-Delgado M, Cernadas E, Barro S, Amorim D. Do we need hundreds of classifiers to solve real world classification problems? J Mach Learn Res. 2014;15:3133-3181.
25. Juul Rasmussen I, Rasmussen KL, Nordestgaard BG, Tybjærg-Hansen A, Frikke-Schmidt R. Impact of cardiovascular risk factors and genetics on 10-year absolute risk of dementia: risk charts for targeted prevention. Eur Heart J. 2020;41:4024-4033.
26. Wang HX, MacDonald SW, Dekhtyar S, Fratiglioni L. Association of lifelong exposure to cognitive reserve-enhancing factors with dementia risk: A community-based cohort study. PLoS Med. 2017;14:e1002251.