Systematic Reviews
Copyright © The Author(s) 2025.
World J Diabetes. Apr 15, 2025; 16(4): 101310
Published online Apr 15, 2025. doi: 10.4239/wjd.v16.i4.101310
Table 1 Characteristics of included studies
| Year | Ref. | Country | Design | Type of model (D/V) | Data collection period | Sample size (D/V) | Outcome measure | Age (years) |
|---|---|---|---|---|---|---|---|---|
| 2019 | Ning et al[11] | China | Case-control study | D | January 2017-June 2018 | 452/- | DPN occurrence | 53.62 ± 12.79 |
| 2020 | Metsker et al[22] | Russia | Cross-sectional study | D, V | July 2010-August 2017 | 4340/1085 | DPN occurrence | - |
| 2021 | Wu et al[17] | China | Prospective cohort study | D, V | September 2018-July 2019 | 460/152 | DPN occurrence | 65 |
| 2021 | Fan et al[21] | China | Real-world study | D, V | January 2010-December 2015 | 132/33 | DPN occurrence | - |
| 2022 | Zhang et al[18] | China | Case-control study | D, V | February 2017-May 2021 | 519/259 | DPN occurrence | D: 57.76 ± 12.95; V: 58.97 ± 12.49 |
| 2022 | Li et al[10] | China | Retrospective cohort study | D, V | 2010-2019 | 11265/3755 | DPN occurrence | 60.3 ± 10.9 |
| 2023 | Tian et al[20] | China | Cross-sectional study | D, V | January 2019-October 2020 | 3297/1426 | DPN occurrence | - |
| 2023 | Li et al[15] | China | Retrospective cohort study | D, V | D: January 2017-December 2020; V: January 2019-December 2020 | 3012/901 | DPN occurrence | D: 57.12 ± 12.23; V: 56.60 ± 12.03 |
| 2023 | Lian et al[16] | China | Retrospective cohort study | D, V | February 2020-July 2022 | 895/383 | DPN occurrence | 64.5 (55.0-72.0) |
| 2023 | Liu et al[19] | China | Retrospective cohort study | D, V | September 2010-September 2020 | 95604/462 | DPN occurrence | 52.4 ± 12.2 |
| 2023 | Wang et al[12] | China | Case-control study | D | March 2021-March 2023 | 500/- | DPN occurrence | 56.8 ± 10.22 |
| 2023 | Zhang et al[13] | China | Case-control study | D | April 2019-May 2020 | 323/- | DPN occurrence | 55.36 ± 11.25 |
| 2023 | Gelaw et al[14] | Ethiopia | Prospective cohort study | D | January 2005-December 2021 | 808/- | DPN occurrence | 45.6 ± 3.1 |
| 2022 | Baskozos et al[23] | Multicenter | Cross-sectional study | D, V | 2012-2019 | 935/295 | Painful or painless DPN occurrence | D: 68 (60-74); V: 69 (63-77) |
Table 2 Characteristics of model development and validation in the included studies

| Ref. | Modeling method | Variable selection method | Handling of continuous variables | Missing data (n) | Missing data handling | Predictors in the final model | Discrimination | Calibration | Model presentation | Internal validation | External validation |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Ning et al[11] | Logistic regression | Univariable analysis | Kept continuous | - | - | Duration of DM/FPG/FINS/HbA1c/HOMA-IR/vaspin/omentin-1 | AUC = 0.789 (0.741-0.873) | Calibration curve | Nomogram | Bootstrap | None |
| Metsker et al[22] | Artificial neural network/support vector machine/decision tree/linear regression/logistic regression | - | - | - | Deletion or replacement | Unsatisfactory control of glycemia/systemic inflammation/renal dyslipidemia/dyslipidemia/macroangiopathy | (1) AUC = 0.8922; (2) AUC = 0.8644; (3) AUC = 0.8988; (4) AUC = 0.8926; (5) AUC = 0.8941 | None | LIME explanation | 5-fold cross-validation | None |
| Wu et al[17] | Logistic regression | LASSO regression | Kept continuous | 19 | - | FBG/PBG/LDL-C/age/TC/BMI/HbA1c | D: (1) AUC = 0.656; (2) AUC = 0.724; (3) AUC = 0.731; (4) AUC = 0.713. V: (1) AUC = 0.629; (2) AUC = 0.712; (3) AUC = 0.813; (4) AUC = 0.830 | Hosmer-Lemeshow test/calibration plot | Nomogram | None | Geographical |
| Fan et al[21] | Machine learning | Univariable analysis | Categorized | - | - | Age/duration of DM/duration of unadjusted hypoglycemic treatment (≥ 1 year)/number of insulin species/total cost of hypoglycemic drugs/number of hypoglycemic drugs/gender/genetic history of diabetes/dyslipidemia | (1) XF: AUC = 0.847 ± 0.081; (2) CHAID: AUC = 0.787 ± 0.081; (3) QUEST: AUC = 0.720 ± 0.06; (4) D: AUC = 0.859 ± 0.05 | None | Variable importance | Bootstrap | None |
| Zhang et al[18] | Logistic regression | Univariable analysis | Kept continuous | - | - | Age/gender/duration of DM/BMI/uric acid/HbA1c/FT3 | D: AUC = 0.763; V: AUC = 0.755 | Calibration curve | Nomogram | Bootstrap | None |
| Li et al[10] | Logistic regression | LASSO regression | Kept continuous | - | - | Sex/age/DR/duration of DM/WBC/eosinophil fraction/lymphocyte count/HbA1c/GSP/TC/TG/HDL/LDL/ApoA1/ApoB | D: AUC = 0.858 (0.851-0.865); V: AUC = 0.852 (0.840-0.865) | Hosmer-Lemeshow test/calibration curve | Nomogram | Bootstrap | None |
| Tian et al[20] | Logistic regression | LASSO regression | Categorized | - | - | Advanced age of grading/smoking/insomnia/sweating/loose teeth/dry skin/purple tongue | D: AUC = 0.727; V: AUC = 0.744 | Calibration curve | Nomogram | 5-fold cross-validation | None |
| Li et al[15] | Logistic regression | LASSO regression | Kept continuous | - | - | Age/25(OH)D3/duration of T2DM/HDL/HbA1c/FBG | D: AUC = 0.8256 (0.8104-0.8408); V: AUC = 0.8608 (0.8376-0.8840) | Hosmer-Lemeshow test/calibration curve | Nomogram | Bootstrap | Geographical |
| Lian et al[16] | Logistic regression/machine learning | - | Kept continuous | 10 | Deletion, multiple imputation, or left unprocessed | Age/ALT/ALB/TBIL/UREA/TC/HbA1c/APTT/24-h UTP/urine protein concentration/duration of DM/neutrophil-to-lymphocyte ratio/HOMA-IR | AUC = 0.818 | None | Shapley additive explanations | 10-fold cross-validation | None |
| Liu et al[19] | β coefficient | - | Kept continuous | - | - | Age/smoking/BMI/duration of DM/HbA1c/low HDL-C/high TG/hypertension/DR/DKD/CVD | AUC = 0.831 (0.794-0.868) | None | - | None | Geographical |
| Wang et al[12] | Logistic regression | Univariable analysis | Kept continuous | - | - | Age/duration of DM/HbA1c/TG/2-hour CP/T3 | AUC = 0.938 (0.918-0.958) | Hosmer-Lemeshow test/calibration curve | Nomogram | Bootstrap | None |
| Zhang et al[13] | Logistic regression | LASSO regression | Kept continuous | - | - | Age/smoking/dyslipidemia/HbA1c/glucose variability parameters | AUC = 0.647 (0.585-0.708) | Hosmer-Lemeshow test/calibration curve | Nomogram | Bootstrap | None |
| Gelaw et al[14] | Logistic regression/machine learning | LASSO regression | Categorized | - | Multiple imputation | Hypertension/FBG/other comorbidities/alcohol consumption/physical activity/type of DM treatment/WBC/RBC | (1) AUC = 0.732 (0.69-0.773); (2) AUC = 0.702 (0.658-0.746) | Hosmer-Lemeshow test | Nomogram | Bootstrap | None |
| Baskozos et al[23] | Machine learning | - | - | - | Multiple imputation | Quality of life (EQ5D)/lifestyle (smoking, alcohol consumption)/demographics (age, gender)/personality and psychological traits (anxiety, depression, personality traits)/biochemical (HbA1c)/clinical variables (BMI, hospital stay and trauma at young age) | (1) AUC = 0.8184 (0.8167-0.8201); (2) AUC = 0.8188 (0.8171-0.8205); (3) AUC = 0.8123 (0.8107-0.8140) | Calibration curve | Adaptive regression splines classifier | 10-fold cross-validation | Geographical |
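
The commonest workflow summarized in Table 2 combines LASSO-based variable selection with a logistic regression model, reports discrimination as an AUC, checks calibration with a calibration curve or the Hosmer-Lemeshow test, and uses bootstrap resampling for internal validation. The sketch below illustrates that generic pipeline on synthetic data; the scikit-learn implementation, sample size, and bootstrap settings are illustrative assumptions and do not reproduce any individual study's code.

```python
# Minimal sketch (synthetic data) of the LASSO -> logistic regression ->
# bootstrap-validated AUC workflow that recurs in Table 2.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

X, y = make_classification(n_samples=500, n_features=15, n_informative=6,
                           random_state=0)  # stand-in for candidate predictors

# 1. Variable selection: keep predictors with non-zero L1 (LASSO) coefficients.
lasso = LogisticRegressionCV(penalty="l1", solver="saga", Cs=10, cv=5,
                             max_iter=5000, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_.ravel())

# 2. Development model: plain logistic regression on the selected predictors.
model = LogisticRegression(max_iter=1000).fit(X[:, selected], y)
pred = model.predict_proba(X[:, selected])[:, 1]
apparent_auc = roc_auc_score(y, pred)                         # discrimination
prob_true, prob_pred = calibration_curve(y, pred, n_bins=10)  # calibration curve

# 3. Internal validation: bootstrap estimate of optimism in the apparent AUC.
optimism = []
for _ in range(200):
    Xb, yb = resample(X[:, selected], y)
    m = LogisticRegression(max_iter=1000).fit(Xb, yb)
    optimism.append(roc_auc_score(yb, m.predict_proba(Xb)[:, 1]) -
                    roc_auc_score(y, m.predict_proba(X[:, selected])[:, 1]))

print(f"Apparent AUC {apparent_auc:.3f}, "
      f"optimism-corrected AUC {apparent_auc - np.mean(optimism):.3f}")
```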
Table 3 Comparison of the performance metrics reported for the included models

| Ref. | Discrimination | Sensitivity | Specificity | Precision | F1 score | Recall rate | Accuracy | PPV | NPV |
|---|---|---|---|---|---|---|---|---|---|
| Ning et al[11] | AUC = 0.789 (0.741-0.873) |  |  |  |  |  |  |  |  |
| Metsker et al[22] | (1) AUC = 0.8922; (2) AUC = 0.8644; (3) AUC = 0.8988; (4) AUC = 0.8926; (5) AUC = 0.8941 |  |  | (1) ANN = 0.6736; (2) SVM = 0.6817; (3) Decision tree = 0.6526; (4) Linear regression = 0.6777; (5) Logistic regression = 0.6826 | (1) ANN = 0.7342; (2) SVM = 0.7210; (3) Decision tree = 0.6865; (4) Linear regression = 0.7299; (5) Logistic regression = 0.7232 | (1) ANN = 0.8090; (2) SVM = 0.7655; (3) Decision tree = 0.7302; (4) Linear regression = 0.7911; (5) Logistic regression = 0.7693 | (1) ANN = 0.7471; (2) SVM = 0.7443; (3) Decision tree = 0.7039; (4) Linear regression = 0.7472; (5) Logistic regression = 0.7384 |  |  |
| Wu et al[17] | D: (1) AUC = 0.656; (2) AUC = 0.724; (3) AUC = 0.731; (4) AUC = 0.713. V: (1) AUC = 0.629; (2) AUC = 0.712; (3) AUC = 0.813; (4) AUC = 0.830 |  |  |  |  |  |  |  |  |
| Fan et al[21] | (1) XF: AUC = 0.847 ± 0.081; (2) CHAID: AUC = 0.787 ± 0.081; (3) QUEST: AUC = 0.720 ± 0.06; (4) D: AUC = 0.859 ± 0.05 | (1) XF: 0.783 ± 0.080; (2) CHAID: 0.757 ± 0.054; (3) QUEST: 0.766 ± 0.056; (4) D: 0.843 ± 0.038 | (1) XF: 0.642 ± 0.123; (2) CHAID: 0.680 ± 0.143; (3) QUEST: 0.716 ± 0.186; (4) D: 0.775 ± 0.092 | (1) XF: 0.882 ± 0.073; (2) CHAID: 0.807 ± 0.070; (3) QUEST: 0.805 ± 0.057; (4) D: 0.885 ± 0.055 |  |  |  |  |  |
| Zhang et al[18] | D: AUC = 0.763; V: AUC = 0.755 |  |  |  |  |  |  |  |  |
| Li et al[10] | D: AUC = 0.858 (0.851-0.865); V: AUC = 0.852 (0.840-0.865) | 0.74 | 0.874 |  |  |  |  |  |  |
| Tian et al[20] | D: AUC = 0.727; V: AUC = 0.744 |  |  |  |  |  |  |  |  |
| Li et al[15] | D: AUC = 0.8256 (0.8104-0.8408); V: AUC = 0.8608 (0.8376-0.8840) |  |  |  |  |  |  |  |  |
| Lian et al[16] | (1) LR: 0.683 (0.586-0.737); (2) KNN: 0.671 (0.607-0.739); (3) DT: 0.679 (0.636-0.759); (4) NB: 0.589 (0.543-0.634); (5) RF: 0.736 (0.686-0.765); (6) XGBoost: 0.764 (0.679-0.801) |  |  | (1) LR: 0.687 ± 0.056; (2) KNN: 0.858 ± 0.070; (3) DT: 0.695 ± 0.032; (4) NB: 0.784 ± 0.087; (5) RF: 0.769 ± 0.026; (6) XGBoost: 0.765 ± 0.040 | (1) LR: 0.672 ± 0.056; (2) KNN: 0.559 ± 0.070; (3) DT: 0.669 ± 0.042; (4) NB: 0.378 ± 0.071; (5) RF: 0.719 ± 0.027; (6) XGBoost: 0.736 ± 0.050 | (1) LR: 0.659 ± 0.062; (2) KNN: 0.419 ± 0.073; (3) DT: 0.648 ± 0.067; (4) NB: 0.253 ± 0.061; (5) RF: 0.677 ± 0.040; (6) XGBoost: 0.711 ± 0.066 | (1) LR: 0.679 ± 0.052; (2) KNN: 0.674 ± 0.039; (3) DT: 0.682 ± 0.032; (4) NB: 0.590 ± 0.029; (5) RF: 0.736 ± 0.021; (6) XGBoost: 0.746 ± 0.041 |  |  |
| Liu et al[19] | AUC = 0.831 (0.794-0.868) |  |  |  |  |  |  |  |  |
| Wang et al[12] | AUC = 0.938 (0.918-0.958) | 0.846 | 0.668 |  |  |  |  |  |  |
| Zhang et al[13] | AUC = 0.647 (0.585-0.708) |  |  |  |  |  |  |  |  |
| Gelaw et al[14] | (1) AUC = 0.732 (0.69-0.773); (2) AUC = 0.702 (0.658-0.746) | (1) 0.652; (2) 0.7209 | (1) 0.717; (2) 0.577 |  |  |  |  | (1) 0.384; (2) 0.315 | (1) 0.884; (2) 0.884 |
| Baskozos et al[23] | (1) AUC = 0.8184 (0.8167-0.8201); (2) AUC = 0.8188 (0.8171-0.8205); (3) AUC = 0.8123 (0.8107-0.8140) |  |  |  |  |  |  |  |  |
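
Table 3 places the threshold-free AUC alongside threshold-dependent metrics that are all derived from the same 2 × 2 confusion matrix, which is why sensitivity coincides with the recall rate and precision with the PPV. The short sketch below uses made-up predictions and an assumed 0.5 probability cutoff, not any study's data, to show how each reported quantity is computed.

```python
# Minimal sketch of how the Table 3 metrics follow from one confusion matrix
# (synthetic predictions; the 0.5 threshold is an illustrative assumption).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0])
y_prob = np.array([0.10, 0.30, 0.20, 0.40, 0.80, 0.60,
                   0.70, 0.90, 0.20, 0.40, 0.85, 0.15])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)                 # identical to the recall rate
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)                 # identical to the PPV
npv         = tn / (tn + fn)
f1          = 2 * precision * sensitivity / (precision + sensitivity)
accuracy    = (tp + tn) / (tp + tn + fp + fn)
auc         = roc_auc_score(y_true, y_prob)  # threshold-free discrimination

print(f"Sens {sensitivity:.2f}  Spec {specificity:.2f}  PPV {precision:.2f}  "
      f"NPV {npv:.2f}  F1 {f1:.2f}  Acc {accuracy:.2f}  AUC {auc:.2f}")
```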
Table 4 Risk of bias and applicability assessment
| Ref. | Risk of bias: Participants | Risk of bias: Predictors | Risk of bias: Outcome | Risk of bias: Analysis | Applicability: Participants | Applicability: Predictors | Applicability: Outcome | Overall risk of bias | Overall applicability |
|---|---|---|---|---|---|---|---|---|---|
| Ning et al[11] | + | - | + | - | - | + | + | - | - |
| Metsker et al[22] | - | - | + | - | + | + | + | - | + |
| Wu et al[17] | + | + | + | - | + | + | + | - | + |
| Fan et al[21] | - | - | + | - | + | + | + | - | + |
| Zhang et al[18] | + | - | - | - | + | + | + | - | + |
| Li et al[10] | - | - | + | - | + | + | + | - | + |
| Tian et al[20] | - | ? | - | - | + | - | - | - | - |
| Li et al[15] | - | - | + | - | + | + | + | - | + |
| Lian et al[16] | - | + | + | - | + | + | + | - | + |
| Liu et al[19] | - | - | - | - | + | + | + | - | + |
| Wang et al[12] | + | + | + | - | + | + | + | - | + |
| Zhang et al[13] | + | - | + | - | - | + | + | - | - |
| Gelaw et al[14] | + | + | + | - | + | + | + | - | + |
| Baskozos et al[23] | - | - | + | - | + | + | + | - | + |
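
The two overall columns in Table 4 follow from the four risk-of-bias domains and the three applicability domains, assuming the conventional PROBAST coding in which "+" marks a low-risk/low-concern judgement, "-" a high-risk/high-concern judgement, and "?" an unclear one: any high domain makes the overall judgement high, an unclear domain with no high domain makes it unclear, and otherwise it is low. A minimal sketch of that aggregation rule (an assumption here, since the footnote defining the symbols is not part of the extracted table):

```python
# Sketch of the standard PROBAST-style aggregation assumed for the "Overall"
# columns of Table 4: "-" (high) dominates, then "?" (unclear), else "+" (low).
def overall_judgement(domains: list[str]) -> str:
    if "-" in domains:
        return "-"
    if "?" in domains:
        return "?"
    return "+"

# Example: the four risk-of-bias domain ratings reported for Wu et al[17]
# aggregate to "-", matching the overall risk-of-bias column in Table 4.
print(overall_judgement(["+", "+", "+", "-"]))
```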