Published online Feb 27, 2024. doi: 10.4254/wjh.v16.i2.193
Peer-review started: November 2, 2023
First decision: December 1, 2023
Revised: December 27, 2023
Accepted: February 4, 2024
Article in press: February 4, 2024
Published online: February 27, 2024
Processing time: 116 days and 14.9 hours
The landscape of liver transplant (LT) candidates has evolved: the population is older and carries more comorbidities, often in association with metabolic-associated fatty liver disease (MAFLD). The rise of MAFLD as a cause of cirrhosis raises concern about a corresponding increase in major adverse cardiovascular events (MACE) after LT, a critical complication that worsens prognosis. This study is prompted by the growing incidence of post-LT MACE, particularly within the first 6 months, and by the complex interplay of traditional and nontraditional cardiovascular risk factors in this vulnerable population. The shift in prevalence toward MAFLD as a leading indication for LT necessitates thorough pre-LT cardiac assessment and prompts reconsideration of the reliability of existing noninvasive strategies. The need for an approach that predicts post-LT MACE more accurately motivates the exploration of machine learning as a tool to overcome the limitations of conventional models.
This research is motivated by the limitations of current cardiovascular risk stratification models for LT candidates, especially those with end-stage liver disease. Traditional models assume linear relationships and rely on a limited set of variables, which can yield unreliable predictions. The inadequacy of existing noninvasive strategies and the absence of models that accurately stratify cardiovascular risk in LT candidates underscore the need for a different approach. The study therefore introduces machine learning as an innovative and potentially more effective alternative, leveraging its capacity to discern intricate patterns and relationships within datasets. The ultimate goal is to improve risk prediction so that clinicians can identify high-risk individuals with precision and optimize patient care strategies.
The primary objective of this study is to assess the feasibility and accuracy of a machine learning model for predicting MACE after LT. Focusing on a specific regional cohort, the study aims to move risk assessment beyond the limitations of conventional statistical models. To this end, it examines whether machine learning techniques can forecast post-LT MACE with greater precision, providing a comprehensive evaluation of predictive performance and enabling early identification of individuals at elevated risk. The ultimate significance lies in facilitating early intervention strategies and refining patient care in the evolving landscape of LT candidates.
This retrospective cohort study, approved by the Research Ethics Committee, examines cardiovascular risk after LT. Medical records from Irmandade Santa Casa de Misericórdia de Porto Alegre were reviewed for patients who underwent their first LT for cirrhosis between 2001 and 2011. Inclusion and exclusion criteria required patients aged 18 years or older with complete records, pre-LT cardiac evaluation, and no retransplantation. Data covered the pre-LT, perioperative, and post-LT periods, with in-hospital MACE as the primary outcome. Statistical analyses, including frequencies, means, standard deviations, Pearson’s χ2 test, and linear model analysis of variance, were performed in R. For prediction, the study used the XGBoost model, which handles imbalanced datasets well. Feature engineering involved a two-step imputation process and incorporated patient demographics, medical history, and cardiac evaluations. Model training used regularization and early stopping to prevent overfitting. Hyperparameters were optimized with the Optuna package, and performance was evaluated with the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve. Calibration, model explanation through Shapley additive explanations values, and adherence to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis statement added methodological rigor, and the model was deployed on the web with the code made available for transparency and accessibility.
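As an illustration of this kind of pipeline, the sketch below is a minimal Python example, not the authors' published code: it assumes the xgboost, optuna, and scikit-learn packages, and the split sizes, search ranges, and class-weighting choice are placeholders rather than the study's actual configuration.

```python
import optuna
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier


def tune_and_evaluate(X, y, n_trials=50, seed=42):
    # Hold out a validation split for early stopping and a test split for final metrics.
    X_dev, X_test, y_dev, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    X_tr, X_val, y_tr, y_val = train_test_split(
        X_dev, y_dev, test_size=0.2, stratify=y_dev, random_state=seed)

    # Up-weight the rare positive (MACE) class to counter the class imbalance.
    pos_weight = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)

    def objective(trial):
        params = {
            "n_estimators": 1000,
            "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
            "max_depth": trial.suggest_int("max_depth", 2, 6),
            "subsample": trial.suggest_float("subsample", 0.5, 1.0),
            "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
            "reg_lambda": trial.suggest_float("reg_lambda", 1e-3, 10.0, log=True),  # L2 regularization
        }
        model = XGBClassifier(**params, scale_pos_weight=pos_weight,
                              eval_metric="aucpr", early_stopping_rounds=50,
                              random_state=seed)
        model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
        return roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)

    # Refit with the best hyperparameters and score the untouched test split.
    best = XGBClassifier(n_estimators=1000, **study.best_params,
                         scale_pos_weight=pos_weight, eval_metric="aucpr",
                         early_stopping_rounds=50, random_state=seed)
    best.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
    proba = best.predict_proba(X_test)[:, 1]
    return {"auroc": roc_auc_score(y_test, proba),
            "auprc": average_precision_score(y_test, proba)}
```

In this sketch the Optuna objective maximizes validation AUROC while early stopping on the precision-recall metric limits overfitting; any imputation of missing values would be applied before the splits shown here.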
The study involved 662 LT patients, of whom 82 were excluded based on the prespecified criteria. The final dataset comprised 537 patients, including 23 in-hospital MACE cases. The XGBoost model demonstrated substantial predictive capability, achieving an AUROC of 0.89. Precision, recall, and F1-score for the negative class were 0.89, 0.80, and 0.84, respectively. The overall incidence of MACE was 4.46%, with events comprising stroke, new-onset heart failure, severe arrhythmia, and myocardial infarction. The model was best calibrated with the isotonic method, yielding a Brier score of 0.100. Feature importance analysis identified key predictors, including a negative noninvasive cardiac stress test, use of a nonselective beta-blocker, direct bilirubin levels, blood type O, and dynamic alterations on myocardial perfusion scintigraphy. These findings contribute a valuable machine learning model for predicting post-LT MACE, offering insight into specific risk factors and improving precision in identifying at-risk individuals. Remaining challenges include variability in feature impact across patients and the need for further validation in diverse cohorts.
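For the calibration and explanation steps reported above, a companion sketch, again a hypothetical Python example assuming scikit-learn and the shap package with placeholder hyperparameters and variable names, could look like the following:

```python
import shap
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss
from xgboost import XGBClassifier


def calibrate_and_explain(X_train, y_train, X_test, y_test):
    model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                          eval_metric="logloss", random_state=42)

    # Isotonic calibration via internal cross-validation: the mapping from raw
    # scores to probabilities is learned on folds the booster did not see.
    calibrated = CalibratedClassifierCV(model, method="isotonic", cv=5)
    calibrated.fit(X_train, y_train)
    proba = calibrated.predict_proba(X_test)[:, 1]
    print(f"Brier score: {brier_score_loss(y_test, proba):.3f}")  # lower is better

    # SHAP values attribute the (uncalibrated) booster's predictions to features.
    model.fit(X_train, y_train)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test)  # ranks features by mean |SHAP value|
    return calibrated, shap_values
```

The Brier score summarizes how close the calibrated probabilities are to observed outcomes, while the SHAP summary plot ranks predictors by their average contribution, which is the kind of output that surfaced the predictors listed above.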
This study pioneers a novel approach to assessing in-hospital post-LT MACE. It introduces a machine learning-based risk stratification model that surpasses the predictive performance of existing models, achieving an AUROC of 0.89 with XGBoost. The optimized clinical model considers recipient-related factors and provides valuable insight into predicting MACE, which is crucial for addressing the leading cause of post-LT mortality. Machine learning techniques, specifically XGBoost, bring substantial improvements over traditional models and enhance the accuracy of risk stratification. The study also highlights the importance of a comprehensive pre-LT evaluation that considers a wide array of cardiovascular risk factors.
Future research should focus on refining and expanding the machine learning model’s application, considering external validation in diverse patient populations and healthcare settings. Addressing ethical implications and ensuring transparency in model application are imperative for integrating machine learning predictions into clinical practice. The study suggests the need for continued exploration into the biological significance of identified predictors, such as the intriguing correlation between blood type O and reduced MACE risk. The model’s implementation in a user-friendly MACE prediction calculator opens avenues for prospective impact assessment, counseling, shared decision-making, and risk reduction strategies in the growing landscape of LT procedures. External validation and application in various transplantation-capable centers will enhance understanding of the model’s broader utility.