Basic Study
Copyright ©The Author(s) 2022. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Exp Med. Mar 20, 2022; 12(2): 16-25
Published online Mar 20, 2022. doi: 10.5493/wjem.v12.i2.16
Machine learning algorithm using publicly available echo database for simplified “visual estimation” of left ventricular ejection fraction
Michael Blaivas, Laura Blaivas
Michael Blaivas, Department of Medicine, University of South Carolina School of Medicine, Roswell, GA 30076, United States
Laura Blaivas, Department of Environmental Science, Michigan State University, Roswell, Georgia 30076, United States
Author contributions: Blaivas M contributed ultrasound data; Blaivas M and Blaivas L designed the research, sorted and cleaned the ultrasound data, designed the deep learning architecture, trained the algorithm, performed the statistical analysis using Python scripts, and wrote the manuscript; Blaivas L performed the coding in the Python computer language.
Institutional review board statement: Completed, see previously uploaded document.
Conflict-of-interest statement: Blaivas M consults for Anavasi Diagnostics, EthosMedical, HERO Medical and Sonosim.
Data sharing statement: Data was acquired from a public database following approval of application and is available to researchers from the source.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Michael Blaivas, MD, Attending Doctor, Professor, Department of Medicine, University of South Carolina School of Medicine, PO Box 769209, Roswell, GA 30076, United States. mike@blaivas.org
Received: October 11, 2021
Peer-review started: October 11, 2021
First decision: December 9, 2021
Revised: December 14, 2021
Accepted: March 6, 2022
Article in press: March 6, 2022
Published online: March 20, 2022
Processing time: 155 Days and 14.9 Hours
ARTICLE HIGHLIGHTS
Research background

Deep learning has been explored in medical ultrasound image analysis for several years, and some applications have focused on evaluation of cardiac function. To date, most academic research and commercial deep learning ventures to automate left ventricular ejection fraction calculation have produced highly complex, image quality-dependent algorithms that require multiple views from the apical window. Research into alternative approaches has been limited.

Research motivation

To explore a deep learning approach that models visual ejection fraction estimation, emulating the approach taken by highly skilled echocardiographers with decades of experience. Such an approach could potentially work with less than ideal images and be less computationally burdensome, both ideal for point-of-care ultrasound applications, where experts are unlikely to be present.

Research objectives

To develop a deep learning algorithm capable of visual estimation of left ventricular ejection fraction.

Research methods

A long short-term memory (LSTM) architecture with bidirectional capability, built on a VGG16 convolutional neural network, was employed for video analysis of cardiac function. The algorithm was trained on a publicly available echo database with ejection fraction calculations made at a comprehensive echocardiography laboratory. After training, the algorithm was tested on a data subset specifically set aside prior to training.
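Because the methods describe only the high-level architecture, the following is a minimal, hedged sketch of one way such a model could be assembled in Python with TensorFlow/Keras: a frozen VGG16 serves as a per-frame feature extractor and a bidirectional LSTM regresses ejection fraction from the frame sequence. The clip length, frame size, layer widths, and training settings are illustrative assumptions, not the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative assumptions: 30-frame clips of 112 x 112 RGB frames.
FRAMES, HEIGHT, WIDTH, CHANNELS = 30, 112, 112, 3

# Pretrained VGG16 without its classifier head, used as a per-frame feature extractor;
# frames are assumed to be preprocessed with tf.keras.applications.vgg16.preprocess_input.
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  pooling="avg", input_shape=(HEIGHT, WIDTH, CHANNELS))
vgg.trainable = False  # freeze the extractor for an initial training phase (assumption)

video_in = layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS))
frame_features = layers.TimeDistributed(vgg)(video_in)             # one feature vector per frame
temporal = layers.Bidirectional(layers.LSTM(128))(frame_features)  # forward and backward pass over the clip
ef_output = layers.Dense(1, activation="linear")(temporal)         # regressed ejection fraction (%)

model = models.Model(video_in, ef_output)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Training uses only the training split; the held-out test subset is scored afterwards.
# model.fit(train_videos, train_ef, validation_data=(val_videos, val_ef), epochs=30)
# test_predictions = model.predict(test_videos)

The bidirectional wrapper lets the recurrent layer see wall motion both forward and backward in time across the cardiac cycle, which is one plausible reading of the "capable of bidirectionality" description in the methods.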

Research results

The algorithm performed well compared with baseline data for the correlation between echocardiographer-calculated ejection fractions and gold standard measurements. It outperformed some previously published algorithms for agreement.

Research conclusions

Deep learning-based visual ejection fraction estimation is feasible and could be improved with further refinement and higher quality databases.

Research perspectives

Further research is needed to explore the impact of higher quality training video and of a more diverse range of source ultrasound machines.