Published online Mar 20, 2022. doi: 10.5493/wjem.v12.i2.16
Peer-review started: October 11, 2021
First decision: December 9, 2021
Revised: December 14, 2021
Accepted: March 6, 2022
Article in press: March 6, 2022
Published online: March 20, 2022
Processing time: 155 Days and 14.9 Hours
Deep learning has been explored in medical ultrasound image analysis for several years, and some applications have focused on the evaluation of cardiac function. To date, most academic research and commercial deep learning ventures to automate left ventricular ejection fraction calculation have produced highly complex, image-quality-dependent algorithms that require multiple views from the apical window. Research into alternative approaches has been limited.
To explore a deep learning approach to visual ejection fraction estimation, modeling the approach taken by highly skilled echocardiographers with decades of experience. Such an approach could potentially work with less-than-ideal images and be less computationally burdensome, both ideal qualities for point-of-care ultrasound applications, where experts are unlikely to be present.
To develop a deep learning algorithm capable of visual estimation of left ventricular ejection fraction.
A bidirectional long short-term memory (LSTM) network operating on features from a VGG16 convolutional neural network was employed for video analysis of cardiac function. The algorithm was trained on a publicly available echocardiography database whose ejection fraction calculations were made at a comprehensive echocardiography laboratory. After training, the algorithm was tested on a data subset set aside before training.
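A minimal PyTorch sketch of this kind of architecture, with a per-frame VGG16 feature extractor feeding a bidirectional LSTM regression head, is shown below. The hidden size, frame count, input resolution, and scalar regression output are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class EFEstimator(nn.Module):
    """Per-frame VGG16 features pooled and passed to a bidirectional LSTM.

    Layer sizes and the single-value regression head are assumptions
    chosen for illustration, not the authors' exact design.
    """
    def __init__(self, hidden_size=256):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.features = vgg.features            # convolutional backbone only
        self.pool = nn.AdaptiveAvgPool2d(1)     # -> (B*T, 512, 1, 1)
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)  # regress EF as one scalar

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        x = self.features(clip.flatten(0, 1))   # fold time into the batch dim
        x = self.pool(x).flatten(1).view(b, t, 512)
        out, _ = self.lstm(x)                   # (B, T, 2 * hidden_size)
        return self.head(out[:, -1])            # one EF estimate per clip

# Example: a batch of two hypothetical 16-frame clips at 112x112 resolution
model = EFEstimator()
ef = model(torch.randn(2, 16, 3, 112, 112))
print(ef.shape)  # torch.Size([2, 1])
```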
The algorithm's agreement with gold-standard ejection fraction measurements compared favorably with baseline correlations between echocardiographers' calculations and those gold standards, and it outperformed some previously published algorithms for agreement.
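As an illustration of how such agreement can be quantified, the sketch below computes the Pearson correlation, mean bias, and mean absolute error between predicted and reference ejection fractions. The arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical held-out test values (percent EF); the study used a
# pre-specified test split of the public echocardiography database.
predicted = np.array([55.2, 38.7, 62.1, 45.0, 58.3])
reference = np.array([57.0, 40.1, 60.5, 43.8, 59.9])

r, p = stats.pearsonr(predicted, reference)    # correlation with gold standard
bias = np.mean(predicted - reference)          # mean difference (Bland-Altman bias)
mae = np.mean(np.abs(predicted - reference))   # mean absolute error

print(f"Pearson r={r:.3f} (p={p:.3g}), bias={bias:.2f}, MAE={mae:.2f}")
```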
Deep learning-based visual ejection fraction estimation is feasible and could be improved with further refinement and higher-quality training databases.
Further research is needed to explore the impact of higher-quality training video and of data drawn from a more diverse range of ultrasound machines.