1
Chachadi K, Nirmala SR, Netrakar PG. Automated Coronary Artery Segmentation with 3D PSPNET using Global Processing and Patch Based Methods on CCTA Images. Cardiovasc Eng Technol 2025. doi: 10.1007/s13239-025-00775-0. PMID: 39979546.
Abstract
Coronary artery disease (CAD) has become a leading cause of death worldwide in recent years. Accurate segmentation of the coronary arteries is important in the clinical diagnosis and treatment of CAD, including stenosis detection and plaque analysis. Deep learning techniques have been shown to assist medical experts in diagnosing diseases from biomedical images, and many methods employ 2D deep learning models for medical image segmentation. The 2D Pyramid Scene Parsing Network (PSPNet) has potential in this domain but has not been explored for segmenting coronary arteries from 3D Coronary Computed Tomography Angiography (CCTA) images. The contribution of the present work is to extend the 2D PSPNet to a 3D PSPNet for segmenting the coronary arteries from 3D CCTA images, and to evaluate the network under both global processing and patch-based processing. The experiments achieved a Dice Similarity Coefficient (DSC) of 0.76 for the global processing method and 0.73 for the patch-based method on a subset of 200 images from the ImageCAS dataset.
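The Dice Similarity Coefficient reported above is a standard overlap metric between a predicted 3D mask and a ground-truth mask. A minimal NumPy sketch of how it is typically computed (array shapes and names here are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary 3D masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Example: a prediction identical to the reference gives DSC = 1.0
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:40, 20:40, 20:40] = True
print(dice_coefficient(mask, mask))  # -> 1.0
```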
Affiliation(s)
- S R Nirmala
- KLE Technological University, Hubballi, Karnataka, India
2
Wang X, Wu Z, Zhou Y, Shu H, Coatrieux JL, Feng Q, Chen Y. Topology-oriented foreground focusing network for semi-supervised coronary artery segmentation. Med Image Anal 2025;101:103465. doi: 10.1016/j.media.2025.103465. PMID: 39978013.
Abstract
Automatic coronary artery (CA) segmentation on coronary-computed tomography angiography (CCTA) images is critical for coronary-related disease diagnosis and pre-operative planning. However, such segmentation remains a challenging task due to the difficulty in maintaining the topological consistency of CA, interference from irrelevant tubular structures, and insufficient labeled data. In this study, we propose a novel semi-supervised topology-oriented foreground focusing network (TOFF-Net) to comprehensively address such challenges. Specifically, we first propose an explicit vascular connectivity preservation (VCP) loss to capture the topological information and effectively strengthen vascular connectivity. Then, we propose an irrelevant vessels removal (IVR) module, which aims to integrate local CA details and global CA distribution, thereby eliminating interference of irrelevant vessels. Moreover, we propose a foreground label migration and focusing (FLMF) module with Pioneer-Imitator learning as a semi-supervised strategy to exploit the unlabeled data. The FLMF can effectively guide the attention of TOFF-Net to the foreground. Extensive results on our in-house dataset and two public datasets demonstrate that our TOFF-Net achieves state-of-the-art CA segmentation performance with high topological consistency and few false-positive irrelevant tubular structures. The results also reveal that our TOFF-Net presents considerable potential for parsing other types of vessels.
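As a loose illustration of suppressing irrelevant tubular structures, a common generic post-processing step is to keep only the largest connected components of a binary prediction. The sketch below shows that idea only and is not the IVR module described above; keeping two components (for the left and right coronary trees) is an assumption:

```python
import numpy as np
from scipy import ndimage

def keep_largest_components(mask: np.ndarray, n_keep: int = 2) -> np.ndarray:
    """Retain the n_keep largest 26-connected components of a 3D binary mask."""
    labeled, n = ndimage.label(mask, structure=np.ones((3, 3, 3)))
    if n == 0:
        return mask.astype(bool)
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))  # voxels per component
    keep_labels = np.argsort(sizes)[::-1][:n_keep] + 1         # labels start at 1
    return np.isin(labeled, keep_labels)
```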
Affiliation(s)
- Xiangxin Wang
- School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, 210096, China; Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing (Southeast University), Nanjing, 210096, China
- Zhan Wu
- School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, 210096, China; Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing (Southeast University), Nanjing, 210096, China
- Yujia Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China.
- Huazhong Shu
- School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, 210096, China; Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing (Southeast University), Nanjing, 210096, China
- Jean-Louis Coatrieux
- Laboratoire Traitement du Signal et de l'Image, Université de Rennes 1, Rennes, France
- Qianjin Feng
- School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China; School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China.
- Yang Chen
- School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, 210096, China; Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing (Southeast University), Nanjing, 210096, China.
3
Wang J, Chen Q, Jiang X, Zhang Z, Tang Z. Segmentation of coronary artery and calcification using prior knowledge based deep learning framework. Med Phys 2025. doi: 10.1002/mp.17642. PMID: 39878608.
Abstract
BACKGROUND Computed tomography angiography (CTA) is used to screen for coronary artery calcification. As the coronary artery has a complicated structure and a tiny lumen, manual screening is a time-consuming task. Recently, many deep learning methods have been proposed for the segmentation (SEG) of coronary artery and calcification; however, they often neglect to leverage related anatomical prior knowledge, resulting in low accuracy and instability. PURPOSE This study aims to build a deep learning based SEG framework, which leverages anatomical prior knowledge of coronary artery and calcification, to improve the SEG accuracy. Moreover, based on the SEG results, this study also tries to reveal the predictive ability of the volume ratio of coronary artery and calcification for rotational atherectomy (RA). METHODS We present a new SEG framework, which is composed of four modules: the variational autoencoder based centerline extraction (CE) module, the self-attention (SA) module, the logic operation (LO) module, and the SEG module. Specifically, the CE module is used to crop a series of 3D CTA patches along the coronary artery, from which the continuous property of vessels can be utilized by the SA module to produce vessel-related features. According to the spatial relations between coronary artery lumen and calcification regions, the LO module with logic union and intersection is designed to refine the vessel-related features into lumen- and calcification-related features, based on which SEG results can be produced by the SEG module. RESULTS Experimental results demonstrate that our framework outperforms the state-of-the-art methods on a CTA image dataset of 72 patients with statistical significance. Ablation experiments confirm that the proposed modules have positive impacts on the SEG results. Moreover, based on the volume ratio of segmented coronary artery and calcification, the prediction accuracy of RA is 0.75. CONCLUSIONS Integrating anatomical prior knowledge of coronary artery and calcification into the deep learning based SEG framework can effectively enhance the performance. Moreover, the volume ratio of segmented coronary artery and calcification is a good predictive factor for RA.
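For the volume-ratio analysis described in the results, the ratio can be computed directly from the segmented masks and the voxel spacing; the spacing values and the decision threshold below are purely illustrative placeholders, not values from the study:

```python
import numpy as np

def volume_ml(mask: np.ndarray, spacing_mm=(0.5, 0.35, 0.35)) -> float:
    """Volume of a binary mask in millilitres, given voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def calcification_to_lumen_ratio(calc_mask, lumen_mask, spacing_mm=(0.5, 0.35, 0.35)) -> float:
    return volume_ml(calc_mask, spacing_mm) / max(volume_ml(lumen_mask, spacing_mm), 1e-6)

def predict_ra(calc_mask, lumen_mask, threshold: float = 0.1) -> bool:
    # Hypothetical decision rule: flag a case for rotational atherectomy when the
    # ratio exceeds a threshold chosen on a validation set (0.1 is a placeholder).
    return calcification_to_lumen_ratio(calc_mask, lumen_mask) > threshold
```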
Affiliation(s)
- Jinda Wang
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, China
- Qian Chen
- College of Future Technology, Peking University, Beijing, China
- Xingyu Jiang
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Zeyu Zhang
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, China
- Zhenyu Tang
- School of Computer Science and Engineering, Beihang University, Beijing, China
4
Srivastava V, Kumar R, Wani MY, Robinson K, Ahmad A. Role of artificial intelligence in early diagnosis and treatment of infectious diseases. Infect Dis (Lond) 2025;57:1-26. doi: 10.1080/23744235.2024.2425712. PMID: 39540872.
Abstract
Infectious diseases remain a global health challenge, necessitating innovative approaches for their early diagnosis and effective treatment. Artificial Intelligence (AI) has emerged as a transformative force in healthcare, offering promising solutions to address this challenge. This review article provides a comprehensive overview of the pivotal role AI can play in the early diagnosis and treatment of infectious diseases. It explores how AI-driven diagnostic tools, including machine learning algorithms, deep learning, and image recognition systems, enhance the accuracy and efficiency of disease detection and surveillance. Furthermore, it delves into the potential of AI to predict disease outbreaks, optimise treatment strategies, and personalise interventions based on individual patient data, and how AI can be used to accelerate the drug discovery and development (D3) process. The ethical considerations, challenges, and limitations associated with the integration of AI in infectious disease management are also examined. By harnessing the capabilities of AI, healthcare systems can significantly improve their preparedness, responsiveness, and outcomes in the battle against infectious diseases.
Affiliation(s)
- Vartika Srivastava
- Department of Clinical Microbiology and Infectious Diseases, School of Pathology, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Department of Inflammation and Immunity, Lerner Research Institute, Cleveland Clinic, Cleveland, Ohio, USA
- Ravinder Kumar
- Department of Pathology, College of Medicine, University of Tennessee Health Science Center, Memphis, Tennessee, USA
- Mohmmad Younus Wani
- Department of Chemistry, College of Science, University of Jeddah, Jeddah, Saudi Arabia
- Keven Robinson
- Division of Pulmonary, Allergy, Critical Care, and Sleep Medicine, Department of Medicine, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Aijaz Ahmad
- Department of Clinical Microbiology and Infectious Diseases, School of Pathology, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Division of Pulmonary, Allergy, Critical Care, and Sleep Medicine, Department of Medicine, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
5
Wen C, Li B, Yang Y, Feng Y, Liu J, Zhang L, Zhang Y, Li N, Liu J, Wang L, Zhang M, Liu Y. WITHDRAWN: Coronary artery segmentation based on ACMA-Net and unscented Kalman filter algorithm. Comput Biol Med 2024:108615. doi: 10.1016/j.compbiomed.2024.108615. PMID: 38910075.
Abstract
This article has been withdrawn at the request of the author(s) and/or editor. The Publisher apologizes for any inconvenience this may cause. The full Elsevier Policy on Article Withdrawal can be found at https://www.elsevier.com/about/policies/article-withdrawal.
Affiliation(s)
- Chuanqi Wen
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China.
- Bao Li
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China
- Yang Yang
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China
- Yili Feng
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China
- Jincheng Liu
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China
- Liyuan Zhang
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China
- Yanping Zhang
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China
- Na Li
- Shandong First Medical University & Shandong Academy of Medical Sciences, Taian, 271016, China
- Jian Liu
- Department of Cardiology, Peking University People's Hospital, Beijing, 100444, China
- Lihua Wang
- Department of Radiology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang, 310003, China
- Mingzi Zhang
- Department of Biomedical Sciences, Macquarie Medical School, Macquarie University, Sydney, Australia
- Youjun Liu
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China
6
Fang J, Xing A, Chen Y, Zhou F. SeqCorr-EUNet: A sequence correction dual-flow network for segmentation and quantification of anterior segment OCT image. Comput Biol Med 2024;171:108143. doi: 10.1016/j.compbiomed.2024.108143. PMID: 38364662.
Abstract
The accurate segmentation of AS-OCT images is a prerequisite for the morphological details analysis of anterior segment structure and the extraction of clinical biological parameters, which play an essential role in the diagnosis, evaluation, and preoperative prognosis management of many ophthalmic diseases. Manually marking the boundaries of the anterior segment tissue is time-consuming and error-prone, with inherent speckle noise, various artifacts, and some low-quality scanned images further increasing the difficulty of the segmentation task. In this work, we propose a novel model called SeqCorr-EUNet with a dual-flow architecture based on convolutional gated recursive sequence correction for semantic segmentation and quantification of AS-OCT images. An EfficientNet encoder is employed to enhance the intra-slice features extraction ability of semantic segmentation flow. The sequence correction flow based on ConvGRU is introduced to extract inter-slice features from consecutive adjacent slices. Spatio-temporal information is fused to correct the morphological details of pre-segmentation results. And the channel attention gate is inserted into the skip-connection between encoder and decoder to enrich the contextual information and suppress the noise of irrelevant regions. Based on the segmentation results of the anterior segment structures, we achieved automatic extraction of essential clinical parameters, 3D reconstruction of the anterior chamber structure, and measurement of anterior chamber volume. The proposed SeqCorr-EUNet has been evaluated on the public AS-OCT dataset. The experimental results show that our method is competitive compared with the existing methods and significantly improves the segmentation and quantification performance of low-quality imaging structures in AS-OCT images.
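The channel attention gate mentioned above can be illustrated with a generic squeeze-and-excitation style block; this is a sketch of the general mechanism, not the authors' exact design:

```python
import torch
import torch.nn as nn

class ChannelAttentionGate(nn.Module):
    """Squeeze-and-excitation style gate: reweight skip-connection channels."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # per-channel weights in [0, 1]
        return x * w                                            # gate the skip features

# x = torch.randn(2, 64, 128, 128); ChannelAttentionGate(64)(x).shape == x.shape
```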
Affiliation(s)
- Jing Fang
- School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui, China.
- Aoyu Xing
- School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui, China.
- Ying Chen
- Department of Ophthalmology, Hospital of University of Science and Technology of China, Hefei, 230026, Anhui, China.
- Fang Zhou
- School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui, China.
7
Zhang X, Sun K, Wu D, Xiong X, Liu J, Yao L, Li S, Wang Y, Feng J, Shen D. An Anatomy- and Topology-Preserving Framework for Coronary Artery Segmentation. IEEE Trans Med Imaging 2024;43:723-733. doi: 10.1109/tmi.2023.3319720. PMID: 37756173.
Abstract
Coronary artery segmentation is critical for coronary artery disease diagnosis but challenging due to its tortuous course with numerous small branches and inter-subject variations. Most existing studies ignore important anatomical information and vascular topologies, leading to less desirable segmentation performance that usually cannot satisfy clinical demands. To deal with these challenges, in this paper we propose an anatomy- and topology-preserving two-stage framework for coronary artery segmentation. The proposed framework consists of an anatomical dependency encoding (ADE) module and a hierarchical topology learning (HTL) module for coarse-to-fine segmentation, respectively. Specifically, the ADE module segments four heart chambers and aorta, and thus five distance field maps are obtained to encode distance between chamber surfaces and coarsely segmented coronary artery. Meanwhile, ADE also performs coronary artery detection to crop region-of-interest and eliminate foreground-background imbalance. The follow-up HTL module performs fine segmentation by exploiting three hierarchical vascular topologies, i.e., key points, centerlines, and neighbor connectivity using a multi-task learning scheme. In addition, we adopt a bottom-up attention interaction (BAI) module to integrate the feature representations extracted across hierarchical topologies. Extensive experiments on public and in-house datasets show that the proposed framework achieves state-of-the-art performance for coronary artery segmentation.
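The distance field maps used by the ADE module can, in general, be derived from binary chamber masks with a Euclidean distance transform; a minimal sketch (the voxel spacing is an assumed placeholder):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_field(chamber_mask: np.ndarray, spacing=(0.5, 0.35, 0.35)) -> np.ndarray:
    """Distance (mm) from every voxel to the nearest voxel of the chamber mask."""
    # distance_transform_edt measures the distance to the nearest zero voxel,
    # so pass the complement of the mask to get distance-to-chamber.
    return distance_transform_edt(~chamber_mask.astype(bool), sampling=spacing)
```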
8
Wang W, Xia Q, Yan Z, Hu Z, Chen Y, Zheng W, Wang X, Nie S, Metaxas D, Zhang S. AVDNet: Joint coronary artery and vein segmentation with topological consistency. Med Image Anal 2024;91:102999. doi: 10.1016/j.media.2023.102999. PMID: 37862866.
Abstract
Coronary CT angiography (CCTA) is an effective and non-invasive method for coronary artery disease diagnosis. Extracting an accurate coronary artery tree from a CCTA image is essential for centerline extraction, plaque detection, and stenosis quantification. In practice, data quality varies. Sometimes the arteries and veins have similar intensities and lie close together, which may confuse segmentation algorithms, even deep learning based ones, and prevent them from obtaining accurate arteries. However, it is not always feasible to re-scan the patient for better image quality. In this paper, we propose an artery and vein disentanglement network (AVDNet) for robust and accurate segmentation by incorporating the coronary vein into the segmentation task. This is the first work to segment the coronary artery and vein at the same time. The AVDNet consists of an image based vessel recognition network (IVRN) and a topology based vessel refinement network (TVRN). IVRN learns to segment the arteries and veins, while TVRN learns to correct the segmentation errors based on topology consistency. We also design a novel inverse distance weighted dice (IDD) loss function to recover more thin vessel branches and preserve the vascular boundaries. Extensive experiments are conducted on a multi-center dataset of 700 patients. Quantitative and qualitative results demonstrate the effectiveness of the proposed method by comparing it with state-of-the-art methods and different variants. Prediction results of the AVDNet on the Automated Segmentation of Coronary Artery Challenge dataset are available at https://github.com/WennyJJ/Coronary-Artery-Vein-Segmentation for follow-up research.
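A distance-weighted soft Dice loss of the general kind described (weighting thin branches and boundaries more heavily) can be sketched as follows; this is a generic formulation, not the exact IDD loss defined in the paper:

```python
import torch

def weighted_soft_dice_loss(prob: torch.Tensor, target: torch.Tensor,
                            weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """prob, target, weight: tensors of shape (B, 1, D, H, W).
    weight can be an inverse-distance map that is large near boundaries and thin branches."""
    inter = (weight * prob * target).sum(dim=(1, 2, 3, 4))
    denom = (weight * prob).sum(dim=(1, 2, 3, 4)) + (weight * target).sum(dim=(1, 2, 3, 4))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()
```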
Affiliation(s)
- Wenji Wang
- SenseTime Research, Beijing, 100080, China.
- Qing Xia
- SenseTime Research, Beijing, 100080, China.
- Yinan Chen
- SenseTime Research, Beijing, 100080, China
- Wen Zheng
- Center for Coronary Artery Disease, Beijing Anzhen Hospital, Capital Medical University, Beijing, 100029, China
- Xiao Wang
- Center for Coronary Artery Disease, Beijing Anzhen Hospital, Capital Medical University, Beijing, 100029, China
- Shaoping Nie
- Center for Coronary Artery Disease, Beijing Anzhen Hospital, Capital Medical University, Beijing, 100029, China
- Dimitris Metaxas
- Department of Computer Science, Rutgers University, NJ, 08854, USA
- Shaoting Zhang
- SenseTime Research, Beijing, 100080, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200032, China
9
Xu B, Yang J, Hong P, Fan X, Sun Y, Zhang L, Yang B, Xu L, Avolio A. Coronary artery segmentation in CCTA images based on multi-scale feature learning. J Xray Sci Technol 2024;32:973-991. doi: 10.3233/xst-240093. PMID: 38943423.
Abstract
BACKGROUND Coronary artery segmentation is a prerequisite in computer-aided diagnosis of Coronary Artery Disease (CAD). However, segmentation of coronary arteries in Coronary Computed Tomography Angiography (CCTA) images faces several challenges. The current segmentation approaches are unable to effectively address these challenges and existing problems such as the need for manual interaction or low segmentation accuracy. OBJECTIVE A Multi-scale Feature Learning and Rectification (MFLR) network is proposed to tackle the challenges and achieve automatic and accurate segmentation of coronary arteries. METHODS The MFLR network introduces a multi-scale feature extraction module in the encoder to effectively capture contextual information under different receptive fields. In the decoder, a feature correction and fusion module is proposed, which employs high-level features containing multi-scale information to correct and guide low-level features, achieving fusion between the two-level features to further improve segmentation performance. RESULTS The MFLR network achieved the best performance on the dice similarity coefficient, Jaccard index, Recall, F1-score, and 95% Hausdorff distance, for both in-house and public datasets. CONCLUSION Experimental results demonstrate the superiority and good generalization ability of the MFLR approach. This study contributes to the accurate diagnosis and treatment of CAD, and it also informs other segmentation applications in medicine.
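Capturing context under different receptive fields, as the multi-scale feature extraction module aims to do, is commonly implemented with parallel dilated convolutions; a generic sketch (not the exact MFLR design):

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 3D convolutions with different dilation rates, fused by a 1x1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv3d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees a different receptive field; concatenate and fuse.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```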
Affiliation(s)
- Bu Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Jinzhong Yang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Peng Hong
- Software College, Northeastern University, Shenyang, China
- Xiaoxue Fan
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yu Sun
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Department of Radiology, General Hospital of North Theater Command, Shenyang, China
- Libo Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Department of Radiology, General Hospital of North Theater Command, Shenyang, China
- Benqiang Yang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Department of Radiology, General Hospital of North Theater Command, Shenyang, China
- Lisheng Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, China
- Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Shenyang, China
- Alberto Avolio
- Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
10
Wu W, Xie H, Zhang S, Gu L. Exhaustive matching of 3D/2D coronary artery structure based on imperfect segmentations. Int J Comput Assist Radiol Surg 2024;19:109-117. doi: 10.1007/s11548-023-02933-y. PMID: 37330451.
Abstract
PURPOSE The 3D/2D coronary artery registration technique has been developed for the guidance of percutaneous coronary intervention. It introduces the absent 3D structural information by fusing the pre-operative computed tomography angiography (CTA) volume with the intra-operative X-ray coronary angiography (XCA) image. To conduct the registration, an accurate matching of the coronary artery structures extracted from the two imaging modalities is an essential step. METHODS In this study, we propose an exhaustive matching algorithm to solve this problem. First, by recognizing the fake bifurcations in the XCA image caused by projection and concatenating the fractured centerline fragments, the original XCA topological structure is restored. Then, the vessel segments in the two imaging modalities are removed in order, which generates all the potential structures to simulate the imperfect segmentation results. Finally, the CTA and XCA structures are compared pairwise, and the matching result is obtained by searching for the structure pair with the minimum similarity score. RESULTS The experiments were conducted on a clinical dataset collected from 46 patients and comprising 240 CTA/XCA data pairs. The results show that the proposed method is very effective, achieving an accuracy of 0.960 for recognizing the fake bifurcations in the XCA image and an accuracy of 0.896 for matching the CTA/XCA vascular structures. CONCLUSION The proposed exhaustive structure matching algorithm is simple and straightforward, without any impractical assumptions or time-consuming computations. With this method, the influence of imperfect segmentations is eliminated and accurate matching can be achieved efficiently. This lays a good foundation for the subsequent 3D/2D coronary artery registration task.
Affiliation(s)
- Wei Wu
- Department of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Hongzhi Xie
- Department of Cardiology, Peking Union Medical College Hospital, Peking, 100005, China.
- Shuyang Zhang
- Department of Cardiology, Peking Union Medical College Hospital, Peking, 100005, China
- Lixu Gu
- Department of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China.
11
Zhang M, Wu Y, Zhang H, Qin Y, Zheng H, Tang W, Arnold C, Pei C, Yu P, Nan Y, Yang G, Walsh S, Marshall DC, Komorowski M, Wang P, Guo D, Jin D, Wu Y, Zhao S, Chang R, Zhang B, Lu X, Qayyum A, Mazher M, Su Q, Wu Y, Liu Y, Zhu Y, Yang J, Pakzad A, Rangelov B, Estepar RSJ, Espinosa CC, Sun J, Yang GZ, Gu Y. Multi-site, Multi-domain Airway Tree Modeling. Med Image Anal 2023;90:102957. doi: 10.1016/j.media.2023.102957. PMID: 37716199.
Abstract
Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have extended the reach of pulmonary airway segmentation closer to the limit of image resolution. Since the EXACT'09 pulmonary airway segmentation challenge, limited effort has been directed to the quantitative comparison of newly emerged algorithms driven by the maturity of deep learning based approaches and extensive clinical efforts for resolving finer details of distal airways for early intervention of pulmonary diseases. Thus far, publicly available annotated datasets are extremely limited, hindering the development of data-driven methods and detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling challenge (ATM'22), which was held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotation, including 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and further includes a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Both quantitative and qualitative results revealed that deep learning models embedded with topological continuity enhancement achieved superior performance in general. The ATM'22 challenge follows an open-call design; the training data and the gold standard evaluation are available upon successful registration via its homepage (https://atm22.grand-challenge.org/).
Affiliation(s)
- Minghui Zhang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China; Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, 200240, China; Department of Automation, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yangqian Wu
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China; Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, 200240, China; Department of Automation, Shanghai Jiao Tong University, Shanghai, 200240, China
- Hanxiao Zhang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yulei Qin
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- Hao Zheng
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wen Tang
- InferVision Medical Technology Co., Ltd., Beijing, China
- Chenhao Pei
- InferVision Medical Technology Co., Ltd., Beijing, China
- Pengxin Yu
- InferVision Medical Technology Co., Ltd., Beijing, China
- Yang Nan
- Imperial College London, London, UK
- Puyang Wang
- Alibaba DAMO Academy, 969 West Wen Yi Road, Hangzhou, Zhejiang, China
- Dazhou Guo
- Alibaba DAMO Academy USA, 860 Washington Street, 8F, NY, USA
- Dakai Jin
- Alibaba DAMO Academy USA, 860 Washington Street, 8F, NY, USA
- Ya'nan Wu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shuiqing Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Runsheng Chang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Boyu Zhang
- A.I R&D Center, Sanmed Biotech Inc., No. 266 Tongchang Road, Xiangzhou District, Zhuhai, Guangdong, China
- Xing Lu
- A.I R&D Center, Sanmed Biotech Inc., T220 Trade st. SanDiego, CA, USA
- Abdul Qayyum
- ENIB, UMR CNRS 6285 LabSTICC, Brest, 29238, France
- Moona Mazher
- Department of Computer Engineering and Mathematics, University Rovira I Virgili, Tarragona, Spain
- Qi Su
- Shanghai Jiao Tong University, Shanghai, China
- Yonghuang Wu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Ying'ao Liu
- University of Science and Technology of China, Hefei, Anhui, China
- Jiancheng Yang
- Dianei Technology, Shanghai, China; EPFL, Lausanne, Switzerland
- Ashkan Pakzad
- Medical Physics and Biomedical Engineering Department, University College London, London, UK
- Bojidar Rangelov
- Center for Medical Image Computing, University College London, London, UK
- Jiayuan Sun
- Department of Respiratory and Critical Care Medicine, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai, China.
- Guang-Zhong Yang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China.
- Yun Gu
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China; Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, 200240, China; Department of Automation, Shanghai Jiao Tong University, Shanghai, 200240, China.
12
Lu W, Zhang X, Yan G, Ma G. The Differences of Quantitative Flow Ratio in Coronary Artery Stenosis with or without Atrial Fibrillation. J Interv Cardiol 2023;2023:7278343. doi: 10.1155/2023/7278343. PMID: 37868769; PMCID: PMC10589068.
Abstract
Quantitative flow ratio (QFR) is a new method for the assessment of the extent of coronary artery stenosis. But it may be obscured by the cardiac remodeling and abnormal blood flow of the coronary artery when encountering atrial fibrillation (AF). The present study aimed to examine the impact of these changed structures and blood flow of coronary arteries on QFR results in AF patients. Methods and Results. We evaluated QFR in 223 patients (112 patients with AF; 111 non-AF patients served as controls) who had undergone percutaneous coronary intervention (PCI) due to severe stenoses in coronary arteries. QFR of the target coronary was determined according to the flow rate of the contrast agent. Results showed that AF patients had significantly higher QFR values than control (0.792 ± 0.118 vs. 0.685 ± 0.167, p < 0.001). We further analyzed local QFR around the stenoses (0.858 ± 0.304 vs. 0.756 ± 0.146, p=0.002), residual QFR (0.958 ± 0.055 vs. 0.929 ± 0.093, p=0.005), and index QFR (0.807 ± 0.108 vs. 0.713 ± 0.152, p < 0.001) in these two groups of patients with and without AF. Further analysis revealed that QFR in AF patients was negatively correlated with coronary flow velocity (R = -0.22, p=0.02) and area of stenosis (R = -0.70, p < 0.001) but positively correlated with the minimum lumen area (MLA) (R = 0.47, p < 0.001). Conclusion. AF patients with coronary artery stenosis have higher QFR values, which are associated with decreased blood flow velocity, smaller stenosis, and larger MLA in AF patients upon cardiac remodeling.
Affiliation(s)
- Wenbin Lu
- Department of Cardiology, ZhongDa Hospital Affiliated with Southeast University, China
- Xiaoguo Zhang
- Department of Cardiology, ZhongDa Hospital Affiliated with Southeast University, China
- Gaoliang Yan
- Department of Cardiology, ZhongDa Hospital Affiliated with Southeast University, China
- Genshan Ma
- Department of Cardiology, ZhongDa Hospital Affiliated with Southeast University, China
13
Zeng A, Wu C, Lin G, Xie W, Hong J, Huang M, Zhuang J, Bi S, Pan D, Ullah N, Khan KN, Wang T, Shi Y, Li X, Xu X. ImageCAS: A large-scale dataset and benchmark for coronary artery segmentation based on computed tomography angiography images. Comput Med Imaging Graph 2023;109:102287. doi: 10.1016/j.compmedimag.2023.102287. PMID: 37634975.
Abstract
Cardiovascular disease (CVD) accounts for about half of non-communicable diseases. Vessel stenosis in the coronary artery is considered to be the major risk of CVD. Computed tomography angiography (CTA) is one of the widely used noninvasive imaging modalities in coronary artery diagnosis due to its superior image resolution. Clinically, segmentation of coronary arteries is essential for the diagnosis and quantification of coronary artery disease. Recently, a variety of works have been proposed to address this problem. However, on the one hand, most works rely on in-house datasets, and only a few have published their datasets, which contain only tens of images. On the other hand, the source code has not been published, and most follow-up works have not made comparisons with existing works, which makes it difficult to judge the effectiveness of the methods and hinders further exploration of this challenging yet critical problem in the community. In this paper, we propose a large-scale dataset for coronary artery segmentation on CTA images. In addition, we have implemented a benchmark in which we have tried our best to implement several typical existing methods. Furthermore, we propose a strong baseline method which combines multi-scale patch fusion and two-stage processing to extract the details of vessels. Comprehensive experiments show that the proposed method achieves better performance than existing works on the proposed large-scale dataset. The benchmark and the dataset are published at https://github.com/XiaoweiXu/ImageCAS-A-Large-Scale-Dataset-and-Benchmark-for-Coronary-Artery-Segmentation-based-on-CT.
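Patch-based processing of large CTA volumes, as used in the proposed baseline, typically tiles the volume with overlapping 3D windows; a minimal sketch of such tiling (patch size and stride are illustrative, not the paper's settings):

```python
import numpy as np

def extract_patches(volume: np.ndarray, patch=(128, 128, 64), stride=(64, 64, 32)):
    """Yield (start_index, patch) pairs covering a 3D volume with overlap.
    Note: this simple sketch does not pad or re-align the trailing edge of the volume."""
    D, H, W = volume.shape
    for z in range(0, max(D - patch[0], 0) + 1, stride[0]):
        for y in range(0, max(H - patch[1], 0) + 1, stride[1]):
            for x in range(0, max(W - patch[2], 0) + 1, stride[2]):
                yield (z, y, x), volume[z:z + patch[0], y:y + patch[1], x:x + patch[2]]
```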
Affiliation(s)
- An Zeng
- School of Computer Science, Guangdong University of Technology, Guangzhou, China
- Chunbiao Wu
- School of Computer Science, Guangdong University of Technology, Guangzhou, China
- Guisen Lin
- Department of Radiology, Shenzhen Children's Hospital, Shenzhen, China
- Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Jin Hong
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Shanshan Bi
- Department of Computer Science and Engineering, Missouri University of Science and Technology, Rolla, MO, United States
- Dan Pan
- Department of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Najeeb Ullah
- Department of Computer Science, University of Engineering and Technology, Mardan, KP, Pakistan
- Kaleem Nawaz Khan
- Department of Computer Science, University of Engineering and Technology, Mardan, KP, Pakistan
- Tianchen Wang
- Department of Computer Science and Engineering, University of Notre Dame, Indiana, United States
- Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Indiana, United States
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong Special Administrative Region, China
- Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China.
14
Wang Q, Xu L, Wang L, Yang X, Sun Y, Yang B, Greenwald SE. Automatic coronary artery segmentation of CCTA images using UNet with a local contextual transformer. Front Physiol 2023;14:1138257. doi: 10.3389/fphys.2023.1138257. PMID: 37675283; PMCID: PMC10478234.
Abstract
Coronary artery segmentation is an essential procedure in the computer-aided diagnosis of coronary artery disease. It aims to identify and segment the regions of interest in the coronary circulation for further processing and diagnosis. Currently, automatic segmentation of coronary arteries is often unreliable because of their small size and poor distribution of contrast medium, as well as the problems that lead to over-segmentation or omission. To improve the performance of convolutional-neural-network (CNN) based coronary artery segmentation, we propose a novel automatic method, DR-LCT-UNet, with two innovative components: the Dense Residual (DR) module and the Local Contextual Transformer (LCT) module. The DR module aims to preserve unobtrusive features through dense residual connections, while the LCT module is an improved Transformer that focuses on local contextual information, so that coronary artery-related information can be better exploited. The LCT and DR modules are effectively integrated into the skip connections and encoder-decoder of the 3D segmentation network, respectively. Experiments on our CorArtTS2020 dataset show that the dice similarity coefficient (DSC), Recall, and Precision of the proposed method reached 85.8%, 86.3% and 85.8%, respectively, outperforming 3D-UNet (taken as the reference among the 6 other chosen comparison methods), by 2.1%, 1.9%, and 2.1%.
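A dense residual block in the general spirit of the DR module (dense connections plus an identity shortcut) can be sketched as follows; this is an assumed generic construction, not necessarily the authors' exact layer configuration:

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Two densely connected 3D conv layers with an additive residual shortcut."""
    def __init__(self, channels: int, growth: int = 16):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv3d(channels, growth, 3, padding=1),
                                   nn.InstanceNorm3d(growth), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv3d(channels + growth, growth, 3, padding=1),
                                   nn.InstanceNorm3d(growth), nn.ReLU(inplace=True))
        self.project = nn.Conv3d(channels + 2 * growth, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.conv1(x)
        f2 = self.conv2(torch.cat([x, f1], dim=1))          # dense connection
        return x + self.project(torch.cat([x, f1, f2], dim=1))  # residual shortcut
```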
Affiliation(s)
- Qianjin Wang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Lisheng Xu
- College of Medicine and Biological and Information Engineering, Northeastern University, Shenyang, China
- Lu Wang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Xiaofan Yang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Yu Sun
- College of Medicine and Biological and Information Engineering, Northeastern University, Shenyang, China
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Key Laboratory of Cardiovascular Imaging and Research of Liaoning Province, Shenyang, China
- Benqiang Yang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Key Laboratory of Cardiovascular Imaging and Research of Liaoning Province, Shenyang, China
- Stephen E. Greenwald
- Blizard Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
15
Zhao G, Liang K, Pan C, Zhang F, Wu X, Hu X, Yu Y. Graph Convolution Based Cross-Network Multiscale Feature Fusion for Deep Vessel Segmentation. IEEE Trans Med Imaging 2023;42:183-195. doi: 10.1109/tmi.2022.3207093. PMID: 36112564.
Abstract
Vessel segmentation is widely used to help with vascular disease diagnosis. Vessels reconstructed using existing methods are often not sufficiently accurate to meet clinical use standards. This is because 3D vessel structures are highly complicated and exhibit unique characteristics, including sparsity and anisotropy. In this paper, we propose a novel hybrid deep neural network for vessel segmentation. Our network consists of two cascaded subnetworks performing initial and refined segmentation respectively. The second subnetwork further has two tightly coupled components, a traditional CNN-based U-Net and a graph U-Net. Cross-network multi-scale feature fusion is performed between these two U-shaped networks to effectively support high-quality vessel segmentation. The entire cascaded network can be trained from end to end. The graph in the second subnetwork is constructed according to a vessel probability map as well as appearance and semantic similarities in the original CT volume. To tackle the challenges caused by the sparsity and anisotropy of vessels, a higher percentage of graph nodes are distributed in areas that potentially contain vessels while a higher percentage of edges follow the orientation of potential nearby vessels. Extensive experiments demonstrate our deep network achieves state-of-the-art 3D vessel segmentation performance on multiple public and in-house datasets.
16
Bhatele KR, Jha A, Tiwari D, Bhatele M, Sharma S, Mithora MR, Singhal S. COVID-19 Detection: A Systematic Review of Machine and Deep Learning-Based Approaches Utilizing Chest X-Rays and CT Scans. Cognit Comput 2022;16:1-38. doi: 10.1007/s12559-022-10076-6. PMID: 36593991; PMCID: PMC9797382.
Abstract
This review study presents the state-of-the-art machine and deep learning-based COVID-19 detection approaches utilizing the chest X-rays or computed tomography (CT) scans. This study aims to systematically scrutinize as well as to discourse challenges and limitations of the existing state-of-the-art research published in this domain from March 2020 to August 2021. This study also presents a comparative analysis of the performance of four majorly used deep transfer learning (DTL) models like VGG16, VGG19, ResNet50, and DenseNet over the COVID-19 local CT scans dataset and global chest X-ray dataset. A brief illustration of the majorly used chest X-ray and CT scan datasets of COVID-19 patients utilized in state-of-the-art COVID-19 detection approaches are also presented for future research. The research databases like IEEE Xplore, PubMed, and Web of Science are searched exhaustively for carrying out this survey. For the comparison analysis, four deep transfer learning models like VGG16, VGG19, ResNet50, and DenseNet are initially fine-tuned and trained using the augmented local CT scans and global chest X-ray dataset in order to observe their performance. This review study summarizes major findings like AI technique employed, type of classification performed, used datasets, results in terms of accuracy, specificity, sensitivity, F1 score, etc., along with the limitations, and future work for COVID-19 detection in tabular manner for conciseness. The performance analysis of the four majorly used deep transfer learning models affirms that Visual Geometry Group 19 (VGG19) model delivered the best performance over both COVID-19 local CT scans dataset and global chest X-ray dataset.
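Fine-tuning a pretrained backbone for two-class chest X-ray classification, as compared across the reviewed studies, follows a standard transfer-learning recipe; a minimal torchvision sketch (the weights enum requires a recent torchvision release, and the hyperparameters are placeholders):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and replace the classifier head
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: COVID vs. Normal

# Optionally freeze the backbone and train only the new head first
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```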
Affiliation(s)
- Anand Jha
- RJIT BSF Academy, Tekanpur, Gwalior, India
17
Gharleghi R, Chen N, Sowmya A, Beier S. Towards automated coronary artery segmentation: A systematic review. Comput Methods Programs Biomed 2022;225:107015. doi: 10.1016/j.cmpb.2022.107015. PMID: 35914439.
Abstract
BACKGROUND AND OBJECTIVE Vessel segmentation is the first processing stage of 3D medical images for both clinical and research use. Current segmentation methods are tedious and time consuming, requiring significant manual correction, and are hence infeasible to use on large data sets. METHODS Here, we review and analyse available coronary artery segmentation methods, focusing on fully automated methods capable of handling the rapidly growing volume of medical images. All manuscripts published since 2010 are systematically reviewed and categorised into different groups based on the approach taken, and characteristics of the different approaches as well as trends over the past decade are explored. RESULTS The manuscripts were divided into three broad categories: region growing, voxelwise prediction, and partitioning approaches. The most common approach overall was region growing, particularly using active contour models; however, these have seen a sharp fall in popularity in recent years, with convolutional neural networks becoming significantly more popular. CONCLUSIONS The systematic review of current coronary artery segmentation methods shows interesting trends, with rising popularity of machine learning methods, a focus on efficient methods, and falling popularity of computationally expensive processing steps such as vesselness and multiplanar reformation.
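Region growing, the most common category identified in the review, can be illustrated with a minimal intensity-based flood fill from a seed voxel; this didactic sketch is far simpler than the active contour variants discussed:

```python
import numpy as np
from collections import deque

def region_grow(volume: np.ndarray, seed: tuple, tol: float = 100.0) -> np.ndarray:
    """Grow a region of 6-connected voxels whose intensity is within tol of the seed."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = float(volume[seed])
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - seed_val) <= tol:
                    mask[n] = True
                    queue.append(n)
    return mask
```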
Affiliation(s)
- Ramtin Gharleghi
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney NSW 2053, Australia.
- Nanway Chen
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney NSW 2053, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, UNSW, Sydney NSW 2053, Australia; Tyree Foundation Institute of Health Engineering (Tyree IHealthE), Sydney, Australia
- Susann Beier
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney NSW 2053, Australia
18
Emin Sahin M. Deep learning-based approach for detecting COVID-19 in chest X-rays. Biomed Signal Process Control 2022;78:103977. doi: 10.1016/j.bspc.2022.103977. PMID: 35855833; PMCID: PMC9279305.
Abstract
Today, 2019 Coronavirus (COVID-19) infections are a major health concern worldwide. Therefore, detecting COVID-19 in X-ray images is crucial for diagnosis, evaluation, and treatment. Furthermore, expressing diagnostic uncertainty in a report is a challenging but unavoidable task for radiologists. This study proposes a novel CNN (Convolutional Neural Network) model for automatic COVID-19 identification utilizing chest X-ray images. The proposed CNN model is designed to be a reliable diagnostic tool for two-class categorization (COVID and Normal). In addition to the proposed model, different architectures, including the pre-trained MobileNetv2 and ResNet50 models, are evaluated on this COVID-19 dataset (13,824 X-ray images), and our suggested model is compared with these existing COVID-19 detection algorithms in terms of accuracy. Experimental results show that our proposed model identifies patients with COVID-19 disease with 96.71 percent accuracy and a 91.89 percent F1-score. These results show that the proposed CNN outperforms the most advanced algorithms currently available. This model can assist clinicians in making informed judgments on how to diagnose COVID-19, as well as make test kits more accessible.
Affiliation(s)
- M Emin Sahin
- Department of Computer Engineering, Yozgat Bozok University, Turkey
19
Liao J, Huang L, Qu M, Chen B, Wang G. Artificial Intelligence in Coronary CT Angiography: Current Status and Future Prospects. Front Cardiovasc Med 2022;9:896366. doi: 10.3389/fcvm.2022.896366. PMID: 35783834; PMCID: PMC9247240.
Abstract
Coronary heart disease (CHD) is the leading cause of mortality in the world. Early detection and treatment of CHD are crucial. Currently, coronary CT angiography (CCTA) has been the prior choice for CHD screening and diagnosis, but it cannot meet the clinical needs in terms of examination quality, the accuracy of reporting, and the accuracy of prognosis analysis. In recent years, artificial intelligence (AI) has developed rapidly in the field of medicine; it played a key role in auxiliary diagnosis, disease mechanism analysis, and prognosis assessment, including a series of studies related to CHD. In this article, the application and research status of AI in CCTA were summarized and the prospects of this field were also described.
Affiliation(s)
- Jiahui Liao
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, China
- Lanfang Huang
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Meizi Qu
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Binghui Chen
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Guojie Wang
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
20
Algarni M, Al-Rezqi A, Saeed F, Alsaeedi A, Ghabban F. Multi-constraints based deep learning model for automated segmentation and diagnosis of coronary artery disease in X-ray angiographic images. PeerJ Comput Sci 2022;8:e993. doi: 10.7717/peerj-cs.993. PMID: 35721418; PMCID: PMC9202622.
Abstract
BACKGROUND The detection of coronary artery disease (CAD) from X-ray coronary angiography is a crucial process that is hindered by various issues, such as the presence of noise, insufficient contrast of the input images, and uncertainties caused by respiratory motion and variation in vessel angles. METHODS In this article, an Automated Segmentation and Diagnosis of Coronary Artery Disease (ASCARIS) model is proposed to overcome the prevailing challenges in detecting CAD from X-ray images. Initially, the input images were preprocessed using a modified Wiener filter to remove both internal and external noise pixels. Contrast was then enhanced using optimized maximum principal curvature to preserve edge information, thereby increasing segmentation accuracy. The enhanced images were binarized by means of Otsu thresholding. The coronary arteries were segmented with an attention-based nested U-Net, in which an attention estimator was incorporated to overcome the difficulties caused by intersecting and overlapping arteries, and angle estimation was performed to further increase segmentation accuracy. Finally, a VGG-16-based architecture was implemented to extract threefold features from the segmented image and classify the X-ray images into normal and abnormal classes. RESULTS The proposed ASCARIS model was implemented in the MATLAB R2020a simulation tool and compared with several existing approaches in terms of accuracy, sensitivity, specificity, revised contrast-to-noise ratio, mean square error, Dice coefficient, Jaccard similarity, Hausdorff distance, peak signal-to-noise ratio (PSNR), segmentation accuracy, and ROC curve. DISCUSSION The results show that the proposed model outperforms the existing approaches on all evaluation metrics, thereby achieving optimized classification of CAD. The proposed method removes a large number of background artifacts and obtains a better vascular structure.
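The binarization step mentioned above (Otsu thresholding) is the most self-contained part of this pipeline, so the sketch below shows it in NumPy only; the synthetic "angiogram" and histogram bin count are assumptions for demonstration and the rest of the ASCARIS pipeline is omitted.

```python
# NumPy-only sketch of Otsu thresholding, the binarization step applied to the
# contrast-enhanced angiogram; the synthetic image below is purely illustrative.
import numpy as np

def otsu_threshold(image: np.ndarray, nbins: int = 256) -> float:
    """Return the threshold that maximizes between-class variance of a grayscale image."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    weight1 = np.cumsum(hist)                       # pixels at or below each bin
    weight2 = np.cumsum(hist[::-1])[::-1]           # pixels at or above each bin
    mean1 = np.cumsum(hist * centers) / np.maximum(weight1, 1e-12)
    mean2 = (np.cumsum((hist * centers)[::-1]) / np.maximum(weight2[::-1], 1e-12))[::-1]
    variance = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return centers[:-1][np.argmax(variance)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vessels = rng.normal(200, 10, 2000)             # bright vessel pixels
    background = rng.normal(60, 15, 8000)           # dark background pixels
    img = np.concatenate([vessels, background]).reshape(100, 100)
    t = otsu_threshold(img)
    binary = img > t
    print(f"threshold={t:.1f}, vessel fraction={binary.mean():.2f}")
```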
Collapse
Affiliation(s)
- Mona Algarni
- College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- Computer Science and Artificial Intelligence Department, University of Prince Mugrin, Medina, Saudi Arabia
| | - Abdulkader Al-Rezqi
- College of Medicine, King Saud bin Abdulaziz University, Jeddah, Saudi Arabia
| | - Faisal Saeed
- College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- School of Computing and Digital Technology, University of Birmingham, Birmingham, United Kingdom
| | - Abdullah Alsaeedi
- College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
| | - Fahad Ghabban
- College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
| |
Collapse
|
21
|
Li Y, Zhao H, Gan T, Liu Y, Zou L, Xu T, Chen X, Fan C, Wu M. Automated Multi-View Multi-Modal Assessment of COVID-19 Patients Using Reciprocal Attention and Biomedical Transform. Front Public Health 2022; 10:886958. [PMID: 35692335 PMCID: PMC9174692 DOI: 10.3389/fpubh.2022.886958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Accepted: 04/20/2022] [Indexed: 11/13/2022] Open
Abstract
Automated severity assessment of coronavirus disease 2019 (COVID-19) patients can help rationally allocate medical resources and improve patients' survival rates. Existing methods conduct severity assessment mainly on a single modality and a single view, which tends to exclude potentially informative interactions. To tackle this problem, in this paper we propose a multi-view multi-modal model to automatically assess the severity of COVID-19 patients based on deep learning. The proposed model receives multi-view ultrasound images and biomedical indices of patients and generates comprehensive features for the assessment task. We also propose a reciprocal attention module to capture the underlying interactions between multi-view ultrasound data, and a biomedical transform module to integrate biomedical data with ultrasound data to produce multi-modal features. The proposed model is trained and tested on compound datasets, and it yields 92.75% accuracy and 80.95% recall, the best performance compared with other state-of-the-art methods. Further ablation experiments and discussions consistently indicate the feasibility and advancement of the proposed model.
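To make the fusion idea concrete, the hedged PyTorch sketch below lets two view-level feature sequences attend to each other and concatenates an embedding of tabular biomedical indices before a classification head. The dimensions, number of classes, and module layout are assumptions in the spirit of the description, not the authors' implementation.

```python
# Hedged sketch of two-view cross-attention fusion plus tabular biomedical indices;
# all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TwoViewFusion(nn.Module):
    def __init__(self, feat_dim=64, num_bio=8, num_classes=4):
        super().__init__()
        # each view attends to the other ("reciprocal" cross-attention)
        self.attn_ab = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.bio_proj = nn.Linear(num_bio, feat_dim)      # embed biomedical indices
        self.head = nn.Linear(3 * feat_dim, num_classes)  # fused features -> severity class

    def forward(self, view_a, view_b, bio):
        # view_a, view_b: (B, tokens, feat_dim) image features; bio: (B, num_bio)
        a_enriched, _ = self.attn_ab(view_a, view_b, view_b)
        b_enriched, _ = self.attn_ba(view_b, view_a, view_a)
        fused = torch.cat([a_enriched.mean(1), b_enriched.mean(1),
                           self.bio_proj(bio)], dim=-1)
        return self.head(fused)

if __name__ == "__main__":
    model = TwoViewFusion()
    out = model(torch.randn(2, 49, 64), torch.randn(2, 49, 64), torch.randn(2, 8))
    print(out.shape)  # torch.Size([2, 4])
```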
Collapse
Affiliation(s)
- Yanhan Li
- Electronic Information School, Wuhan University, Wuhan, China
| | - Hongyun Zhao
- Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Ultrasound Molecular Imaging, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Tian Gan
- Department of Ultrasound, Zhongnan Hospital of Wuhan University, Wuhan, China
| | - Yang Liu
- School of Economics and Management, Wuhan University, Wuhan, China
| | - Lian Zou
- Electronic Information School, Wuhan University, Wuhan, China
| | - Ting Xu
- Department of Ultrasound, Zhongnan Hospital of Wuhan University, Wuhan, China
| | - Xuan Chen
- Beijing Genomics Institute (BGI) Research, Shenzhen, China
| | - Cien Fan
- Electronic Information School, Wuhan University, Wuhan, China
- *Correspondence: Cien Fan
| | - Meng Wu
- Department of Ultrasound, Zhongnan Hospital of Wuhan University, Wuhan, China
- Meng Wu
| |
Collapse
|
22
|
A Practical Deep Learning Model in Differentiating Pneumonia-Type Lung Carcinoma from Pneumonia on CT Images: ResNet Added with Attention Mechanism. JOURNAL OF ONCOLOGY 2022; 2022:8906259. [PMID: 35251178 PMCID: PMC8890890 DOI: 10.1155/2022/8906259] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 12/26/2021] [Accepted: 12/31/2021] [Indexed: 11/29/2022]
Abstract
Objective We aim to develop a deep neural network model to differentiate pneumonia-type lung carcinoma from pneumonia based on chest CT scans and to evaluate its performance. Materials and Methods We retrospectively analyzed 131 patients diagnosed with pneumonia-type lung carcinoma and 171 patients with pneumonia treated in Beijing Hospital from October 2019 to February 2021. The average age was 68 (±15) years, and the proportion of men (162/302) was slightly higher than that of women (140/302). In this study, a deep learning based UNet model was applied to extract lesion areas from chest CT images; the extracted lesion areas were then classified by a network with a designed spatial attention mechanism. The AUC and diagnostic accuracy of the model were analyzed, and the accuracy rate, sensitivity, and specificity of the model were compared with those of junior and senior radiologists reading alone and reading with the model's assistance. Results The model detects pneumonia-like lesions efficiently (6.31 seconds/case). The model's accuracy rate, sensitivity, and specificity were 74.20%, 60.37%, and 89.36%, respectively. The junior radiologist's accuracy rate, sensitivity, and specificity were 61.00%, 48.08%, and 75.00%, respectively, and the senior radiologist's were 65.00%, 51.92%, and 79.17%, respectively. With the model's assistance, the junior radiologist's results improved (76.00% accuracy, 62.75% sensitivity, and 89.80% specificity), as did the senior radiologist's (78.00% accuracy, 64.71% sensitivity, and 91.84% specificity), whose diagnostic accuracy was statistically higher than that of the other groups (P < 0.05). Owing to lesion texture diversity and lesion boundary ambiguity, the algorithm produced false-positive samples (13.51%). Conclusion This deep learning model can detect pneumonia-type lung carcinoma and differentiate it from pneumonia accurately and efficiently.
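The "spatial attention mechanism" added on top of the backbone can be illustrated with a compact CBAM-style block, shown below as a hedged sketch; the kernel size, pooling scheme, and feature sizes are assumptions, not the published design.

```python
# Minimal sketch of a spatial attention block that re-weights feature maps so
# lesion regions dominate; the exact design is an assumption for illustration.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # summarize channels with average- and max-pooling, then predict a spatial mask
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask  # background responses are attenuated

if __name__ == "__main__":
    feat = torch.randn(1, 256, 28, 28)        # e.g. one backbone stage output
    print(SpatialAttention()(feat).shape)     # torch.Size([1, 256, 28, 28])
```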
Collapse
|
23
|
Li X, Bala R, Monga V. Robust Deep 3D Blood Vessel Segmentation Using Structural Priors. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:1271-1284. [PMID: 34990361 DOI: 10.1109/tip.2021.3139241] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep learning has enabled significant improvements in the accuracy of 3D blood vessel segmentation. Open challenges remain in scenarios where labeled 3D segmentation maps for training are severely limited, as is often the case in practice, and in ensuring robustness to noise. Inspired by the observation that 3D vessel structures project onto 2D image slices with informative and unique edge profiles, we propose a novel deep 3D vessel segmentation network guided by edge profiles. Our network architecture comprises a shared encoder and two decoders that learn segmentation maps and edge profiles jointly. 3D context is mined in both the segmentation and edge prediction branches by employing bidirectional convolutional long-short term memory (BCLSTM) modules. 3D features from the two branches are concatenated to facilitate learning of the segmentation map. As a key contribution, we introduce new regularization terms that: a) capture the local homogeneity of 3D blood vessel volumes in the presence of biomarkers; and b) ensure performance robustness to domain-specific noise by suppressing false positive responses. Experiments on benchmark datasets with ground truth labels reveal that the proposed approach outperforms state-of-the-art techniques on standard measures such as DICE overlap and mean Intersection-over-Union. The performance gains of our method are even more pronounced when training is limited. Furthermore, the computational cost of our network inference is among the lowest compared with state-of-the-art.
Collapse
|
24
|
Singh P, Bose SS. A quantum-clustering optimization method for COVID-19 CT scan image segmentation. EXPERT SYSTEMS WITH APPLICATIONS 2021; 185:115637. [PMID: 34334964 PMCID: PMC8316646 DOI: 10.1016/j.eswa.2021.115637] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 04/25/2021] [Accepted: 07/18/2021] [Indexed: 06/12/2023]
Abstract
The World Health Organization (WHO) has declared coronavirus disease 2019 (COVID-19) one of the most highly contagious diseases and considers this epidemic a global health emergency. Medical professionals therefore urgently need an early diagnosis method for this new type of disease. In this research work, a new early screening method for the investigation of COVID-19 pneumonia using chest CT scan images is introduced. For this purpose, a new image segmentation method based on the K-means clustering (KMC) algorithm and a novel fast forward quantum optimization algorithm (FFQOA) is proposed. The proposed method, called FFQOAK (FFQOA+KMC), begins by clustering gray-level values with the KMC algorithm and then generates an optimal segmented image with the FFQOA. The main objective of the proposed FFQOAK is to segment chest CT scan images so that infected regions can be accurately detected. The proposed method is verified and validated with different chest CT scan images of COVID-19 patients, and the segmented images obtained using the FFQOAK method are compared with those of various benchmark image segmentation methods. The proposed method achieves a mean squared error, peak signal-to-noise ratio, Jaccard similarity coefficient, and correlation coefficient of 712.30, 19.61, 0.90, and 0.91, respectively, across the four experimental sets, namely Experimental_Set_1, Experimental_Set_2, Experimental_Set_3, and Experimental_Set_4. These four performance evaluation metrics show the effectiveness of the FFQOAK method over the existing methods.
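Of the two FFQOAK stages, the K-means clustering of gray levels is easy to show in isolation; the NumPy-only sketch below clusters a synthetic CT slice into k intensity classes (the FFQOA refinement stage is omitted, and k, the random image, and the iteration count are assumptions).

```python
# Sketch of the K-means stage only: cluster gray levels of a slice into k classes.
import numpy as np

def kmeans_gray(image: np.ndarray, k: int = 3, iters: int = 20, seed: int = 0):
    rng = np.random.default_rng(seed)
    values = image.reshape(-1, 1).astype(float)
    centers = rng.choice(values.ravel(), size=k, replace=False).reshape(-1, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(values - centers.T), axis=1)   # nearest center
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()          # update center
    return labels.reshape(image.shape), centers.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    slice_ = rng.integers(0, 256, size=(64, 64))   # stand-in for a CT slice
    labels, centers = kmeans_gray(slice_, k=3)
    print("cluster centers:", np.sort(centers).round(1))
```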
Collapse
Affiliation(s)
- Pritpal Singh
- Institute of Theoretical Physics, Jagiellonian University, ul.Łojasiewicza 11, Kraków 30-348, Poland
| | - Surya Sekhar Bose
- Department of Mathematics, Madras Institute of Technology, MIT Rd, Radha Nagar, Chromepet, Chennai, Tamil Nadu 600044, India
| |
Collapse
|
25
|
Mu D, Bai J, Chen W, Yu H, Liang J, Yin K, Li H, Qing Z, He K, Yang HY, Zhang J, Yin Y, McLellan HW, Schoepf UJ, Zhang B. Calcium Scoring at Coronary CT Angiography Using Deep Learning. Radiology 2021; 302:309-316. [PMID: 34812674 DOI: 10.1148/radiol.2021211483] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Background Separate noncontrast CT to quantify the coronary artery calcium (CAC) score often precedes coronary CT angiography (CTA). Quantifying CAC scores directly at CTA would eliminate the additional radiation produced at CT but remains challenging. Purpose To quantify CAC scores automatically from a single CTA scan. Materials and Methods In this retrospective study, a deep learning method to quantify CAC scores automatically from a single CTA scan was developed on training and validation sets of 292 patients and 73 patients collected from March 2019 to July 2020. Virtual noncontrast scans obtained with a spectral CT scanner were used to develop the algorithm to alleviate tedious manual annotation of calcium regions. The proposed method was validated on an independent test set of 240 CTA scans collected from three different CT scanners from August 2020 to November 2020 using the Pearson correlation coefficient, the coefficient of determination, or r2, and the Bland-Altman plot against the semiautomatic Agatston score at noncontrast CT. The cardiovascular risk categorization performance was evaluated using weighted κ based on the Agatston score (CAC score risk categories: 0-10, 11-100, 101-400, and >400). Results Two hundred forty patients (mean age, 60 years ± 11 [standard deviation]; 146 men) were evaluated. The positive correlation between the automatic deep learning CTA and semiautomatic noncontrast CT CAC score was excellent (Pearson correlation = 0.96; r2 = 0.92). The risk categorization agreement based on deep learning CTA and noncontrast CT CAC scores was excellent (weighted κ = 0.94 [95% CI: 0.91, 0.97]), with 223 of 240 scans (93%) categorized correctly. All patients who were miscategorized were in the direct neighboring risk groups. The proposed method's differences from the noncontrast CT CAC score were not statistically significant with regard to scanner (P = .15), sex (P = .051), and section thickness (P = .67). Conclusion A deep learning automatic calcium scoring method accurately quantified coronary artery calcium from CT angiography images and categorized risk. © RSNA, 2021 See also the editorial by Goldfarb and Cao et al in this issue.
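The evaluation logic in this abstract is easy to make concrete: scores are binned into the four stated risk categories (0-10, 11-100, 101-400, >400) and automated scores are correlated against the reference. The sketch below does exactly that; the example score values are made up for illustration.

```python
# Map CAC scores to the four risk categories used above and compare automated vs.
# reference scores with a Pearson correlation; example scores are illustrative.
import numpy as np

def risk_category(score: float) -> int:
    """Risk bin index for the categories 0-10, 11-100, 101-400, >400."""
    if score <= 10:
        return 0
    if score <= 100:
        return 1
    if score <= 400:
        return 2
    return 3

if __name__ == "__main__":
    reference = np.array([0.0, 5.0, 42.0, 250.0, 800.0])   # noncontrast CT Agatston scores
    automated = np.array([1.0, 8.0, 37.0, 270.0, 760.0])   # deep learning CTA scores
    r = np.corrcoef(reference, automated)[0, 1]
    agree = np.mean([risk_category(a) == risk_category(b)
                     for a, b in zip(automated, reference)])
    print(f"Pearson r = {r:.3f}, category agreement = {agree:.2f}")
```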
Collapse
Affiliation(s)
- Dan Mu, Junjie Bai, Wenping Chen, Hongming Yu, Jing Liang, Kejie Yin, Hui Li, Zhao Qing, Kelei He, Hao-Yu Yang, Jinyao Zhang, Youbing Yin, Hunter W McLellan, U Joseph Schoepf, Bing Zhang
- From the Department of Radiology, Affiliated Nanjing Drum Tower Hospital of Nanjing University Medical School, Nanjing, China (D.M., W.C., H.Y., J.L., K.Y., H.L., Z.Q., B.Z.); Keya Medical, Shenzhen, China (J.B., H.Y.Y., J.Z., Y.Y.); Medical School of Nanjing University, Nanjing, China (K.H.); National Institutes of Healthcare Data Science at Nanjing University, Nanjing, China (K.H.); University of South Carolina School of Medicine-Columbia, Columbia, SC (H.W.M.); Division of Cardiovascular Imaging, Medical University of South Carolina, Charleston, SC (U.J.S.); Institute of Brain Science, Nanjing University, Nanjing 210008, China (B.Z.)
| |
Collapse
|
26
|
Qi Y, Xu H, He Y, Li G, Li Z, Kong Y, Coatrieux JL, Shu H, Yang G, Tu S. Examinee-Examiner Network: Weakly Supervised Accurate Coronary Lumen Segmentation Using Centerline Constraint. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:9429-9441. [PMID: 34757906 DOI: 10.1109/tip.2021.3125490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Accurate coronary lumen segmentation on coronary computed tomography angiography (CCTA) images is crucial for quantification of coronary stenosis and the subsequent computation of fractional flow reserve. Many factors, including the difficulty of labeling coronary lumens, the varied morphologies of stenotic lesions, thin structures, and a small volume ratio with respect to the imaging field, complicate the task. In this work, we fused the continuity topological information of centerlines, which are easily accessible, and proposed a novel weakly supervised model, the Examinee-Examiner Network (EE-Net), to overcome the challenges in automatic coronary lumen segmentation. First, the EE-Net was proposed to address segmentation fractures caused by stenoses by combining the semantic features of lumens with the geometric constraints of continuous topology obtained from the centerlines. Then, a Centerline Gaussian Mask Module was proposed to address the network's insensitivity to the centerlines. Subsequently, a weakly supervised learning strategy, Examinee-Examiner Learning, was proposed to handle the weakly supervised setting with few lumen labels by using our EE-Net to guide and constrain the segmentation with customized prior conditions. Finally, a general network layer, the Drop Output Layer, was proposed to adapt to class imbalance by dropping well-segmented regions and weighting the classes dynamically. Extensive experiments on two different datasets demonstrated that our EE-Net achieves good continuity and generalization ability on the coronary lumen segmentation task compared with several widely used CNNs such as 3D-UNet. The results reveal that our EE-Net has great potential for achieving accurate coronary lumen segmentation in patients with coronary artery disease. Code at http://github.com/qiyaolei/Examinee-Examiner-Network.
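The "centerline Gaussian mask" idea can be illustrated independently of the network: turn centerline voxel coordinates into a soft mask that decays with distance from the centerline, which can then weight or constrain a segmentation loss. The NumPy sketch below is a hedged interpretation; the sigma, volume size, and point list are assumptions, and the exact formulation in the paper may differ.

```python
# Hedged sketch: build a soft Gaussian mask around a set of centerline voxels.
import numpy as np

def centerline_gaussian_mask(shape, points, sigma=2.0):
    """Soft mask exp(-d^2 / (2*sigma^2)), d = distance to nearest centerline point."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    voxels = np.stack([zz, yy, xx], axis=-1).astype(float)        # (D, H, W, 3)
    d2 = np.full(shape, np.inf)
    for p in points:                                              # brute force; fine for a sketch
        d2 = np.minimum(d2, ((voxels - np.asarray(p, float)) ** 2).sum(-1))
    return np.exp(-d2 / (2.0 * sigma ** 2))

if __name__ == "__main__":
    mask = centerline_gaussian_mask((16, 16, 16),
                                    points=[(8, 8, 4), (8, 8, 8), (8, 8, 12)])
    print(mask.shape, float(mask.max()), float(mask.min()))
```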
Collapse
|
27
|
Gu L, Cai XC. Fusing 2D and 3D convolutional neural networks for the segmentation of aorta and coronary arteries from CT images. Artif Intell Med 2021; 121:102189. [PMID: 34763804 DOI: 10.1016/j.artmed.2021.102189] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 09/23/2021] [Accepted: 09/29/2021] [Indexed: 11/26/2022]
Abstract
Automated segmentation of three-dimensional medical images is of great importance for the detection and quantification of certain diseases such as stenosis in the coronary arteries. Many 2D and 3D deep learning models, especially deep convolutional neural networks (CNNs), have achieved state-of-the-art segmentation performance on 3D medical images. Yet, there is a trade-off between the field of view and the utilization of inter-slice information when using pure 2D or 3D CNNs for 3D segmentation, which compromises the segmentation accuracy. In this paper, we propose a two-stage strategy that retains the advantages of both 2D and 3D CNNs and apply the method for the segmentation of the human aorta and coronary arteries, with stenosis, from computed tomography (CT) images. In the first stage, a 2D CNN, which can extract large-field-of-view information, is used to segment the aorta and coronary arteries simultaneously in a slice-by-slice fashion. Then, in the second stage, a 3D CNN is applied to extract the inter-slice information to refine the segmentation of the coronary arteries in certain subregions not resolved well in the first stage. We show that the 3D network of the second stage can improve the continuity between slices and reduce the missed detection rate of the 2D CNN. Compared with directly using a 3D CNN, the two-stage approach can alleviate the class imbalance problem caused by the large non-coronary artery (aorta and background) and the small coronary artery and reduce the training time because the vast majority of negative voxels are excluded in the first stage. To validate the efficacy of our method, extensive experiments are carried out to compare with other approaches based on pure 2D or 3D CNNs and those based on hybrid 2D-3D CNNs.
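The two-stage idea lends itself to a small sketch: a 2D network segments every slice for a large field of view, then a 3D network refines the prediction using inter-slice context. The PyTorch code below is a hedged toy version with placeholder conv stacks and assumed shapes, not the authors' architectures or cropping strategy.

```python
# Toy two-stage 2D-then-3D segmentation sketch; both networks are placeholders.
import torch
import torch.nn as nn

class Tiny2D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

class Tiny3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

def two_stage_segment(volume):
    # volume: (D, H, W) CT intensities
    net2d, net3d = Tiny2D(), Tiny3D()
    slices = volume.unsqueeze(1)                          # (D, 1, H, W): slice-by-slice pass
    coarse = net2d(slices).squeeze(1)                     # (D, H, W) coarse probability map
    stacked = torch.stack([volume, coarse]).unsqueeze(0)  # (1, 2, D, H, W): image + coarse mask
    refined = net3d(stacked).squeeze()                    # 3D refinement with inter-slice context
    return coarse, refined

if __name__ == "__main__":
    coarse, refined = two_stage_segment(torch.randn(16, 64, 64))
    print(coarse.shape, refined.shape)
```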
Collapse
Affiliation(s)
- Linyan Gu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen Key Laboratory for Exascale Engineering and Scientific Computing, Shenzhen 518000, China.
| | - Xiao-Chuan Cai
- Faculty of Science and Technology, University of Macau, Avenida da Universidade, Taipa, Macao, China.
| |
Collapse
|
28
|
Segmentation of Coronary Arteries Images Using Spatio-temporal Feature Fusion Network with Combo Loss. Cardiovasc Eng Technol 2021; 13:407-418. [PMID: 34734373 DOI: 10.1007/s13239-021-00588-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 10/19/2021] [Indexed: 10/19/2022]
Abstract
PURPOSE Coronary heart disease is a serious disease that endangers human health and life, and its incidence and mortality have increased rapidly in recent years. Quantification of the coronary arteries is critical in diagnosing coronary heart disease. METHODS In this paper, we improve coronary artery segmentation performance from two aspects: the network model and the algorithm. We propose a U-shaped network based on a spatio-temporal feature fusion structure to segment coronary arteries from 2D slices of computed tomography angiography (CTA) heart images. The spatio-temporal feature fusion combines features from multiple levels and different receptive fields to obtain more precise boundaries. Because coronary arteries occupy only a small proportion of CTA images, over-segmentation occurs easily. For this reason, a combo loss function was designed to deal with the notorious imbalance between inputs and outputs that plagues learning models. Input imbalance refers to the class imbalance in the input training samples; output imbalance refers to the imbalance between the false positives and false negatives of the inference model. The two imbalances, in training and inference, were divided and conquered with our combo loss function. Specifically, a gradient harmonizing mechanism (GHM) loss was employed to balance the gradient contribution of the input samples, while another sensitivity-precision loss term penalizes false positives and negatives so that better model parameters are learned gradually. RESULTS Compared with some existing methods, our proposed method improves segmentation accuracy significantly, achieving a mean Dice coefficient of 0.87. In addition, accurate results can be obtained with little data using our method. Code is available at: https://github.com/Ariel97-star/FFNet-CoronaryArtery-Segmentation . CONCLUSIONS Our method can intelligently capture coronary artery structure and support accurate fractional flow reserve (FFR) analysis. Through a series of steps such as curved planar reconstruction, coronary heart disease can be detected noninvasively. In addition, our work can serve as an effective means of assisting stenosis detection: in screening people with high cardiovascular risk factors, automatic detection of luminal stenosis can be performed based on later algorithmic extensions.
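A simplified version of such a combo loss is sketched below in PyTorch: a pixel-wise term (plain binary cross-entropy stands in for the GHM loss here) plus sensitivity and precision terms that penalize false negatives and false positives, respectively. The weighting factors and the BCE substitution are assumptions, so this is not the published loss.

```python
# Simplified combo loss sketch: BCE (stand-in for GHM) + sensitivity/precision terms.
import torch

def combo_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-6):
    """pred: sigmoid probabilities in [0, 1]; target: binary mask of the same shape."""
    bce = torch.nn.functional.binary_cross_entropy(pred, target)
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    sensitivity = tp / (tp + fn + eps)      # low sensitivity -> many false negatives
    precision = tp / (tp + fp + eps)        # low precision  -> many false positives
    return bce + alpha * (1 - sensitivity) + beta * (1 - precision)

if __name__ == "__main__":
    pred = torch.rand(1, 1, 64, 64)
    target = (torch.rand(1, 1, 64, 64) > 0.9).float()   # sparse vessel-like mask
    print(combo_loss(pred, target).item())
```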
Collapse
|
29
|
Jeon B. Deep Recursive Bayesian Tracking for Fully Automatic Centerline Extraction of Coronary Arteries in CT Images. SENSORS (BASEL, SWITZERLAND) 2021; 21:6087. [PMID: 34577293 PMCID: PMC8471768 DOI: 10.3390/s21186087] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 09/07/2021] [Accepted: 09/08/2021] [Indexed: 11/17/2022]
Abstract
Extraction of coronary arteries in coronary computed tomography (CT) angiography is a prerequisite for the quantification of coronary lesions. In this study, we propose a tracking method combining a deep convolutional neural network (DNN) and particle filtering method to identify the trajectories from the coronary ostium to each distal end from 3D CT images. The particle filter, as a non-linear approximator, is an appropriate tracking framework for such thin and elongated structures; however, the robust 'vesselness' measurement is essential for extracting coronary centerlines. Importantly, we employed the DNN to robustly measure the vesselness using patch images, and we integrated softmax values to the likelihood function in our particle filtering framework. Tangent patches represent cross-sections of coronary arteries of circular shapes. Thus, 2D tangent patches are assumed to include enough features of coronary arteries, and the use of 2D patches significantly reduces computational complexity. Because coronary vasculature has multiple bifurcations, we also modeled a method to detect branching sites by clustering the particle locations. The proposed method is compared with three commercial workstations and two conventional methods from the academic literature.
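One particle-filter tracking step of the kind described above is sketched below with NumPy: particles are propagated along perturbed directions, weighted by a "vesselness" likelihood, and resampled. Here a synthetic Gaussian tube around the x-axis stands in for the CNN softmax score, and the step size, noise levels, and particle count are assumptions for illustration only.

```python
# Toy particle-filter step: propagate, weight by a synthetic vesselness, resample.
import numpy as np

rng = np.random.default_rng(0)

def vesselness(points):
    """Synthetic likelihood: high near the line y = z = 0 (a straight 'vessel')."""
    return np.exp(-(points[:, 1] ** 2 + points[:, 2] ** 2) / (2 * 1.5 ** 2))

def particle_filter_step(particles, step=1.0):
    # propagate: move forward with a random perturbation of the direction
    moves = rng.normal([step, 0.0, 0.0], [0.2, 0.5, 0.5], size=particles.shape)
    proposed = particles + moves
    # weight by vesselness and resample proportionally to the weights
    w = vesselness(proposed)
    w = w / w.sum()
    idx = rng.choice(len(proposed), size=len(proposed), p=w)
    return proposed[idx]

if __name__ == "__main__":
    particles = np.zeros((200, 3))      # start all particles at the "ostium" (origin)
    for _ in range(10):
        particles = particle_filter_step(particles)
    print("tracked centerline point:", particles.mean(axis=0).round(2))
```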
Collapse
Affiliation(s)
- Byunghwan Jeon
- School of Computer Science, Kyungil University, Gyeongsan 38428, Korea
| |
Collapse
|
30
|
Liu M, Lv W, Yin B, Ge Y, Wei W. The human-AI scoring system: A new method for CT-based assessment of COVID-19 severity. Technol Health Care 2021; 30:1-10. [PMID: 34486996 DOI: 10.3233/thc-213199] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Chest computed tomography (CT) plays an important role in the diagnosis and assessment of coronavirus disease 2019 (COVID-19). OBJECTIVE To evaluate the value of an artificial intelligence (AI) scoring system for radiologically assessing the severity of COVID-19. MATERIALS AND METHODS Chest CT images of 81 patients (61 of normal type and 20 of severe type) with confirmed COVID-19 were used; the test data were anonymized. The scores obtained by four methods (junior radiologists; the AI scoring system; the human-AI segmentation system; the human-AI scoring system) were compared with those of two experienced radiologists (reference score). The mean absolute errors (MAEs) between each of the four methods and the experienced radiologists were calculated separately. The Wilcoxon test was used to assess the significance of differences by COVID-19 severity, Spearman correlation analysis was used to assess the association with clinical severity, and ROC analysis was used to evaluate the performance of the different scores. RESULTS The AI score had a relatively low MAE (1.67-2.21). The score of the human-AI scoring system had the lowest MAE (1.67), a diagnostic value almost equal to the reference score (r = 0.97), and the strongest correlation with clinical severity (r = 0.59, p < 0.001). The AUCs of the reference score, the score of junior radiologists, the AI score, the score of the human-AI segmentation system, and the score of the human-AI scoring system were 0.874, 0.841, 0.852, 0.857, and 0.865, respectively. CONCLUSION The human-AI scoring system can help radiologists improve the accuracy of COVID-19 severity assessment.
Collapse
Affiliation(s)
- Mingzhu Liu
- Department of Radiology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Weifu Lv
- Department of Radiology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Baocai Yin
- Anhui iFlytek Healthcare Information Technology Co., Ltd, Hefei, Anhui, China
| | | | - Wei Wei
- Department of Radiology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| |
Collapse
|
31
|
Wan T, Chen J, Zhang Z, Li D, Qin Z. Automatic vessel segmentation in X-ray angiogram using spatio-temporal fully-convolutional neural network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102646] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
|
32
|
Yang G, Zhang H, Firmin D, Li S. Recent advances in artificial intelligence for cardiac imaging. Comput Med Imaging Graph 2021; 90:101928. [PMID: 33965746 DOI: 10.1016/j.compmedimag.2021.101928] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Affiliation(s)
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK; Cardiovascular Research Centre, Royal Brompton Hospital, SW3 6NP, London, UK.
| | - Heye Zhang
- School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, 510006, China.
| | - David Firmin
- National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK; Cardiovascular Research Centre, Royal Brompton Hospital, SW3 6NP, London, UK
| | - Shuo Li
- Department of Medical Imaging, Western University, London, ON, Canada; Digital Imaging Group, London, ON, Canada
| |
Collapse
|
33
|
Montazeri M, ZahediNasab R, Farahani A, Mohseni H, Ghasemian F. Machine Learning Models for Image-Based Diagnosis and Prognosis of COVID-19: Systematic Review. JMIR Med Inform 2021; 9:e25181. [PMID: 33735095 PMCID: PMC8074953 DOI: 10.2196/25181] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 12/31/2020] [Accepted: 01/16/2021] [Indexed: 01/08/2023] Open
Abstract
BACKGROUND Accurate and timely diagnosis and effective prognosis of COVID-19 are important to provide the best possible care for patients and reduce the burden on the health care system. Machine learning methods can play a vital role in the diagnosis of COVID-19 by processing chest x-ray images. OBJECTIVE The aim of this study is to summarize information on the use of intelligent models for the diagnosis and prognosis of COVID-19 to help with early and timely diagnosis, minimize prolonged diagnosis, and improve overall health care. METHODS A systematic search of databases, including PubMed, Web of Science, IEEE, ProQuest, Scopus, bioRxiv, and medRxiv, was performed for COVID-19-related studies published up to May 24, 2020. This study was performed in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. All original research articles describing the application of image processing for the prediction and diagnosis of COVID-19 were considered in the analysis. Two reviewers independently assessed the published papers to determine eligibility for inclusion in the analysis. Risk of bias was evaluated using the Prediction Model Risk of Bias Assessment Tool. RESULTS Of the 629 articles retrieved, 44 were included. We identified 4 prognosis models for predicting disease severity and estimating confinement time for individual patients, and 40 diagnostic models for detecting COVID-19 and distinguishing it from normal findings or other pneumonias. Most included studies used deep learning methods based on convolutional neural networks, which have been widely used as classification algorithms. The most frequently reported predictors of prognosis in patients with COVID-19 included age, computed tomography data, gender, comorbidities, symptoms, and laboratory findings. Deep convolutional neural networks obtained better results than non-neural-network-based methods. Moreover, all of the models were found to be at high risk of bias owing to a lack of information about the study population and intended groups and to inappropriate reporting. CONCLUSIONS Machine learning models used for the diagnosis and prognosis of COVID-19 showed excellent discriminative performance. However, these models were at high risk of bias for various reasons, such as inadequate information about study participants and the randomization process and the lack of external validation, which may have resulted in optimistic reporting of these models. Hence, our findings do not support recommending any of the current models for use in practice for the diagnosis and prognosis of COVID-19.
Collapse
Affiliation(s)
- Mahdieh Montazeri
- Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
| | - Roxana ZahediNasab
- Computer Engineering Department, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
| | - Ali Farahani
- Computer Engineering Department, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
| | - Hadis Mohseni
- Computer Engineering Department, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
| | - Fahimeh Ghasemian
- Computer Engineering Department, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
| |
Collapse
|
34
|
Coronary Centerline Extraction from CCTA Using 3D-UNet. FUTURE INTERNET 2021. [DOI: 10.3390/fi13040101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The mesh-type coronary model, obtained from three-dimensional reconstruction using the sequence of images produced by computed tomography (CT), can be used to obtain useful diagnostic information, such as extracting the projection of the lumen (planar development along an artery). In this paper, we focus on automated coronary centerline extraction from cardiac computed tomography angiography (CCTA), proposing a 3D version of the U-Net architecture trained with a novel loss function and with augmented patches. We obtained promising results for accuracy (90-95%) and overlap (90-94%) with various network training configurations on data from the Rotterdam Coronary Artery Centerline Extraction benchmark. We also demonstrate the ability of the proposed network to learn despite the huge class imbalance and sparse annotation present in the training data.
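The paper's novel loss is aimed at the extreme foreground/background imbalance of sparse centerline labels; as a hedged stand-in (not the authors' loss), the sketch below shows a soft Dice loss on 3D patches, a common remedy for exactly this kind of imbalance.

```python
# Soft Dice loss on 3D patches, shown as a generic remedy for sparse-label imbalance.
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: probabilities (B, 1, D, H, W); target: binary centerline mask, same shape."""
    inter = (pred * target).sum(dim=(1, 2, 3, 4))
    denom = pred.sum(dim=(1, 2, 3, 4)) + target.sum(dim=(1, 2, 3, 4))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

if __name__ == "__main__":
    pred = torch.rand(2, 1, 32, 32, 32)
    target = torch.zeros(2, 1, 32, 32, 32)
    target[:, :, 16, 16, :] = 1.0      # a thin "centerline" along one axis
    print(soft_dice_loss(pred, target).item())
```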
Collapse
|
35
|
Tian F, Gao Y, Fang Z, Gu J. Automatic coronary artery segmentation algorithm based on deep learning and digital image processing. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02197-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
|
36
|
Irfan M, Iftikhar MA, Yasin S, Draz U, Ali T, Hussain S, Bukhari S, Alwadie AS, Rahman S, Glowacz A, Althobiani F. Role of Hybrid Deep Neural Networks (HDNNs), Computed Tomography, and Chest X-rays for the Detection of COVID-19. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:3056. [PMID: 33809665 PMCID: PMC8002268 DOI: 10.3390/ijerph18063056] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 02/28/2021] [Accepted: 03/04/2021] [Indexed: 12/28/2022]
Abstract
COVID-19 has escalated extensively worldwide since the beginning of 2020 and has resulted in the illness of millions of people. COVID-19 patients bear an elevated risk once their symptoms deteriorate; hence, early recognition of diseased patients can facilitate early intervention and avoid disease progression. This article develops hybrid deep neural networks (HDNNs), using computed tomography (CT) and X-ray imaging, to predict the risk of the onset of disease in patients suffering from COVID-19. The subjects were classified into three categories, namely normal, pneumonia, and COVID-19. Initially, the CT and chest X-ray images, denoted 'hybrid images' (with resolution 1080 × 1080), were collected from different sources, including GitHub, the COVID-19 radiography database, Kaggle, the COVID-19 image data collection, and the Actual Med COVID-19 Chest X-ray Dataset, which are open-source and publicly available data repositories. Eighty percent of the hybrid images were used to train the hybrid deep neural network model and the remaining 20% were used for testing. The capability and prediction accuracy of the HDNNs were calculated using the confusion matrix. The hybrid deep neural network achieved 99% classification accuracy on the test set.
Collapse
Affiliation(s)
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia; (M.I.); (A.S.A.); (S.R.)
| | - Muhammad Aksam Iftikhar
- Department of Computer Science, Lahore Campus, COMSATS University Islamabad, Lahore 54000, Pakistan;
| | - Sana Yasin
- Department of Computer Science, University of OKara, Okara 56130, Pakistan;
| | - Umar Draz
- Department of Computer Science, University of Sahiwal, Sahiwal 57000, Pakistan; (U.D.); (S.H.)
| | - Tariq Ali
- Computer Science Department, Sahiwal Campus, COMSATS University Islamabad, Sahiwal 57000, Pakistan
| | - Shafiq Hussain
- Department of Computer Science, University of Sahiwal, Sahiwal 57000, Pakistan; (U.D.); (S.H.)
| | - Sarah Bukhari
- Department of Computer Science, National Fertilizer Corporation Institute of Engineering and Technology, Multan 60000, Pakistan;
| | - Abdullah Saeed Alwadie
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia; (M.I.); (A.S.A.); (S.R.)
| | - Saifur Rahman
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia; (M.I.); (A.S.A.); (S.R.)
| | - Adam Glowacz
- Department of Automatic Control and Robotics, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, 30-059 Kraków, Poland;
| | - Faisal Althobiani
- Faculty of Maritime Studies, King Abdulaziz University, Jeddah 21577, Saudi Arabia;
| |
Collapse
|
37
|
Joseph Raj AN, Zhu H, Khan A, Zhuang Z, Yang Z, Mahesh VGV, Karthik G. ADID-UNET-a segmentation model for COVID-19 infection from lung CT scans. PeerJ Comput Sci 2021; 7:e349. [PMID: 33816999 PMCID: PMC7924694 DOI: 10.7717/peerj-cs.349] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 12/07/2020] [Indexed: 05/23/2023]
Abstract
Currently, the new coronavirus disease (COVID-19) is one of the biggest health crises threatening the world. Automatic detection from computed tomography (CT) scans is a classic way to detect lung infection, but it faces problems such as high variation in intensity, indistinct edges near infected lung regions, and noise from the data acquisition process. Therefore, this article proposes a new deep network for COVID-19 pulmonary infection segmentation, referred to as the Attention Gate-Dense Network-Improved Dilation Convolution-UNET (ADID-UNET). The dense network replaces the convolution and maximum pooling functions to enhance feature propagation and alleviate the vanishing-gradient problem. An improved dilation convolution is used to increase the receptive field of the encoder output and thereby obtain more edge features from the small infected regions. The integration of an attention gate into the model suppresses the background and improves prediction accuracy. The experimental results show that the ADID-UNET model can accurately segment COVID-19 lung infected areas, with performance measures greater than 80% for metrics such as accuracy, specificity, and Dice coefficient (DC). Furthermore, when compared with other state-of-the-art architectures, the proposed model showed excellent segmentation performance, with a high DC of 0.8031 and an F1 score of 0.82.
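The attention gate placed on U-Net skip connections can be sketched compactly, as below; the additive gating form and channel sizes are assumptions in the spirit of attention-gated U-Nets, not the published ADID-UNET configuration.

```python
# Compact sketch of an additive attention gate on a skip connection.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # skip: encoder features; gate: decoder features (same spatial size here)
        att = torch.sigmoid(self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate))))
        return skip * att          # background regions are attenuated

if __name__ == "__main__":
    skip = torch.randn(1, 64, 32, 32)
    gate = torch.randn(1, 128, 32, 32)
    print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 32, 32])
```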
Collapse
Affiliation(s)
- Alex Noel Joseph Raj
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Haipeng Zhu
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Asiya Khan
- School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth, UK
| | - Zhemin Zhuang
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Zengbiao Yang
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Vijayalakshmi G. V. Mahesh
- Department of Electronics and Communication, BMS Institute of Technology and Management, Bangalore, India
| | - Ganesan Karthik
- COVID CARE - Institute of Orthopedics and Traumatology, Madras Medical College, Chennai, India
| |
Collapse
|
38
|
Meng Q, Liu W, Gao P, Zhang J, Sun A, Ding J, Liu H, Lei Z. Novel Deep Learning Technique Used in Management and Discharge of Hospitalized Patients with COVID-19 in China. Ther Clin Risk Manag 2020; 16:1195-1201. [PMID: 33324064 PMCID: PMC7733409 DOI: 10.2147/tcrm.s280726] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2020] [Accepted: 11/23/2020] [Indexed: 12/15/2022] Open
Abstract
Purpose The low sensitivity and false-negative results of nucleic acid testing greatly affect its performance in diagnosing and discharging patients with coronavirus disease 2019 (COVID-19). Chest computed tomography (CT)-based evaluation of pneumonia may indicate a need for isolation; therefore, this radiologic modality plays an important role in managing patients with suspected COVID-19. Meanwhile, deep learning (DL) technology has been successful in detecting various imaging features on chest CT. This study applied a novel DL technique to standardize the discharge criteria for COVID-19 patients with consecutive negative respiratory pathogen nucleic acid test results at a "square cabin" hospital. Patients and Methods DL was used to evaluate the chest CT scans of 270 hospitalized COVID-19 patients who had two consecutive negative nucleic acid tests (sampling interval >1 day). The CT scans evaluated were obtained after the patients' second negative test result. The discharge criterion determined by DL was a total volume ratio of lesion to lung <50%. Results The mean number of days between hospitalization and DL evaluation was 14.3 (± 2.4). The average intersection over union was 0.7894. Two hundred thirteen (78.9%) patients exhibited pneumonia, of whom 54.0% (115/213) had mild interstitial fibrosis. Twenty-one, 33, and 4 cases exhibited vascular enlargement, pleural thickening, and mediastinal lymphadenopathy, respectively. Among the patients with pneumonia, 18.8% (40/213) had a total volume ratio of lesion to lung ≥50% according to our severity scale and were monitored continuously in the hospital. Three cases had a positive follow-up nucleic acid test during hospitalization. None of the 230 discharged cases later tested positive or exhibited pneumonia progression. Conclusion The novel DL approach enables the accurate management of hospitalized patients with COVID-19 and can help avoid cluster transmission or exacerbation in patients with false-negative nucleic acid test results.
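The two quantities the study relies on, the intersection over union of predicted versus reference lesion masks and the lesion-to-lung volume ratio used as the <50% discharge criterion, are shown in the short NumPy sketch below; the tiny synthetic masks are assumptions for illustration only.

```python
# IoU of lesion masks and the lesion-to-lung volume ratio discharge check.
import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter / union) if union else 1.0

def lesion_to_lung_ratio(lesion: np.ndarray, lung: np.ndarray) -> float:
    return float(lesion.sum() / lung.sum())

if __name__ == "__main__":
    lung = np.ones((8, 8, 8), dtype=bool)                 # toy lung mask
    lesion_pred = np.zeros_like(lung); lesion_pred[:4] = True
    lesion_ref = np.zeros_like(lung);  lesion_ref[:3] = True
    print("IoU:", round(iou(lesion_pred, lesion_ref), 3))
    ratio = lesion_to_lung_ratio(lesion_pred, lung)
    print("lesion/lung ratio:", ratio,
          "-> discharge" if ratio < 0.5 else "-> keep monitoring")
```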
Collapse
Affiliation(s)
- Qingcheng Meng
- Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, People's Republic of China
| | - Wentao Liu
- Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, People's Republic of China
| | - Pengrui Gao
- Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, People's Republic of China
| | - Jiaqi Zhang
- Yizhun Medical AI Co. Ltd, Beijing, People's Republic of China
| | - Anlan Sun
- Yizhun Medical AI Co. Ltd, Beijing, People's Republic of China
| | - Jia Ding
- Yizhun Medical AI Co. Ltd, Beijing, People's Republic of China
| | - Hao Liu
- Yizhun Medical AI Co. Ltd, Beijing, People's Republic of China
| | - Ziqiao Lei
- Department of Radiology, The Wuhan Union Hospital, Wuhan, People's Republic of China
| |
Collapse
|
39
|
Li L, Qin L, Xu Z, Yin Y, Wang X, Kong B, Bai J, Lu Y, Fang Z, Song Q, Cao K, Liu D, Wang G, Xu Q, Fang X, Zhang S, Xia J, Xia J. Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy. Radiology 2020; 296:E65-E71. [PMID: 32191588 PMCID: PMC7233473 DOI: 10.1148/radiol.2020200905] [Citation(s) in RCA: 927] [Impact Index Per Article: 185.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Background Coronavirus disease 2019 (COVID-19) has widely spread all over the world since the beginning of 2020. It is desirable to develop automatic and accurate detection of COVID-19 using chest CT. Purpose To develop a fully automatic framework to detect COVID-19 using chest CT and evaluate its performance. Materials and Methods In this retrospective and multicenter study, a deep learning model, the COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT scans for the detection of COVID-19. CT scans of community-acquired pneumonia (CAP) and other non-pneumonia abnormalities were included to test the robustness of the model. The datasets were collected from six hospitals between August 2016 and February 2020. Diagnostic performance was assessed with the area under the receiver operating characteristic curve, sensitivity, and specificity. Results The collected dataset consisted of 4352 chest CT scans from 3322 patients. The average patient age (±standard deviation) was 49 years ± 15, and there were slightly more men than women (1838 vs 1484, respectively; P = .29). The per-scan sensitivity and specificity for detecting COVID-19 in the independent test set was 90% (95% confidence interval [CI]: 83%, 94%; 114 of 127 scans) and 96% (95% CI: 93%, 98%; 294 of 307 scans), respectively, with an area under the receiver operating characteristic curve of 0.96 (P < .001). The per-scan sensitivity and specificity for detecting CAP in the independent test set was 87% (152 of 175 scans) and 92% (239 of 259 scans), respectively, with an area under the receiver operating characteristic curve of 0.95 (95% CI: 0.93, 0.97). Conclusion A deep learning model can accurately detect coronavirus 2019 and differentiate it from community-acquired pneumonia and other lung conditions. © RSNA, 2020 Online supplemental material is available for this article.
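The per-scan metrics reported above can be recovered directly from the stated counts (114 of 127 COVID-19 scans detected, 294 of 307 non-COVID scans correctly ruled out), as the short sketch below shows; only the helper function names are my own.

```python
# Sensitivity and specificity recomputed from the counts reported in the abstract.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

if __name__ == "__main__":
    print(f"sensitivity = {sensitivity(114, 127 - 114):.2f}")   # ~0.90
    print(f"specificity = {specificity(294, 307 - 294):.2f}")   # ~0.96
```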
Collapse
Affiliation(s)
| | | | - Zeguo Xu
| | - Youbing Yin
| | - Xin Wang
| | - Bin Kong
| | - Junjie Bai
| | - Yi Lu
| | - Zhenghan Fang
| | - Qi Song
| | - Kunlin Cao
| | - Daliang Liu
| | - Guisheng Wang
| | - Qizhong Xu
| | - Xisheng Fang
| | - Shiqin Zhang
| | - Juan Xia
- From the Department of Radiology, Wuhan Huangpi People's Hospital, Wuhan, Hubei, China 430301 (L.L., Z.X., X.F., S.Z., J.X.); Jianghan University Affiliated Huangpi People's Hospital, Wuhan, Hubei, China 430301 (L.L.); Department of Radiology, Wuhan Pulmonary Hospital, Wuhan, Hubei, China 430030 (L.Q.); Keya Medical Technology Co., Ltd, Shenzhen, Guangdong, China 518116 (Y.Y., X.W., B.K., J.B., Y.L., Z.F., Q.S., K.C.); Department of Radiology, Liaocheng People's Hospital, Shandong, China 252000 (D.L.); Department of CT, The Third Medical Center of Chinese PLA General Hospital, Beijing, China 100039 (G.W.); and Department of Radiology, Shenzhen Second People's Hospital/the First Affiliated Hospital of Shenzhen University Health Science Center, Shenzhen, China 518035 (Q.X., J.X.)
| | | |
Collapse
|
40
|
Zhao FJ, Fan SQ, Ren JF, von Deneen KM, He XW, Chen XL. Machine learning for diagnosis of coronary artery disease in computed tomography angiography: A survey. Artif Intell Med Imaging 2020; 1:31-39. [DOI: 10.35711/aimi.v1.i1.31] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 06/12/2020] [Accepted: 06/16/2020] [Indexed: 02/06/2023] Open
Abstract
Coronary artery disease (CAD) has become a major illness endangering human health. It mainly manifests as atherosclerotic plaques, especially vulnerable plaques that cause no obvious symptoms in the early stage. Once such a plaque ruptures, it can cause severe coronary stenosis and in turn trigger a major adverse cardiovascular event. Computed tomography angiography (CTA) has become a standard diagnostic tool for early screening of coronary plaque and stenosis owing to its high resolution, noninvasiveness, and three-dimensional imaging. However, manual examination of CTA images by radiologists is tedious and time-consuming and may introduce intra- and interobserver errors. Many machine learning algorithms now enable the (semi-)automatic diagnosis of CAD by extracting quantitative features from CTA images. This paper surveys these machine learning algorithms for the diagnosis of CAD in CTA images, covering coronary artery extraction, coronary plaque detection, vulnerable plaque identification, and coronary stenosis assessment. Most of the included articles were published within the last decade and are indexed in the Web of Science. We aim to give readers a glimpse of the current status, challenges, and perspectives of these machine learning-based analysis methods for automatic CAD diagnosis.
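Purely illustrative sketch of how the four analysis stages surveyed above (coronary artery extraction, plaque detection, vulnerable plaque identification, stenosis assessment) might be organized as a typed pipeline. Every name, type, and placeholder body here is an assumption for illustration, not a method from any surveyed paper.

```python
# Skeleton pipeline over a CTA volume; stage bodies are placeholders.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class PlaqueFinding:
    location_mm: float          # position along the vessel centerline
    is_vulnerable: bool = False # flagged by a vulnerable-plaque classifier

@dataclass
class VesselAnalysis:
    centerline: np.ndarray
    plaques: List[PlaqueFinding] = field(default_factory=list)
    max_stenosis_pct: float = 0.0

def extract_coronary_tree(cta_volume: np.ndarray) -> np.ndarray:
    """Stage 1: return centerline points (N x 3); placeholder returns none."""
    return np.empty((0, 3))

def detect_plaques(cta_volume: np.ndarray, centerline: np.ndarray) -> List[PlaqueFinding]:
    """Stages 2-3: locate plaques and flag vulnerable ones; placeholder."""
    return []

def assess_stenosis(plaques: List[PlaqueFinding]) -> float:
    """Stage 4: worst-case luminal narrowing in percent; placeholder."""
    return 0.0

def analyze(cta_volume: np.ndarray) -> VesselAnalysis:
    centerline = extract_coronary_tree(cta_volume)
    plaques = detect_plaques(cta_volume, centerline)
    return VesselAnalysis(centerline, plaques, assess_stenosis(plaques))
```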
Collapse
Affiliation(s)
- Feng-Jun Zhao
- School of Information Science and Technology, Northwest University, Xi’an 710069, Shaanxi Province, China
- Xi’an Key Lab of Radiomics and Intelligent Perception, Northwest University, Xi’an 710069, Shaanxi Province, China
| | - Si-Qi Fan
- School of Information Science and Technology, Northwest University, Xi’an 710069, Shaanxi Province, China
- Xi’an Key Lab of Radiomics and Intelligent Perception, Northwest University, Xi’an 710069, Shaanxi Province, China
| | - Jing-Fang Ren
- School of Information Science and Technology, Northwest University, Xi’an 710069, Shaanxi Province, China
- Xi’an Key Lab of Radiomics and Intelligent Perception, Northwest University, Xi’an 710069, Shaanxi Province, China
| | - Karen M von Deneen
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Xiao-Wei He
- School of Information Science and Technology, Northwest University, Xi’an 710069, Shaanxi Province, China
- Xi’an Key Lab of Radiomics and Intelligent Perception, Northwest University, Xi’an 710069, Shaanxi Province, China
| | - Xue-Li Chen
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| |
Collapse
|