1
Mercier C, Faisan S, Pron A, Girard N, Auzias G, Chonavel T, Rousseau F. Intersection-based slice motion estimation for fetal brain imaging. Comput Biol Med 2025;190:110005. [PMID: 40112563] [DOI: 10.1016/j.compbiomed.2025.110005]
Abstract
Fetal MRI offers a broad spectrum of applications, including the investigation of fetal brain development and facilitation of early diagnosis. However, image quality is often compromised by motion artifacts arising from both maternal and fetal movement. To mitigate these artifacts, fetal MRI typically employs ultrafast acquisition sequences, acquiring three (or more) orthogonal stacks of slices along different spatial axes. Nonetheless, inter-slice motion can still occur and, if left uncorrected, introduces artifacts into the reconstructed 3D volume. Existing motion-correction approaches often rely on a two-step iterative process of registration followed by reconstruction; they tend to detect and discard a large number of misaligned slices, resulting in poor reconstruction quality. This paper proposes a novel reconstruction-independent method for motion correction. Our approach exploits the intersections of orthogonal slices and estimates motion for each slice by minimizing the difference between the intensity profiles along these intersections. To address potential misalignments, we present an innovative machine learning-based classifier for identifying misaligned slices. The parameters of these slices are then corrected using a multistart optimization approach. Quantitative evaluation on simulated datasets demonstrates very low registration errors, and qualitative analysis on real data further highlights the effectiveness of our approach compared to state-of-the-art methods.
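The core idea — comparing intensity profiles where orthogonal slices intersect — can be illustrated with a toy sketch. This is not the authors' implementation (the paper optimizes continuous rigid slice parameters over many intersecting slice pairs); the hypothetical `estimate_x_shift` below recovers only an integer in-plane shift from the single intersection line of an axial and a coronal slice:

```python
import numpy as np

def estimate_x_shift(axial_profile, coronal_profile, max_shift=5):
    """Find the integer x-shift that best aligns two intensity profiles
    sampled along the intersection line of two orthogonal slices."""
    shifts = list(range(-max_shift, max_shift + 1))
    errors = [np.sum((np.roll(coronal_profile, -t) - axial_profile) ** 2)
              for t in shifts]
    return shifts[int(np.argmin(errors))]

rng = np.random.default_rng(0)
volume = rng.standard_normal((16, 16, 16))   # toy 3D volume, axes (z, y, x)
z0, y0 = 5, 7
axial = volume[z0]        # axial slice, shape (y, x)
coronal = volume[:, y0]   # coronal slice, shape (z, x)
# Simulate inter-slice motion: the coronal slice was acquired shifted in x.
coronal_moved = np.roll(coronal, 3, axis=1)
# Intensity profiles along the intersection line (z = z0, y = y0):
recovered = estimate_x_shift(axial[y0], coronal_moved[z0])
```

With noise-free data the mismatch is exactly zero at the true shift, so the argmin recovers it; the actual method instead minimizes profile differences jointly over full slice transformations.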
Affiliation(s)
- Chloe Mercier
- IMT Atlantique, Lab-STICC UMR CNRS 6285, Brest, France.
- Sylvain Faisan
- ICube Laboratory, University of Strasbourg, CNRS, Strasbourg, France.
- Alexandre Pron
- Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France.
- Nadine Girard
- Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France.
- Guillaume Auzias
- Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France.
2
Uus A, Neves Silva S, Aviles Verdera J, Payette K, Hall M, Colford K, Luis A, Sousa H, Ning Z, Roberts T, McElroy S, Deprez M, Hajnal J, Rutherford M, Story L, Hutter J. Scanner-based real-time three-dimensional brain + body slice-to-volume reconstruction for T2-weighted 0.55-T low-field fetal magnetic resonance imaging. Pediatr Radiol 2025;55:556-569. [PMID: 39853394] [PMCID: PMC11882667] [DOI: 10.1007/s00247-025-06165-x]
Abstract
BACKGROUND Motion correction methods based on slice-to-volume registration (SVR) for fetal magnetic resonance imaging (MRI) allow reconstruction of three-dimensional (3-D) isotropic images of the fetal brain and body. However, all existing SVR methods are confined to research settings, which limits clinical integration. Furthermore, there have been no reported SVR solutions for low-field 0.55-T MRI. OBJECTIVE To integrate automated SVR motion correction directly into the fetal MRI scanning process via the Gadgetron framework, enabling automated T2-weighted (T2W) 3-D fetal brain and body reconstruction on a low-field 0.55-T MRI scanner within the duration of the scan. MATERIALS AND METHODS A fully automated deep learning pipeline was developed for T2W 3-D rigid and deformable (D/SVR) reconstruction of the fetal brain and body from 0.55-T T2W datasets. It was then integrated into the 0.55-T low-field MRI scanner environment via a Gadgetron workflow that launches the reconstruction process in real time during scanning. RESULTS During prospective testing on 12 cases (22-40 weeks gestational age), the fetal brain and body reconstructions were available on average 6:42 ± 3:13 min after the acquisition of the final stack and could be assessed and archived on the scanner console during the ongoing fetal MRI scan. The output image quality was rated as good to acceptable for interpretation. Retrospective testing of the pipeline on 83 0.55-T datasets demonstrated stable reconstruction quality for low-field MRI. CONCLUSION The proposed pipeline enables scanner-based prospective T2W 3-D motion correction for low-field 0.55-T fetal MRI via direct online integration into the scanner environment.
Affiliation(s)
- Alena Uus
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK.
- Research Department of Imaging Physics and Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
- Sara Neves Silva
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Jordina Aviles Verdera
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Kelly Payette
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Megan Hall
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Department of Women & Children's Health, King's College London, London, UK
- Kathleen Colford
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Aysha Luis
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Clinical Scientific Computing, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Helena Sousa
- Research Department of Imaging Physics and Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Zihan Ning
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Thomas Roberts
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Clinical Scientific Computing, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Sarah McElroy
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- MR Research Collaborations, Siemens (United Kingdom), Camberley, UK
- Maria Deprez
- Research Department of Imaging Physics and Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Joseph Hajnal
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Research Department of Imaging Physics and Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Mary Rutherford
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Lisa Story
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Department of Women & Children's Health, King's College London, London, UK
- Jana Hutter
- Research Department of Early Life Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Smart Imaging Lab, Radiological Institute, Universitätsklinikum Erlangen, Erlangen, Germany
- Research Department of Imaging Physics and Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
3
Xu H, Shi W, Sun J, Zheng T, Xu X, Sun C, Yi S, Wang G, Wu D. A motion assessment method for reference stack selection in fetal brain MRI reconstruction based on tensor rank approximation. NMR Biomed 2024;37:e5248. [PMID: 39231762] [DOI: 10.1002/nbm.5248]
Abstract
Slice-to-volume registration and super-resolution reconstruction are commonly used to generate 3D volumes of the fetal brain from 2D stacks of slices acquired in multiple orientations. A critical initial step in this pipeline is to select the stack with the minimum motion among all input stacks as a reference for registration; an accurate and unbiased motion assessment (MA) is thus crucial for successful selection. Here, we present an MA method that determines the minimum-motion stack based on 3D low-rank approximation using CANDECOMP/PARAFAC (CP) decomposition. Unlike the current 2D singular value decomposition (SVD) based method, which flattens stacks into matrices to obtain ranks and thereby discards spatial information, the CP-based method factorizes a 3D stack into low-rank and sparse components in a computationally efficient manner. The difference between the original stack and its low-rank approximation is proposed as the motion indicator. Experiments on linearly and randomly simulated motion showed that CP has higher sensitivity in detecting small motion with a lower baseline bias, and achieved an assessment accuracy of 95.45% in identifying the minimum-motion stack, compared with 58.18% for the SVD-based method. CP also showed superior motion assessment capabilities in real-data evaluations. Additionally, combining CP with the existing SRR-SVR pipeline significantly improved 3D volume reconstruction. These results indicate that the proposed CP-based assessment outperforms SVD-based methods, with higher sensitivity to motion, higher assessment accuracy, and lower baseline bias, and can serve as a prior step to improve fetal brain reconstruction.
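The proposed indicator — the residual between a stack and its low-rank CP approximation — can be sketched with a minimal alternating-least-squares CP decomposition in NumPy. This is an illustrative re-implementation, not the authors' code; the rank and iteration counts are arbitrary choices here, and a tested library routine (e.g. TensorLy's `parafac`) would normally replace the hand-rolled ALS:

```python
import numpy as np

def khatri_rao(mats):
    # Column-wise Kronecker product of matrices sharing the same column count.
    rank = mats[0].shape[1]
    out = mats[0]
    for m in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, m).reshape(-1, rank)
    return out

def unfold(tensor, mode):
    # Mode-n matricization consistent with the Khatri-Rao ordering above.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def cp_als(tensor, rank, n_iter=50, seed=0):
    # Alternating least squares for a rank-R CANDECOMP/PARAFAC model.
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in tensor.shape]
    for _ in range(n_iter):
        for mode in range(tensor.ndim):
            others = [f for m, f in enumerate(factors) if m != mode]
            gram = np.ones((rank, rank))
            for f in others:
                gram *= f.T @ f
            factors[mode] = unfold(tensor, mode) @ khatri_rao(others) @ np.linalg.pinv(gram)
    return factors

def motion_indicator(stack, rank=2):
    # Relative residual between the 3D stack and its low-rank CP
    # approximation: a larger residual suggests more motion corruption.
    a, b, c = cp_als(stack, rank)
    approx = np.einsum('ir,jr,kr->ijk', a, b, c)
    return np.linalg.norm(stack - approx) / np.linalg.norm(stack)
```

Reference-stack selection then reduces to `min(stacks, key=motion_indicator)`.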
Affiliation(s)
- Haoan Xu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Wen Shi
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jiwei Sun
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Tianshu Zheng
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Xinyi Xu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Cong Sun
- Department of Radiology, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Sun Yi
- MR Collaboration, Siemens Healthcare China, Shanghai, China
- Guangbin Wang
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Dan Wu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
4
Azampour MF, Mach K, Fatemizadeh E, Demiray B, Westenfelder K, Steiger K, Eiber M, Wendler T, Kainz B, Navab N. Multitask Weakly Supervised Generative Network for MR-US Registration. IEEE Trans Med Imaging 2024;43:3780-3793. [PMID: 38829753] [DOI: 10.1109/tmi.2024.3400899]
Abstract
Registering pre-operative modalities, such as magnetic resonance imaging or computed tomography, to ultrasound images is crucial for guiding clinicians during surgeries and biopsies. Recently, deep-learning approaches have been proposed to increase the speed and accuracy of this registration problem. However, all of these approaches need expensive supervision from the ultrasound domain. In this work, we propose a multitask generative framework that needs weak supervision only from the pre-operative imaging domain during training. To perform a deformable registration, the proposed framework translates a magnetic resonance image to the ultrasound domain while preserving the structural content. To demonstrate the efficacy of the proposed method, we tackle the registration of pre-operative 3D MR to transrectal ultrasonography images, as required for targeted prostate biopsies. We use an in-house dataset of 600 patients, divided into 540 for training, 30 for validation, and the remaining 30 for testing. An expert manually segmented the prostate in both modalities for the validation and test sets to assess the performance of our framework. The proposed framework achieves a 3.58 mm target registration error on the expert-selected landmarks, a Dice score of 89.2%, and a 1.81 mm 95th percentile Hausdorff distance on the prostate masks in the test set. Our experiments demonstrate that the proposed generative model successfully translates magnetic resonance images into the ultrasound domain. The translated image retains the structural content and fine details thanks to an ultrasound-specific two-path design of the generative model. The proposed framework enables training learning-based registration methods when only weak supervision from the pre-operative domain is available.
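The evaluation metrics reported above (Dice overlap and 95th-percentile Hausdorff distance on binary masks) can be computed roughly as follows. This is a generic, mask-based sketch, not the paper's evaluation code; published HD95 implementations usually restrict the distances to surface voxels rather than all mask voxels:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """Symmetric 95th-percentile Hausdorff distance (mask-based
    approximation): distances from every voxel of one mask to the
    nearest voxel of the other, both directions."""
    a, b = a.astype(bool), b.astype(bool)
    d_to_b = distance_transform_edt(~b, sampling=spacing)[a]
    d_to_a = distance_transform_edt(~a, sampling=spacing)[b]
    return max(np.percentile(d_to_b, 95), np.percentile(d_to_a, 95))
```

The `spacing` argument converts voxel distances to millimetres when the voxel size is anisotropic or non-unit.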
5
Wang X, Zhang Z, Xu S, Luo X, Zhang B, Wu XJ. Contrastive learning based method for X-ray and CT registration under surgical equipment occlusion. Comput Biol Med 2024;180:108946. [PMID: 39106676] [DOI: 10.1016/j.compbiomed.2024.108946]
Abstract
Deep learning-based 3D/2D surgical navigation registration techniques have achieved excellent results. However, these methods are limited by occlusion from surgical equipment, which degrades accuracy. We designed a contrastive learning method that treats occluded and unoccluded X-rays as positive samples, maximizing the similarity between the positive samples and reducing interference from occlusion. The registration model, ResTrans, combines a Transformer with residual connections to enhance its long-sequence mapping capability; together with the contrastive learning strategy, ResTrans can adaptively retrieve valid features at the global scale to maintain performance under occlusion. Further, a learning-based region of interest (RoI) fine-tuning method is designed to refine the residual misalignment. We conducted experiments on occluded X-rays containing different surgical devices. The results show that the mean target registration error (mTRE) of ResTrans is 3.25 mm with a running time of 1.59 s. Compared with state-of-the-art (SOTA) 3D/2D registration methods, our method offers better performance on occluded 3D/2D registration tasks.
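The contrastive objective described — pulling together embeddings of occluded and unoccluded views of the same X-ray while pushing apart other pairs — typically takes an InfoNCE-like form. A minimal NumPy sketch follows (illustrative only; the paper's exact loss and the ResTrans architecture are not reproduced here):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """anchors[i] (e.g. occluded view) and positives[i] (unoccluded view)
    form a positive pair; every other row acts as an in-batch negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature             # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))
```

When each occluded embedding matches its unoccluded counterpart the loss approaches zero; mismatched pairs are penalized heavily, which is what drives the encoder to ignore the occluder.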
Affiliation(s)
- Xiyuan Wang
- School of Electronics and Information Engineering at University of Science and Technology Suzhou, SuZhou, 215009, China
- Zhancheng Zhang
- School of Electronics and Information Engineering at University of Science and Technology Suzhou, SuZhou, 215009, China.
- Shaokang Xu
- School of Electronics and Information Engineering at University of Science and Technology Suzhou, SuZhou, 215009, China; Shanghai Jirui Maestro Surgical Technology Co, ShangHai, 200000, China
- Xiaoqing Luo
- Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, School of Artificial Intelligence and Computer Science at Jiangnan University, WuXi, 214122, China
- Baocheng Zhang
- Department of Orthopaedics, General Hospital of Central Theater Command of PLA, WuHan, 430012, China
- Xiao-Jun Wu
- Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, School of Artificial Intelligence and Computer Science at Jiangnan University, WuXi, 214122, China
6
Pei Y, Zhao F, Zhong T, Ma L, Liao L, Wu Z, Wang L, Zhang H, Wang L, Li G. PETS-Nets: Joint Pose Estimation and Tissue Segmentation of Fetal Brains Using Anatomy-Guided Networks. IEEE Trans Med Imaging 2024;43:1006-1017. [PMID: 37874705] [DOI: 10.1109/tmi.2023.3327295]
Abstract
Fetal Magnetic Resonance Imaging (MRI) is challenged by fetal movements and maternal breathing. Although fast MRI sequences allow artifact-free acquisition of individual 2D slices, motion frequently occurs between the acquisition of spatially adjacent slices. Motion correction for each slice is thus critical for the reconstruction of 3D fetal brain MRI. In this paper, we propose a novel multi-task learning framework that adopts a coarse-to-fine strategy to jointly learn the pose estimation parameters for motion correction and the tissue segmentation map of each slice in fetal MRI. In particular, we design a regression-based segmentation loss as deep supervision to learn anatomically more meaningful features for pose estimation and segmentation. In the coarse stage, a U-Net-like network learns the features shared by both tasks. In the refinement stage, to fully utilize the anatomical information, signed distance maps constructed from the coarse segmentation are introduced to guide the feature learning for both tasks. Finally, iterative incorporation of the signed distance maps progressively improves the performance of both regression and segmentation. Experimental results of cross-validation across two fetal datasets acquired with different scanners and imaging protocols demonstrate the effectiveness of the proposed method in reducing pose estimation error while simultaneously obtaining superior tissue segmentation results, compared with state-of-the-art methods.
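The signed distance maps used to guide the refinement stage can be constructed from a binary segmentation with two Euclidean distance transforms. This is the standard construction shown as a sketch, not necessarily the authors' exact preprocessing:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed distance to the segmentation boundary: positive outside the
    structure, negative inside, near zero at the boundary — the form
    typically fed to a network as anatomical guidance."""
    mask = mask.astype(bool)
    return distance_transform_edt(~mask) - distance_transform_edt(mask)
```

Unlike the raw binary mask, the signed distance map varies smoothly in space, so it provides a useful gradient signal even far from the boundary.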
7
Vece CD, Lous ML, Dromey B, Vasconcelos F, David AL, Peebles D, Stoyanov D. Ultrasound Plane Pose Regression: Assessing Generalized Pose Coordinates in the Fetal Brain. IEEE Trans Med Robot Bionics 2024;6:41-52. [PMID: 38881728] [PMCID: PMC7616102] [DOI: 10.1109/tmrb.2023.3328638]
Abstract
In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a significant challenge in skill acquisition. We aim to build a US plane localization system for 3D visualization, training, and guidance without integrating additional sensors. This work builds on top of our previous work, which predicts the six-dimensional (6D) pose of arbitrarily oriented US planes slicing the fetal brain with respect to a normalized reference frame using a convolutional neural network (CNN) regression network. Here, we analyze in detail the assumptions of the normalized fetal brain reference frame and quantify its accuracy with respect to the acquisition of transventricular (TV) standard plane (SP) for fetal biometry. We investigate the impact of registration quality in the training and testing data and its subsequent effect on trained models. Finally, we introduce data augmentations and larger training sets that improve the results of our previous work, achieving median errors of 2.97 mm and 6.63° for translation and rotation, respectively.
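The reported translation and rotation errors correspond to standard pose-error metrics for 6D pose regression; for example (a generic sketch, not the paper's evaluation code), the geodesic angle between two rotation matrices and the Euclidean distance between translation vectors:

```python
import numpy as np

def rotation_error_deg(r_est, r_gt):
    # Geodesic distance on SO(3): the angle of the relative rotation
    # r_est^T @ r_gt, recovered from its trace.
    cos_theta = (np.trace(r_est.T @ r_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def translation_error(t_est, t_gt):
    # Euclidean distance between predicted and ground-truth translations.
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))
```

The `clip` guards against `arccos` receiving values marginally outside [-1, 1] due to floating-point error.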
Affiliation(s)
- Chiara Di Vece
- EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, WC1E 6DB London, U.K
- Maela Le Lous
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K
- Brian Dromey
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K
- Francisco Vasconcelos
- EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, WC1E 6DB London, U.K
- Anna L David
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K
- Donald Peebles
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K
- Danail Stoyanov
- EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, WC1E 6DB London, U.K
8
Vahedifard F, Ai HA, Supanich MP, Marathu KK, Liu X, Kocak M, Ansari SM, Akyuz M, Adepoju JO, Adler S, Byrd S. Automatic Ventriculomegaly Detection in Fetal Brain MRI: A Step-by-Step Deep Learning Model for Novel 2D-3D Linear Measurements. Diagnostics (Basel) 2023;13:2355. [PMID: 37510099] [PMCID: PMC10378043] [DOI: 10.3390/diagnostics13142355]
Abstract
In this study, we developed an automated workflow using a deep learning (DL) model to linearly measure the lateral ventricles in fetal brain MRI; measurements are subsequently classified as normal or ventriculomegaly, defined as a diameter wider than 10 mm at the level of the thalamus and choroid plexus. To accomplish this, we first trained a UNet-based deep learning model to segment the fetal brain into seven tissue categories using a public dataset (FeTA 2022) of fetal T2-weighted images. An automatic workflow was then developed to measure the lateral ventricle at the level of the thalamus and choroid plexus. The test dataset included 22 cases of normal and abnormal T2-weighted fetal brain MRIs. Measurements performed by our AI model were compared with manual measurements by a general radiologist and a neuroradiologist. The AI model correctly classified 95% of fetal brain MRI cases as normal or ventriculomegaly and measured the lateral ventricle diameter in 95% of cases with less than 1.7 mm error. The average difference between measurements was 0.90 mm for AI vs. the general radiologist and 0.82 mm for AI vs. the neuroradiologist, comparable to the 0.51 mm difference between the two radiologists. In addition, the AI model enabled the researchers to create 3D-reconstructed images, which represent real anatomy better than 2D images, and, when a manual measurement is performed, can provide both the right and left ventricles in a single cut instead of two. The measurement difference between the general radiologist and the algorithm (p = 0.9827), and between the neuroradiologist and the algorithm (p = 0.2378), was not statistically significant, whereas the difference between the general radiologist and the neuroradiologist was (p = 0.0043).
To the best of our knowledge, this is the first study to perform 2D linear measurement of ventriculomegaly with a 3D model based on an artificial intelligence approach. The paper presents a step-by-step approach for designing an AI model based on several radiological criteria. Overall, this study shows that AI can automatically measure the lateral ventricle in fetal brain MRIs and accurately classify cases as normal or abnormal.
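The 10-mm decision rule can be made concrete with a toy sketch. The helper names are hypothetical, and the row-wise horizontal extent below deliberately simplifies the clinical measurement, which is taken along an oblique axis at the level of the thalamus and choroid plexus:

```python
import numpy as np

def max_linear_extent_mm(mask, pixel_spacing_mm=1.0):
    """Widest horizontal extent of a 2D ventricle mask, in millimetres
    (a crude stand-in for the clinical atrial-diameter measurement)."""
    extents = []
    for row in mask:
        cols = np.flatnonzero(row)
        if cols.size:
            extents.append(cols[-1] - cols[0] + 1)
    return (max(extents) * pixel_spacing_mm) if extents else 0.0

def is_ventriculomegaly(diameter_mm, threshold_mm=10.0):
    # An atrial diameter wider than 10 mm is the conventional cut-off.
    return diameter_mm > threshold_mm
```

In the actual pipeline the diameter comes from the UNet segmentation at a specific anatomical level, and the same threshold then yields the normal/ventriculomegaly label.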
Affiliation(s)
- Farzan Vahedifard
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- H Asher Ai
- Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Mark P Supanich
- Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Kranthi K Marathu
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Xuchu Liu
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Mehmet Kocak
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Shehbaz M Ansari
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Melih Akyuz
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Jubril O Adepoju
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Seth Adler
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Sharon Byrd
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
9
Uus AU, Egloff Collado A, Roberts TA, Hajnal JV, Rutherford MA, Deprez M. Retrospective motion correction in foetal MRI for clinical applications: existing methods, applications and integration into clinical practice. Br J Radiol 2023;96:20220071. [PMID: 35834425] [PMCID: PMC7614695] [DOI: 10.1259/bjr.20220071]
Abstract
Foetal MRI is a complementary imaging method to antenatal ultrasound. It provides advanced information for the detection and characterisation of foetal brain and body anomalies. Even though modern single-shot sequences allow fast acquisition of 2D slices with high in-plane image quality, foetal MRI is intrinsically corrupted by motion. Foetal motion leads to loss of structural continuity and corrupted 3D volumetric information in stacks of slices. Furthermore, the arbitrary and constantly changing position of the foetus requires dynamic readjustment of acquisition planes during scanning.
Affiliation(s)
- Alena U. Uus
- Department of Biomedical Engineering, School Biomedical Engineering and Imaging Sciences, King’s College London, St. Thomas' Hospital, London, United Kingdom
- Alexia Egloff Collado
- Centre for the Developing Brain, School Biomedical Engineering and Imaging Sciences, King’s College London, St. Thomas' Hospital, London, United Kingdom
- Mary A. Rutherford
- Centre for the Developing Brain, School Biomedical Engineering and Imaging Sciences, King’s College London, St. Thomas' Hospital, London, United Kingdom
- Maria Deprez
- Department of Biomedical Engineering, School Biomedical Engineering and Imaging Sciences, King’s College London, St. Thomas' Hospital, London, United Kingdom
10
Meshaka R, Gaunt T, Shelmerdine SC. Artificial intelligence applied to fetal MRI: A scoping review of current research. Br J Radiol 2023;96:20211205. [PMID: 35286139] [PMCID: PMC10321262] [DOI: 10.1259/bjr.20211205]
Abstract
Artificial intelligence (AI) is defined as the development of computer systems to perform tasks normally requiring human intelligence. A subset of AI, known as machine learning (ML), takes this further by drawing inferences from patterns in data to 'learn' and 'adapt' without explicit instructions, meaning that computer systems can 'evolve' and hopefully improve without necessarily requiring external human influence. The potential of this novel technology has generated great interest in the medical community regarding how it can be applied in healthcare. Within radiology, the focus has mostly been on applications in oncological imaging, although new roles in other subspecialty fields are slowly emerging.
In this scoping review, we performed a literature search of the current state of the art and emerging trends in the use of artificial intelligence applied to fetal magnetic resonance imaging (MRI). Our search yielded several publications covering AI tools for anatomical organ segmentation, improved imaging sequences, and diagnostic applications such as automated fetal biometric measurements and the detection of congenital and acquired abnormalities. We highlight what we perceive as gaps in this literature and suggest avenues for further research. We hope the information presented highlights the varied ways in which novel digital technology could have an impact on future clinical practice with regard to fetal MRI.
Affiliation(s)
- Riwa Meshaka
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK
| | - Trevor Gaunt
- Department of Radiology, University College London Hospitals NHS Foundation Trust, London, UK
| | | |
Collapse
|
11
Ciceri T, Squarcina L, Pigoni A, Ferro A, Montano F, Bertoldo A, Persico N, Boito S, Triulzi FM, Conte G, Brambilla P, Peruzzo D. Geometric Reliability of Super-Resolution Reconstructed Images from Clinical Fetal MRI in the Second Trimester. Neuroinformatics 2023; 21:549-563. [PMID: 37284977 PMCID: PMC10406722 DOI: 10.1007/s12021-023-09635-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/20/2023] [Indexed: 06/08/2023]
Abstract
Fetal Magnetic Resonance Imaging (MRI) is an important noninvasive diagnostic tool to characterize central nervous system (CNS) development, contributing significantly to pregnancy management. In clinical practice, fetal MRI of the brain includes the acquisition of fast anatomical sequences over different planes, on which several biometric measurements are manually extracted. Recently, modern toolkits use the acquired two-dimensional (2D) images to reconstruct a Super-Resolution (SR) isotropic volume of the brain, enabling three-dimensional (3D) analysis of the fetal CNS. We analyzed 17 fetal MR exams performed in the second trimester, including orthogonal T2-weighted (T2w) Turbo Spin Echo (TSE) and balanced Fast Field Echo (b-FFE) sequences. For each subject and type of sequence, three distinct high-resolution volumes were reconstructed via the NiftyMIC, MIALSRTK, and SVRTK toolkits. Fifteen biometric measurements were assessed on both the acquired 2D images and the SR reconstructed volumes, and compared using Passing-Bablok regression, Bland-Altman plot analysis, and statistical tests. Results indicate that NiftyMIC and MIALSRTK provide reliable SR reconstructed volumes, suitable for biometric assessments. NiftyMIC also improves the operator intraclass correlation coefficient on the quantitative biometric measures with respect to the acquired 2D images. In addition, TSE sequences lead to fetal brain reconstructions that are more robust against intensity artifacts than b-FFE sequences, although the latter exhibit more defined anatomical details. Our findings support the adoption of automatic toolkits for fetal brain reconstruction to perform biometry evaluations of fetal brain development from common clinical MR at an early pregnancy stage.
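The Bland-Altman analysis used above to compare paired 2D and SR-based biometric measurements can be sketched in a few lines of standard-library Python; the measurement values below are invented for illustration and are not data from the study.

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bland-Altman agreement between two series of paired measurements.

    Returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences).
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    loa = 1.96 * stdev(diffs)
    return bias, bias - loa, bias + loa

# Hypothetical biometric measurements (mm): 2D acquisition vs. SR volume.
m2d = [21.3, 24.8, 19.9, 26.1, 23.4, 22.0]
msr = [21.0, 25.1, 19.6, 26.4, 23.1, 22.3]
bias, lo, hi = bland_altman(m2d, msr)
```

If most paired differences fall inside the limits of agreement and the bias is near zero, the two measurement routes can be considered interchangeable, which is the criterion the study applies to its fifteen biometric measures.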
Affiliation(s)
- Tommaso Ciceri
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
- Department of Information Engineering, University of Padua, Padua, Italy
- Letizia Squarcina
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Alessandro Pigoni
- Social and Affective Neuroscience Group, IMT School for Advanced Studies Lucca, Lucca, Italy
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Adele Ferro
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Florian Montano
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
- Alessandra Bertoldo
- Department of Information Engineering, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
- Nicola Persico
- Department of Woman, Child and Newborn, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Simona Boito
- Department of Woman, Child and Newborn, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Fabio Maria Triulzi
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Department of Services and Preventive Medicine, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Giorgio Conte
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Department of Services and Preventive Medicine, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Paolo Brambilla
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Denis Peruzzo
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
12
Vahedifard F, Adepoju JO, Supanich M, Ai HA, Liu X, Kocak M, Marathu KK, Byrd SE. Review of deep learning and artificial intelligence models in fetal brain magnetic resonance imaging. World J Clin Cases 2023; 11:3725-3735. [PMID: 37383127 PMCID: PMC10294149 DOI: 10.12998/wjcc.v11.i16.3725] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Revised: 01/30/2023] [Accepted: 05/06/2023] [Indexed: 06/02/2023] Open
Abstract
Central nervous system abnormalities in fetuses are fairly common, occurring in 0.1% to 0.2% of live births and in 3% to 6% of stillbirths, so early detection and categorization of fetal brain abnormalities are critical. Manually detecting and segmenting fetal brain magnetic resonance imaging (MRI) can be time-consuming and dependent on interpreter experience. Artificial intelligence (AI) algorithms and machine learning approaches have high potential for assisting in the early detection of these problems, improving the diagnostic process and follow-up procedures. The use of AI and machine learning techniques in fetal brain MRI is the subject of this narrative review. Using AI, anatomic fetal brain MRI processing has been investigated with models that automatically predict specific landmarks and segmentations. All gestational age weeks (17-38 wk) and different AI models (mainly Convolutional Neural Networks and U-Net) have been used. Some models achieved accuracy of 95% or more. AI can help preprocess and post-process fetal images and reconstruct images. It can also be used for gestational age prediction (with one-week accuracy), fetal brain extraction, fetal brain segmentation, and placenta detection. Some fetal brain linear measurements, such as cerebral and bone biparietal diameter, have been suggested. Classification of brain pathology has been studied using diagonal quadratic discriminant analysis, k-nearest neighbor, random forest, naive Bayes, and radial basis function neural network classifiers. Deep learning methods will become more powerful as more large-scale, labeled datasets become available. Sharing fetal brain MRI datasets is crucial because few fetal brain images are available. Physicians, particularly neuroradiologists, general radiologists, and perinatologists, should also be aware of AI's role in fetal brain MRI.
Affiliation(s)
- Farzan Vahedifard
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Jubril O Adepoju
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Mark Supanich
- Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL 60612, United States
- Hua Asher Ai
- Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL 60612, United States
- Xuchu Liu
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Mehmet Kocak
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Kranthi K Marathu
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Sharon E Byrd
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
13
Xu J, Moyer D, Gagoski B, Iglesias JE, Grant PE, Golland P, Adalsteinsson E. NeSVoR: Implicit Neural Representation for Slice-to-Volume Reconstruction in MRI. IEEE Trans Med Imaging 2023; 42:1707-1719. [PMID: 37018704 PMCID: PMC10287191 DOI: 10.1109/tmi.2023.3236216] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Reconstructing 3D MR volumes from multiple motion-corrupted stacks of 2D slices has shown promise in imaging of moving subjects, e.g., fetal MRI. However, existing slice-to-volume reconstruction methods are time-consuming, especially when a high-resolution volume is desired. Moreover, they remain vulnerable to severe subject motion and to image artifacts in the acquired slices. In this work, we present NeSVoR, a resolution-agnostic slice-to-volume reconstruction method that models the underlying volume as a continuous function of spatial coordinates with an implicit neural representation. To improve robustness to subject motion and other image artifacts, we adopt a continuous and comprehensive slice acquisition model that takes into account rigid inter-slice motion, the point spread function, and bias fields. NeSVoR also estimates pixel-wise and slice-wise variances of image noise, enabling removal of outliers during reconstruction and visualization of uncertainty. Extensive experiments are performed on both simulated and in vivo data to evaluate the proposed method. Results show that NeSVoR achieves state-of-the-art reconstruction quality while providing two- to ten-fold acceleration in reconstruction time over state-of-the-art algorithms.
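The core idea of NeSVoR, representing the volume as a continuous function of spatial coordinates, can be illustrated with a toy coordinate network. The fixed weights below are arbitrary stand-ins for a trained implicit representation (NeSVoR fits such a network per subject with a far richer architecture); the point is only that a coordinate-to-intensity function can be queried at any resolution.

```python
import math

def mlp_intensity(x, y, z):
    """Toy coordinate network: a continuous function I(x, y, z) -> intensity.

    One tanh hidden layer with hardcoded illustrative weights; a real
    implicit representation learns these weights from the acquired slices.
    """
    weights = [(0.9, -0.4, 0.2), (-0.3, 0.8, 0.5), (0.1, 0.6, -0.7)]
    biases = [0.1, -0.2, 0.05]
    hidden = [math.tanh(wx * x + wy * y + wz * z + b)
              for (wx, wy, wz), b in zip(weights, biases)]
    out_w = [0.7, -0.5, 0.3]
    return sum(w * h for w, h in zip(out_w, hidden))

# Because the representation is continuous, the same function can be
# sampled on grids of arbitrary resolution (resolution-agnostic output).
coarse = [mlp_intensity(i / 4, 0.5, 0.5) for i in range(5)]
fine = [mlp_intensity(i / 16, 0.5, 0.5) for i in range(17)]
```

Sampling a finer grid needs no re-reconstruction, only more function evaluations, which is what makes the method resolution-agnostic.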
14
Cordero-Grande L, Ortuno-Fisac JE, Del Hoyo AA, Uus A, Deprez M, Santos A, Hajnal JV, Ledesma-Carbayo MJ. Fetal MRI by Robust Deep Generative Prior Reconstruction and Diffeomorphic Registration. IEEE Trans Med Imaging 2023; 42:810-822. [PMID: 36288233 DOI: 10.1109/tmi.2022.3217725] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Magnetic resonance imaging of the whole fetal body and placenta is limited by different sources of motion affecting the womb. Usual scanning techniques employ single-shot multi-slice sequences where anatomical information in different slices may be subject to different deformations, contrast variations or artifacts. Volumetric reconstruction formulations have been proposed to correct for these factors, but they must accommodate a non-homogeneous and non-isotropic sampling, so regularization becomes necessary. Thus, in this paper we propose a deep generative prior for robust volumetric reconstructions, integrated with a diffeomorphic volume-to-slice registration method. Experiments are performed to validate our contributions and compare with state-of-the-art methods in the literature in a cohort of 72 fetal datasets in the range of 20-36 weeks gestational age. Quantitative as well as radiological assessment suggests improved image quality and more accurate prediction of gestational age at scan when compared with state-of-the-art reconstruction methods. In addition, gestational age prediction results from our volumetric reconstructions are competitive with existing brain-based approaches, with boosted accuracy when integrating information from organs other than the brain. Namely, a mean absolute error of 0.618 weeks (R2 = 0.958) is achieved when combining fetal brain and trunk information.
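The summary statistics reported above (mean absolute error in weeks and the coefficient of determination R²) take only a few lines of standard-library Python; the gestational ages below are hypothetical values for illustration, not the study's data.

```python
from statistics import mean

def mae(y_true, y_pred):
    """Mean absolute error between paired values."""
    return mean(abs(t - p) for t, p in zip(y_true, y_pred))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    m = mean(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - m) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical gestational ages (weeks): ground truth vs. predicted.
ga_true = [21.0, 24.5, 28.0, 31.5, 35.0]
ga_pred = [21.4, 24.1, 28.5, 31.2, 34.6]
err_mae = mae(ga_true, ga_pred)
r2 = r_squared(ga_true, ga_pred)
```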
15
Shi W, Xu H, Sun C, Sun J, Li Y, Xu X, Zheng T, Zhang Y, Wang G, Wu D. AFFIRM: Affinity Fusion-Based Framework for Iteratively Random Motion Correction of Multi-Slice Fetal Brain MRI. IEEE Trans Med Imaging 2023; 42:209-219. [PMID: 36129858 DOI: 10.1109/tmi.2022.3208277] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Multi-slice magnetic resonance images of the fetal brain are usually contaminated by severe and arbitrary fetal and maternal motion. Hence, stable and robust motion correction is necessary to reconstruct high-resolution 3D fetal brain volume for clinical diagnosis and quantitative analysis. However, the conventional registration-based correction has a limited capture range and is insufficient for detecting relatively large motions. Here, we present a novel Affinity Fusion-based Framework for Iteratively Random Motion (AFFIRM) correction of the multi-slice fetal brain MRI. It learns the sequential motion from multiple stacks of slices and integrates the features between 2D slices and reconstructed 3D volume using affinity fusion, which resembles the iterations between slice-to-volume registration and volumetric reconstruction in the regular pipeline. The method accurately estimates the motion regardless of brain orientations and outperforms other state-of-the-art learning-based methods on the simulated motion-corrupted data, with a 48.4% reduction of mean absolute error for rotation and 61.3% for displacement. We then incorporated AFFIRM into the multi-resolution slice-to-volume registration and tested it on the real-world fetal MRI scans at different gestation stages. The results indicated that adding AFFIRM to the conventional pipeline improved the success rate of fetal brain super-resolution reconstruction from 77.2% to 91.9%.
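The rotation error reported above is commonly measured as the geodesic distance between the estimated and ground-truth rotation matrices. A standard-library sketch follows (the test rotations are hypothetical, and the paper does not prescribe this exact implementation):

```python
import math

def rot_z(deg):
    """Rotation matrix about the z-axis by `deg` degrees."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rotation_error_deg(r_a, r_b):
    """Geodesic distance between rotations: arccos((tr(Ra^T Rb) - 1) / 2)."""
    # trace(Ra^T Rb) = sum over i, k of Ra[k][i] * Rb[k][i]
    trace = sum(r_a[k][i] * r_b[k][i] for i in range(3) for k in range(3))
    cosang = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp for safety
    return math.degrees(math.acos(cosang))

# Hypothetical estimated vs. true slice rotations, 15 degrees apart.
err = rotation_error_deg(rot_z(10.0), rot_z(25.0))
```

Averaging this quantity over all slices gives the rotation MAE that AFFIRM's 48.4% reduction refers to.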
16
Xu J, Moyer D, Grant PE, Golland P, Iglesias JE, Adalsteinsson E. SVoRT: Iterative Transformer for Slice-to-Volume Registration in Fetal Brain MRI. Med Image Comput Comput Assist Interv 2022; 13436:3-13. [PMID: 37103480 PMCID: PMC10129054 DOI: 10.1007/978-3-031-16446-0_1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/28/2023]
Abstract
Volumetric reconstruction of fetal brains from multiple stacks of MR slices, acquired in the presence of almost unpredictable and often severe subject motion, is a challenging task that is highly sensitive to the initialization of slice-to-volume transformations. We propose a novel slice-to-volume registration method using Transformers trained on synthetically transformed data, which model multiple stacks of MR slices as a sequence. With the attention mechanism, our model automatically detects the relevance between slices and predicts the transformation of one slice using information from other slices. We also estimate the underlying 3D volume to assist slice-to-volume registration and update the volume and transformations alternately to improve accuracy. Results on synthetic data show that our method achieves lower registration error and better reconstruction quality compared with existing state-of-the-art methods. Experiments with real-world MRI data are also performed to demonstrate the ability of the proposed model to improve the quality of 3D reconstruction under severe fetal motion.
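The attention mechanism that lets SVoRT weigh information across slices is, at its core, scaled dot-product attention over a sequence of slice features. A plain-Python sketch follows; the slice feature vectors are made up, and this omits the learned query/key/value projections and multi-head structure of a real Transformer.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query attends over all slices."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # relevance of every other slice
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three hypothetical per-slice feature vectors (e.g. from a CNN encoder);
# self-attention mixes each slice's features with those of related slices.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(feats, feats, feats)
```

Each output row is a convex combination of the value vectors, which is how the model injects information from other slices when predicting one slice's transformation.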
Affiliation(s)
- Junshen Xu
- Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Daniel Moyer
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- P Ellen Grant
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Polina Golland
- Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Juan Eugenio Iglesias
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Harvard Medical School, Boston, MA, USA
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA
- Elfar Adalsteinsson
- Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Institute for Medical Engineering and Science, MIT, Cambridge, MA, USA
17
Moser F, Huang R, Papież BW, Namburete AIL. BEAN: Brain Extraction and Alignment Network for 3D Fetal Neurosonography. Neuroimage 2022; 258:119341. [PMID: 35654376 DOI: 10.1016/j.neuroimage.2022.119341] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 04/08/2022] [Accepted: 05/28/2022] [Indexed: 01/18/2023] Open
Abstract
Brain extraction (masking of extra-cerebral tissues) and alignment are fundamental first steps of most neuroimage analysis pipelines. The lack of automated solutions for 3D ultrasound (US) has therefore limited its potential as a neuroimaging modality for studying fetal brain development using routinely acquired scans. In this work, we propose a convolutional neural network (CNN) that accurately and consistently aligns and extracts the fetal brain from minimally pre-processed 3D US scans. Our multi-task CNN, Brain Extraction and Alignment Network (BEAN), consists of two independent branches: 1) a fully-convolutional encoder-decoder branch for brain extraction of unaligned scans, and 2) a two-step regression-based branch for similarity alignment of the brain to a common coordinate space. BEAN was tested on 356 fetal head 3D scans spanning the gestational range of 14 to 30 weeks, significantly outperforming all current alternatives for fetal brain extraction and alignment. BEAN achieved state-of-the-art performance for both tasks, with a mean Dice Similarity Coefficient (DSC) of 0.94 for the brain extraction masks, and a mean DSC of 0.93 for the alignment of the target brain masks. The presented experimental results show that brain structures such as the thalamus, choroid plexus, cavum septum pellucidum, and Sylvian fissure, are consistently aligned throughout the dataset and remain clearly visible when the scans are averaged together. The BEAN implementation and related code can be found under www.github.com/felipemoser/kelluwen.
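The Dice Similarity Coefficient used to score BEAN's extraction masks and alignments is straightforward to compute; a standard-library sketch on toy flattened binary masks (the masks are invented for illustration):

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two flat binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree

# Toy 1D-flattened masks: predicted vs. reference brain voxels.
a = [1, 1, 1, 0, 0, 1]
b = [1, 1, 0, 0, 1, 1]
score = dice(a, b)
```

A DSC of 1.0 means perfect overlap; the 0.94 and 0.93 reported above indicate close agreement between BEAN's outputs and the reference masks.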
Affiliation(s)
- Felipe Moser
- Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, UK
- Ruobing Huang
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Nuffield Department of Women's and Reproductive Health, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Bartłomiej W Papież
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Ana I L Namburete
- Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, UK; Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom
18
Deep learning-based plane pose regression in obstetric ultrasound. Int J Comput Assist Radiol Surg 2022; 17:833-839. [PMID: 35489005 PMCID: PMC9110476 DOI: 10.1007/s11548-022-02609-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 03/10/2022] [Indexed: 01/16/2023]
Abstract
Purpose: In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a major challenge in skill acquisition. We aim to build a US plane localisation system for 3D visualisation, training, and guidance without integrating additional sensors. Methods: We propose a regression convolutional neural network (CNN) using image features to estimate the six-dimensional pose of arbitrarily oriented US planes relative to the fetal brain centre. The network was trained on synthetic images acquired from phantom 3D US volumes and fine-tuned on real scans. Training data was generated by slicing US volumes into imaging planes in Unity at random coordinates and more densely around the standard transventricular (TV) plane. Results: With phantom data, the median errors are 0.90 mm/1.17° and 0.44 mm/1.21° for random planes and planes close to the TV one, respectively. With real data, using a different fetus with the same gestational age (GA), these errors are 11.84 mm/25.17°. The average inference time is 2.97 ms per plane. Conclusion: The proposed network reliably localises US planes within the fetal brain in phantom data and successfully generalises pose regression for an unseen fetal brain from a similar GA as in training. Future development will expand the prediction to volumes of the whole fetus and assess its potential for vision-based, freehand US-assisted navigation when acquiring standard fetal planes. Supplementary Information: The online version contains supplementary material available at 10.1007/s11548-022-02609-z.
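The paired translation/rotation errors reported for predicted planes can be sketched by representing each plane as a centre point and a unit normal, then taking the median over planes; the plane poses below are invented for illustration and do not come from the paper.

```python
import math
from statistics import median

def plane_errors(pred, truth):
    """Per-plane translation error (mm) and normal-angle error (degrees).

    Each plane is a pair (centre_xyz, unit_normal_xyz).
    """
    (pc, pn), (tc, tn) = pred, truth
    trans = math.dist(pc, tc)  # Euclidean distance between plane centres
    cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(pn, tn))))
    return trans, math.degrees(math.acos(cosang))

# Hypothetical predictions vs. ground truth for three planes.
pairs = [
    (((0.5, 0.0, 0.0), (0.0, 0.0, 1.0)), ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))),
    (((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)), ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))),
    (((0.0, 0.0, 0.3), (1.0, 0.0, 0.0)), ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))),
]
t_err = median(plane_errors(p, t)[0] for p, t in pairs)
a_err = median(plane_errors(p, t)[1] for p, t in pairs)
```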
19
Yu Y, Chen Z, Zhuang Y, Yi H, Han L, Chen K, Lin J. A guiding approach of ultrasound scan for accurately obtaining standard diagnostic planes of fetal brain malformation. J Xray Sci Technol 2022; 30:1243-1260. [PMID: 36155489 DOI: 10.3233/xst-221278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
BACKGROUND: Standard planes (SPs) are crucial for the diagnosis of fetal brain malformation. However, acquiring the SPs accurately is very time-consuming and requires extensive experience, due to the large variation in fetal posture and the complexity of SP definitions. OBJECTIVE: This study aims to present a guiding approach that can assist the sonographer in obtaining the SPs more accurately and more quickly. METHODS: First, the sonographer uses the 3D probe to scan the fetal head to obtain 3D volume data; we then apply an affine transformation to calibrate the 3D volume data to the standard body position and establish the corresponding 3D head model in real time. When the sonographer uses the 2D probe to scan a plane, the position of the current plane is clearly shown in the 3D head model by our RLNet (regression location network), which guides the sonographer to obtain the three SPs more accurately. Once the three SPs are located, the sagittal and coronal planes can be generated automatically according to their spatial relationship with the three SPs. RESULTS: Experimental results on 3200 2D US images show that the RLNet achieves an average angle error of 3.91±2.86° for the transthalamic plane, an obvious improvement over other published data. The automatically generated coronal and sagittal SPs conform to the diagnostic criteria and the diagnostic requirements of fetal brain malformation. CONCLUSIONS: A guided scanning method based on deep learning for ultrasonic brain malformation screening is proposed for the first time, and it has pragmatic value for future clinical application.
Affiliation(s)
- Yalan Yu
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Zhong Chen
- Department of US, General Hospital of Western Theater Command, Chengdu, China
- Yan Zhuang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Heng Yi
- Department of US, General Hospital of Western Theater Command, Chengdu, China
- Lin Han
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Haihong Intellimage Medical Technology (Tianjin) Co., Ltd, Tianjin, China
- Ke Chen
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Jiangli Lin
- College of Biomedical Engineering, Sichuan University, Chengdu, China
20
Zheng S, Yang X, Wang Y, Ding M, Hou W. Unsupervised Cross-Modality Domain Adaptation Network for CNN-Based X-ray to CT. IEEE J Biomed Health Inform 2021; 26:2637-2647. [PMID: 34914602 DOI: 10.1109/jbhi.2021.3135890] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
2D/3D registration that achieves high accuracy and real-time computation is one of the enabling technologies for radiotherapy and image-guided surgery. Recently, the Convolutional Neural Network (CNN) has been explored to significantly improve the accuracy and efficiency of 2D/3D registration. A pair of intraoperative 2D X-ray images and synthetic data from a pre-operative volume are often required to model the nonconvex mappings between registration parameters and image residuals. However, collecting a large clinical dataset with accurate poses for X-ray images can be very challenging or even impractical, while exclusive training on synthetic data frequently causes performance degradation when tested on X-rays. Thus, we propose to train a model on the source domain (i.e., synthetic data) to build the appearance-pose relationship first, and then use an unsupervised cross-modality domain adaptation network (UCMDAN) to adapt the model to the target domain (i.e., X-rays) through adversarial learning. We propose to narrow the significant domain gap by alignment in both pixel and feature space. In particular, image appearance transformation and domain-invariant feature learning from multiple aspects are conducted synergistically. Extensive experiments on CT and CBCT datasets show that the proposed UCMDAN outperforms existing state-of-the-art domain adaptation approaches.
21
Wang Y, Du W, Wang H, Zhao Y. Intelligent Generation Method of Innovative Structures Based on Topology Optimization and Deep Learning. Materials 2021; 14:7680. [PMID: 34947275 PMCID: PMC8706216 DOI: 10.3390/ma14247680] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 12/04/2021] [Accepted: 12/11/2021] [Indexed: 12/23/2022]
Abstract
Computer-aided design has been widely used in structural calculation and analysis, but there are still challenges in generating innovative structures intelligently. Aiming at this issue, a new method was proposed to realize the intelligent generation of innovative structures based on topology optimization and deep learning. Firstly, a large number of structural models obtained from topology optimization under different optimization parameters were extracted to produce the training set images, and the training set labels were defined as the corresponding load cases. Then, the boundary equilibrium generative adversarial networks (BEGAN) deep learning algorithm was applied to generate numerous innovative structures. Finally, the generated structures were evaluated by a series of evaluation indexes, including innovation, aesthetics, machinability, and mechanical performance. Combined with two engineering cases, the application process of the above method is described here in detail. Furthermore, the 3D reconstruction and additive manufacturing techniques were applied to manufacture the structural models. The research results showed that the proposed approach of structural generation based on topology optimization and deep learning is feasible, and can not only generate innovative structures but also optimize the material consumption and mechanical performance further.
Affiliation(s)
- Yingqi Wang
- Institute of Steel and Spatial Structures, College of Civil Engineering and Architecture, Henan University, Kaifeng 475004, China
- Wenfeng Du
- Institute of Steel and Spatial Structures, College of Civil Engineering and Architecture, Henan University, Kaifeng 475004, China
- Henan Provincial Research Center of Engineering Technology on Assembly Buildings, Kaifeng 475004, China
- Hui Wang
- Institute of Steel and Spatial Structures, College of Civil Engineering and Architecture, Henan University, Kaifeng 475004, China
- Yannan Zhao
- Institute of Steel and Spatial Structures, College of Civil Engineering and Architecture, Henan University, Kaifeng 475004, China
22
23
Hoffmann M, Abaci Turk E, Gagoski B, Morgan L, Wighton P, Tisdall MD, Reuter M, Adalsteinsson E, Grant PE, Wald LL, van der Kouwe AJW. Rapid head-pose detection for automated slice prescription of fetal-brain MRI. Int J Imaging Syst Technol 2021; 31:1136-1154. [PMID: 34421216 PMCID: PMC8372849 DOI: 10.1002/ima.22563] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 01/29/2021] [Accepted: 02/09/2021] [Indexed: 06/13/2023]
Abstract
In fetal-brain MRI, head-pose changes between prescription and acquisition present a challenge to obtaining the standard sagittal, coronal and axial views essential to clinical assessment. As motion limits acquisitions to thick slices that preclude retrospective resampling, technologists repeat ~55-second stack-of-slices scans (HASTE) with incrementally reoriented field of view numerous times, deducing the head pose from previous stacks. To address this inefficient workflow, we propose a robust head-pose detection algorithm using full-uterus scout scans (EPI) which take ~5 seconds to acquire. Our ~2-second procedure automatically locates the fetal brain and eyes, which we derive from maximally stable extremal regions (MSERs). The success rate of the method exceeds 94% in the third trimester, outperforming a trained technologist by up to 20%. The pipeline may be used to automatically orient the anatomical sequence, removing the need to estimate the head pose from 2D views and reducing delays during which motion can occur.
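The pipeline above derives the head pose from the located brain and eyes. A minimal sketch of one way to build an orthonormal head frame from three such landmarks follows; the axis convention and the centroid coordinates are illustrative assumptions, not the paper's exact procedure.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def head_frame(brain, eye_left, eye_right):
    """Orthonormal head axes from three detected landmark centroids.

    x: right-to-left across the eyes; y: from the inter-eye midpoint
    towards the brain centre (roughly posterior); z: x cross y
    (roughly superior). Convention chosen for illustration only.
    """
    x = normalize(tuple(l - r for l, r in zip(eye_left, eye_right)))
    mid_eyes = tuple((l + r) / 2 for l, r in zip(eye_left, eye_right))
    back = tuple(b - m for b, m in zip(brain, mid_eyes))
    z = normalize(cross(x, back))
    y = cross(z, x)  # completes the right-handed orthonormal frame
    return x, y, z

# Hypothetical landmark centroids (arbitrary units).
axes = head_frame(brain=(0.0, 5.0, 0.0),
                  eye_left=(1.5, 0.0, 0.0),
                  eye_right=(-1.5, 0.0, 0.0))
```

Once such a frame is available, prescribing sagittal, coronal and axial slices reduces to aligning the scanner's field of view with these three axes.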
Affiliation(s)
- Malte Hoffmann
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Esra Abaci Turk
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, Massachusetts, USA
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Borjan Gagoski
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, Massachusetts, USA
- Leah Morgan
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Paul Wighton
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Matthew Dylan Tisdall
- Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Martin Reuter
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- German Center for Neurodegenerative Diseases, Bonn, Germany
- Elfar Adalsteinsson
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Patricia Ellen Grant
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, Massachusetts, USA
- Lawrence L. Wald
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- André J. W. van der Kouwe
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
24
|
Wei W, Haishan X, Alpers J, Rak M, Hansen C. A deep learning approach for 2D ultrasound and 3D CT/MR image registration in liver tumor ablation. Comput Methods Programs Biomed 2021; 206:106117. [PMID: 34022696] [DOI: 10.1016/j.cmpb.2021.106117] [Received: 07/27/2020] [Accepted: 04/10/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Liver tumor ablation is often guided by ultrasound (US). Due to poor image quality, intraoperative US is fused with preoperative computed tomography or magnetic resonance (CT/MR) images to provide visual guidance. As of today, the underlying 2D US to 3D CT/MR registration problem remains a very challenging task. METHODS We propose a novel pipeline to address this registration problem. Contrary to previous work, we do not formulate the problem as a regression task, which - for the given registration problem - achieves low accuracy and robustness due to the limited US soft-tissue contrast and the inter-patient variability of liver vessels. Instead, we first estimate the US probe angle coarsely using a classification network. Given this coarse initialization, we then refine the registration by formulating the problem as a segmentation task, estimating the US plane in the 3D CT/MR volume through segmentation. RESULTS We benchmark our approach on 1035 clinical images from 52 patients, yielding average registration errors of 11.6° and 4.7 mm, which outperforms the state-of-the-art SVR method [1]. CONCLUSION Our results show the efficiency of the proposed registration pipeline, which has the potential to improve the robustness and accuracy of intraoperative patient registration.
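Formulating the plane estimate as segmentation implies a final geometric step: recover the plane from segmented voxel coordinates and score it against the reference by the angle between plane normals (one natural reading of the reported angular error). A hedged numpy sketch of that step only, not of the networks themselves:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal).
    The normal is the right singular vector of the smallest singular value."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]

def plane_angle_deg(n1, n2):
    """Angle between two planes in degrees, ignoring normal orientation."""
    cosang = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Noisy samples of the z = 0 plane should recover a near-vertical normal
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(-1, 1, (200, 2)), 0.01 * rng.normal(size=200)]
_, n = fit_plane(pts)
err = plane_angle_deg(n, [0, 0, 1])
```

The SVD-based fit is the standard total-least-squares plane estimate and is robust to the in-plane distribution of the segmented voxels.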
Affiliation(s)
- Wei Wei
- Faculty of Computer Science & Research Campus STIMULATE, University of Magdeburg, Germany
- Xu Haishan
- Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, China
- Julian Alpers
- Faculty of Computer Science & Research Campus STIMULATE, University of Magdeburg, Germany
- Marko Rak
- Faculty of Computer Science & Research Campus STIMULATE, University of Magdeburg, Germany
- Christian Hansen
- Faculty of Computer Science & Research Campus STIMULATE, University of Magdeburg, Germany

25
Yan G, Liao Y, Li K, Zhang X, Zheng W, Zhang Y, Zou Y, Chen D, Wu D. Diffusion MRI Based Myometrium Tractography for Detection of Placenta Accreta Spectrum Disorder. J Magn Reson Imaging 2021; 55:255-264. [PMID: 34155718] [DOI: 10.1002/jmri.27794] [Received: 03/12/2021] [Revised: 05/29/2021] [Accepted: 06/02/2021] [Indexed: 11/10/2022]
Abstract
BACKGROUND Prenatal diagnosis of placenta accreta spectrum (PAS) disorders is difficult. Magnetic resonance imaging (MRI) has been shown to be a useful supplementary method to ultrasound. PURPOSE To investigate diffusion MRI (dMRI) based tractography as a tool for detecting PAS disorders, and to evaluate its performance compared with anatomical MRI. STUDY TYPE Prospective. POPULATION Forty-seven pregnant women in the third trimester with risk factors for PAS. FIELD STRENGTH/SEQUENCE Fast imaging employing steady-state acquisition and high-angular-resolution dMRI at 1.5 Tesla. ASSESSMENT Diagnosis of PAS was performed by three radiologists based on the dMRI-based feature of myometrial fiber discontinuity and on commonly used anatomical features, including presence of a dark band, discontinuous myometrium, and bladder wall interruption. We evaluated the sensitivity, specificity, accuracy, and area under the curve (AUC) of the individual features and established an integrated model with random forest analysis. STATISTICAL TESTS Maternal age and gestational age at scan were compared between the PAS and control groups using a t-test, and childbearing history was compared using a chi-squared test. The random forest model was employed to combine the anatomical and dMRI features with 5-fold cross-validation, and the weight of each feature was normalized to evaluate its importance in predicting PAS. RESULTS Based on surgical pathology reports, 16 of 47 patients had confirmed PAS. The anatomical feature of dark bands and the tractography marker each achieved the highest AUC of 0.842 for predicting PAS, and integrating the anatomical and tractography features further improved the AUC to 0.880 with an accuracy of 87.2%. The tractography feature contributed most (30.1%) to the integrated model. DATA CONCLUSION Myometrial tractography demonstrated superior performance in detecting PAS. Moreover, the combination of dMRI-based tractography and anatomical MRI could potentially improve the diagnosis of PAS disorders in clinical practice. LEVEL OF EVIDENCE 2 TECHNICAL EFFICACY STAGE 2.
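The AUC figures reported in studies like this one have a concrete probabilistic reading: the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney form). An illustrative numpy implementation with made-up scores; the function and data are mine, not the study's:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    outscores the negative, with ties counting one half."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Perfectly separating scores give an AUC of 1.0
a = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])
```

This rank-based estimator is threshold-free, which is why it is a common summary for reader scores and feature-based classifiers alike.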
Affiliation(s)
- Guohui Yan
- Department of Radiology, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yuhao Liao
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Kui Li
- Department of Radiology, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xiaodan Zhang
- Department of Radiology, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weizeng Zheng
- Department of Radiology, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yi Zhang
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Yu Zou
- Department of Radiology, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Danqing Chen
- Department of Obstetrics, Women's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Dan Wu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China

26
Yeung PH, Aliasi M, Papageorghiou AT, Haak M, Xie W, Namburete AIL. Learning to map 2D ultrasound images into 3D space with minimal human annotation. Med Image Anal 2021; 70:101998. [PMID: 33711741] [DOI: 10.1016/j.media.2021.101998] [Received: 03/30/2020] [Revised: 01/26/2021] [Accepted: 02/01/2021] [Indexed: 10/22/2022]
Abstract
In fetal neurosonography, aligning two-dimensional (2D) ultrasound scans to their corresponding plane in three-dimensional (3D) space remains a challenging task. In this paper, we propose a convolutional neural network that predicts the position of 2D ultrasound fetal brain scans in 3D atlas space. Instead of purely supervised learning, which requires heavy annotation of each 2D scan, we train the model by sampling 2D slices from 3D fetal brain volumes and tasking it with predicting the inverse of the sampling process, in the spirit of self-supervised learning. The model takes a set of images as input and learns to compare them in pairs. Each pairwise comparison is weighted by an attention module according to its contribution to the prediction, learnt implicitly during training. The feature representation of each image thus incorporates its position relative to all the other images in the set, and is later used for the final prediction. We benchmark our model on 2D slices sampled from 3D fetal brain volumes at 18-22 weeks' gestational age. We evaluate with three metrics (Euclidean distance, plane angle, and normalized cross-correlation) that capture both the geometric and appearance discrepancies between ground truth and prediction; on all three, our model outperforms a baseline model by as much as 23% as the number of input images increases. We further demonstrate that our model generalizes to (i) real 2D standard transthalamic plane images, achieving performance comparable to human annotation, and (ii) video sequences of 2D freehand fetal brain scans.
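The attention-weighted pairwise comparison can be sketched in miniature: each image's feature becomes a softmax-weighted combination of its differences to every other image in the set. This is purely illustrative numpy; the real model learns the attention scoring, whereas here a fixed projection vector `w` stands in for it:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1D score vector."""
    x = np.asarray(x, float)
    e = np.exp(x - x.max())
    return e / e.sum()

def attended_feature(feats, i, w):
    """Feature for image i built from attention-weighted pairwise
    differences: each pair (i, j) contributes feats[i] - feats[j],
    weighted by a softmax over per-pair scores."""
    feats = np.asarray(feats, float)
    pairs = np.array([feats[i] - feats[j] for j in range(len(feats)) if j != i])
    alpha = softmax(pairs @ w)      # attention weights over the pairs, sum to 1
    return alpha @ pairs            # attention-weighted combination

feats = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
f0 = attended_feature(feats, 0, np.zeros(2))  # zero scores -> uniform attention
```

With a zero projection the attention is uniform and the result reduces to the mean pairwise difference; a trained module would instead up-weight the most informative comparisons.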
Affiliation(s)
- Pak-Hei Yeung
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
- Moska Aliasi
- Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Aris T Papageorghiou
- Nuffield Department of Obstetrics and Gynaecology, University of Oxford, Oxford, United Kingdom
- Monique Haak
- Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Weidi Xie
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom; Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Ana I L Namburete
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom

27
Davidson L, Boland MR. Towards deep phenotyping pregnancy: a systematic review on artificial intelligence and machine learning methods to improve pregnancy outcomes. Brief Bioinform 2021; 22:bbaa369. [PMID: 33406530] [PMCID: PMC8424395] [DOI: 10.1093/bib/bbaa369] [Received: 05/22/2020] [Revised: 10/13/2020] [Accepted: 11/18/2020] [Indexed: 12/16/2022]
Abstract
Objective Development of novel informatics methods focused on improving pregnancy outcomes remains an active area of research. The purpose of this study is to systematically review the ways that artificial intelligence (AI) and machine learning (ML), including deep learning (DL), methodologies can inform patient care during pregnancy and improve outcomes. Materials and methods We searched English articles on EMBASE, PubMed and SCOPUS. Search terms included ML, AI, pregnancy and informatics. We included research articles and book chapters, excluding conference papers, editorials and notes. Results We identified 127 distinct studies from our queries that were relevant to our topic and included in the review. We found that supervised learning methods were more popular (n = 69) than unsupervised methods (n = 9). Popular methods included support vector machines (n = 30), artificial neural networks (n = 22), regression analysis (n = 17) and random forests (n = 16). Methods such as DL are beginning to gain traction (n = 13). Common areas within the pregnancy domain where AI and ML methods were used the most include prenatal care (e.g. fetal anomalies, placental functioning) (n = 73); perinatal care, birth and delivery (n = 20); and preterm birth (n = 13). Efforts to translate AI into clinical care include clinical decision support systems (n = 24) and mobile health applications (n = 9). Conclusions Overall, we found that ML and AI methods are being employed to optimize pregnancy outcomes, including modern DL methods (n = 13). Future research should focus on less-studied pregnancy domain areas, including postnatal and postpartum care (n = 2). Also, more work on clinical adoption of AI methods and the ethical implications of such adoption is needed.
Affiliation(s)
- Lena Davidson
- MS degree at College of St. Scholastica, Duluth, MN, USA
- Mary Regina Boland
- Department of Biostatistics, Epidemiology, and Informatics at the University of Pennsylvania

28
Chen Z, Xu Z, Gui Q, Yang X, Cheng Q, Hou W, Ding M. Self-learning based medical image representation for rigid real-time and multimodal slice-to-volume registration. Inf Sci (N Y) 2020. [DOI: 10.1016/j.ins.2020.06.072] [Indexed: 10/23/2022]
29
Singh A, Salehi SSM, Gholipour A. Deep Predictive Motion Tracking in Magnetic Resonance Imaging: Application to Fetal Imaging. IEEE Trans Med Imaging 2020; 39:3523-3534. [PMID: 32746102] [PMCID: PMC7787194] [DOI: 10.1109/tmi.2020.2998600] [Indexed: 05/12/2023]
Abstract
Fetal magnetic resonance imaging (MRI) is challenged by uncontrollable, large, and irregular fetal movements. It is, therefore, performed through visual monitoring of fetal motion and repeated acquisitions to ensure diagnostic-quality images are acquired. Nevertheless, visual monitoring of fetal motion based on displayed slices, and navigation at the level of stacks of slices, is inefficient. The current process is highly operator-dependent, increases scanner usage and cost, and significantly lengthens fetal MRI scans, making them hard to tolerate for pregnant women. To help build automatic MRI motion tracking and navigation systems that overcome these limitations and improve fetal imaging, we have developed a new real-time, image-based motion tracking method based on deep learning that learns to predict fetal motion directly from acquired images. Our method is based on a recurrent neural network, composed of spatial and temporal encoder-decoders, that infers motion parameters from anatomical features extracted from sequences of acquired slices. We compared our trained network on held-out test sets (including data with different characteristics, e.g. different fetuses scanned at different ages, and motion trajectories recorded from volunteer subjects) against networks designed for estimation as well as methods adapted to make predictions. The results show that our method outperformed the alternative techniques and achieved real-time performance, with average errors of 3.5 and 8 degrees for the estimation and prediction tasks, respectively. Our real-time deep predictive motion tracking technique can be used to assess fetal movements, to guide slice acquisitions, and to build navigation systems for fetal MRI.
30
Pei Y, Wang L, Zhao F, Zhong T, Liao L, Shen D, Li G. Anatomy-Guided Convolutional Neural Network for Motion Correction in Fetal Brain MRI. Mach Learn Med Imaging 2020; 12436:384-393. [PMID: 33644782] [PMCID: PMC7912521] [DOI: 10.1007/978-3-030-59861-7_39] [Indexed: 05/12/2023]
Abstract
Fetal magnetic resonance imaging (MRI) is challenged by fetal movements and maternal breathing. Although fast MRI sequences allow artifact-free acquisition of individual 2D slices, motion commonly occurs between slice acquisitions. Motion correction for each slice is thus very important for reconstruction of 3D fetal brain MRI, but is highly operator-dependent and time-consuming. Approaches based on convolutional neural networks (CNNs) have achieved encouraging performance in predicting the 3D motion parameters of arbitrarily oriented 2D slices, but they do not capitalize on important brain structural information. To address this problem, we propose a new multi-task learning framework that jointly learns the transformation parameters and tissue segmentation map of each slice, providing brain anatomical information to guide the mapping from 2D slices to 3D volumetric space in a coarse-to-fine manner. In the coarse stage, the first network learns the features shared by the regression and segmentation tasks. In the refinement stage, to fully utilize the anatomical information, distance maps constructed from the coarse segmentation are introduced to the second network. Finally, incorporating the signed distance maps to guide the regression and segmentation together improves performance in both tasks. Experimental results indicate that the proposed method achieves superior performance in reducing the motion prediction error while simultaneously obtaining satisfactory tissue segmentation, compared with state-of-the-art methods.
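The distance maps used in the refinement stage can be illustrated directly: a signed Euclidean distance to the segmentation boundary, negative inside the tissue and positive outside. A brute-force numpy sketch for small grids (a real pipeline would use a fast distance transform; this is only to make the quantity concrete):

```python
import numpy as np

def signed_distance_map(mask):
    """Signed distance map of a binary segmentation: for each voxel, the
    Euclidean distance to the nearest voxel of the opposite class,
    negative inside the mask and positive outside. O(n^2) brute force;
    assumes the mask is neither all-True nor all-False."""
    mask = np.asarray(mask, bool)
    inside = np.argwhere(mask).astype(float)
    outside = np.argwhere(~mask).astype(float)
    sdm = np.empty(mask.shape)
    for idx in np.ndindex(mask.shape):
        p = np.asarray(idx, float)
        if mask[idx]:
            sdm[idx] = -np.linalg.norm(outside - p, axis=1).min()
        else:
            sdm[idx] = np.linalg.norm(inside - p, axis=1).min()
    return sdm

# 3x3 foreground block centred in a 5x5 grid
mask = np.zeros((5, 5), bool)
mask[1:4, 1:4] = True
sdm = signed_distance_map(mask)
```

Unlike the raw binary mask, the signed map varies smoothly across the boundary, which is what makes it a useful guidance signal for both regression and segmentation.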
Affiliation(s)
- Yuchen Pei
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Fenqiang Zhao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Tao Zhong
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Lufan Liao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA

31
Uus A, Zhang T, Jackson LH, Roberts TA, Rutherford MA, Hajnal JV, Deprez M. Deformable Slice-to-Volume Registration for Motion Correction of Fetal Body and Placenta MRI. IEEE Trans Med Imaging 2020; 39:2750-2759. [PMID: 32086200] [PMCID: PMC7116020] [DOI: 10.1109/tmi.2020.2974844] [Indexed: 05/07/2023]
Abstract
In in-utero MRI, motion correction for the fetal body and placenta poses a particular challenge due to local non-rigid deformations of organs caused by bending and stretching. Existing slice-to-volume registration (SVR) reconstruction methods are widely employed for motion correction of the fetal brain, which undergoes only rigid transformation. However, for reconstruction of the fetal body and placenta, rigid registration cannot resolve misregistrations due to deformable motion, resulting in degradation of features in the reconstructed volume. We propose Deformable SVR (DSVR), a novel approach for non-rigid motion correction of fetal MRI based on a hierarchical deformable SVR scheme, allowing high-resolution reconstruction of the fetal body and placenta. Additionally, a robust scheme for structure-based rejection of outliers minimises the impact of registration errors. The improved performance of DSVR in comparison to SVR and patch-to-volume registration (PVR) methods is quantitatively demonstrated in simulated experiments and on 20 fetal MRI datasets from the 28-31 week gestational age (GA) range with varying degrees of motion corruption. In addition, we present a qualitative evaluation of 100 fetal body cases from the 20-34 week GA range.
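The paper's structure-based outlier rejection is its own scheme; a generic stand-in that conveys the idea is robust reweighting of per-slice registration residuals, e.g. Huber-style weights with a MAD scale estimate. The sketch below is illustrative only and is not DSVR's actual rule:

```python
import numpy as np

def robust_slice_weights(residuals, delta=1.345):
    """Huber-style IRLS weights for per-slice registration residuals:
    weight 1 while |r| <= delta * sigma, decaying as delta*sigma/|r| beyond.
    sigma is a robust scale estimate from the median absolute deviation."""
    r = np.asarray(residuals, float)
    sigma = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
    u = np.abs(r) / sigma
    return np.where(u <= delta, 1.0, delta / u)

# Four well-aligned slices and one gross outlier
w = robust_slice_weights([0.1, -0.2, 0.15, 0.05, 10.0])
```

Down-weighting rather than hard-rejecting slices keeps partially useful data in the reconstruction while still suppressing misregistered outliers.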
32
Xiao Y, Rivaz H, Chabanas M, Fortin M, Machado I, Ou Y, Heinrich MP, Schnabel JA. Evaluation of MRI to Ultrasound Registration Methods for Brain Shift Correction: The CuRIOUS2018 Challenge. IEEE Trans Med Imaging 2020; 39:777-786. [PMID: 31425023] [PMCID: PMC7611407] [DOI: 10.1109/tmi.2019.2935060] [Indexed: 05/07/2023]
Abstract
In brain tumor surgery, the quality and safety of the procedure can be impacted by intra-operative tissue deformation, called brain shift. Brain shift can move the surgical targets and other vital structures such as blood vessels, thus invalidating the pre-surgical plan. Intra-operative ultrasound (iUS) is a convenient and cost-effective imaging tool to track brain shift and tumor resection. Accurate image registration techniques that update pre-surgical MRI based on iUS are crucial but challenging. The MICCAI Challenge 2018 for Correction of Brain shift with Intra-Operative UltraSound (CuRIOUS2018) provided a public platform to benchmark MRI-iUS registration algorithms on newly released clinical datasets. In this work, we present the data, setup, evaluation, and results of CuRIOUS2018, which received 6 fully automated algorithms from leading academic and industrial research groups. All algorithms were first trained with the public RESECT database, and then ranked based on a test dataset of 10 additional cases with identical data curation and annotation protocols as the RESECT database. The article compares the results of all participating teams and discusses the insights gained from the challenge, as well as future work.
Affiliation(s)
- Yiming Xiao
- Robarts Research Institute, Western University, London, ON N6A 5B7, Canada
- Hassan Rivaz
- PERFORM Centre and Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
- Matthieu Chabanas
- School of Computer Science and Applied Mathematics, Grenoble Institute of Technology, 38031 Grenoble, France; TIMC-IMAG Laboratory, University of Grenoble Alpes, 38400 Grenoble, France
- Maryse Fortin
- PERFORM Centre and Department of Health, Kinesiology and Applied Physiology, Concordia University, Montreal, QC H3G 1M8, Canada
- Ines Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Mattias P. Heinrich
- Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Julia A. Schnabel
- School of Biomedical Engineering and Imaging Sciences, King's College London, London WC2R 2LS, UK

33
Ebner M, Wang G, Li W, Aertsen M, Patel PA, Aughwane R, Melbourne A, Doel T, Dymarkowski S, De Coppi P, David AL, Deprest J, Ourselin S, Vercauteren T. An automated framework for localization, segmentation and super-resolution reconstruction of fetal brain MRI. Neuroimage 2020; 206:116324. [PMID: 31704293] [PMCID: PMC7103783] [DOI: 10.1016/j.neuroimage.2019.116324] [Received: 05/07/2019] [Revised: 09/26/2019] [Accepted: 10/29/2019] [Indexed: 12/17/2022]
Abstract
High-resolution volume reconstruction from multiple motion-corrupted stacks of 2D slices plays an increasing role in fetal brain Magnetic Resonance Imaging (MRI) studies. Existing reconstruction methods are time-consuming and often require user interaction to localize and extract the brain from several stacks of 2D slices. We propose a fully automatic framework for fetal brain reconstruction that consists of four stages: 1) fetal brain localization based on a coarse segmentation by a Convolutional Neural Network (CNN), 2) fine segmentation by another CNN trained with a multi-scale loss function, 3) novel, single-parameter outlier-robust super-resolution reconstruction, and 4) fast and automatic high-resolution visualization in standard anatomical space suitable for pathological brains. We validated our framework with images from fetuses with normal brains and with variable degrees of ventriculomegaly associated with open spina bifida, a congenital malformation that also affects the brain. Experiments show that each step of our proposed pipeline outperforms state-of-the-art methods in both segmentation and reconstruction comparisons, including expert-reader quality assessments. The reconstruction results of our proposed method compare favorably with those obtained by manual, labor-intensive brain segmentation, which unlocks the potential use of automatic fetal brain reconstruction studies in clinical practice.
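Stage 3 (outlier-robust super-resolution) can be caricatured in 1D: solve for a high-resolution signal from several downsampled observations by iteratively reweighted least squares, so that corrupted stacks progressively lose influence. A toy numpy sketch under invented operators and a simple reweighting rule, not the authors' single-parameter formulation:

```python
import numpy as np

def sr_reconstruct(stacks, ops, lam=0.01, iters=5):
    """Toy robust super-resolution: recover HR signal x from LR stacks
    y_i = A_i x by IRLS with Tikhonov regularisation lam; stack weights
    w_i = 1 / (1 + ||A_i x - y_i||^2) are refreshed each iteration."""
    n = ops[0].shape[1]
    w = np.ones(len(stacks))
    x = np.zeros(n)
    for _ in range(iters):
        AtA = lam * np.eye(n)
        Aty = np.zeros(n)
        for wi, A, y in zip(w, ops, stacks):
            AtA += wi * A.T @ A          # weighted normal equations
            Aty += wi * A.T @ y
        x = np.linalg.solve(AtA, Aty)
        res = np.array([np.linalg.norm(A @ x - y) for A, y in zip(ops, stacks)])
        w = 1.0 / (1.0 + res ** 2)       # simple robust reweighting
    return x, w

# HR signal of length 8; A averages neighbouring pairs into 4 LR samples
x_true = np.arange(8.0)
A = np.zeros((4, 8))
for i in range(4):
    A[i, 2 * i:2 * i + 2] = 0.5
y = A @ x_true
x, w = sr_reconstruct([y, y, y + 50.0], [A, A, A])  # third stack corrupted
```

After a few iterations the two consistent stacks keep near-unit weight while the corrupted one is effectively excluded, which is the behaviour an outlier-robust reconstruction needs.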
Affiliation(s)
- Michael Ebner
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Wenqi Li
- Nvidia, Cambridge, UK; School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Michael Aertsen
- Department of Radiology, University Hospitals KU Leuven, Leuven, Belgium
- Premal A Patel
- Department of Radiology, Great Ormond Street Hospital for Children, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Rosalind Aughwane
- Institute for Women's Health, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Andrew Melbourne
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK; Medical Physics and Biomedical Engineering, University College London, London, UK
- Tom Doel
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Steven Dymarkowski
- Department of Radiology, University Hospitals KU Leuven, Leuven, Belgium
- Paolo De Coppi
- Institute of Child Health, University College London, London, UK
- Anna L David
- Institute for Women's Health, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Department of Obstetrics and Gynaecology, University Hospitals KU Leuven, Leuven, Belgium
- Jan Deprest
- Department of Obstetrics and Gynaecology, University Hospitals KU Leuven, Leuven, Belgium; Institute for Women's Health, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Sébastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Department of Obstetrics and Gynaecology, University Hospitals KU Leuven, Leuven, Belgium

34
Mohseni Salehi SS, Khan S, Erdogmus D, Gholipour A. Real-Time Deep Pose Estimation With Geodesic Loss for Image-to-Template Rigid Registration. IEEE Trans Med Imaging 2019; 38:470-481. [PMID: 30138909] [PMCID: PMC6438698] [DOI: 10.1109/tmi.2018.2866442] [Indexed: 05/08/2023]
Abstract
With an aim to increase the capture range and accelerate the performance of state-of-the-art inter-subject and subject-to-template 3-D rigid registration, we propose deep learning-based methods that are trained to find the 3-D position of arbitrarily-oriented subjects or anatomy in a canonical space based on slices or volumes of medical images. For this, we propose regression convolutional neural networks (CNNs) that learn to predict the angle-axis representation of 3-D rotations and translations using image features. We use and compare mean square error and geodesic loss to train regression CNNs for 3-D pose estimation in two different scenarios: slice-to-volume registration and volume-to-volume registration. As an exemplary application, we applied the proposed methods to register arbitrarily oriented reconstructed images of fetuses scanned in utero across a wide gestational age range to a standard atlas space. Our results show that in such registration applications, which are amenable to learning, the proposed deep learning methods with geodesic loss minimization achieved 3-D pose estimation with a wide capture range in real time (<100 ms). We also tested the generalization capability of the trained CNNs on an expanded age range and on images of newborn subjects with similar and different MR image contrasts. We trained our models on T2-weighted fetal brain MRI scans and used them to predict the 3-D pose of newborn brains based on T1-weighted MRI scans. We showed that the trained models generalized well to the new domain when we performed image contrast transfer through a conditional generative adversarial network. This indicates that the domain of application of the trained deep regression CNNs can be expanded to image modalities and contrasts other than those used in training. A combination of our proposed methods with accelerated optimization-based registration algorithms can dramatically enhance the performance of automatic imaging devices and image processing methods of the future.
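The geodesic loss referred to above is the rotation angle of the relative rotation between prediction and ground truth, arccos((trace(R1ᵀR2) - 1) / 2). A minimal numpy sketch of the distance itself (not the training code):

```python
import numpy as np

def geodesic_distance(R1, R2):
    """Geodesic distance in radians between two 3D rotation matrices:
    the rotation angle of the relative rotation R1^T R2."""
    cos_theta = (np.trace(R1.T @ R2) - 1.0) / 2.0
    # Clip guards against values slightly outside [-1, 1] from round-off
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# A 90-degree rotation about z is pi/2 away from the identity
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
theta = geodesic_distance(np.eye(3), Rz)
```

Unlike a mean-square error on matrix entries or Euler angles, this metric is the true distance on the rotation group, which is what makes it a natural training loss for pose regression.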