1
Zhang H, Li H, Ali R, Jia W, Pan W, Reeder SB, Harris D, Masch W, Aslam A, Shanbhogue K, Parikh NA, Dillman JR, He L. Development and Validation of a Modality-Invariant 3D Swin U-Net Transformer for Liver and Spleen Segmentation on Multi-Site Clinical Bi-parametric MR Images. J Imaging Inform Med 2024. [PMID: 39707114] [DOI: 10.1007/s10278-024-01362-w]
Abstract
To develop and validate a modality-invariant Swin U-Net Transformer (UNETR) deep learning model for liver and spleen segmentation on abdominal T1-weighted (T1w) and T2-weighted (T2w) MR images from multiple institutions for pediatric and adult patients with known or suspected chronic liver diseases. In this IRB-approved retrospective study, clinical abdominal axial T1w and T2w MR images from pediatric and adult patients were retrieved from four study sites: Cincinnati Children's Hospital Medical Center (CCHMC), New York University (NYU), the University of Wisconsin (UW), and the University of Michigan/Michigan Medicine (UM). The whole liver and spleen were manually delineated as ground truth masks. We developed a 3D Swin UNETR trained with a modality-invariant strategy, in which each patient's T1w and T2w MR images were treated as separate training samples, and conducted both internal and external validation experiments. A total of 241 T1w and 339 T2w MR sequences from 304 patients (age [mean ± standard deviation], 31.8 ± 20.3 years; 132 [43%] female) were included for model development. For liver segmentation, the Swin UNETR achieved a Dice similarity coefficient (DSC) of 0.95 ± 0.02 on T1w images and 0.93 ± 0.05 on T2w images, significantly outperforming a modality-invariant U-Net model (0.90 ± 0.05 and 0.90 ± 0.13, respectively; both p < 0.001). For spleen segmentation, the Swin UNETR achieved a DSC of 0.88 ± 0.12 on T1w images and 0.93 ± 0.10 on T2w images, again significantly outperforming the modality-invariant U-Net model (0.80 ± 0.18, p = 0.001, and 0.88 ± 0.12, p = 0.002, respectively). Our study demonstrates that a modality-invariant Swin UNETR model can segment the liver and spleen on routinely collected clinical bi-parametric abdominal MR images from pediatric and adult patients.
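The Dice similarity coefficient (DSC) reported throughout this entry is twice the voxel overlap between the predicted and ground-truth masks, divided by the total voxel count of both masks. A minimal NumPy sketch (the toy masks and function name are illustrative, not the paper's code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 3D example: two partially overlapping cubes
pred = np.zeros((10, 10, 10)); pred[2:8, 2:8, 2:8] = 1
truth = np.zeros((10, 10, 10)); truth[3:9, 3:9, 3:9] = 1
print(round(dice_coefficient(pred, truth), 3))  # → 0.579
```

The empty-mask convention (returning 1.0) is one common choice; papers differ on how that edge case is scored.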
Affiliation(s)
- Huixian Zhang
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Hailong Li
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Redha Ali
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Wei Jia
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Biostatistics, University of Cincinnati, Cincinnati, OH, USA
- Wen Pan
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Scott B Reeder
- Department of Radiology, Medical Physics, Biomedical Engineering, Medicine, Emergency Medicine, University of Wisconsin, Madison, WI, USA
- David Harris
- Department of Radiology, Medical Physics, Biomedical Engineering, Medicine, Emergency Medicine, University of Wisconsin, Madison, WI, USA
- William Masch
- Department of Radiology, Michigan Medicine, Ann Arbor, MI, USA
- Anum Aslam
- Department of Radiology, Michigan Medicine, Ann Arbor, MI, USA
- Nehal A Parikh
- Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jonathan R Dillman
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Lili He
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH, USA
- Department of Biomedical Informatics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
2
Zarenia M, Zhang Y, Sarosiek C, Conlin R, Amjad A, Paulson E. Deep learning-based automatic contour quality assurance for auto-segmented abdominal MR-Linac contours. Phys Med Biol 2024;69. [PMID: 39413822] [PMCID: PMC11551967] [DOI: 10.1088/1361-6560/ad87a6]
Abstract
Objective. Deep-learning auto-segmentation (DLAS) aims to streamline contouring in clinical settings. Nevertheless, achieving clinical acceptance of DLAS remains a hurdle in abdominal MRI, hindering the implementation of efficient clinical workflows for MR-guided online adaptive radiotherapy (MRgOART). Integrating automated contour quality assurance (ACQA) with automatic contour correction (ACC) techniques could optimize the performance of ACC by concentrating on inaccurate contours. Furthermore, ACQA can facilitate contour selection among various DLAS tools and/or deformable contour propagation from a prior treatment session. Here, we present the performance of novel DL-based 3D ACQA models for evaluating DLAS contours acquired during MRgOART. Approach. The ACQA model, based on a 3D convolutional neural network (CNN), was trained using pancreas and duodenum contours obtained from a research DLAS tool on abdominal MRIs acquired on a 1.5 T MR-Linac. The training dataset contained abdominal MR images, DL contours, and their corresponding quality ratings from 103 datasets. The quality of DLAS contours was determined using an in-house contour classification tool, which categorizes contours as acceptable or edit-required based on the expected editing effort. The performance of the 3D ACQA model was evaluated using an independent dataset of 34 abdominal MRIs, utilizing confusion matrices for true and predicted classes. Main results. The ACQA predicted 'acceptable' and 'edit-required' contours at 72.2% (91/126) and 83.6% (726/868) accuracy for pancreas, and at 71.2% (79/111) and 89.6% (772/862) for duodenum contours, respectively. The model identified false positive (extra) and false negative (missing) DLAS contours at 93.75% (15/16) and 99.7% (438/439) accuracy for pancreas, and at 95% (57/60) and 98.9% (91/99) for duodenum, respectively. Significance. We developed 3D ACQA models capable of quickly evaluating the quality of DLAS pancreas and duodenum contours on abdominal MRI. These models can be integrated into the clinical workflow, facilitating an efficient and consistent contour evaluation process in MRgOART for abdominal malignancies.
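The per-class accuracies above follow directly from the confusion-matrix counts reported in the abstract: correct predictions divided by total cases in that true class. A quick sketch, with a helper name of our own, reproducing the pancreas numbers:

```python
def per_class_accuracy(correct: int, total: int) -> float:
    """Fraction of cases in a true class that the model labeled correctly."""
    return correct / total

# Pancreas contours, using the confusion-matrix counts from the abstract
acceptable = per_class_accuracy(91, 126)      # 'acceptable' true class
edit_required = per_class_accuracy(726, 868)  # 'edit-required' true class
print(f"{acceptable:.1%}, {edit_required:.1%}")  # → 72.2%, 83.6%
```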
Affiliation(s)
- Mohammad Zarenia
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI
- Department of Radiation Medicine, MedStar Georgetown University Hospital, Washington, DC
- Ying Zhang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX
- Christina Sarosiek
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI
- Renae Conlin
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI
- Asma Amjad
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI
- Eric Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI
3
El Homsi M, Bane O, Fauveau V, Hectors S, Vietti Violi N, Sylla P, Ko HB, Cuevas J, Carbonell G, Nehlsen A, Vanguri R, Viswanath S, Jambawalikar S, Shaish H, Taouli B. Prediction of locally advanced rectal cancer response to neoadjuvant chemoradiation therapy using volumetric multiparametric MRI-based radiomics. Abdom Radiol (NY) 2024;49:791-800. [PMID: 38150143] [DOI: 10.1007/s00261-023-04128-0]
Abstract
PURPOSE To assess the role of pretreatment multiparametric (mp)MRI-based radiomic features in predicting pathologic complete response (pCR) of locally advanced rectal cancer (LARC) to neoadjuvant chemoradiation therapy (nCRT). METHODS This retrospective dual-center study included 98 patients (M/F 77/21, mean age 60 years) with LARC who underwent pretreatment mpMRI followed by nCRT and either total mesorectal excision or watch-and-wait management. Fifty-eight patients from institution 1 constituted the training set and 40 from institution 2 the validation set. Manual segmentation using volumes of interest was performed on pre-/post-contrast T1WI, T2WI, and diffusion-weighted imaging (DWI) sequences. Demographic information and serum carcinoembryonic antigen (CEA) levels were collected. Shape and first- and second-order radiomic features were extracted and entered into models based on principal component analysis to predict pCR. The best model was selected using k-fold cross-validation on the training set, and AUC, sensitivity, and specificity for prediction of pCR were calculated on the validation set. RESULTS Stage distribution was T3 (n = 79) or T4 (n = 19). Overall, 16 (16.3%) patients achieved pCR. Demographics, MRI TNM stage, and CEA were not predictive of pCR (p range 0.59-0.96), while several radiomic models achieved high diagnostic performance in the validation set, with AUCs ranging from 0.7 to 0.9; the best model, based on high b-value DWI, demonstrated an AUC of 0.9 [95% confidence interval: 0.67, 1], sensitivity of 100% [100%, 100%], and specificity of 81% [66%, 96%]. CONCLUSION Radiomic models obtained from pretreatment MRI show good to excellent performance for the prediction of pCR in patients with LARC, superior to clinical parameters and CEA. A larger study is needed to confirm these results.
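The modeling pipeline described in METHODS (radiomic features reduced by principal component analysis, model selection by k-fold cross-validation, evaluation by AUC) can be sketched with scikit-learn. The synthetic feature matrix, component count, and logistic-regression classifier below are our illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(58, 100))    # 58 training patients x 100 radiomic features (synthetic)
y = rng.integers(0, 2, size=58)   # pCR label (synthetic)

# PCA compresses the high-dimensional radiomic feature space before classification
model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),
                      LogisticRegression(max_iter=1000))

# Stratified 5-fold cross-validated AUC on the training set
auc_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(auc_scores.mean())  # near 0.5 on random labels, as expected
```

With real radiomic features and labels, the fold-averaged AUC would guide model selection before a final evaluation on the held-out validation site.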
Affiliation(s)
- Maria El Homsi
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY, USA
- Octavia Bane
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Valentin Fauveau
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Stefanie Hectors
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Naik Vietti Violi
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Radiology, Lausanne University Hospital, Lausanne, Switzerland
- Patricia Sylla
- Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Huai-Bin Ko
- Department of Pathology, Molecular and Cell Based Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Pathology, Columbia University Medical Center, New York, NY, USA
- Jordan Cuevas
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Guillermo Carbonell
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Radiology, Virgen de la Arrixaca University Clinical Hospital, University of Murcia, Murcia, Spain
- Anthony Nehlsen
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Rami Vanguri
- Department of Epidemiology & Biostatistics, Columbia University Medical Center, New York, NY, USA
- Satish Viswanath
- Department of Radiology, Case Western University, Cleveland, OH, USA
- Sachin Jambawalikar
- Department of Radiology, Columbia University Medical Center, New York, NY, USA
- Hiram Shaish
- Department of Radiology, Columbia University Medical Center, New York, NY, USA
- Bachir Taouli
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
4
Azuri I, Wattad A, Peri-Hanania K, Kashti T, Rosen R, Caspi Y, Istaiti M, Wattad M, Applbaum Y, Zimran A, Revel-Vilk S, Eldar YC. A Deep-Learning Approach to Spleen Volume Estimation in Patients with Gaucher Disease. J Clin Med 2023;12:5361. [PMID: 37629403] [PMCID: PMC10455264] [DOI: 10.3390/jcm12165361]
Abstract
Enlargement of the liver and spleen (hepatosplenomegaly) is a common manifestation of Gaucher disease (GD). Accurate estimation of liver and spleen volumes in patients with GD, using imaging tools such as magnetic resonance imaging (MRI), is crucial for baseline assessment and for monitoring the response to treatment. In clinical practice, spleen volume is commonly estimated with a formula based on the craniocaudal length, diameter, and thickness of the spleen measured on MRI. However, this formula is significantly inaccurate, which emphasizes the need for a more precise and reliable alternative. To this end, we employed deep-learning techniques to achieve more accurate spleen segmentation and, subsequently, calculated the resulting spleen volume on a testing cohort of 20 patients with GD. The mean error obtained using the deep-learning approach to spleen volume estimation was 3.6 ± 2.7%, significantly lower than the 13.9 ± 9.6% mean error of the common formula approach. These findings suggest that integrating deep-learning methods into routine clinical practice for spleen volume calculation could improve diagnostic and monitoring outcomes.
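The percent volume errors compared above reduce to a simple relative-error computation. The sketch below pairs it with the prolate-ellipsoid approximation (V ≈ 0.524 × length × width × thickness), one widely used linear-measurement formula; the abstract does not specify which formula the authors evaluated, so both helpers and the measurements are illustrative:

```python
def percent_volume_error(estimated_ml: float, reference_ml: float) -> float:
    """Absolute volume error as a percentage of the reference (e.g., segmentation-derived) volume."""
    return abs(estimated_ml - reference_ml) / reference_ml * 100.0

def ellipsoid_spleen_volume(length_cm: float, width_cm: float, thickness_cm: float) -> float:
    """Prolate-ellipsoid approximation from linear spleen measurements (illustrative formula)."""
    return 0.524 * length_cm * width_cm * thickness_cm

# Hypothetical case: 13 x 7 x 5 cm spleen, 260 mL reference volume
estimate = ellipsoid_spleen_volume(13.0, 7.0, 5.0)  # ≈ 238.4 mL
print(round(percent_volume_error(estimate, 260.0), 1))  # → 8.3
```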
Affiliation(s)
- Ido Azuri
- Bioinformatics Unit, Department of Life Sciences Core Facilities, Weizmann Institute of Science, Rehovot 7610001, Israel
- Ameer Wattad
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Keren Peri-Hanania
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Tamar Kashti
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Ronnie Rosen
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Yaron Caspi
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Majdolen Istaiti
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Makram Wattad
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Yaakov Applbaum
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Ari Zimran
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Shoshana Revel-Vilk
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
5
Amjad A, Xu J, Thill D, Zhang Y, Ding J, Paulson E, Hall W, Erickson BA, Li XA. Deep learning auto-segmentation on multi-sequence magnetic resonance images for upper abdominal organs. Front Oncol 2023;13:1209558. [PMID: 37483486] [PMCID: PMC10358771] [DOI: 10.3389/fonc.2023.1209558]
Abstract
Introduction Multi-sequence, multi-parameter MRIs are often used to define targets and/or organs at risk (OAR) in radiation therapy (RT) planning. Deep learning has so far focused on developing auto-segmentation models based on a single MRI sequence. The purpose of this work is to develop a multi-sequence deep learning-based auto-segmentation (mS-DLAS) model based on multi-sequence abdominal MRIs. Materials and methods Using a previously developed 3DResUnet network, an mS-DLAS model was trained and tested on four T1- and T2-weighted MRI sequences acquired during routine RT simulation for 71 cases with abdominal tumors. Strategies including data pre-processing, a Z-normalization approach, and data augmentation were employed. Two additional sequence-specific T1-weighted (T1-M) and T2-weighted (T2-M) models were trained to evaluate the performance of sequence-specific DLAS. The performance of all models was quantitatively evaluated using six surface and volumetric accuracy metrics. Results The developed DLAS models generated reasonable contours of 12 upper-abdominal organs within 21 seconds per testing case. For the mS-DLAS model, the 3D average values over all organs were 0.87 for Dice similarity coefficient (DSC), 1.79 mm for mean distance to agreement (MDA), 7.43 mm for 95th-percentile Hausdorff distance (HD95%), -8.95 for percent volume difference (PVD), 0.82 for surface DSC (sDSC), and 12.25 mm/cc for relative added path length (rAPL). Collectively, 71% of the contours auto-segmented by the three models were of relatively high quality. Additionally, the obtained mS-DLAS successfully segmented 9 out of 16 MRI sequences that were not used in model training. Conclusion We have developed an MRI-based mS-DLAS model for auto-segmentation of upper abdominal organs. Multi-sequence segmentation is desirable in routine clinical RT practice for accurate organ and target delineation, particularly for abdominal tumors. Our work is a stepping stone toward fast and accurate segmentation on multi-contrast MRI and paves the way for MR-only guided radiation therapy.
Affiliation(s)
- Asma Amjad
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Dan Thill
- Elekta Inc., St. Charles, MO, United States
- Ying Zhang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Jie Ding
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Eric Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- William Hall
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Beth A. Erickson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- X. Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
6
Berbís MA, Paulano Godino F, Royuela del Val J, Alcalá Mata L, Luna A. Clinical impact of artificial intelligence-based solutions on imaging of the pancreas and liver. World J Gastroenterol 2023;29:1427-1445. [PMID: 36998424] [PMCID: PMC10044858] [DOI: 10.3748/wjg.v29.i9.1427]
Abstract
Artificial intelligence (AI) has made substantial progress over the last ten years in many fields of application, including healthcare. In hepatology and pancreatology, most attention to date has been paid to its application to the assisted or even automated interpretation of radiological images, where AI can generate accurate and reproducible imaging diagnoses, reducing physicians' workload. AI can provide automatic or semi-automatic segmentation and registration of the liver and pancreas and their lesions. Furthermore, using radiomics, AI can add quantitative information not visible to the human eye to radiological reports. AI has been applied to the detection and characterization of focal lesions and diffuse diseases of the liver and pancreas, such as neoplasms, chronic hepatic disease, and acute or chronic pancreatitis, among others. These solutions have been applied to the imaging techniques commonly used to diagnose liver and pancreatic diseases, including ultrasound, endoscopic ultrasonography, computed tomography (CT), magnetic resonance imaging, and positron emission tomography/CT. However, AI is also applied in this context to many other relevant steps of a comprehensive clinical scenario for managing a gastroenterological patient. AI can also be used to choose the most appropriate test prescription, to improve image quality or accelerate acquisition, and to predict patient prognosis and treatment response. In this review, we summarize the current evidence on the application of AI to hepatic and pancreatic radiology, covering not only image interpretation but all steps of the radiological workflow in a broader sense. Lastly, we discuss the challenges and future directions of the clinical application of AI methods.
Affiliation(s)
- M Alvaro Berbís
- Department of Radiology, HT Médica, San Juan de Dios Hospital, Córdoba 14960, Spain
- Faculty of Medicine, Autonomous University of Madrid, Madrid 28049, Spain
- Lidia Alcalá Mata
- Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
- Antonio Luna
- Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
7
Huang SY, Hsu WL, Hsu RJ, Liu DW. Fully Convolutional Network for the Semantic Segmentation of Medical Images: A Survey. Diagnostics (Basel) 2022;12:2765. [PMID: 36428824] [PMCID: PMC9689961] [DOI: 10.3390/diagnostics12112765]
Abstract
There have been major developments in deep learning for computer vision since the 2010s, and deep learning has contributed substantially to medical image processing, in which semantic segmentation is a salient technique. This study retrospectively reviews recent studies on the application of deep learning to segmentation tasks in medical imaging and proposes potential directions for future development, including model development, data augmentation processing, and dataset creation. The strengths and deficiencies of studies on models and data augmentation, as well as their application to medical image segmentation, were analyzed. Fully convolutional network developments have led to the creation of the U-Net and its derivatives; another noteworthy image segmentation model is DeepLab. Regarding data augmentation, because medical image data volumes are low, most studies focus on means to increase the wealth of medical image data; generative adversarial networks (GANs) increase data volume via deep learning. Despite the increasing variety of medical image datasets, there is still a deficiency of datasets on specific problems, which should be addressed moving forward. Given the wealth of ongoing research on applying deep learning to medical image segmentation, the data volume and practical clinical application problems must be addressed to ensure that the results are properly applied.
Affiliation(s)
- Sheng-Yao Huang
- Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan
- Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- Wen-Lin Hsu
- Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
- Ren-Jun Hsu
- Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan
- Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
- Correspondence: (R.-J.H.); (D.-W.L.); Tel. & Fax: +886-3-8561825 (R.-J.H. & D.-W.L.)
- Dai-Wei Liu
- Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan
- Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
- Correspondence: (R.-J.H.); (D.-W.L.); Tel. & Fax: +886-3-8561825 (R.-J.H. & D.-W.L.)
8
Yin H, Zhang F, Yang X, Meng X, Miao Y, Noor Hussain MS, Yang L, Li Z. Research trends of artificial intelligence in pancreatic cancer: a bibliometric analysis. Front Oncol 2022;12:973999. [PMID: 35982967] [PMCID: PMC9380440] [DOI: 10.3389/fonc.2022.973999]
Abstract
Purpose We evaluated research on artificial intelligence (AI) in pancreatic cancer (PC) through bibliometric analysis and explored the research hotspots and current status from 1997 to 2021. Methods Publications related to AI in PC were retrieved from the Web of Science Core Collection (WoSCC) for 1997-2021. The Bibliometrix package of R software 4.0.3 and VOSviewer were used for bibliometric analysis. Results A total of 587 publications in this field were retrieved from the WoSCC database. After 2018, the number of publications grew rapidly. The United States and Johns Hopkins University were the most influential country and institution, respectively. A total of 2805 keywords were investigated, 81 of which appeared more than 10 times. Co-occurrence analysis categorized these keywords into five clusters: (1) AI in the biology of PC, (2) AI in the pathology and radiology of PC, (3) AI in the therapy of PC, (4) AI in the risk assessment of PC, and (5) AI in endoscopic ultrasonography (EUS) of PC. Trend topics and thematic maps show that the keywords "diagnosis", "survival", "classification", and "management" are the research hotspots in this field. Conclusion Research related to AI in pancreatic cancer is still in its initial stage. Currently, AI is widely studied in the biology, diagnosis, treatment, risk assessment, and EUS of pancreatic cancer. This bibliometric study provides insight into AI in PC research and helps researchers identify new research orientations.
Affiliation(s)
- Hua Yin
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Postgraduate Training Base in Shanghai Gongli Hospital, Ningxia Medical University, Shanghai, China
- Feixiong Zhang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Xiaoli Yang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Xiangkun Meng
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Yu Miao
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Li Yang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- *Correspondence: Zhaoshen Li, ; Li Yang,
- Zhaoshen Li
- Postgraduate Training Base in Shanghai Gongli Hospital, Ningxia Medical University, Shanghai, China
- Clinical Medical College, Ningxia Medical University, Yinchuan, China
- *Correspondence: Zhaoshen Li, ; Li Yang,
9
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157]
10
Roger R, Hilmes MA, Williams JM, Moore DJ, Powers AC, Craddock RC, Virostko J. Deep learning-based pancreas volume assessment in individuals with type 1 diabetes. BMC Med Imaging 2022;22:5. [PMID: 34986790] [PMCID: PMC8734282] [DOI: 10.1186/s12880-021-00729-7]
Abstract
Pancreas volume is reduced in individuals with diabetes and in autoantibody-positive individuals at high risk for developing type 1 diabetes (T1D). Studies are underway to assess pancreas volume in large clinical databases and cohorts, but manual pancreas annotation is time-consuming and subjective, preventing extension to large studies and databases. This study develops deep learning for automated pancreas volume measurement in individuals with diabetes. A convolutional neural network was trained using manual pancreas annotations on 160 abdominal magnetic resonance imaging (MRI) scans from individuals with T1D, controls, or a combination thereof. Models trained using each cohort were then tested on scans of 25 individuals with T1D. Deep learning and manual segmentations of the pancreas displayed high overlap (Dice coefficient = 0.81) and excellent correlation of pancreas volume measurements (R2 = 0.94). Correlation was highest when the training data included individuals both with and without T1D. The pancreas of individuals with T1D can thus be automatically segmented to measure pancreas volume, and this algorithm can be applied to large imaging datasets to quantify the spectrum of human pancreas volume.
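The R2 = 0.94 volume agreement quoted above is, under one common definition, the coefficient of determination between paired manual and automated measurements. A quick NumPy sketch on hypothetical paired volumes (the numbers below are ours, not the study's):

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination between paired measurements."""
    ss_res = np.sum((y_true - y_pred) ** 2)             # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)      # total sum of squares
    return 1.0 - ss_res / ss_tot

manual = np.array([45.0, 60.0, 72.0, 38.0, 55.0])  # manual pancreas volumes (mL, hypothetical)
auto = np.array([47.0, 58.0, 70.0, 40.0, 54.0])    # deep-learning volumes (mL, hypothetical)
print(round(r_squared(manual, auto), 3))  # → 0.976
```

Note that some papers instead report the squared Pearson correlation of a regression fit; the two coincide for an ordinary least-squares fit of predicted on true values.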
Affiliation(s)
- Raphael Roger
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA
- Melissa A Hilmes
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Jonathan M Williams
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Daniel J Moore
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pathology, Immunology, and Microbiology, Vanderbilt University, Nashville, TN, USA
- Alvin C Powers
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Molecular Physiology and Biophysics, Vanderbilt University, Nashville, TN, USA
- VA Tennessee Valley Healthcare System, Nashville, TN, USA
- R Cameron Craddock
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA
- John Virostko
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA
- Livestrong Cancer Institutes, University of Texas at Austin, Austin, TX, USA
- Department of Oncology, University of Texas at Austin, Austin, TX, USA
- Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX, USA
11
Meddeb A, Kossen T, Bressem KK, Hamm B, Nagel SN. Evaluation of a Deep Learning Algorithm for Automated Spleen Segmentation in Patients with Conditions Directly or Indirectly Affecting the Spleen. Tomography 2021; 7:950-960. [PMID: 34941650 PMCID: PMC8704906 DOI: 10.3390/tomography7040078]
Abstract
The aim of this study was to develop a deep learning-based algorithm for fully automated spleen segmentation using CT images and to evaluate its performance in conditions directly or indirectly affecting the spleen (e.g., splenomegaly, ascites). For this, a 3D U-Net was trained on an in-house dataset (n = 61) including diseases with and without splenic involvement (in-house U-Net), and an open-source dataset from the Medical Segmentation Decathlon (open dataset, n = 61) without splenic abnormalities (open U-Net). Both datasets were split into a training (n = 32; 52%), a validation (n = 9; 15%), and a testing dataset (n = 20; 33%). The segmentation performances of the two models were measured using four established metrics, including the Dice Similarity Coefficient (DSC). On the open test dataset, the in-house and open U-Net achieved a mean DSC of 0.906 and 0.897, respectively (p = 0.526). On the in-house test dataset, the in-house U-Net achieved a mean DSC of 0.941, whereas the open U-Net obtained a mean DSC of 0.648 (p < 0.001), showing very poor segmentation results in patients with abnormalities in or surrounding the spleen. Thus, for reliable, fully automated spleen segmentation in clinical routine, the training dataset of a deep learning-based algorithm should include conditions that directly or indirectly affect the spleen.
Affiliation(s)
- Aymen Meddeb
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Klinik für Radiologie, Hindenburgdamm 30, 12203 Berlin, Germany
- Correspondence: Tel.: +49-30-450-527792
- Tabea Kossen
- CLAIM—Charité Lab for AI in Medicine, Charité—Universitätsmedizin Berlin, Augustenburger Platz 1, 13353 Berlin, Germany
- Keno K. Bressem
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Klinik für Radiologie, Hindenburgdamm 30, 12203 Berlin, Germany
- Berlin Institute of Health, Charité—Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany
- Bernd Hamm
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Klinik für Radiologie, Hindenburgdamm 30, 12203 Berlin, Germany
- Sebastian N. Nagel
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Klinik für Radiologie, Hindenburgdamm 30, 12203 Berlin, Germany
12
Feng DY, Ren Y, Zhou M, Zou XL, Wu WB, Yang HL, Zhou YQ, Zhang TT. Deep Learning-Based Available and Common Clinical-Related Feature Variables Robustly Predict Survival in Community-Acquired Pneumonia. Risk Manag Healthc Policy 2021; 14:3701-3709. [PMID: 34512057 PMCID: PMC8427836 DOI: 10.2147/rmhp.s317735]
Abstract
Background Community-acquired pneumonia (CAP) is a leading cause of morbidity and mortality worldwide. Although there are many predictors of death for CAP, they still have limitations. This study aimed to build a simple and accurate model based on available and common clinical-related feature variables for predicting CAP mortality by adopting machine learning techniques. Methods This was a single-center retrospective study. The data used in this study were collected from all patients (≥18 years) with CAP admitted to the research hospitals between January 2012 and April 2020. Each patient had 62 clinical-related features, including clinical diagnostic and treatment features. Patients were divided into two endpoint groups, and using TensorFlow 2.4.1 as the modeling framework, a three-layer fully connected neural network (FCNN) was built as a base model for classification. For a comprehensive comparison, seven classical machine learning methods and their integrated stacking patterns were introduced to model and compare the same training and test data. Results A total of 3997 patients with CAP were included; 205 (5.12%) died in the hospital. After applying deep learning methods, this study established an ensemble FCNN model based on 12 FCNNs. Compared with seven classical machine learning methods, the area under the curve of the ensemble FCNN was 0.975 when classifying poor from good prognosis based on available and common clinical-related feature variables. The predicted outcome was poor prognosis if the model's poor-prognosis score was greater than the cutoff value of 0.50. To confirm the soundness of the ensemble FCNN model, this study analyzed the weights of random forest features and found that mainstream prognostic features still held weight, although the model improved after integrating other factors considered less important by previous studies.
Conclusion This study used deep learning algorithms to classify prognosis based on available and common clinical-related feature variables in patients with CAP with high accuracy and good generalizability. Every clinical-related feature is important to the model.
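The abstract does not specify how the 12 FCNNs are combined into an ensemble; a common approach, shown here purely as an assumed sketch (the function name and scores are hypothetical), is to average the member probabilities and apply the stated 0.50 cutoff:

```python
import numpy as np

def ensemble_predict(member_scores: np.ndarray, cutoff: float = 0.50) -> np.ndarray:
    """Average the poor-prognosis scores of the ensemble members and apply
    the decision cutoff. member_scores: (n_models, n_patients) array of
    per-model probabilities. Returns 1 (poor prognosis) or 0 (good)."""
    mean_score = member_scores.mean(axis=0)
    return (mean_score > cutoff).astype(int)

# Hypothetical scores from 3 of the 12 networks for 4 patients
scores = np.array([
    [0.9, 0.2, 0.6, 0.4],
    [0.8, 0.1, 0.4, 0.6],
    [0.7, 0.3, 0.2, 0.2],
])
print(ensemble_predict(scores))  # [1 0 0 0]
```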
Affiliation(s)
- Ding-Yun Feng
- Department of Pulmonary and Critical Care Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Institute of Respiratory Diseases of Sun Yat-sen University, Guangzhou, People's Republic of China
- Yong Ren
- Guangdong Provincial Key Laboratory of Digestive Cancer Research, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, People's Republic of China
- Mi Zhou
- Department of Surgery Intensive Care Unit, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, People's Republic of China
- Xiao-Ling Zou
- Department of Pulmonary and Critical Care Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Institute of Respiratory Diseases of Sun Yat-sen University, Guangzhou, People's Republic of China
- Wen-Bin Wu
- Department of Pulmonary and Critical Care Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Institute of Respiratory Diseases of Sun Yat-sen University, Guangzhou, People's Republic of China
- Hai-Ling Yang
- Department of Pulmonary and Critical Care Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Institute of Respiratory Diseases of Sun Yat-sen University, Guangzhou, People's Republic of China
- Yu-Qi Zhou
- Department of Pulmonary and Critical Care Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Institute of Respiratory Diseases of Sun Yat-sen University, Guangzhou, People's Republic of China
- Tian-Tuo Zhang
- Department of Pulmonary and Critical Care Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Institute of Respiratory Diseases of Sun Yat-sen University, Guangzhou, People's Republic of China
13
Hamwood J, Schmutz B, Collins MJ, Allenby MC, Alonso-Caneiro D. A deep learning method for automatic segmentation of the bony orbit in MRI and CT images. Sci Rep 2021; 11:13693. [PMID: 34211081 PMCID: PMC8249400 DOI: 10.1038/s41598-021-93227-3]
Abstract
This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
Affiliation(s)
- Jared Hamwood
- Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, QLD, 4059, Australia
- Beat Schmutz
- Centre in Regenerative Medicine, Institute of Health and Biomedical Innovation, Queensland University of Technology, Kelvin Grove, QLD, 4059, Australia
- Metro North Hospital and Health Service, Jamieson Trauma Institute, Herston, QLD, 4029, Australia
- Michael J Collins
- Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, QLD, 4059, Australia
- Mark C Allenby
- Biofabrication and Tissue Morphology Laboratory, Centre for Biomedical Technologies, School of Mechanical Medical and Process Engineering, Queensland University of Technology (QUT), Herston, QLD, 4000, Australia
- David Alonso-Caneiro
- Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, QLD, 4059, Australia
14
Barat M, Chassagnon G, Dohan A, Gaujoux S, Coriat R, Hoeffel C, Cassinotto C, Soyer P. Artificial intelligence: a critical review of current applications in pancreatic imaging. Jpn J Radiol 2021; 39:514-523. [PMID: 33550513 DOI: 10.1007/s11604-021-01098-5]
Abstract
The applications of artificial intelligence (AI), including machine learning and deep learning, in the field of pancreatic disease imaging are rapidly expanding. AI can be used for the detection of pancreatic ductal adenocarcinoma and other pancreatic tumors, but also for pancreatic lesion characterization. In this review, the basics of radiomics, recent developments, and current results of AI in the field of pancreatic tumors are presented. Limitations and future perspectives of AI are discussed.
Affiliation(s)
- Maxime Barat
- Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Guillaume Chassagnon
- Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Anthony Dohan
- Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Sébastien Gaujoux
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Department of Abdominal Surgery, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Romain Coriat
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Department of Gastroenterology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Christine Hoeffel
- Department of Radiology, Robert Debré Hospital, 51092, Reims, France
- Christophe Cassinotto
- Department of Radiology, CHU Montpellier, University of Montpellier, Saint-Éloi Hospital, 34000, Montpellier, France
- Philippe Soyer
- Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France
- Université de Paris, Descartes-Paris 5, 75006, Paris, France
15
Humpire-Mamani GE, Bukala J, Scholten ET, Prokop M, van Ginneken B, Jacobs C. Fully Automatic Volume Measurement of the Spleen at CT Using Deep Learning. Radiol Artif Intell 2021; 2:e190102. [PMID: 33937830 DOI: 10.1148/ryai.2020190102]
Abstract
Purpose To develop a fully automated algorithm for spleen segmentation and to assess the performance of this algorithm in a large dataset. Materials and Methods In this retrospective study, a three-dimensional deep learning network was developed to segment the spleen on thorax-abdomen CT scans. Scans were extracted from patients undergoing oncologic treatment from 2014 to 2017. A total of 1100 scans from 1100 patients were used in this study, and 400 were selected for development of the algorithm. For testing, a dataset of 50 scans was annotated to assess the segmentation accuracy and was compared against the splenic index equation. In a qualitative observer experiment, an enriched set of 100 scan-pairs was used to evaluate whether the algorithm could aid a radiologist in assessing splenic volume change. The reference standard was set by the consensus of two other independent radiologists. A Mann-Whitney U test was conducted to test whether there was a performance difference between the algorithm and the independent observer. Results The algorithm and the independent observer obtained comparable Dice scores on the test set of 50 scans (0.962 and 0.964, respectively; P = .834). The radiologist had an agreement with the reference standard in 81% (81 of 100) of the cases after a visual classification of volume change, which increased to 92% (92 of 100) when aided by the algorithm. Conclusion A segmentation method based on deep learning can accurately segment the spleen on CT scans and may help radiologists to detect abnormal splenic volumes and splenic volume changes. Supplemental material is available for this article. © RSNA, 2020.
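Converting a predicted spleen mask into a volume is a direct voxel count scaled by the voxel size; a minimal sketch follows (the mask geometry and spacing are hypothetical, not taken from the study):

```python
import numpy as np

def organ_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres:
    voxel count × voxel volume (mm³), divided by 1000 (1 mL = 1000 mm³)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Hypothetical spleen mask: 200,000 voxels at 0.8 × 0.8 × 2.0 mm spacing
mask = np.zeros((512, 512, 120), dtype=np.uint8)
mask[100:150, 100:180, 30:80] = 1  # 50 * 80 * 50 = 200,000 voxels
print(organ_volume_ml(mask, (0.8, 0.8, 2.0)))  # ≈ 256 mL
```

Splenic volume change between two scans then reduces to comparing two such numbers.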
Affiliation(s)
- Gabriel E Humpire-Mamani
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands
- Joris Bukala
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands
- Ernst T Scholten
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands
- Mathias Prokop
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands
- Fraunhofer MEVIS, Bremen, Germany
- Colin Jacobs
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands
16
Mendoza Ladd A, Diehl DL. Artificial intelligence for early detection of pancreatic adenocarcinoma: The future is promising. World J Gastroenterol 2021; 27:1283-1295. [PMID: 33833482 PMCID: PMC8015296 DOI: 10.3748/wjg.v27.i13.1283]
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is a worldwide public health concern. Despite extensive research efforts toward improving diagnosis and treatment, the 5-year survival rate at best is approximately 15%. This dismal figure can be attributed to a variety of factors including lack of adequate screening methods, late symptom onset, and treatment resistance. Pancreatic ductal adenocarcinoma remains a grim diagnosis with a high mortality rate and a significant psychological burden for patients and their families. In recent years artificial intelligence (AI) has permeated the medical field at an accelerated pace, bringing potential new tools that carry the promise of improving diagnosis and treatment of a variety of diseases. In this review we will summarize the landscape of AI in diagnosis and treatment of PDAC.
Affiliation(s)
- Antonio Mendoza Ladd
- Department of Internal Medicine, Division of Gastroenterology, Texas Tech University Health Sciences Center El Paso, El Paso, TX 79905, United States
- David L Diehl
- Department of Gastroenterology and Nutrition, Geisinger Medical Center, Danville, PA 17822, United States
17
Furtado P. Testing Segmentation Popular Loss and Variations in Three Multiclass Medical Imaging Problems. J Imaging 2021; 7:16. [PMID: 34460615 PMCID: PMC8321275 DOI: 10.3390/jimaging7020016]
Abstract
Image structures are segmented automatically using deep learning (DL) for analysis and processing. The three most popular base loss functions are cross entropy (crossE), intersect-over-the-union (IoU), and dice. Which should be used? Is it useful to consider simple variations, such as modifying formula coefficients? How do the characteristics of different image structures influence scores? Taking three different medical image segmentation problems (segmentation of organs in magnetic resonance images (MRI), the liver in computed tomography images (CT), and diabetic retinopathy lesions in eye fundus images (EFI)), we quantify loss functions and variations, as well as segmentation scores of different targets. We first describe the limitations of metrics, since loss is a metric, then we describe and test alternatives. Experimentally, we observed that DeeplabV3 outperforms UNet and fully convolutional network (FCN) in all datasets. Dice scored 1 to 6 percentage points (pp) higher than cross entropy over all datasets, and IoU improved 0 to 3 pp. Varying formula coefficients improved scores, but the best choices depend on the dataset: compared to crossE, different false positive vs. false negative weights improved MRI by 12 pp, and assigning zero weight to the background improved EFI by 6 pp. Multiclass segmentation scored higher than n-uniclass segmentation in MRI by 8 pp. EFI lesions score low compared to more constant structures (e.g., the optic disk or even organs), but loss modifications improve those scores significantly, by 6 to 9 pp. Our conclusions are that dice is best, and it is worth assigning zero weight to the background class and testing different weights on false positives and false negatives.
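The base losses and coefficient variations discussed in this abstract can be sketched in a few lines of NumPy; these are generic textbook formulations (zeroing the background weight illustrates one of the tested variations), not the authors' implementation:

```python
import numpy as np

def cross_entropy(p, t, class_weights=None, eps=1e-7):
    """Pixelwise weighted cross-entropy. p: (N, C) softmax probabilities,
    t: (N,) integer labels. Setting class_weights[0] = 0 removes the
    background class from the loss, one of the variations tested."""
    p = np.clip(p, eps, 1.0)
    w = np.ones(p.shape[1]) if class_weights is None else np.asarray(class_weights)
    return float(np.mean(w[t] * -np.log(p[np.arange(len(t)), t])))

def soft_dice_loss(p, t, eps=1e-7):
    """1 minus the soft Dice score per class, averaged over classes."""
    onehot = np.eye(p.shape[1])[t]
    inter = (p * onehot).sum(axis=0)
    denom = p.sum(axis=0) + onehot.sum(axis=0)
    return float(np.mean(1.0 - (2.0 * inter + eps) / (denom + eps)))

# Toy 2-class problem (class 0 = background, class 1 = foreground)
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
print(cross_entropy(probs, labels))
print(cross_entropy(probs, labels, class_weights=[0.0, 1.0]))  # ignore background
print(soft_dice_loss(probs, labels))
```

Reweighting false positives against false negatives, the other variation mentioned, would similarly multiply the corresponding terms inside these formulas by chosen coefficients.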
Affiliation(s)
- Pedro Furtado
- Dei/FCT/CISUC, University of Coimbra, Polo II, 3030-290 Coimbra, Portugal
18
Anderson BM, Lin EY, Cardenas CE, Gress DA, Erwin WD, Odisio BC, Koay EJ, Brock KK. Automated Contouring of Contrast and Noncontrast Computed Tomography Liver Images With Fully Convolutional Networks. Adv Radiat Oncol 2021; 6:100464. [PMID: 33490720 PMCID: PMC7807136 DOI: 10.1016/j.adro.2020.04.023]
Abstract
PURPOSE The deformable nature of the liver can make focal treatment challenging and is not adequately addressed with simple rigid registration techniques. More advanced registration techniques can take deformations into account (eg, biomechanical modeling) but require segmentations of the whole liver for each scan, which is a time-intensive process. We hypothesize that fully convolutional networks can be used to rapidly and accurately autosegment the liver, removing the temporal bottleneck for biomechanical modeling. METHODS AND MATERIALS Manual liver segmentations on computed tomography scans from 183 patients treated at our institution and 30 scans from the Medical Image Computing & Computer Assisted Intervention (MICCAI) challenges were collected for this study. Three architectures were investigated for rapid automated segmentation of the liver (VGG-16, DeepLabv3+, and a 3-dimensional UNet). Fifty-six cases were set aside as a final test set for quantitative model evaluation. Accuracy of the autosegmentations was assessed using the Dice similarity coefficient and mean surface distance. Qualitative evaluation was also performed by 3 radiation oncologists on 50 independent cases with previously clinically treated liver contours. RESULTS The mean (minimum-maximum) mean surface distance for the test groups with the final model, DeepLabv3+, was as follows: contrast (n = 17), 0.99 mm (0.47-2.2); noncontrast (n = 19), 1.12 mm (0.41-2.87); and MICCAI (n = 30), 1.48 mm (0.82-3.96). The qualitative evaluation showed that 30 of 50 autosegmentations (60%) were preferred to manual contours (majority voting) in a blinded comparison, and 48 of 50 autosegmentations (96%) were deemed clinically acceptable by at least 1 reviewing physician. CONCLUSIONS The autosegmentations were preferred compared with manually defined contours in the majority of cases. The ability to rapidly segment the liver with the high accuracy achieved in this investigation has the potential to enable the efficient integration of biomechanical model-based registration into a clinical workflow.
Affiliation(s)
- Brian M. Anderson
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Ethan Y. Lin
- Department of Interventional Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Carlos E. Cardenas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Dustin A. Gress
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- William D. Erwin
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Bruno C. Odisio
- Department of Interventional Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Eugene J. Koay
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Kristy K. Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
19
Convolutional neural network-based automatic liver delineation on contrast-enhanced and non-contrast-enhanced CT images for radiotherapy planning. Rep Pract Oncol Radiother 2020; 25:981-986. [DOI: 10.1016/j.rpor.2020.09.005]
20
Takagi S, Sakuma S, Morita I, Sugimoto E, Yamaguchi Y, Higuchi N, Inamoto K, Ariji Y, Ariji E, Murakami H. Application of Deep Learning in the Identification of Cerebral Hemodynamics Data Obtained from Functional Near-Infrared Spectroscopy: A Preliminary Study of Pre- and Post-Tooth Clenching Assessment. J Clin Med 2020; 9:E3475. [PMID: 33126595 PMCID: PMC7693464 DOI: 10.3390/jcm9113475]
Abstract
In fields using functional near-infrared spectroscopy (fNIRS), there is a need for an easy-to-understand method that allows visual presentation and rapid analysis of data and test results. This preliminary study examined whether deep learning (DL) could be applied to the analysis of fNIRS-derived brain activity data. To create a visual presentation of the data, an imaging program was developed for the analysis of hemoglobin (Hb) data from the prefrontal cortex in healthy volunteers, obtained by fNIRS before and after tooth clenching. Three types of imaging data were prepared: oxygenated hemoglobin (oxy-Hb) data, deoxygenated hemoglobin (deoxy-Hb) data, and mixed data (using both oxy-Hb and deoxy-Hb data). To differentiate between rest and tooth clenching, a cross-validation test using the image data for DL and a convolutional neural network was performed. The network identification rate using Hb imaging data was relatively high (80‒90%). These results demonstrated that a method using DL for the assessment of fNIRS imaging data may provide a useful analysis system.
Affiliation(s)
- Shinya Takagi
- Department of Fixed Prosthodontics, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
- Shigemitsu Sakuma
- Department of Fixed Prosthodontics, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
- Ichizo Morita
- Japanese Red Cross Toyota College of Nursing, Toyota 471-8565, Japan
- Eri Sugimoto
- Department of Pediatric Dentistry, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
- Yoshihiro Yamaguchi
- Department of Fixed Prosthodontics, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
- Naoya Higuchi
- Department of Endodontics, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
- Kyoko Inamoto
- Department of Endodontics, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
- Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
- Hiroshi Murakami
- Department of Gerodontology and Home Care Dentistry, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
21
Chen Y, Ruan D, Xiao J, Wang L, Sun B, Saouaf R, Yang W, Li D, Fan Z. Fully automated multiorgan segmentation in abdominal magnetic resonance imaging with deep neural networks. Med Phys 2020; 47:4971-4982. [PMID: 32748401 DOI: 10.1002/mp.14429]
Abstract
PURPOSE: Segmentation of multiple organs-at-risk (OARs) is essential for magnetic resonance (MR)-only radiation therapy treatment planning and MR-guided adaptive radiotherapy of abdominal cancers. Current practice requires manual delineation that is labor-intensive, time-consuming, and prone to intra- and interobserver variation. We developed a deep learning (DL) technique for fully automated segmentation of multiple OARs on clinical abdominal MR images with high accuracy, reliability, and efficiency.
METHODS: We developed the Automated deep Learning-based Abdominal Multiorgan segmentation (ALAMO) technique based on a two-dimensional U-net and a densely connected network structure, with tailored design choices in data augmentation and training, such as deep connections, auxiliary supervision, and multiview input. The model takes multislice MR images as input and generates segmentation results. 3.0-Tesla T1 VIBE (Volumetric Interpolated Breath-hold Examination) images of 102 subjects were used in our study and split into 66 for training, 16 for validation, and 20 for testing. Ten OARs were studied: the liver, spleen, pancreas, left/right kidneys, stomach, duodenum, small intestine, spinal cord, and vertebral bodies. An experienced radiologist manually labeled each OAR, with reediting by a senior radiologist if necessary, to create the ground truth. Performance was measured using volume overlap and surface distance.
RESULTS: The ALAMO technique generated segmentation labels in good agreement with the manual results. Among the ten OARs, nine achieved high Dice similarity coefficients (DSCs) in the range of 0.87-0.96; the exception was the duodenum, with a DSC of 0.80. Inference completed within 1 min for a three-dimensional volume of 320 × 288 × 180. Overall, the ALAMO model matched state-of-the-art techniques in performance.
CONCLUSION: The proposed ALAMO technique allows fully automated abdominal MR segmentation with high accuracy and practical memory and computation time demands.
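The Dice similarity coefficient (DSC) reported throughout these abstracts measures voxel-wise overlap between a predicted mask A and a ground-truth mask B as 2|A ∩ B| / (|A| + |B|). A minimal illustrative sketch (not from any of the cited papers; the function name and the empty-mask convention are assumptions):

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # assumed convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# Example: masks overlapping in 1 of 2 foreground voxels each -> DSC = 0.5
print(dice_similarity([1, 1, 0, 0], [1, 0, 1, 0]))
```

The same formula applies unchanged to 3D volumes, since the masks are flattened by the boolean reductions.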
Affiliation(s)
- Yuhua Chen
- Department of Bioengineering, University of California, Los Angeles, CA, USA; Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Dan Ruan
- Department of Bioengineering, University of California, Los Angeles, CA, USA; Department of Radiation Oncology, University of California, Los Angeles, CA, USA
- Jiayu Xiao
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Lixia Wang
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Department of Radiology, Chaoyang Hospital, Capital Medical University, Beijing, China
- Bin Sun
- Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Rola Saouaf
- Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Wensha Yang
- Department of Radiation Oncology, University of Southern California, Los Angeles, CA, USA
- Debiao Li
- Department of Bioengineering, University of California, Los Angeles, CA, USA; Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Department of Medicine, University of California, Los Angeles, CA, USA
- Zhaoyang Fan
- Department of Bioengineering, University of California, Los Angeles, CA, USA; Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Department of Medicine, University of California, Los Angeles, CA, USA
22
Sanders JW, Fletcher JR, Frank SJ, Liu HL, Johnson JM, Zhou Z, Chen HSM, Venkatesan AM, Kudchadker RJ, Pagel MD, Ma J. Deep learning application engine (DLAE): Development and integration of deep learning algorithms in medical imaging. SOFTWAREX 2019; 10:100347. [PMID: 34113706 PMCID: PMC8188855 DOI: 10.1016/j.softx.2019.100347] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Indexed: 06/12/2023]
Abstract
Herein we introduce the deep learning (DL) application engine (DLAE) system concept, present its potential uses, and describe pathways for its integration into clinical workflows. An open-source software application was developed to provide a code-free approach to DL for medical imaging applications. DLAE supports several DL techniques used in medical imaging, including convolutional neural networks, fully convolutional networks, generative adversarial networks, and bounding box detectors. Several example applications using clinical images were developed and tested to demonstrate DLAE's capabilities. Additionally, a model deployment example was demonstrated in which DLAE was used to integrate two trained models into a commercial clinical software package.
Affiliation(s)
- Jeremiah W. Sanders
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Justin R. Fletcher
- Odyssey Systems Consulting, LLC, 550 Lipoa Parkway, Kihei, Maui, HI, United States of America
- Steven J. Frank
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1422, Houston, TX 77030, United States of America
- Ho-Ling Liu
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Jason M. Johnson
- Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1473, Houston, TX 77030, United States of America
- Zijian Zhou
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Henry Szu-Meng Chen
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Aradhana M. Venkatesan
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1473, Houston, TX 77030, United States of America
- Rajat J. Kudchadker
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1420, Houston, TX 77030, United States of America
- Mark D. Pagel
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Department of Cancer Systems Imaging, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1907, Houston, TX 77030, United States of America
- Jingfei Ma
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
23
Jarrett D, Stride E, Vallis K, Gooding MJ. Applications and limitations of machine learning in radiation oncology. Br J Radiol 2019; 92:20190001. [PMID: 31112393 PMCID: PMC6724618 DOI: 10.1259/bjr.20190001] [Citation(s) in RCA: 86] [Impact Index Per Article: 14.3] [Indexed: 12/19/2022] Open
Abstract
Machine learning approaches to problem-solving are growing rapidly within healthcare, and radiation oncology is no exception. With the burgeoning interest in machine learning comes the significant risk of misaligned expectations as to what it can and cannot accomplish. This paper evaluates the role of machine learning and the problems it solves within the context of current clinical challenges in radiation oncology. The role of learning algorithms within the workflow for external beam radiation therapy is surveyed, considering simulation imaging, multimodal fusion, image segmentation, treatment planning, quality assurance, and treatment delivery and adaptation. For each aspect, the clinical challenges faced, the learning algorithms proposed, and the successes and limitations of various approaches are analyzed. It is observed that machine learning has largely thrived on reproducibly mimicking conventional human-driven solutions with greater efficiency and consistency. On the other hand, since algorithms are generally trained using expert opinion as ground truth, machine learning is of limited utility where problems or ground truths are not well defined, or where suitable measures of correctness are not available. As a result, machines may excel at replicating, automating, and standardizing human behaviour on manual chores, while the conceptual clinical challenges relating to definition, evaluation, and judgement remain in the realm of human intelligence and insight.
Affiliation(s)
- Daniel Jarrett
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, UK; Mirada Medical Ltd, Oxford, UK
- Eleanor Stride
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, UK
- Katherine Vallis
- Department of Oncology, Oxford Institute for Radiation Oncology, University of Oxford, UK
- Mark J. Gooding
24
Virostko J, Williams J, Hilmes M, Bowman C, Wright JJ, Du L, Kang H, Russell WE, Powers AC, Moore DJ. Pancreas Volume Declines During the First Year After Diagnosis of Type 1 Diabetes and Exhibits Altered Diffusion at Disease Onset. Diabetes Care 2019; 42:248-257. [PMID: 30552135 PMCID: PMC6341292 DOI: 10.2337/dc18-1507] [Citation(s) in RCA: 72] [Impact Index Per Article: 12.0] [Received: 07/13/2018] [Accepted: 11/15/2018] [Indexed: 02/03/2023]
Abstract
OBJECTIVE: This study investigated the temporal dynamics of pancreas volume and microstructure in children and adolescents with recent-onset type 1 diabetes (T1D) and individuals without diabetes, including a subset expressing autoantibodies associated with the early stages of T1D.
RESEARCH DESIGN AND METHODS: MRI was performed in individuals with recent-onset stage 3 T1D (n = 51; median age 13 years) within 100 days after diagnosis (mean 67 days), at 6 months, and at 1 year postdiagnosis. Longitudinal MRI measurements were also made in similarly aged control participants (n = 57) and in autoantibody-positive individuals without diabetes (n = 20). The MRI protocol consisted of anatomical imaging to determine pancreas volume and quantitative MRI protocols interrogating tissue microstructure and composition.
RESULTS: Within 100 days of diabetes onset, individuals with T1D had a smaller pancreas (median volume 28.6 mL) than control participants (median volume 48.4 mL; P < 0.001), including when normalized by individual weight (P < 0.001). Longitudinal measurements of pancreas volume increased in control participants over the year, consistent with adolescent growth, but pancreas volume declined over the first year after T1D diagnosis (P < 0.001). In multiple autoantibody-positive individuals, the pancreas volume was significantly larger than that of the T1D cohort (P = 0.017) but smaller than that of the control cohort (P = 0.04). Diffusion-weighted MRI showed that individuals with recent-onset T1D had a higher apparent diffusion coefficient (P = 0.012), suggesting a loss of cellular structural integrity, with heterogeneous pancreatic distribution.
CONCLUSIONS: These results indicate that pancreas volume is decreased in stages 1, 2, and 3 of T1D, that it decreases further during the first year after diabetes onset, and that this loss of pancreatic volume is accompanied by microstructural changes.
Affiliation(s)
- John Virostko
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, Austin, TX
- Jon Williams
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN
- Melissa Hilmes
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN; Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN
- Chris Bowman
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN
- Jordan J Wright
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN
- Liping Du
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN
- Hakmook Kang
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN
- William E Russell
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN; Department of Cell and Developmental Biology, Vanderbilt University, Nashville, TN
- Alvin C Powers
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN; Department of Molecular Physiology and Biophysics, Vanderbilt University, Nashville, TN; VA Tennessee Valley Healthcare System, Nashville, TN
- Daniel J Moore
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN; Department of Pathology, Immunology, and Microbiology, Vanderbilt University, Nashville, TN