1.
Tavanaei R, Akhlaghpasand M, Alikhani A, Hajikarimloo B, Ansari A, Yong RL, Margetis K. Performance of Radiomics-based machine learning and deep learning-based methods in the prediction of tumor grade in meningioma: a systematic review and meta-analysis. Neurosurg Rev 2025;48:78. PMID: 39849257. DOI: 10.1007/s10143-025-03236-3.
Abstract
Currently, the World Health Organization (WHO) grade of meningiomas is determined based on biopsy results. Accurate non-invasive preoperative grading could therefore significantly improve treatment planning and patient outcomes. Considering recent advances in machine learning (ML) and deep learning (DL), this meta-analysis aimed to evaluate the performance of these models in predicting the WHO meningioma grade using imaging data. A systematic search was performed in PubMed/MEDLINE, Embase, and the Cochrane Library for studies published up to April 1, 2024, that reported performance metrics of ML models in predicting the WHO meningioma grade from imaging studies. Pooled area under the receiver operating characteristic curve (AUROC), specificity, and sensitivity were estimated. Subgroup and meta-regression analyses were performed based on a number of potential influencing variables. A total of 32 studies with 15,365 patients were included. The overall pooled sensitivity, specificity, and AUROC of ML methods for prediction of tumor grade in meningioma were 85% (95% CI, 79-89%), 87% (95% CI, 81-91%), and 93% (95% CI, 90-95%), respectively. Both the type of validation and the study cohort (training or test) were significantly associated with model performance. However, no significant association was found between the sample size or the type of ML method and model performance. The ML predictive models show a high overall performance in predicting the WHO meningioma grade using imaging data. Further studies on the performance of DL algorithms in larger datasets with external validation are needed.
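The three metrics pooled in this meta-analysis can be illustrated concretely. Below is a minimal, self-contained Python sketch computing sensitivity, specificity, and a rank-based (Mann-Whitney) AUROC; the labels and scores are invented for illustration and do not come from any of the included studies.

```python
# Hypothetical example of the diagnostic metrics pooled in the review:
# sensitivity, specificity, and AUROC. Labels: 1 = high-grade, 0 = low-grade.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auroc(y_true, scores):
    """Rank-based (Mann-Whitney) AUROC: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented model outputs, thresholded at 0.5 for the binary metrics.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec, auroc(y_true, scores))
```

Note that sensitivity and specificity depend on the chosen threshold, while AUROC summarizes ranking performance across all thresholds, which is why the review pools all three.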
Affiliation(s)
- Roozbeh Tavanaei
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammadhosein Akhlaghpasand
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Alireza Alikhani
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Bardia Hajikarimloo
- Department of Neurological Surgery, University of Virginia, Charlottesville, VA, USA
- Ali Ansari
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Raymund L Yong
- Department of Neurosurgery, Mount Sinai Hospital, Icahn School of Medicine, New York City, NY, USA
- Konstantinos Margetis
- Department of Neurosurgery, Mount Sinai Hospital, Icahn School of Medicine, New York City, NY, USA
2.
Li Y, Liu YB, Li XB, Cui XN, Meng DH, Yuan CC, Ye ZX. Deep learning model combined with computed tomography features to preoperatively predicting the risk stratification of gastrointestinal stromal tumors. World J Gastrointest Oncol 2024;16:4663-4674. PMID: 39678791. PMCID: PMC11577356. DOI: 10.4251/wjgo.v16.i12.4663.
Abstract
BACKGROUND Gastrointestinal stromal tumors (GIST) are prevalent neoplasms originating from the gastrointestinal mesenchyme. Approximately 50% of GIST patients experience tumor recurrence within 5 years. Thus, there is a pressing need to accurately evaluate risk stratification preoperatively. AIM To assess the application of a deep learning model (DLM) combined with computed tomography features for predicting risk stratification of GISTs. METHODS Preoperative contrast-enhanced computed tomography (CECT) images of 551 GIST patients were retrospectively analyzed. All image features were independently analyzed by two radiologists. Quantitative parameters were statistically analyzed to identify significant predictors of high-risk malignancy. Patients were randomly assigned to the training (n = 386) and validation (n = 165) cohorts. A DLM and a combined DLM were established for predicting GIST risk stratification using a convolutional neural network and subsequently evaluated in the validation cohort. RESULTS Among the analyzed CECT image features, tumor size, ulceration, and enlarged feeding vessels were identified as significant risk predictors (P < 0.05). In the DLM, the overall area under the receiver operating characteristic curve (AUROC) was 0.88, with the accuracy (ACC) and AUROCs for each stratification being 87% and 0.96 for low-risk, 79% and 0.74 for intermediate-risk, and 84% and 0.90 for high-risk, respectively. In the combined model, the overall ACC and AUROC were 84% and 0.94, and the ACC and AUROCs for each risk stratification were 92% and 0.97 for low-risk, 87% and 0.83 for intermediate-risk, and 90% and 0.96 for high-risk, respectively. Differences in AUROCs for each risk stratification between the two models were significant (P < 0.05). CONCLUSION A combined DLM with satisfactory performance for preoperatively predicting GIST risk stratification was developed using routine computed tomography data, demonstrating superiority over the DLM alone.
Affiliation(s)
- Yi Li
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, State Key Laboratory of Druggability Evaluation and Systematic Translational Medicine, Tianjin Key Laboratory of Digestive Cancer, Tianjin 300060, China
- Yan-Bei Liu
- School of Life Sciences, Tiangong University, Tianjin 300387, China
- Xu-Bin Li
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, State Key Laboratory of Druggability Evaluation and Systematic Translational Medicine, Tianjin Key Laboratory of Digestive Cancer, Tianjin 300060, China
- Xiao-Nan Cui
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, State Key Laboratory of Druggability Evaluation and Systematic Translational Medicine, Tianjin Key Laboratory of Digestive Cancer, Tianjin 300060, China
- Dong-Hua Meng
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, State Key Laboratory of Druggability Evaluation and Systematic Translational Medicine, Tianjin Key Laboratory of Digestive Cancer, Tianjin 300060, China
- Cong-Cong Yuan
- Department of Radiology, Tianjin First Central Hospital, Tianjin 300190, China
- Zhao-Xiang Ye
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, State Key Laboratory of Druggability Evaluation and Systematic Translational Medicine, Tianjin Key Laboratory of Digestive Cancer, Tianjin 300060, China
3.
Gui Y, Zhang J. Research Progress of Artificial Intelligence in the Grading and Classification of Meningiomas. Acad Radiol 2024;31:3346-3354. PMID: 38413314. DOI: 10.1016/j.acra.2024.02.003.
Abstract
A meningioma is a common primary central nervous system tumor. The histological features of meningiomas vary significantly depending on grade and subtype, leading to differences in treatment and prognosis. Therefore, early diagnosis, grading, and typing of meningiomas are crucial for developing comprehensive and individualized diagnosis and treatment plans. The advancement of artificial intelligence (AI) in medical imaging, particularly radiomics and deep learning (DL), has contributed to the increasing research on meningioma grading and classification. These techniques are fast, accurate, fully automated, objective, and non-invasive; they enable efficient prediction of meningioma grade and classification and provide valuable assistance in clinical treatment and prognosis. This article summarizes and analyzes the research progress in radiomics and DL for meningioma grading and classification. It also highlights existing research findings, limitations, and suggestions for future improvement, aiming to facilitate the future application of AI in the diagnosis and treatment of meningioma.
Affiliation(s)
- Yuan Gui
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhufengdadao No. 1439, Doumen District, Zhuhai, China
- Jing Zhang
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhufengdadao No. 1439, Doumen District, Zhuhai, China
4.
Han T, Liu X, Zhou J. Progression/Recurrence of Meningioma: An Imaging Review Based on Magnetic Resonance Imaging. World Neurosurg 2024;186:98-107. PMID: 38499241. DOI: 10.1016/j.wneu.2024.03.051.
Abstract
Meningiomas are the most common primary central nervous system tumors. The preferred treatment is maximum safe resection, and the heterogeneity of meningiomas results in a variable prognosis. Progression/recurrence (P/R) can occur at any grade of meningioma and is a common adverse outcome after surgical treatment and a major cause of postoperative rehospitalization, secondary surgery, and mortality. Early prediction of P/R plays an important role in postoperative management, further adjuvant therapy, and follow-up of patients. Therefore, it is essential to thoroughly analyze the heterogeneity of meningiomas and predict postoperative P/R with the aid of noninvasive preoperative imaging. In recent years, the development of advanced magnetic resonance imaging technology and machine learning has provided new insights into noninvasive preoperative prediction of meningioma P/R, which helps to achieve accurate prediction of meningioma P/R. This narrative review summarizes the current research on conventional magnetic resonance imaging, functional magnetic resonance imaging, and machine learning in predicting meningioma P/R. We further explore the significance of tumor microenvironment in meningioma P/R, linking imaging features with tumor microenvironment to comprehensively reveal tumor heterogeneity and provide new ideas for future research.
Affiliation(s)
- Tao Han
- Department of Radiology, Lanzhou University Second Hospital, Lanzhou, China; Second Clinical School, Lanzhou University, Lanzhou, China; Key Laboratory of Medical Imaging of Gansu Province, Lanzhou, China; Gansu International Scientific and Technological Cooperation Base of Medical Imaging Artificial Intelligence, Lanzhou, China
- Xianwang Liu
- Department of Radiology, Lanzhou University Second Hospital, Lanzhou, China; Second Clinical School, Lanzhou University, Lanzhou, China; Key Laboratory of Medical Imaging of Gansu Province, Lanzhou, China; Gansu International Scientific and Technological Cooperation Base of Medical Imaging Artificial Intelligence, Lanzhou, China
- Junlin Zhou
- Department of Radiology, Lanzhou University Second Hospital, Lanzhou, China; Key Laboratory of Medical Imaging of Gansu Province, Lanzhou, China; Gansu International Scientific and Technological Cooperation Base of Medical Imaging Artificial Intelligence, Lanzhou, China
5.
Vassantachart A, Cao Y, Shen Z, Cheng K, Gribble M, Ye JC, Zada G, Hurth K, Mathew A, Guzman S, Yang W. A repository of grade 1 and 2 meningioma MRIs in a public dataset for radiomics reproducibility tests. Med Phys 2024;51:2334-2344. PMID: 37815256. PMCID: PMC10939960. DOI: 10.1002/mp.16763.
Abstract
PURPOSE Meningiomas are the most common primary brain tumors in adults, with management varying widely based on World Health Organization (WHO) grade. However, there are limited datasets available for researchers to develop and validate radiomic models. The purpose of our manuscript is to report on the first dataset of meningiomas in The Cancer Imaging Archive (TCIA). ACQUISITION AND VALIDATION METHODS The dataset consists of pre-operative MRIs from 96 patients with meningiomas who underwent resection from 2010-2019 and includes axial T1post and T2-FLAIR sequences (55 grade 1 and 41 grade 2). Meningioma grade was confirmed based on the 2016 WHO Bluebook classification guideline by two neuropathologists and one neuropathology fellow. The hyperintense T1post tumor and hyperintense T2-FLAIR regions were manually contoured on both sequences and resampled to an isotropic resolution of 1 × 1 × 1 mm³. The entire dataset was reviewed by a certified medical physicist. DATA FORMAT AND USAGE NOTES The data were imported into TCIA for storage and can be accessed at https://doi.org/10.7937/0TKV-1A36. The total size of the dataset is 8.8 GB, with 47,519 individual Digital Imaging and Communications in Medicine (DICOM) files consisting of 384 image series and 192 structures. POTENTIAL APPLICATIONS Grade 1 and 2 meningiomas have different treatment paradigms and are often treated based on radiologic diagnosis alone. Therefore, predicting grade prior to treatment is essential in clinical decision-making. This dataset will allow researchers to create models to auto-differentiate grade 1 and 2 meningiomas as well as evaluate other pathologic features including mitotic index, brain invasion, and atypical features. Limitations of this study are the small sample size and the inclusion of only two MRI sequences. However, there are no other meningioma datasets on TCIA and limited datasets elsewhere, although meningiomas are the most common intracranial tumor in adults.
Affiliation(s)
- April Vassantachart
- Department of Radiation Oncology, LAC+USC Medical Center, Los Angeles, California, USA
- Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Yufeng Cao
- Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Zhilei Shen
- Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Karen Cheng
- Department of Radiation Oncology, LAC+USC Medical Center, Los Angeles, California, USA
- Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Michael Gribble
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Jason C. Ye
- Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Gabriel Zada
- Department of Neurological Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Kyle Hurth
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Anna Mathew
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Samuel Guzman
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Wensha Yang
- Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
6.
Chen C, Teng Y, Tan S, Wang Z, Zhang L, Xu J. Performance Test of a Well-Trained Model for Meningioma Segmentation in Health Care Centers: Secondary Analysis Based on Four Retrospective Multicenter Data Sets. J Med Internet Res 2023;25:e44119. PMID: 38100181. PMCID: PMC10757229. DOI: 10.2196/44119.
Abstract
BACKGROUND Convolutional neural networks (CNNs) have produced state-of-the-art results in meningioma segmentation on magnetic resonance imaging (MRI). However, images obtained from different institutions, protocols, or scanners may show significant domain shift, leading to performance degradation and challenging model deployment in real clinical scenarios. OBJECTIVE This research aims to investigate the realistic performance of a well-trained meningioma segmentation model when deployed across different health care centers and to verify methods to enhance its generalization. METHODS This study was performed in four centers. A total of 606 patients with 606 MRIs were enrolled between January 2015 and December 2021. Manual segmentations, determined through consensus readings by neuroradiologists, were used as the ground truth mask. The model was previously trained using a standard supervised CNN (Deeplab V3+) and was deployed and tested separately in the four health care centers. To determine the appropriate approach to mitigating the observed performance degradation, two methods were used: unsupervised domain adaptation and supervised retraining. RESULTS The trained model showed state-of-the-art performance in tumor segmentation in two health care institutions, with Dice ratios of 0.887 (SD 0.108, 95% CI 0.903-0.925) in center A and 0.874 (SD 0.800, 95% CI 0.854-0.894) in center B. In the other two institutions, which obtained MRIs using different scanning protocols, performance declined, with Dice ratios of 0.631 (SD 0.157, 95% CI 0.556-0.707) in center C and 0.649 (SD 0.187, 95% CI 0.566-0.732) in center D. Unsupervised domain adaptation yielded a significant improvement in performance scores, with Dice ratios of 0.842 (SD 0.073, 95% CI 0.820-0.864) in center C and 0.855 (SD 0.097, 95% CI 0.826-0.886) in center D. Nonetheless, it did not outperform supervised retraining, which achieved Dice ratios of 0.899 (SD 0.026, 95% CI 0.889-0.906) in center C and 0.886 (SD 0.046, 95% CI 0.870-0.903) in center D. CONCLUSIONS Deploying a trained CNN model in different health care institutions may show significant performance degradation due to the domain shift of MRIs. Under this circumstance, the use of unsupervised domain adaptation or supervised retraining should be considered, taking into account the balance between clinical requirements, model performance, and the size of the available data.
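The Dice ratios reported above measure voxel overlap between a predicted mask and the reference mask: Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch on hypothetical flattened binary masks (real use would be on 3D MRI label volumes):

```python
# Dice similarity coefficient on flat binary masks (lists of 0/1).
# The masks below are invented toy examples, not study data.

def dice(mask_a, mask_b):
    """Dice = 2 * |intersection| / (|A| + |B|); 1.0 if both masks are empty."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]  # hypothetical predicted tumor voxels
truth = [0, 1, 1, 0, 0, 1]  # hypothetical ground-truth voxels
print(dice(pred, truth))    # 2*2 / (3+3)
```

Because Dice is normalized by the sizes of both masks, it penalizes over- and under-segmentation symmetrically, which is why it is the standard headline metric for segmentation studies such as this one.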
Affiliation(s)
- Chaoyue Chen
- Neurosurgery Department, West China Hospital, Sichuan University, Chengdu, China
- Yuen Teng
- Neurosurgery Department, West China Hospital, Sichuan University, Chengdu, China
- Shuo Tan
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Zizhou Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore, Singapore
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Jianguo Xu
- Neurosurgery Department, West China Hospital, Sichuan University, Chengdu, China
7.
Agadi K, Dominari A, Tebha SS, Mohammadi A, Zahid S. Neurosurgical Management of Cerebrospinal Tumors in the Era of Artificial Intelligence: A Scoping Review. J Korean Neurosurg Soc 2023;66:632-641. PMID: 35831137. PMCID: PMC10641423. DOI: 10.3340/jkns.2021.0213.
Abstract
Central nervous system tumors are identified as tumors of the brain and spinal cord. The associated morbidity and mortality of cerebrospinal tumors are disproportionately high compared to other malignancies. While minimally invasive techniques have initiated a revolution in neurosurgery, artificial intelligence (AI) is expediting it. Our study aims to analyze AI's role in the neurosurgical management of cerebrospinal tumors. We conducted a scoping review using the Arksey and O'Malley framework. Upon screening, data extraction and analysis focused on exploring all potential applications of AI and classifying these applications within the management of cerebrospinal tumors. AI has enhanced the precision of diagnosis of these tumors, enables surgeons to excise tumor margins completely, thereby reducing the risk of recurrence, and makes more accurate predictions of patient prognosis than conventional methods. AI also offers real-time training to neurosurgeons using virtual and 3D simulation, thereby increasing their confidence and skills during procedures. In addition, robotics has been integrated into neurosurgery and shown to improve patient outcomes by making surgery less invasive. AI, including machine learning, is rigorously being considered for its applications in the neurosurgical management of cerebrospinal tumors. This field requires further research focused on areas clinically essential to improving outcomes that are also economically feasible for clinical use. The authors suggest that data analysts and neurosurgeons collaborate to explore the full potential of AI.
Affiliation(s)
- Kuchalambal Agadi
- Division of Research and Academic Affairs, Larkin Health System, South Miami, FL, USA
- Asimina Dominari
- Division of Research and Academic Affairs, Larkin Health System, South Miami, FL, USA
- Aristotle University of Thessaloniki School of Medicine, Thessaloniki, Greece
- Sameer Saleem Tebha
- Division of Research and Academic Affairs, Larkin Health System, South Miami, FL, USA
- Department of Neurosurgery and Neurology, Jinnah Medical and Dental College, Karachi, Pakistan
- Asma Mohammadi
- Division of Research and Academic Affairs, Larkin Health System, South Miami, FL, USA
- Samina Zahid
- Division of Research and Academic Affairs, Larkin Health System, South Miami, FL, USA
8.
Maniar KM, Lassarén P, Rana A, Yao Y, Tewarie IA, Gerstl JVE, Recio Blanco CM, Power LH, Mammi M, Mattie H, Smith TR, Mekary RA. Traditional Machine Learning Methods versus Deep Learning for Meningioma Classification, Grading, Outcome Prediction, and Segmentation: A Systematic Review and Meta-Analysis. World Neurosurg 2023;179:e119-e134. PMID: 37574189. DOI: 10.1016/j.wneu.2023.08.023.
Abstract
BACKGROUND Meningiomas are common intracranial tumors. Machine learning (ML) algorithms are emerging to improve accuracy in 4 primary domains: classification, grading, outcome prediction, and segmentation. Such algorithms include both traditional approaches that rely on hand-crafted features and deep learning (DL) techniques that utilize automatic feature extraction. The aim of this study was to evaluate the performance of published traditional ML versus DL algorithms in classification, grading, outcome prediction, and segmentation of meningiomas. METHODS A systematic review and meta-analysis were conducted. Major databases were searched through September 2021 for publications evaluating traditional ML versus DL models on meningioma management. Performance measures, including pooled sensitivity, specificity, F1-score, area under the receiver-operating characteristic curve, and positive and negative likelihood ratios (LR+, LR-), along with their respective 95% confidence intervals (95% CIs), were derived using random-effects models. RESULTS Five hundred thirty-four records were screened, and 43 articles were included, regarding classification (3 articles), grading (29), outcome prediction (7), and segmentation (6) of meningiomas. Of the 29 studies that reported on grading, 10 could be meta-analyzed, comprising 2 DL models (sensitivity 0.89, 95% CI: 0.74-0.96; specificity 0.91, 95% CI: 0.45-0.99; LR+ 10.1, 95% CI: 1.33-137; LR- 0.12, 95% CI: 0.04-0.59) and 8 traditional ML models (sensitivity 0.74, 95% CI: 0.62-0.83; specificity 0.93, 95% CI: 0.79-0.98; LR+ 10.5, 95% CI: 2.91-39.5; LR- 0.28, 95% CI: 0.17-0.49). Insufficient reporting of the remaining performance metrics precluded further statistical analysis. CONCLUSIONS ML on meningiomas is mostly carried out with traditional methods. For meningioma grading, traditional ML methods generally had a higher LR+, while DL models had a lower LR-.
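The likelihood ratios quoted here follow directly from sensitivity and specificity: LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. A small sketch using the pooled DL estimates as inputs; because the meta-analysis derives pooled LRs jointly under a random-effects model, the point formula only approximately reproduces the reported LR+ of 10.1 and LR- of 0.12.

```python
# Likelihood ratios from sensitivity and specificity.
# Inputs below are the pooled DL estimates quoted in the abstract;
# the formula gives a point estimate, not the jointly pooled value.

def likelihood_ratios(sens, spec):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sens / (1 - spec), (1 - sens) / spec

lr_pos, lr_neg = likelihood_ratios(0.89, 0.91)
print(round(lr_pos, 2), round(lr_neg, 2))
```

LR+ > 10 and LR- near 0.1 are conventional thresholds for tests that substantially shift post-test probability, which contextualizes the abstract's comparison of DL and traditional ML models.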
Affiliation(s)
- Krish M Maniar
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Philipp Lassarén
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Aakanksha Rana
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Boston, Massachusetts, United States
- Yuxin Yao
- Department of Pharmaceutical Business and Administrative Sciences, School of Pharmacy, Massachusetts College of Pharmacy and Health Sciences University, Boston, Massachusetts, United States
- Ishaan A Tewarie
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States; Department of Neurosurgery, Haaglanden Medical Center, The Hague, The Netherlands; Faculty of Medicine, Erasmus University Rotterdam/Erasmus Medical Center Rotterdam, Rotterdam, The Netherlands
- Jakob V E Gerstl
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States
- Camila M Recio Blanco
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States; Northeast National University, Corrientes, Argentina; Prisma Salud, Puerto San Julian, Santa Cruz, Argentina
- Liam H Power
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States; School of Medicine, Tufts University, Boston, Massachusetts, United States
- Marco Mammi
- Neurosurgery Unit, S. Croce e Carle Hospital, Cuneo, Italy
- Heather Mattie
- Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, Massachusetts, United States
- Timothy R Smith
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States; Department of Neurosurgery, Brigham and Women's Hospital, Harvard University, Boston, Massachusetts, United States
- Rania A Mekary
- Department of Neurosurgery, Computational Neurosciences Outcomes Center (CNOC), Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, United States; Department of Pharmaceutical Business and Administrative Sciences, School of Pharmacy, Massachusetts College of Pharmacy and Health Sciences University, Boston, Massachusetts, United States
9.
Jun Y, Park YW, Shin H, Shin Y, Lee JR, Han K, Ahn SS, Lim SM, Hwang D, Lee SK. Intelligent noninvasive meningioma grading with a fully automatic segmentation using interpretable multiparametric deep learning. Eur Radiol 2023;33:6124-6133. PMID: 37052658. DOI: 10.1007/s00330-023-09590-4.
Abstract
OBJECTIVES To establish a robust interpretable multiparametric deep learning (DL) model for automatic noninvasive grading of meningiomas along with segmentation. METHODS In total, 257 patients with pathologically confirmed meningiomas (162 low-grade, 95 high-grade) who underwent preoperative brain MRI, including T2-weighted (T2) and contrast-enhanced T1-weighted (T1C) images, were included in the institutional training set. A two-stage DL grading model was constructed for segmentation and classification based on a multiparametric three-dimensional U-net and ResNet. The models were validated in an external validation set consisting of 61 patients with meningiomas (46 low-grade, 15 high-grade). The Relevance-weighted Class Activation Mapping (RCAM) method was used to interpret the DL features contributing to the predictions of the DL grading model. RESULTS On external validation, the combined T1C and T2 model showed a Dice coefficient of 0.910 in segmentation and the highest performance for meningioma grading compared to the T2-only or T1C-only models, with an area under the curve (AUC) of 0.770 (95% confidence interval: 0.644-0.895) and accuracy, sensitivity, and specificity of 72.1%, 73.3%, and 71.7%, respectively. The AUC and accuracy of the combined DL grading model were higher than those of the human readers (AUCs of 0.675-0.690 and accuracies of 65.6-68.9%, respectively). The RCAM of the DL grading model showed activation maps at the surface regions of meningiomas, indicating that the model recognized features at the tumor margin for grading. CONCLUSIONS An interpretable multiparametric DL model combining T1C and T2 can enable fully automatic grading of meningiomas along with segmentation. KEY POINTS • The multiparametric DL model showed robustness in grading and segmentation on external validation. • The diagnostic performance of the combined DL grading model was higher than that of the human readers.
• The RCAM showed that the DL grading model recognized meaningful features at the tumor margin for grading.
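The Dice coefficient reported above is the standard segmentation-overlap measure, 2·|A∩B| / (|A| + |B|). As a minimal illustrative sketch on binary masks (not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks:
    2 * |intersection| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2x3 masks sharing two foreground voxels.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 1, 0], [0, 0, 0]])
print(dice_coefficient(a, b))  # 0.8
```

A Dice of 0.910, as reported for the combined model, means the automatic and reference masks overlap almost completely relative to their total volume.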
Affiliation(s)
- Yohan Jun
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- Yae Won Park
  - Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Hyungseob Shin
  - School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Yejee Shin
  - School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Jeong Ryong Lee
  - School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Kyunghwa Han
  - Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Sung Soo Ahn
  - Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Soo Mee Lim
  - Department of Radiology, Ewha Womans University College of Medicine, Seoul, Korea
- Dosik Hwang
  - Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
  - School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
  - Center for Healthcare Robotics, Korea Institute of Science and Technology, Seoul, Korea
  - Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
- Seung-Koo Lee
  - Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
|
10
|
Meng M, Li H, Zhang M, He G, Wang L, Shen D. Reducing the number of unnecessary biopsies for mammographic BI-RADS 4 lesions through a deep transfer learning method. BMC Med Imaging 2023; 23:82. [PMID: 37312026 DOI: 10.1186/s12880-023-01023-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 05/23/2023] [Indexed: 06/15/2023] Open
Abstract
BACKGROUND In clinical practice, reducing unnecessary biopsies for mammographic BI-RADS 4 lesions is crucial. The objective of this study was to explore the potential value of deep transfer learning (DTL), based on different fine-tuning strategies for Inception V3, in reducing the number of unnecessary biopsies that residents need to perform for mammographic BI-RADS 4 lesions. METHODS A total of 1980 patients with breast lesions were included, comprising 1473 benign lesions (185 women with bilateral breast lesions) and 692 malignant lesions collected and confirmed by clinical pathology or biopsy. The breast mammography images were randomly divided into three subsets, a training set, a testing set, and validation set 1, at a ratio of 8:1:1. We constructed a DTL model for the classification of breast lesions based on Inception V3 and attempted to improve its performance with 11 fine-tuning strategies. Mammography images from 362 patients with pathologically confirmed BI-RADS 4 breast lesions were employed as validation set 2. Two images from each lesion were tested, and a trial was categorized as correct if the judgement on at least one image was correct. We used precision (Pr), recall rate (Rc), F1 score (F1), and the area under the receiver operating characteristic curve (AUROC) as the performance metrics of the DTL model on validation set 2. RESULTS The S5 model achieved the best fit for the data. The Pr, Rc, F1, and AUROC of S5 were 0.90, 0.90, 0.90, and 0.86, respectively, for category 4. The proportions of lesions downgraded by S5 were 90.73%, 84.76%, and 80.19% for categories 4A, 4B, and 4C, respectively. The overall proportion of BI-RADS 4 lesions downgraded by S5 was 85.91%. There was no significant difference between the classification results of the S5 model and the pathological diagnosis (P = 0.110).
CONCLUSION The S5 model we proposed here can be used as an effective approach for reducing the number of unnecessary biopsies that residents need to conduct for mammographic BI-RADS 4 lesions and may have other important clinical uses.
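The Pr, Rc, and F1 metrics used to evaluate the DTL model follow the standard confusion-matrix definitions; a minimal sketch with hypothetical counts (illustrative only, not the study's data):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)  # correct positive calls / all positive calls
    recall = tp / (tp + fn)     # correct positive calls / all actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Hypothetical counts: 90 true positives, 10 false positives, 10 false negatives.
pr, rc, f1 = precision_recall_f1(90, 10, 10)
print(f"Pr={pr:.2f} Rc={rc:.2f} F1={f1:.2f}")  # Pr=0.90 Rc=0.90 F1=0.90
```

Because F1 is the harmonic mean of precision and recall, identical Pr and Rc (as for S5, both 0.90) yield the same F1.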
Affiliation(s)
- Mingzhu Meng
  - Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
- Hong Li
  - Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, Jiangsu Province, P. R. China
- Ming Zhang
  - Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
- Guangyuan He
  - Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
- Long Wang
  - Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
- Dong Shen
  - Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
|
11
|
Koechli C, Zwahlen DR, Schucht P, Windisch P. Radiomics and machine learning for predicting the consistency of benign tumors of the central nervous system: A systematic review. Eur J Radiol 2023; 164:110866. [PMID: 37207398 DOI: 10.1016/j.ejrad.2023.110866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 04/28/2023] [Accepted: 05/03/2023] [Indexed: 05/21/2023]
Abstract
PURPOSE Predicting the consistency of benign central nervous system (CNS) tumors prior to surgery helps to improve surgical outcomes. This review summarizes and analyzes the literature on using radiomics and/or machine learning (ML) for consistency prediction. METHOD The Medical Literature Analysis and Retrieval System Online (MEDLINE) database was screened for studies published in English from January 1st, 2000 onward. Data were extracted according to the PRISMA guidelines, and the quality of the studies was assessed in compliance with the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. RESULTS Eight publications were included, focusing on pituitary macroadenomas (n = 5), pituitary adenomas (n = 1), and meningiomas (n = 2), using retrospective (n = 6), prospective (n = 1), and unknown (n = 1) study designs, with a total of 763 patients for the consistency prediction. The studies reported an area under the curve (AUC) of 0.71-0.99 for their respective best-performing models. Four articles validated their models internally, whereas none validated their models externally. Two articles stated that data were available on request, with the remaining publications lacking information on data availability. CONCLUSIONS Research on consistency prediction of CNS tumors is still at an early stage regarding the use of radiomics and different ML techniques. Best-practice procedures for radiomics and ML need to be followed more rigorously to facilitate comparison between publications and, accordingly, possible future implementation into clinical practice.
Affiliation(s)
- Carole Koechli
  - Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
  - Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Daniel R Zwahlen
  - Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
- Philippe Schucht
  - Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Paul Windisch
  - Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
|
12
|
Liu X, Wang Y, Han T, Liu H, Zhou J. Preoperative surgical risk assessment of meningiomas: a narrative review based on MRI radiomics. Neurosurg Rev 2022; 46:29. [PMID: 36576657 DOI: 10.1007/s10143-022-01937-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2022] [Revised: 12/08/2022] [Accepted: 12/22/2022] [Indexed: 12/29/2022]
Abstract
Meningiomas are among the most common intracranial primary central nervous system tumors. Regardless of pathological grading and histological subtype, maximum safe resection is the recommended treatment option for meningiomas. However, considering tumor heterogeneity, surgical treatment options and prognosis often vary greatly among meningiomas. Therefore, an accurate preoperative surgical risk assessment of meningiomas is of great clinical importance, as it helps develop surgical treatment strategies and improve patient prognosis. In recent years, an increasing number of studies have shown that magnetic resonance imaging (MRI) radiomics has wide-ranging applications in the diagnosis, differentiation, and prognostic evaluation of brain tumors. The importance of MRI radiomics in the surgical risk assessment of meningiomas should therefore be recognized and emphasized in clinical practice. This narrative review summarizes the current research status of MRI radiomics in the preoperative surgical risk assessment of meningiomas, focusing on its applications in preoperative pathological grading, assessment of surrounding tissue invasion, and evaluation of tumor consistency. We further analyze the prospects of MRI radiomics in the preoperative assessment of meningioma angiogenesis and adhesion to surrounding tissues, while pointing out the current challenges of MRI radiomics research.
Affiliation(s)
- Xianwang Liu
  - Department of Radiology, Lanzhou University Second Hospital, Chengguan District, Cuiyingmen No.82, Lanzhou, 730030, People's Republic of China
  - Second Clinical School, Lanzhou University, Lanzhou, People's Republic of China
  - Key Laboratory of Medical Imaging of Gansu Province, Lanzhou, People's Republic of China
  - Gansu International Scientific and Technological Cooperation Base of Medical Imaging Artificial Intelligence, Lanzhou, People's Republic of China
- Yuzhu Wang
  - Second Clinical School, Lanzhou University, Lanzhou, People's Republic of China
  - Key Laboratory of Medical Imaging of Gansu Province, Lanzhou, People's Republic of China
- Tao Han
  - Department of Radiology, Lanzhou University Second Hospital, Chengguan District, Cuiyingmen No.82, Lanzhou, 730030, People's Republic of China
  - Second Clinical School, Lanzhou University, Lanzhou, People's Republic of China
  - Key Laboratory of Medical Imaging of Gansu Province, Lanzhou, People's Republic of China
  - Gansu International Scientific and Technological Cooperation Base of Medical Imaging Artificial Intelligence, Lanzhou, People's Republic of China
- Hong Liu
  - Department of Radiology, Lanzhou University Second Hospital, Chengguan District, Cuiyingmen No.82, Lanzhou, 730030, People's Republic of China
  - Second Clinical School, Lanzhou University, Lanzhou, People's Republic of China
  - Key Laboratory of Medical Imaging of Gansu Province, Lanzhou, People's Republic of China
  - Gansu International Scientific and Technological Cooperation Base of Medical Imaging Artificial Intelligence, Lanzhou, People's Republic of China
- Junlin Zhou
  - Department of Radiology, Lanzhou University Second Hospital, Chengguan District, Cuiyingmen No.82, Lanzhou, 730030, People's Republic of China
  - Second Clinical School, Lanzhou University, Lanzhou, People's Republic of China
  - Key Laboratory of Medical Imaging of Gansu Province, Lanzhou, People's Republic of China
  - Gansu International Scientific and Technological Cooperation Base of Medical Imaging Artificial Intelligence, Lanzhou, People's Republic of China
|
13
|
Beyond Imaging and Genetic Signature in Glioblastoma: Radiogenomic Holistic Approach in Neuro-Oncology. Biomedicines 2022; 10:biomedicines10123205. [PMID: 36551961 PMCID: PMC9775324 DOI: 10.3390/biomedicines10123205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/02/2022] [Accepted: 12/05/2022] [Indexed: 12/14/2022] Open
Abstract
Glioblastoma (GBM) is a malignant brain tumor exhibiting rapid and infiltrative growth, with less than 10% of patients surviving over 5 years despite aggressive and multimodal treatments. The poor prognosis and the lack of effective pharmacological treatments are attributable to the remarkable histological and molecular heterogeneity of GBM, which has led, to date, to the failure of precision oncology and targeted therapies. Identification of molecular biomarkers is a paradigm for comprehensive and tailored treatments; nevertheless, biopsy sampling has proved to be invasive and limited. Radiogenomics is an emerging translational field of research that aims to study the correlation between radiographic signatures and underlying gene expression. Although still under development and not yet incorporated into routine clinical practice, it promises to be a useful non-invasive tool for future personalized/adaptive neuro-oncology. This review provides an up-to-date summary of recent advances in the use of magnetic resonance imaging (MRI) radiogenomics for assessing molecular markers of interest in GBM with regard to prognosis and response to treatment, and for monitoring recurrence, also providing insights into the potential efficacy of such an approach for survival prognostication. Despite high sensitivity and specificity in almost all studies, the accuracy, reproducibility and clinical value of radiomic features remain the Achilles' heel of this nascent tool. Looking to the future, investigators' efforts should be directed towards standardization and a disciplined approach to data collection, algorithms, and statistical analysis.
|
14
|
Predicting Meningioma Resection Status: Use of Deep Learning. Acad Radiol 2022:S1076-6332(22)00518-9. [DOI: 10.1016/j.acra.2022.10.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 09/20/2022] [Accepted: 10/03/2022] [Indexed: 11/24/2022]
|
15
|
Boaro A, Kaczmarzyk JR, Kavouridis VK, Harary M, Mammi M, Dawood H, Shea A, Cho EY, Juvekar P, Noh T, Rana A, Ghosh S, Arnaout O. Deep neural networks allow expert-level brain meningioma segmentation and present potential for improvement of clinical practice. Sci Rep 2022; 12:15462. [PMID: 36104424 PMCID: PMC9474556 DOI: 10.1038/s41598-022-19356-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Accepted: 08/29/2022] [Indexed: 11/20/2022] Open
Abstract
Accurate brain meningioma segmentation and volumetric assessment are critical for serial patient follow-up, surgical planning and monitoring response to treatment. The current gold standard of manual labeling is a time-consuming process subject to inter-user variability. Fully automated algorithms for meningioma segmentation have the potential to bring volumetric analysis into clinical and research workflows by increasing accuracy and efficiency, reducing inter-user variability and saving time. Previous research has focused solely on segmentation tasks without assessment of the impact and usability of deep learning solutions in clinical practice. Herein, we demonstrate a three-dimensional convolutional neural network (3D-CNN) that performs expert-level, automated meningioma segmentation and volume estimation on MRI scans. A 3D-CNN was initially trained by segmenting entire brain volumes using a dataset of 10,099 healthy brain MRIs. Using transfer learning, the network was then specifically trained on meningioma segmentation using 806 expert-labeled MRIs. The final model achieved a median performance of 88.2%, within the spectrum of current inter-expert variability (82.6–91.6%). We demonstrate in a simulated clinical scenario that a deep learning approach to meningioma segmentation is feasible, highly accurate and has the potential to improve current clinical practice.
|
16
|
Machine Learning for the Detection and Segmentation of Benign Tumors of the Central Nervous System: A Systematic Review. Cancers (Basel) 2022; 14:cancers14112676. [PMID: 35681655 PMCID: PMC9179850 DOI: 10.3390/cancers14112676] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Revised: 05/18/2022] [Accepted: 05/26/2022] [Indexed: 11/20/2022] Open
Abstract
Simple Summary Machine learning in radiology of the central nervous system has seen many interesting publications in the past few years. Since the focus has largely been on malignant tumors such as brain metastases and high-grade gliomas, we conducted a systematic review on benign tumors to summarize what has been published and where there might be gaps in the research. We found several studies that report good results, but the descriptions of methodologies could be improved to enable better comparisons and assessment of biases. Abstract Objectives: To summarize the available literature on using machine learning (ML) for the detection and segmentation of benign tumors of the central nervous system (CNS) and to assess the adherence of published ML/diagnostic accuracy studies to best practice. Methods: The MEDLINE database was searched for the use of ML in patients with any benign tumor of the CNS, and the records were screened according to PRISMA guidelines. Results: Eleven retrospective studies focusing on meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1) were included. The majority of studies attempted segmentation. Links to repositories containing code were provided in two manuscripts, and no manuscripts shared imaging data. Only one study used an external test set, which raises the question as to whether some of the good performances that have been reported were caused by overfitting and may not generalize to data from other institutions. Conclusions: Using ML for detecting and segmenting benign brain tumors is still in its infancy. Stronger adherence to ML best practices could facilitate easier comparisons between studies and contribute to the development of models that are more likely to one day be used in clinical practice.
|
17
|
Burti S, Zotti A, Bonsembiante F, Contiero B, Banzato T. A Machine Learning-Based Approach for Classification of Focal Splenic Lesions Based on Their CT Features. Front Vet Sci 2022; 9:872618. [PMID: 35585859 PMCID: PMC9108536 DOI: 10.3389/fvets.2022.872618] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Accepted: 04/11/2022] [Indexed: 11/20/2022] Open
Abstract
The aim of the study was to describe the CT features of focal splenic lesions (FSLs) in dogs in order to predict lesion histotype. Dogs that underwent a CT scan and had an FSL diagnosed by cytology or histopathology were retrospectively included in the study. For the statistical analysis, the cases were divided into four groups based on the results of cytopathology or histopathology, namely: nodular hyperplasia (NH), other benign lesions (OBLs), sarcoma (SA), and round cell tumour (RCT). Several qualitative and quantitative CT features were described for each case. The relationship between each individual CT feature and the histopathological groups was explored by means of the chi-square test for count data and by means of Kruskal-Wallis or ANOVA for continuous data. Furthermore, the main features of each group were described using factorial discriminant analysis, and a decision tree for lesion classification was then developed. Sarcomas were characterised by large dimensions, a cystic appearance and an overall low post-contrast enhancement. NH and OBLs were characterised by small dimensions, a solid appearance and high post-contrast enhancement. OBLs showed higher post-contrast values than NH. Lastly, RCTs did not exhibit any distinctive CT features. The proposed decision tree had a high accuracy for the classification of SA (0.89) and a moderate accuracy for the classification of OBLs and NH (0.79), whereas it was unable to classify RCTs. The results of the factorial analysis and the proposed decision tree could help the clinician in classifying FSLs based on their CT features. A definitive FSL diagnosis can only be obtained by microscopic examination of the spleen.
Affiliation(s)
- Silvia Burti
  - Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Padua, Italy
- Alessandro Zotti
  - Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Padua, Italy
- Federico Bonsembiante
  - Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Padua, Italy
  - Department of Comparative Biomedicine and Food Science, University of Padua, Viale dell'Università 16, Padua, Italy
- Barbara Contiero
  - Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Padua, Italy
- Tommaso Banzato
  - Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Padua, Italy
  - Correspondence: Tommaso Banzato
|
18
|
Brunasso L, Ferini G, Bonosi L, Costanzo R, Musso S, Benigno UE, Gerardi RM, Giammalva GR, Paolini F, Umana GE, Graziano F, Scalia G, Sturiale CL, Di Bonaventura R, Iacopino DG, Maugeri R. A Spotlight on the Role of Radiomics and Machine-Learning Applications in the Management of Intracranial Meningiomas: A New Perspective in Neuro-Oncology: A Review. Life (Basel) 2022; 12:life12040586. [PMID: 35455077 PMCID: PMC9026541 DOI: 10.3390/life12040586] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 04/05/2022] [Accepted: 04/06/2022] [Indexed: 12/12/2022] Open
Abstract
Background: In recent decades, the application of machine learning technologies to medical imaging has opened up new perspectives in neuro-oncology, in the so-called field of radiomics. Radiomics offers new insight into gliomas, aiding clinical decision-making and the evaluation of patient prognosis. Although meningiomas represent the most common primary CNS tumor and the majority of them are benign, slow-growing tumors, a minority show more aggressive behavior, with an increased proliferation rate and a tendency to recur. Therefore, their treatment may represent a challenge. Methods: According to PRISMA guidelines, a systematic literature review was performed. We included selected articles (meta-analyses, reviews, retrospective studies, and case–control studies) concerning the application of radiomics methods in the preoperative diagnostic and prognostic algorithm, and planning, for intracranial meningiomas. We also analyzed the contribution of radiomics to differentiating meningiomas from other CNS tumors with similar radiological features. Results: In the first research stage, 273 papers were identified. After careful screening according to the inclusion/exclusion criteria, 39 articles were included in this systematic review. Conclusions: Several preoperative features have been identified that improve preoperative intracranial meningioma assessment for guiding decision-making processes. The development of valid and reliable non-invasive diagnostic and prognostic modalities could have a significant clinical impact on meningioma treatment.
Affiliation(s)
- Lara Brunasso
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Gianluca Ferini
  - Department of Radiation Oncology, REM Radioterapia SRL, 95125 Catania, Italy
- Lapo Bonosi
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Roberta Costanzo
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Sofia Musso
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Umberto E. Benigno
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Rosa M. Gerardi
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Giuseppe R. Giammalva
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Federica Paolini
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Giuseppe E. Umana
  - Gamma Knife Center, Trauma Center, Department of Neurosurgery, Cannizzaro Hospital, 95100 Catania, Italy
- Francesca Graziano
  - Unit of Neurosurgery, Garibaldi Hospital, 95124 Catania, Italy
- Gianluca Scalia
  - Unit of Neurosurgery, Garibaldi Hospital, 95124 Catania, Italy
- Carmelo L. Sturiale
  - Division of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli IRCCS, Università Cattolica del Sacro Cuore, 00100 Rome, Italy
- Rina Di Bonaventura
  - Division of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli IRCCS, Università Cattolica del Sacro Cuore, 00100 Rome, Italy
- Domenico G. Iacopino
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Rosario Maugeri
  - Neurosurgical Clinic AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
|
19
|
Chen H, Li S, Zhang Y, Liu L, Lv X, Yi Y, Ruan G, Ke C, Feng Y. Deep learning-based automatic segmentation of meningioma from multiparametric MRI for preoperative meningioma differentiation using radiomic features: a multicentre study. Eur Radiol 2022; 32:7248-7259. [PMID: 35420299 DOI: 10.1007/s00330-022-08749-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2021] [Revised: 02/18/2022] [Accepted: 03/14/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVES To develop and evaluate a deep learning-based automatic meningioma segmentation method for preoperative meningioma differentiation using radiomic features. METHODS A retrospective multicentre inclusion of MR examinations (T1/T2-weighted and contrast-enhanced T1-weighted imaging) was conducted. Data from centre 1 were allocated to training (n = 307, age = 50.94 ± 11.51) and internal testing (n = 238, age = 50.70 ± 12.72) cohorts, and data from centre 2 to the external testing cohort (n = 64, age = 48.45 ± 13.59). A modified attention U-Net was trained for meningioma segmentation. Segmentation accuracy was evaluated by five quantitative metrics. The agreement between radiomic features from manual and automatic segmentations was assessed using the intraclass correlation coefficient (ICC). After univariate and minimum-redundancy-maximum-relevance feature selection, L1-regularized logistic regression models for differentiating between low-grade (I) and high-grade (II and III) meningiomas were separately constructed using manual and automatic segmentations; their performances were evaluated using ROC analysis. RESULTS Dice coefficients of meningioma segmentation for the internal testing cohort were 0.94 ± 0.04 and 0.91 ± 0.05 for tumour volumes in contrast-enhanced T1-weighted and T2-weighted images, respectively; those for the external testing cohort were 0.90 ± 0.07 and 0.88 ± 0.07. Features extracted using manual and automatic segmentations agreed well for both the internal (ICC = 0.94, interquartile range: 0.88-0.97) and external (ICC = 0.90, interquartile range: 0.78-0.96) testing cohorts. The AUC of the radiomic model with automatic segmentation was comparable with that of the model with manual segmentation for both the internal (0.95 vs. 0.93, p = 0.176) and external (0.88 vs. 0.91, p = 0.419) testing cohorts.
CONCLUSIONS The developed deep learning-based segmentation method enables automatic and accurate extraction of meningioma from multiparametric MR images and can help deploy radiomics for preoperative meningioma differentiation in clinical practice. KEY POINTS • A deep learning-based method was developed for automatic segmentation of meningioma from multiparametric MR images. • The automatic segmentation method enabled accurate extraction of meningiomas and yielded radiomic features that were highly consistent with those that were obtained using manual segmentation. • High-grade meningiomas were preoperatively differentiated from low-grade meningiomas using a radiomic model constructed on features from automatic segmentation.
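The AUCs compared above can be read as a ranking probability: the chance that a randomly chosen high-grade case receives a higher model score than a randomly chosen low-grade case (the Mann-Whitney interpretation of the ROC AUC). A minimal sketch with made-up scores, not the authors' code:

```python
def auc_rank(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) score pairs ranked
    correctly, counting ties as half (Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores: high-grade (positive) vs. low-grade (negative) cases.
high_grade = [0.9, 0.8, 0.6]
low_grade = [0.7, 0.4, 0.3, 0.2]
print(round(auc_rank(high_grade, low_grade), 3))  # 0.917
```

Under this reading, the reported external AUC of 0.88 means the automatic-segmentation radiomic model ranks a high-grade meningioma above a low-grade one about 88% of the time.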
Affiliation(s)
- Haolin Chen
- School of Biomedical Engineering, Southern Medical University, 1023 Shatainan Road, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China; Guangdong-Hong Kong-Macao Greater Bay Area Centre for Brain Science and Brain-Inspired Intelligence & Key Laboratory of Mental Health of the Ministry of Education, Guangzhou, China
- Shuqi Li
- Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou, China; State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China; Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China
- Youming Zhang
- Department of Radiology, Xiangya Hospital, Central South University, Changsha, China
- Lizhi Liu
- Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou, China; State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China; Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China
- Xiaofei Lv
- Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou, China; State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China; Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China
- Yongju Yi
- School of Biomedical Engineering, Southern Medical University, 1023 Shatainan Road, Guangzhou, 510515, China; Network Information Centre, The Sixth Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Guangying Ruan
- Department of Radiology, Sun Yat-Sen University Cancer Centre, Guangzhou, China; State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China; Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China
- Chao Ke
- State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Centre, Guangzhou, China; Collaborative Innovation Centre for Cancer Medicine, Sun Yat-Sen University Cancer Centre, Guangzhou, China; Department of Neurosurgery and Neuro-oncology, Sun Yat-Sen University Cancer Centre, 651 Dongfeng East Road, Guangzhou, 510060, China
- Yanqiu Feng
- School of Biomedical Engineering, Southern Medical University, 1023 Shatainan Road, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China; Guangdong-Hong Kong-Macao Greater Bay Area Centre for Brain Science and Brain-Inspired Intelligence & Key Laboratory of Mental Health of the Ministry of Education, Guangzhou, China; Department of Rehabilitation, Zhujiang Hospital, Southern Medical University, Guangzhou, China
20
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051] [PMCID: PMC9007400] [DOI: 10.1186/s12880-022-00793-7]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge from similar tasks learned in advance. It has made a major contribution to medical image analysis, as it mitigates the data-scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task. METHODS 425 peer-reviewed articles published in English up until December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
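The "feature extractor" approach recommended here freezes the pretrained backbone and trains only a small classifier head on the extracted features. A minimal NumPy sketch of that pattern, with a fixed random projection standing in for the frozen ImageNet backbone (the projection, synthetic data, and least-squares head are illustrative assumptions, not the review's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed, untrainable mapping from
# raw inputs to features. In the reviewed studies this would be, e.g., a
# ResNet or Inception trunk with ImageNet weights kept frozen.
W_frozen = rng.normal(size=(64, 16))
def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)          # frozen trunk + ReLU, never trained

# Synthetic two-class data standing in for two image classes.
X0 = rng.normal(loc=0.0, size=(100, 64))
X1 = rng.normal(loc=1.0, size=(100, 64))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Only the small linear head is fitted (here by least squares on +/-1 targets).
F = extract_features(X)
F = np.hstack([F, np.ones((len(F), 1))])          # bias column
w, *_ = np.linalg.lstsq(F, 2 * y - 1, rcond=None)
train_acc = np.mean((F @ w > 0) == (y == 1))
print(f"training accuracy of the linear head: {train_acc:.2f}")
```

Because the backbone is never updated, only the tiny head is optimized, which is why this approach saves computational cost relative to fine-tuning the whole network.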
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany.
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
21
Automatic differentiation of Grade I and II meningiomas on magnetic resonance image using an asymmetric convolutional neural network. Sci Rep 2022; 12:3806. [PMID: 35264655] [PMCID: PMC8907289] [DOI: 10.1038/s41598-022-07859-0]
Abstract
The grade of a meningioma has significant implications for selecting treatment regimens, ranging from observation to surgical resection with adjuvant radiation. For most patients, meningiomas are diagnosed radiologically, and grade is not determined unless a surgical procedure is performed. The goal of this study is to train a novel auto-classification network to determine Grade I and II meningiomas using T1-contrast-enhanced (T1-CE) and T2-fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. Ninety-six consecutive treatment-naïve patients with pre-operative T1-CE and T2-FLAIR MR images and subsequently pathologically diagnosed intracranial meningiomas were evaluated. Delineation of meningiomas was completed on both MR images. A novel asymmetric 3D convolutional neural network (CNN) architecture was constructed with two encoding paths based on T1-CE and T2-FLAIR. Each path used the same 3 × 3 × 3 kernel with different filters to weigh the spatial features of each sequence separately. Final model performance was assessed by tenfold cross-validation. Of the 96 patients, 55 (57%) were pathologically classified as Grade I and 41 (43%) as Grade II meningiomas. Optimization of our model led to a filter weighting of 18:2 between the T1-CE and T2-FLAIR MR image paths. 86 (90%) patients were classified correctly and 10 (10%) were misclassified based on their pre-operative MRIs, with a model sensitivity of 0.85 and specificity of 0.93. Among the misclassified, 4 were Grade I and 6 were Grade II. The model is robust to tumor locations and sizes. A novel asymmetric CNN with two differently weighted encoding paths was developed for successful automated meningioma grade classification. Our model outperforms CNNs using a single path for single- or multimodal MR-based classification.
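The reported sensitivity (0.85) and specificity (0.93) can be reproduced from the counts in the abstract, taking Grade II as the positive class; a quick arithmetic check in Python:

```python
# Counts reported in the abstract, taking Grade II as the positive class.
n_grade1, n_grade2 = 55, 41            # pathological Grade I / Grade II patients
miscls_grade1, miscls_grade2 = 4, 6    # misclassified within each group

tp = n_grade2 - miscls_grade2          # 35 Grade II correctly identified
tn = n_grade1 - miscls_grade1          # 51 Grade I correctly identified
fp, fn = miscls_grade1, miscls_grade2

sensitivity = tp / (tp + fn)           # 35/41, about 0.85
specificity = tn / (tn + fp)           # 51/55, about 0.93
accuracy = (tp + tn) / (n_grade1 + n_grade2)  # 86/96, about 0.90
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, accuracy={accuracy:.2f}")
```

The three figures match the abstract's 0.85, 0.93, and 90% exactly, confirming the counts and metrics are internally consistent.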
22
Galldiks N, Angenstein F, Werner JM, Bauer EK, Gutsche R, Fink GR, Langen KJ, Lohmann P. Use of advanced neuroimaging and artificial intelligence in meningiomas. Brain Pathol 2022; 32:e13015. [PMID: 35213083] [PMCID: PMC8877736] [DOI: 10.1111/bpa.13015]
Abstract
Anatomical cross-sectional imaging methods such as contrast-enhanced MRI and CT are the standard for the delineation, treatment planning, and follow-up of patients with meningioma. In addition, advanced neuroimaging is increasingly used to non-invasively provide detailed insights into the molecular and metabolic features of meningiomas. These techniques include MRI-based methods, e.g., perfusion-weighted imaging, diffusion-weighted imaging, and MR spectroscopy, as well as positron emission tomography. Furthermore, artificial intelligence methods such as radiomics offer the potential to extract quantitative imaging features from routinely acquired anatomical MRI and CT scans and advanced imaging techniques. This allows the linking of imaging phenotypes to meningioma characteristics, e.g., the molecular-genetic profile. Here, we review several diagnostic applications and future directions of these advanced neuroimaging techniques, including radiomics, in preclinical models and patients with meningioma.
Affiliation(s)
- Norbert Galldiks
- Department of Neurology, Faculty of Medicine, University Hospital Cologne, University of Cologne, Cologne, Germany; Institute of Neuroscience and Medicine (INM-3, -4), Research Center Juelich, Juelich, Germany; Center for Integrated Oncology (CIO), Universities of Aachen, Cologne, Germany
- Frank Angenstein
- Functional Neuroimaging Group, Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE), Magdeburg, Germany; Leibniz Institute for Neurobiology (LIN), Magdeburg, Germany; Medical Faculty, Otto von Guericke University, Magdeburg, Germany
- Jan-Michael Werner
- Department of Neurology, Faculty of Medicine, University Hospital Cologne, University of Cologne, Cologne, Germany
- Elena K Bauer
- Department of Neurology, Faculty of Medicine, University Hospital Cologne, University of Cologne, Cologne, Germany
- Robin Gutsche
- Institute of Neuroscience and Medicine (INM-3, -4), Research Center Juelich, Juelich, Germany
- Gereon R Fink
- Department of Neurology, Faculty of Medicine, University Hospital Cologne, University of Cologne, Cologne, Germany; Institute of Neuroscience and Medicine (INM-3, -4), Research Center Juelich, Juelich, Germany
- Karl-Josef Langen
- Institute of Neuroscience and Medicine (INM-3, -4), Research Center Juelich, Juelich, Germany; Center for Integrated Oncology (CIO), Universities of Aachen, Cologne, Germany; Department of Nuclear Medicine, University Hospital Aachen, Aachen, Germany
- Philipp Lohmann
- Institute of Neuroscience and Medicine (INM-3, -4), Research Center Juelich, Juelich, Germany; Department of Stereotaxy and Functional Neurosurgery, Faculty of Medicine, University Hospital Cologne, University of Cologne, Cologne, Germany
23
Yang L, Xu P, Zhang Y, Cui N, Wang M, Peng M, Gao C, Wang T. A deep learning radiomics model may help to improve the prediction performance of preoperative grading in meningioma. Neuroradiology 2022; 64:1373-1382. [PMID: 35037985] [DOI: 10.1007/s00234-022-02894-0]
Abstract
PURPOSE This study aimed to investigate the clinical usefulness of an enhanced-T1WI-based deep learning radiomics model (DLRM) in differentiating low- and high-grade meningiomas. METHODS A total of 132 patients with pathologically confirmed meningiomas were consecutively enrolled (105 in the training cohort and 27 in the test cohort). Radiomics features and deep learning features were extracted from T1-weighted images (T1WI) (both axial and sagittal) and from the maximum slice of the axial tumor lesion, respectively. The synthetic minority oversampling technique (SMOTE) was then utilized to balance the sample numbers. The optimal discriminative features were selected for model building. The LightGBM algorithm was used to develop the DLRM from a combination of radiomics features and deep learning features. For comparison, a radiomics model (RM) and a deep learning model (DLM) were constructed using a similar method. Differentiating efficacy was determined using receiver operating characteristic (ROC) analysis. RESULTS A total of 15 features were selected to construct the DLRM with SMOTE, which showed good discrimination performance in both the training and test cohorts. The DLRM outperformed the RM and DLM for differentiating low- and high-grade meningiomas (training AUC: 0.988 vs. 0.980 vs. 0.892; test AUC: 0.935 vs. 0.918 vs. 0.718). The accuracy, sensitivity, and specificity of the DLRM with SMOTE were 0.926, 0.900, and 0.924 in the test cohort, respectively. CONCLUSION The DLRM with SMOTE based on enhanced T1WI images performs well for noninvasive, individualized prediction of meningioma grade, and its clinical usefulness was superior to that of the radiomics features alone.
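SMOTE, used above to balance the cohorts, synthesizes minority-class samples by interpolating between a minority sample and one of its nearest minority neighbours. A simplified NumPy sketch of that core idea (illustrative only; the study's implementation details are not given in the abstract):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, rng=None):
    """Create n_new synthetic minority samples by interpolating between a
    randomly chosen minority sample and one of its k nearest minority
    neighbours (the core idea of SMOTE, without its full bookkeeping)."""
    if rng is None:
        rng = np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()                        # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(1)
minority = rng.normal(size=(10, 5))               # 10 minority-class samples, 5 features
new = smote_like_oversample(minority, n_new=15, rng=rng)
print(new.shape)  # (15, 5)
```

Because every synthetic point lies on a segment between two real minority samples, the new samples stay inside the minority class's feature range rather than duplicating existing points.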
Affiliation(s)
- Liping Yang
- PET-CT/MR Department, Harbin Medical University Cancer Hospital, Harbin, China
- Panpan Xu
- PET-CT/MR Department, Harbin Medical University Cancer Hospital, Harbin, China
- Ying Zhang
- PET-CT/MR Department, Harbin Medical University Cancer Hospital, Harbin, China
- Nan Cui
- PET-CT/MR Department, Harbin Medical University Cancer Hospital, Harbin, China
- Menglu Wang
- PET-CT/MR Department, Harbin Medical University Cancer Hospital, Harbin, China
- Mengye Peng
- PET-CT/MR Department, Harbin Medical University Cancer Hospital, Harbin, China
- Chao Gao
- Medical Imaging Department, The Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Tianzuo Wang
- Medical Imaging Department, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
24
Danilov GV, Shifrin MA, Kotik KV, Ishankulov TA, Orlov YN, Kulikov AS, Potapov AA. Artificial Intelligence Technologies in Neurosurgery: a Systematic Literature Review Using Topic Modeling. Part II: Research Objectives and Perspectives. Sovrem Tekhnologii Med 2021; 12:111-118. [PMID: 34796024] [PMCID: PMC8596229] [DOI: 10.17691/stm2020.12.6.12]
Abstract
The current increase in the number of publications on the use of artificial intelligence (AI) technologies in neurosurgery indicates a new trend in clinical neuroscience. The aim of the study was to conduct a systematic literature review to highlight the main directions and trends in the use of AI in neurosurgery.
Affiliation(s)
- G V Danilov
- Scientific Board Secretary; N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia; Head of the Laboratory of Biomedical Informatics and Artificial Intelligence; N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- M A Shifrin
- Scientific Consultant, Laboratory of Biomedical Informatics and Artificial Intelligence; N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- K V Kotik
- Physics Engineer, Laboratory of Biomedical Informatics and Artificial Intelligence; N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- T A Ishankulov
- Engineer, Laboratory of Biomedical Informatics and Artificial Intelligence; N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- Yu N Orlov
- Head of the Department of Computational Physics and Kinetic Equations; Keldysh Institute of Applied Mathematics, Russian Academy of Sciences, 4 Miusskaya Sq., Moscow, 125047, Russia
- A S Kulikov
- Staff Anesthesiologist; N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- A A Potapov
- Professor, Academician of the Russian Academy of Sciences, Chief Scientific Supervisor; N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
25
Banzato T, Wodzinski M, Tauceri F, Donà C, Scavazza F, Müller H, Zotti A. An AI-Based Algorithm for the Automatic Classification of Thoracic Radiographs in Cats. Front Vet Sci 2021; 8:731936. [PMID: 34722699] [PMCID: PMC8554083] [DOI: 10.3389/fvets.2021.731936]
Abstract
An artificial intelligence (AI)-based computer-aided detection (CAD) algorithm to detect some of the most common radiographic findings in the feline thorax was developed and tested. The database used for training comprised radiographs acquired at two different institutions. Only correctly exposed and positioned radiographs were included in the database used for training. The presence of several radiographic findings was recorded. Consequently, the radiographic findings included for training were: no findings, bronchial pattern, pleural effusion, mass, alveolar pattern, pneumothorax, and cardiomegaly. Multi-label convolutional neural networks (CNNs) were used to develop the CAD algorithm, and the performance of two different CNN architectures, ResNet 50 and Inception V3, was compared. Both architectures had an area under the receiver operating characteristic curve (AUC) above 0.9 for alveolar pattern, bronchial pattern, and pleural effusion, an AUC above 0.8 for no findings and pneumothorax, and an AUC above 0.7 for cardiomegaly. The AUC for mass was low (above 0.5) for both architectures. No significant differences were evident in the diagnostic accuracy of either architecture.
Affiliation(s)
- Tommaso Banzato
- Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
- Marek Wodzinski
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; Information Systems Institute, University of Applied Sciences - Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Federico Tauceri
- Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
- Chiara Donà
- Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
- Filippo Scavazza
- Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
- Henning Müller
- Information Systems Institute, University of Applied Sciences - Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Alessandro Zotti
- Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
26
Radiomics machine learning study with a small sample size: Single random training-test set split may lead to unreliable results. PLoS One 2021; 16:e0256152. [PMID: 34383858] [PMCID: PMC8360533] [DOI: 10.1371/journal.pone.0256152]
Abstract
This study aims to determine how randomly splitting a dataset into training and test sets affects the estimated performance of a machine learning model, and its gap from the test performance, under different conditions, using real-world brain tumor radiomics data. We conducted two classification tasks of different difficulty levels with magnetic resonance imaging (MRI) radiomics features: (1) a "simple" task, glioblastomas [n = 109] vs. brain metastases [n = 58], and (2) a "difficult" task, low- [n = 163] vs. high-grade [n = 95] meningiomas. Additionally, two undersampled datasets were created by randomly sampling 50% from these datasets. We performed random training-test set splitting for each dataset repeatedly to create 1,000 different training-test set pairs. For each dataset pair, a least absolute shrinkage and selection operator model was trained and evaluated using various validation methods in the training set and tested in the test set, using the area under the curve (AUC) as the evaluation metric. The AUCs in training and testing varied among the different training-test set pairs, especially with the undersampled datasets and the difficult task. The mean (± standard deviation) AUC difference between training and testing was 0.039 (± 0.032) for the simple task without undersampling and 0.092 (± 0.071) for the difficult task with undersampling. In one training-test set pair for the difficult task without undersampling, for example, the AUC was high in training but much lower in testing (0.882 and 0.667, respectively); in another dataset pair for the same task, however, the AUC was low in training but much higher in testing (0.709 and 0.911, respectively). When the AUC discrepancy between training and testing (the generalization gap) was large, none of the validation methods helped sufficiently reduce it. Our results suggest that machine learning after a single random training-test set split may lead to unreliable results in radiomics studies, especially with small sample sizes.
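The instability described here is easy to reproduce on synthetic data: with a small, weakly separable sample, the test AUC of even a simple classifier swings widely across random splits. A NumPy sketch (the data, the nearest-centroid classifier, and the split sizes are illustrative assumptions, not the study's setup):

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability that a positive outscores a negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n, d = 60, 10                                  # deliberately small sample
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1] += 0.3                               # weak class separation

test_aucs = []
for _ in range(200):                           # 200 random 70/30 splits
    idx = rng.permutation(n)
    tr, te = idx[:42], idx[42:]
    if len(np.unique(y[te])) < 2:              # AUC needs both classes in the test set
        continue
    # Nearest-centroid direction fitted on the training split only.
    w = X[tr][y[tr] == 1].mean(axis=0) - X[tr][y[tr] == 0].mean(axis=0)
    test_aucs.append(auc(X[te] @ w, y[te]))

test_aucs = np.array(test_aucs)
print(f"test AUC over random splits: mean={test_aucs.mean():.2f}, "
      f"min={test_aucs.min():.2f}, max={test_aucs.max():.2f}")
```

The wide min-to-max spread printed here mirrors the paper's point: a single lucky or unlucky split can make the same pipeline look either excellent or poor.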
27
Automatic Meningioma Segmentation and Grading Prediction: A Hybrid Deep-Learning Method. J Pers Med 2021; 11:jpm11080786. [PMID: 34442431] [PMCID: PMC8401675] [DOI: 10.3390/jpm11080786]
Abstract
The purpose of this study was to determine whether a deep-learning-based assessment system could facilitate preoperative grading of meningioma. This was a retrospective study conducted at two institutions covering 643 patients. The system, designed with a cascade network structure, was developed using deep-learning technology for automatic tumor detection, visual assessment, and grading prediction. Specifically, a modified U-Net convolutional neural network was first established to segment tumor images. Subsequently, the segmentations were introduced into rendering algorithms for spatial reconstruction and into another, DenseNet-based convolutional neural network for grading prediction. The trained models were integrated as a system, and robustness was tested based on performance on an external dataset from the second institution involving different magnetic resonance imaging platforms. The results showed that the segmentation model performed notably well, with Dice coefficients of 0.920 ± 0.009 in the validation group. With accurately segmented tumor images, the rendering model faithfully reconstructed the tumor body and clearly displayed the important intracranial vessels. The DenseNet model also achieved high accuracy, with an area under the curve of 0.918 ± 0.006 and accuracy of 0.901 ± 0.039 when classifying tumors into low-grade and high-grade meningiomas. Moreover, the system exhibited good performance on the external validation dataset.
28
McEvoy FJ, Proschowsky HF, Müller AV, Moorman L, Bender-Koch J, Svalastoga EL, Frellsen J, Nielsen DH. Deep transfer learning can be used for the detection of hip joints in pelvis radiographs and the classification of their hip dysplasia status. Vet Radiol Ultrasound 2021; 62:387-393. [PMID: 33818829] [DOI: 10.1111/vru.12968]
Abstract
Reports of machine learning implementations in veterinary imaging are infrequent, but changes in machine learning architecture and access to increased computing power will likely prompt increased interest. This diagnostic accuracy study describes a particular form of machine learning, a deep learning convolutional neural network (ConvNet), for hip joint detection and classification of hip dysplasia from ventro-dorsal (VD) pelvis radiographs submitted for hip dysplasia screening. 11,759 pelvis images were available together with their Fédération Cynologique Internationale (FCI) scores. The dataset was dichotomized into images showing no signs of hip dysplasia (FCI grades "A" and "B", the "A-B" group) and hips showing signs of dysplasia (FCI grades "C", "D", and "E", the "C-E" group). In a transfer learning approach, an existing pretrained ConvNet was fine-tuned to provide models to recognize hip joints in VD pelvis images and to classify them according to their FCI score grouping. The results yielded two models. The first was successful in detecting hip joints in the VD pelvis images (intersection over union of 85%). The second yielded a sensitivity of 0.53, a specificity of 0.92, a positive predictive value of 0.91, and a negative predictive value of 0.81 for the classification of detected hip joints as being in the "C-E" group. ConvNets and transfer learning are applicable to veterinary imaging. The models obtained have the potential to aid in hip screening protocols if hip dysplasia classification performance is improved through access to more data and possibly through model optimization.
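Intersection over union (IoU), the detection metric quoted above, is the overlap area of the detected and reference boxes divided by the area of their union. A minimal sketch for axis-aligned boxes (toy coordinates, not the study's data):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero if the boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection shifted slightly against its ground-truth box.
truth = (10, 10, 50, 50)      # 40 x 40 reference box
detected = (14, 12, 54, 52)   # same size, shifted by (4, 2)
print(round(box_iou(truth, detected), 3))  # 0.747
```

An IoU of 85%, as reported for the hip-joint detector, corresponds to boxes that overlap almost completely, since even the small shift above already drops the score to about 0.75.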
Affiliation(s)
- Fintan J McEvoy
- Department of Veterinary Clinical Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Anna V Müller
- Department of Veterinary Clinical Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Lilah Moorman
- Department of Veterinary Clinical Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Eiliv L Svalastoga
- Department of Veterinary Clinical Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Jes Frellsen
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark
- Dorte H Nielsen
- Department of Veterinary Clinical Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
29
Zegers C, Posch J, Traverso A, Eekers D, Postma A, Backes W, Dekker A, van Elmpt W. Current applications of deep-learning in neuro-oncological MRI. Phys Med 2021; 83:161-173. [DOI: 10.1016/j.ejmp.2021.03.003]
30
Artificial intelligence applications in medical imaging: A review of the medical physics research in Italy. Phys Med 2021; 83:221-241. [DOI: 10.1016/j.ejmp.2021.04.010]
31
Banzato T, Wodzinski M, Burti S, Osti VL, Rossoni V, Atzori M, Zotti A. Automatic classification of canine thoracic radiographs using deep learning. Sci Rep 2021; 11:3964. [PMID: 33597566] [PMCID: PMC7889925] [DOI: 10.1038/s41598-021-83515-3]
Abstract
The interpretation of thoracic radiographs is a challenging and error-prone task for veterinarians. Despite recent advancements in machine learning and computer vision, the development of computer-aided diagnostic systems for radiographs remains a challenging and unsolved problem, particularly in the context of veterinary medicine. In this study, a novel method, based on a multi-label deep convolutional neural network (CNN), for the classification of thoracic radiographs in dogs was developed. All the thoracic radiographs of dogs performed between 2010 and 2020 in the institution were retrospectively collected. Radiographs were taken with two different radiograph acquisition systems and were divided into two data sets accordingly. One data set (Data Set 1) was used for training and testing, and another data set (Data Set 2) was used to test the generalization ability of the CNNs. Radiographic findings used as non-mutually-exclusive labels to train the CNNs were: unremarkable, cardiomegaly, alveolar pattern, bronchial pattern, interstitial pattern, mass, pleural effusion, pneumothorax, and megaesophagus. Two different CNNs, based on the ResNet-50 and DenseNet-121 architectures respectively, were developed and tested. The CNN based on ResNet-50 had an Area Under the Receiver Operating Characteristic Curve (AUC) above 0.8 for all the included radiographic findings except for bronchial and interstitial patterns, on both Data Set 1 and Data Set 2. The CNN based on DenseNet-121 had a lower overall performance. Statistically significant differences in the generalization ability between the two CNNs were evident, with the CNN based on ResNet-50 showing better performance for alveolar pattern, interstitial pattern, megaesophagus, and pneumothorax.
Affiliation(s)
- Tommaso Banzato
- Department of Animal Medicine, Productions, and Health, Legnaro (PD), University of Padua, 35020, Padua, Italy.
- Marek Wodzinski
- Department of Measurement and Electronics, AGH University of Science and Technology, 32059, Kraków, Poland
- Silvia Burti
- Department of Animal Medicine, Productions, and Health, Legnaro (PD), University of Padua, 35020, Padua, Italy
- Valentina Longhin Osti
- Department of Animal Medicine, Productions, and Health, Legnaro (PD), University of Padua, 35020, Padua, Italy
- Valentina Rossoni
- Department of Animal Medicine, Productions, and Health, Legnaro (PD), University of Padua, 35020, Padua, Italy
- Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), 3960, Sierre, Switzerland; Department of Neuroscience, University of Padua, 35128, Padua, Italy
- Alessandro Zotti
- Department of Animal Medicine, Productions, and Health, Legnaro (PD), University of Padua, 35020, Padua, Italy
32
Meningioma Consistency Can Be Defined by Combining the Radiomic Features of Magnetic Resonance Imaging and Ultrasound Elastography. A Pilot Study Using Machine Learning Classifiers. World Neurosurg 2020; 146:e1147-e1159. [PMID: 33259973 DOI: 10.1016/j.wneu.2020.11.113] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 11/19/2020] [Accepted: 11/20/2020] [Indexed: 12/24/2022]
Abstract
BACKGROUND The consistency of meningioma is a factor that may influence surgical planning and the extent of resection. The aim of our study was to develop a predictive model of tumor consistency using the radiomic features of preoperative magnetic resonance imaging and the tumor elasticity measured by intraoperative ultrasound elastography (IOUS-E) as a reference parameter. METHODS A retrospective analysis was performed on supratentorial meningiomas operated on between March 2018 and July 2020. Cases with IOUS-E studies were included. A semiquantitative analysis of elastograms was used to define meningioma consistency. MRIs were preprocessed before extracting radiomic features. Predictive models were built using a combination of feature selection filters and machine learning algorithms: logistic regression, Naive Bayes, k-nearest neighbors, Random Forest, Support Vector Machine, and Neural Network. A stratified 5-fold cross-validation was performed. Models were then evaluated using the area under the curve and classification accuracy. RESULTS Eighteen patients were available for analysis. Meningiomas were classified as hard or soft according to a mean tissue elasticity threshold of 120. The best-ranked radiomic features were obtained from T1-weighted post-contrast, apparent diffusion coefficient map, and T2-weighted images. The combination of Information Gain and ReliefF filters with the Naive Bayes algorithm resulted in an area under the curve of 0.961 and a classification accuracy of 94%. CONCLUSIONS We have developed a high-precision classification model capable of predicting the consistency of meningiomas from radiomic features of preoperative magnetic resonance imaging (T2-weighted, T1-weighted post-contrast, and apparent diffusion coefficient map).
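The validation scheme in this abstract (dichotomizing consistency at the mean-elasticity threshold of 120, then stratified 5-fold cross-validation) can be sketched as follows. This is a hedged illustration: the direction of the hard/soft assignment at the threshold and the fold-assignment details are our assumptions, and the feature filters and classifiers themselves are omitted.

```python
# Sketch of the labelling and stratified 5-fold split described above.
# Assumption: elasticity at or above the threshold is labelled "hard".

def label_consistency(mean_elasticity, threshold=120):
    return "hard" if mean_elasticity >= threshold else "soft"

def stratified_kfold(labels, k=5):
    """Yield (train_idx, test_idx) pairs; indices of each class are
    dealt round-robin across folds so class ratios stay balanced."""
    folds = [[] for _ in range(k)]
    by_class = {}
    for i, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(i)
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test
```

With only 18 patients, stratification matters: an unstratified split could easily leave a fold with no hard (or no soft) cases, making per-fold evaluation metrics undefined.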
33
Wodzinski M, Banzato T, Atzori M, Andrearczyk V, Cid YD, Muller H. Training Deep Neural Networks for Small and Highly Heterogeneous MRI Datasets for Cancer Grading. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1758-1761. [PMID: 33018338 DOI: 10.1109/embc44109.2020.9175634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Using medical images recorded in clinical practice has the potential to be a game-changer in the application of machine learning for medical decision support. Thousands of medical images are produced in daily clinical activity. The diagnoses made by medical doctors on these images represent a source of knowledge for training machine learning algorithms for scientific research or computer-aided diagnosis. However, the requirement of manual data annotations and the heterogeneity of images and annotations make it difficult to develop algorithms that are effective on images from different centers or sources (scanner manufacturers, protocols, etc.). The objective of this article is to explore the opportunities and limits of small, highly heterogeneous biomedical data sets, which pose a challenge for machine learning techniques. In particular, we focus on a small data set targeting meningioma grading. Meningioma grading is crucial for patient treatment and prognosis. It is normally performed by histological examination, but recent articles showed that it can also be performed non-invasively on magnetic resonance images (MRI). Our data set consists of 174 T1-weighted MRI images of patients with meningioma, divided into 126 benign and 48 atypical/anaplastic cases, acquired using 26 different MRI scanners and 125 acquisition protocols, reflecting the enormous variability in the data. The preprocessing steps performed include tumor segmentation, spatial image normalization, and data augmentation based on color and affine transformations. The preprocessed cases are passed to a carefully trained 2-D convolutional neural network. Accuracy above 74% was obtained, with a high-grade tumor recall above 74%. The results are encouraging considering the limited size and high heterogeneity of the data set. The proposed methodology can be useful for other problems involving classification of small and highly heterogeneous data sets.
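The color-and-affine augmentation step mentioned in this abstract is the standard remedy for such small, heterogeneous data sets. Below is a minimal stand-in sketch, with a horizontal flip for the affine part and brightness/contrast jitter for the color part; the parameter ranges are our assumptions, not the paper's.

```python
import random

def augment(image, rng=None):
    """image: 2-D list of pixel intensities in [0, 1]; returns a
    randomly perturbed copy (flip + brightness/contrast jitter)."""
    rng = rng or random.Random(0)
    # Simplified affine part: random horizontal flip.
    if rng.random() < 0.5:
        image = [row[::-1] for row in image]
    # Color part: contrast scaling and brightness shift, clipped to [0, 1].
    c = rng.uniform(0.9, 1.1)
    b = rng.uniform(-0.1, 0.1)
    return [[min(1.0, max(0.0, c * p + b)) for p in row] for row in image]
```

Applied on the fly, each training epoch sees a slightly different version of every scan, which effectively enlarges the 174-image data set without requiring new annotations.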
34
Neromyliotis E, Kalamatianos T, Paschalis A, Komaitis S, Fountas KN, Kapsalaki EZ, Stranjalis G, Tsougos I. Machine Learning in Meningioma MRI: Past to Present. A Narrative Review. J Magn Reson Imaging 2020; 55:48-60. [PMID: 33006425 DOI: 10.1002/jmri.27378] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Revised: 09/10/2020] [Accepted: 09/10/2020] [Indexed: 12/28/2022] Open
Abstract
Meningioma is one of the most frequent primary central nervous system tumors. While magnetic resonance imaging (MRI) is the standard radiologic technique for provisional diagnosis and surveillance of meningioma, it nevertheless lacks the capacity to determine meningioma biological aggressiveness, growth, and recurrence potential at face value. An increasing body of evidence highlights the potential of machine learning and radiomics in improving consistency and productivity and in providing novel diagnostic, treatment, and prognostic modalities in neuro-oncology imaging. The aim of the present article is to review the evolution and progress of approaches utilizing machine learning in meningioma MRI-based segmentation, diagnosis, grading, and prognosis. We provide a historical perspective on original research on meningioma spanning over two decades and highlight recent studies indicating the feasibility of pertinent approaches, including deep learning, in addressing several clinically challenging aspects. We indicate the limitations of previous research designs and resources and propose future directions by highlighting areas of research that remain largely unexplored. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 2.
Affiliation(s)
- Eleftherios Neromyliotis
- Department of Neurosurgery, University of Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Theodosis Kalamatianos
- Department of Neurosurgery, University of Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Athanasios Paschalis
- Department of Neurosurgery, School of Medicine, University of Thessaly, Larisa, Greece
- Spyridon Komaitis
- Department of Neurosurgery, University of Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Konstantinos N Fountas
- Department of Clinical and Laboratory Research, School of Medicine, University of Thessaly, Larisa, Greece
- Eftychia Z Kapsalaki
- Department of Clinical and Laboratory Research, School of Medicine, University of Thessaly, Larisa, Greece
- George Stranjalis
- Department of Neurosurgery, University of Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Ioannis Tsougos
- Department of Medical Physics, School of Medicine, University of Thessaly, Larisa, Greece
35
Burti S, Longhin Osti V, Zotti A, Banzato T. Use of deep learning to detect cardiomegaly on thoracic radiographs in dogs. Vet J 2020; 262:105505. [PMID: 32792095 DOI: 10.1016/j.tvjl.2020.105505] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Revised: 07/04/2020] [Accepted: 07/06/2020] [Indexed: 12/31/2022]
Abstract
The purpose of this study was to develop a computer-aided detection (CAD) device based on convolutional neural networks (CNNs) to detect cardiomegaly from plain radiographs in dogs. Right lateral chest radiographs (n = 1465) were retrospectively selected from archives. The radiographs were classified as having a normal cardiac silhouette (No-vertebral heart scale [VHS]-Cardiomegaly) or an enlarged cardiac silhouette (VHS-Cardiomegaly) based on the breed-specific VHS. The database was divided into a training set (1153 images) and a test set (315 images). The diagnostic accuracy of four different CNN models in the detection of cardiomegaly was calculated using the test set. All tested models had an area under the curve >0.9, demonstrating high diagnostic accuracy. There was a statistically significant difference between Model C and the remaining models (Model A vs. Model C, P = 0.0298; Model B vs. Model C, P = 0.003; Model C vs. Model D, P = 0.0018), but there were no significant differences between the other combinations of models (Model A vs. Model B, P = 0.395; Model A vs. Model D, P = 0.128; Model B vs. Model D, P = 0.373). Convolutional neural networks could therefore assist veterinarians in detecting cardiomegaly in dogs from plain radiographs.
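The labelling rule implied by this abstract (a radiograph is tagged as cardiomegaly when the measured vertebral heart scale exceeds the breed-specific limit) can be sketched as below. The breed limits shown are illustrative placeholders, not the study's actual reference values.

```python
# Hypothetical breed-specific VHS upper limits (placeholders only).
BREED_VHS_LIMIT = {"beagle": 10.3, "boxer": 11.6}
DEFAULT_VHS_LIMIT = 10.5  # fallback when no breed-specific value is known

def classify_vhs(breed, vhs):
    """Label a radiograph from its measured vertebral heart scale."""
    limit = BREED_VHS_LIMIT.get(breed, DEFAULT_VHS_LIMIT)
    return "VHS-Cardiomegaly" if vhs > limit else "No-VHS-Cardiomegaly"
```

Using breed-specific limits rather than a single global cutoff is what keeps the ground-truth labels meaningful across breeds with very different thoracic conformation.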
Affiliation(s)
- S Burti
- Department of Animal Medicine, Productions and Health, University of Padua, Viale Dell'Università 16, 35020 Legnaro, Padua, Italy
- V Longhin Osti
- Department of Animal Medicine, Productions and Health, University of Padua, Viale Dell'Università 16, 35020 Legnaro, Padua, Italy
- A Zotti
- Department of Animal Medicine, Productions and Health, University of Padua, Viale Dell'Università 16, 35020 Legnaro, Padua, Italy
- T Banzato
- Department of Animal Medicine, Productions and Health, University of Padua, Viale Dell'Università 16, 35020 Legnaro, Padua, Italy