1. Gong T, Gao Y, Li H, Wang J, Li Z, Yuan Q. Research progress in multimodal radiomics of rectal cancer tumors and peritumoral regions in MRI. Abdom Radiol (NY) 2025. [PMID: 40448847] [DOI: 10.1007/s00261-025-04965-1]
Abstract
Rectal cancer (RC) is one of the most common malignant tumors of the digestive system, with alarmingly high incidence and mortality rates globally. Compared with conventional imaging examinations, radiomics can mine medical images for quantitative features that reflect tumor heterogeneity. In this review, we discuss the potential value of multimodal MRI-based radiomics in the diagnosis and treatment of RC, with special emphasis on the role of peritumoral tissue characteristics in clinical decision-making. Existing studies have shown that radiomics models integrating intratumoral and peritumoral characteristics hold promise for RC staging, treatment-response prediction, metastasis monitoring, early warning of recurrence, and prognosis assessment. This paper also objectively analyzes the methodological limitations of the field, including insufficient data standardization, inadequate model validation, limited sample sizes, and poor reproducibility of results. By synthesizing the existing evidence, this review aims to draw the attention of clinicians and radiologists to peritumoral tissue characteristics and to promote the translational application of radiomics in the individualized treatment of RC.
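As an illustration of the intratumoral-plus-peritumoral workflow the review describes, the sketch below extracts radiomics features from a tumor mask and from a peritumoral ring obtained by dilating that mask; the file paths, the 3 mm ring width, and the extractor settings are illustrative assumptions, not details from the cited study.

```python
# Minimal sketch: intratumoral + peritumoral radiomics features on T2WI.
# Paths, ring width, and extractor settings are illustrative assumptions.
import SimpleITK as sitk
from radiomics import featureextractor

image = sitk.ReadImage("t2w.nii.gz")          # hypothetical MRI volume
tumor = sitk.ReadImage("tumor_mask.nii.gz")   # hypothetical binary tumor mask
tumor = sitk.Cast(tumor > 0, sitk.sitkUInt8)

# Build a peritumoral ring by dilating the tumor mask ~3 mm and removing the tumor itself.
spacing = tumor.GetSpacing()
radius = [max(1, int(round(3.0 / s))) for s in spacing]   # 3 mm expressed in voxels per axis
dilated = sitk.BinaryDilate(tumor, radius)
peritumoral = sitk.And(dilated, sitk.BinaryNot(tumor))

extractor = featureextractor.RadiomicsFeatureExtractor()  # default first-order/texture features
intratumoral_feats = extractor.execute(image, tumor)
peritumoral_feats = extractor.execute(image, peritumoral)

combined = {f"intra_{k}": v for k, v in intratumoral_feats.items()}
combined.update({f"peri_{k}": v for k, v in peritumoral_feats.items()})
print(len(combined), "features available for downstream modelling")
```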
Affiliation(s)
- Tingting Gong, The Second Affiliated Hospital of Jilin University, Jilin Province, China
- Ying Gao, The Second Affiliated Hospital of Jilin University, Jilin Province, China
- He Li, The Second Affiliated Hospital of Jilin University, Jilin Province, China
- Jianqiu Wang, The Second Affiliated Hospital of Jilin University, Jilin Province, China
- Zili Li, Jilin Province Cancer Hospital, Jilin Province, China
- Qinghai Yuan, The Second Affiliated Hospital of Jilin University, Jilin Province, China
2. Lin Y, Liu Y, Zhang X, Zhong T, Hu F. A High-resolution T2WI-based Deep Learning Model for Preoperative Discrimination Between T2 and T3 Rectal Cancer: A Multicenter Study. Acad Radiol 2025:S1076-6332(25)00291-0. [PMID: 40221285] [DOI: 10.1016/j.acra.2025.03.048]
Abstract
RATIONALE AND OBJECTIVES To construct a deep learning (DL) model based on high-resolution T2-weighted images for preoperative differentiation between T2 and T3 stage rectal cancer (RC), and to compare its performance with that of experienced radiologists. METHODS This retrospective study included 281 patients with pathologically confirmed RC from four centers (January 2017-December 2022). A DenseNet model was developed using 255 patients from three centers (training:validation ratio = 8:2) and externally tested on 26 patients from a fourth center. Two experienced radiologists independently assessed T staging. Diagnostic performance was evaluated using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). RESULTS The DL model outperformed the radiologists in differentiating T2 and T3 stages across all datasets. In the training set, the DL model achieved an AUC of 0.810, compared with 0.578 and 0.625 for radiologists A and B, respectively. In the external test set, the DL model maintained superior diagnostic performance (AUC = 0.715) compared with radiologist A (AUC = 0.549) and radiologist B (AUC = 0.493). The DL model demonstrated higher accuracy for T2 staging (0.625-0.787) and T3 staging (0.611-0.814) than the radiologists (0.373-0.526 for T2; 0.611-0.783 for T3), who tended to over-stage T2 tumors. Inter-observer agreement between the radiologists was moderate (kappa = 0.451). CONCLUSION The DenseNet-based DL model demonstrated superior accuracy and diagnostic efficiency compared with radiologists in the preoperative differentiation between T2 and T3 stage RC. This automated approach could improve staging accuracy and support clinical decision-making in RC treatment planning.
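For readers who want a concrete starting point, a minimal PyTorch sketch of a DenseNet-based binary classifier for T2 versus T3 discrimination is shown below; the input size, optimizer, and training details are assumptions rather than the authors' actual configuration.

```python
# Minimal sketch of a DenseNet binary classifier (T2 vs T3), not the authors' exact model.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)                        # could load ImageNet weights instead
model.classifier = nn.Linear(model.classifier.in_features, 2)   # two classes: T2, T3

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (B, 3, H, W) tensor of T2WI slices; labels: (B,) with 0 = T2, 1 = T3."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example forward/backward pass with random data, just to show the expected shapes.
dummy = torch.randn(4, 3, 224, 224)
print(train_step(dummy, torch.tensor([0, 1, 1, 0])))
```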
Affiliation(s)
- Yanmei Lin, Department of Radiology, The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan, China; Department of Radiology, The Second Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan, China
- Ying Liu, Department of Radiology, The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan, China
- Xiao Zhang, Department of Radiology, The People's Hospital of Leshan, Sichuan, China
- Tangli Zhong, Department of Radiology, Mianyang Central Hospital, Mianyang, Sichuan, China
- Fubi Hu, Department of Radiology, The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan, China
3. Bian Y, Li J, Ye C, Jia X, Yang Q. Artificial intelligence in medical imaging: From task-specific models to large-scale foundation models. Chin Med J (Engl) 2025; 138:651-663. [PMID: 40008785] [PMCID: PMC11925424] [DOI: 10.1097/cm9.0000000000003489]
Abstract
Artificial intelligence (AI), particularly deep learning, has demonstrated remarkable performance in medical imaging across a variety of modalities, including X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and pathological imaging. However, most existing state-of-the-art AI techniques are task-specific and focus on a limited range of imaging modalities. Compared to these task-specific models, emerging foundation models represent a significant milestone in AI development. These models can learn generalized representations of medical images and apply them to downstream tasks through zero-shot or few-shot fine-tuning. Foundation models have the potential to address the comprehensive and multifactorial challenges encountered in clinical practice. This article reviews the clinical applications of both task-specific and foundation models, highlighting their differences, complementarities, and clinical relevance. We also examine their future research directions and potential challenges. Unlike the replacement relationship seen between deep learning and traditional machine learning, task-specific and foundation models are complementary, despite inherent differences. While foundation models primarily focus on segmentation and classification, task-specific models are integrated into nearly all medical image analyses. However, with further advancements, foundation models could be applied to other clinical scenarios. In conclusion, all indications suggest that task-specific and foundation models, especially the latter, have the potential to drive breakthroughs in medical imaging, from image processing to clinical workflows.
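As a toy illustration of the few-shot adaptation the review refers to, the sketch below freezes a pretrained image encoder and trains only a small task head on a handful of labelled examples; the encoder choice (a torchvision ResNet standing in for a medical foundation model) and all hyperparameters are assumptions.

```python
# Minimal few-shot adaptation sketch: freeze a pretrained encoder, train a small head.
# A torchvision ResNet stands in here for a medical imaging foundation model.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet50(weights=None)   # or ResNet50_Weights.DEFAULT for pretrained features
encoder.fc = nn.Identity()                # expose 2048-dimensional features
for p in encoder.parameters():
    p.requires_grad = False               # keep the "foundation" weights frozen

head = nn.Linear(2048, 2)                 # downstream task: binary classification
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def adapt(few_shot_images, few_shot_labels, epochs=20):
    """few_shot_images: (N, 3, H, W); few_shot_labels: (N,). Trains only the head."""
    encoder.eval()
    with torch.no_grad():
        feats = encoder(few_shot_images)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(head(feats), few_shot_labels)
        loss.backward()
        optimizer.step()
    return loss.item()

print(adapt(torch.randn(8, 3, 224, 224), torch.tensor([0, 1] * 4)))
```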
Affiliation(s)
- Yueyan Bian, Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China; Key Lab of Medical Engineering for Cardiovascular Disease, Ministry of Education, Beijing 100020, China; Laboratory for Clinical Medicine, Capital Medical University, Beijing 100020, China
- Jin Li, Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China; Key Lab of Medical Engineering for Cardiovascular Disease, Ministry of Education, Beijing 100020, China; Laboratory for Clinical Medicine, Capital Medical University, Beijing 100020, China
- Chuyang Ye, School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xiuqin Jia, Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China; Key Lab of Medical Engineering for Cardiovascular Disease, Ministry of Education, Beijing 100020, China; Laboratory for Clinical Medicine, Capital Medical University, Beijing 100020, China
- Qi Yang, Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China; Key Lab of Medical Engineering for Cardiovascular Disease, Ministry of Education, Beijing 100020, China; Laboratory for Clinical Medicine, Capital Medical University, Beijing 100020, China
4. Huang Z, Pan Y, Huang W, Pan F, Wang H, Yan C, Ye R, Weng S, Cai J, Li Y. Predicting Microvascular Invasion and Early Recurrence in Hepatocellular Carcinoma Using DeepLab V3+ Segmentation of Multiregional MR Habitat Images. Acad Radiol 2025:S1076-6332(25)00109-6. [PMID: 40011096] [DOI: 10.1016/j.acra.2025.02.006]
Abstract
RATIONALE AND OBJECTIVES Accurate identification of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) is crucial for treatment and prognosis. Single-modality and feature fusion models using manual segmentation fail to provide insights into MVI. This study aims to develop a DeepLab V3+ model for automated segmentation of HCC magnetic resonance (MR) images and a decision fusion model to predict MVI and early recurrence (ER). MATERIALS AND METHODS This retrospective study included 209 HCC patients (146 in the training and 63 in the test cohorts). The performance of DeepLab V3+ for HCC MR image segmentation was evaluated using Dice Loss and F1 score. Intraclass correlation coefficients (ICCs) assessed feature extraction reliability. Spearman's correlation analyzed the relationship between tumor volumes from automated and manual segmentation, with agreement evaluated using Bland-Altman plots. Model performance was assessed using the area under the receiver operating characteristic curve (ROC AUC), calibration curves, and decision curve analysis. A nomogram predicted ER of HCC after surgery, with Kaplan-Meier analysis for 2-year recurrence-free survival (RFS). RESULTS The DeepLab V3+ model demonstrated high segmentation accuracy, with strong agreement in feature extraction (ICC: 0.802-0.999). The decision fusion model achieved AUCs of 0.968 and 0.878 for MVI prediction, and the nomogram for predicting ER yielded AUCs of 0.782 and 0.690 in the training and test cohorts, respectively, with significant RFS differences between the risk groups. CONCLUSION The DeepLab V3+ model accurately segmented HCC. The decision fusion model significantly improved MVI prediction, and the nomogram offered valuable insights into recurrence risk for clinical decision-making. AVAILABILITY OF DATA AND MATERIALS The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
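To make the idea of decision-level fusion concrete, the sketch below averages the predicted MVI probabilities of two independently trained classifiers (for instance, models built on different tumor sub-regions or on imaging versus clinical data); the model choices, weights, and simulated features are assumptions, not the study's implementation.

```python
# Minimal decision-fusion sketch: combine probabilities from two trained classifiers.
# Model choices, fusion weights, and simulated features are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_habitat = rng.normal(size=(200, 30))    # stand-in habitat/radiomics features
X_clinical = rng.normal(size=(200, 5))    # stand-in clinical variables
y = rng.integers(0, 2, size=200)          # MVI present / absent

m_hab = LogisticRegression(max_iter=1000).fit(X_habitat, y)
m_cli = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_clinical, y)

p_hab = m_hab.predict_proba(X_habitat)[:, 1]
p_cli = m_cli.predict_proba(X_clinical)[:, 1]
p_fused = 0.5 * p_hab + 0.5 * p_cli       # simple equal-weight decision fusion

print("fused AUC on training data:", roc_auc_score(y, p_fused))
```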
Affiliation(s)
- Zhenhuan Huang, Department of Radiology, Longyan First Affiliated Hospital of Fujian Medical University, Longyan, Fujian 364000, China; Department of Radiology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian 350005, China
- Yifan Pan, Department of Radiology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian 350005, China
- Wanrong Huang, Department of Radiology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian 350005, China
- Feng Pan, Department of Radiology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian 350005, China
- Huifang Wang, Department of Radiology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian 350005, China
- Chuan Yan, Department of Radiology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian 350005, China
- Rongping Ye, Department of Radiology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian 350005, China
- Shuping Weng, Department of Radiology, Fujian Maternity and Child Health Hospital, Fujian Medical University, Fuzhou, Fujian 350001, China
- Jingyi Cai, School of Medical Imaging, Fujian Medical University, Fuzhou, Fujian 350001, China
- Yueming Li, Department of Radiology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian 350005, China; Department of Radiology, National Regional Medical Center, Binhai Campus of The First Affiliated Hospital, Fujian Medical University, Fuzhou, Fujian 350212, China; Key Laboratory of Radiation Biology of Fujian Higher Education Institutions, The First Affiliated Hospital, Fujian Medical University, Fuzhou, Fujian 350005, China
5. Onishi S, Kuwahara T, Tajika M, Tanaka T, Yamada K, Shimizu M, Niwa Y, Yamaguchi R. Artificial intelligence for body composition assessment focusing on sarcopenia. Sci Rep 2025; 15:1324. [PMID: 39779762] [PMCID: PMC11711400] [DOI: 10.1038/s41598-024-83401-8]
Abstract
This study aimed to address the limitations of conventional methods for measuring skeletal muscle mass for sarcopenia diagnosis by introducing an artificial intelligence (AI) system for direct computed tomography (CT) analysis. The primary focus was on enhancing simplicity, reproducibility, and convenience, and on assessing the accuracy and speed of AI compared with conventional methods. A cohort of 3096 cases undergoing CT imaging up to the third lumbar (L3) level between 2011 and 2021 was included. The cases were randomly divided into preprocessing and sarcopenia cohorts, with further random splits into training and validation cohorts for creating BMI_AI and Body_AI. Sarcopenia_AI uses the Skeletal Muscle Index (SMI), calculated as SMI = (total skeletal muscle area at L3) / (height)². The SMI was measured twice with the conventional method, with the first measurement serving as the AI label reference and the second used for comparison. Agreement and diagnostic change rates were calculated. Three groups were randomly assigned, and 10 images before and after L3 were collected for each case. AI models for body region detection (DeepLabv3) and sarcopenia diagnosis (EfficientNetV2-XL) were trained on a supercomputer, and their performance and speed per image were evaluated. The conventional method showed low agreement rates (κ coefficient) of 0.478 for the test cohort and 0.236 for the validation cohort, with diagnostic changes in 43% of cases. Conversely, the AI consistently produced identical results across repeated measurements. The AI demonstrated robust body region detection (Intersection over Union [IoU] = 0.93), accurately detecting only the body region in all images. The AI for sarcopenia diagnosis exhibited high accuracy, with a sensitivity of 82.3%, specificity of 98.1%, and a positive predictive value of 89.5%. In conclusion, the reproducibility of the conventional method for sarcopenia diagnosis was low. The developed sarcopenia diagnostic AI, with its high positive predictive value and convenient diagnostic capabilities, is a promising alternative for addressing the shortcomings of conventional approaches.
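A worked sketch of the SMI calculation described above: skeletal muscle area is taken from an L3 segmentation mask and divided by height squared. The pixel spacing, mask, and height below are invented for illustration, and no diagnostic cutoff from the study is implied.

```python
# Minimal sketch of the Skeletal Muscle Index (SMI) calculation at L3.
# SMI = skeletal muscle area at L3 (cm^2) / height^2 (m^2). All inputs are invented.
import numpy as np

def skeletal_muscle_index(muscle_mask: np.ndarray, pixel_spacing_mm, height_m: float) -> float:
    """muscle_mask: 2D binary mask of skeletal muscle on the L3 slice.
    pixel_spacing_mm: (row_spacing, col_spacing) in millimetres."""
    pixel_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
    muscle_area_cm2 = muscle_mask.sum() * pixel_area_cm2
    return muscle_area_cm2 / (height_m ** 2)

# Toy example: a 512x512 CT slice with 15,000 muscle pixels at 0.8 mm spacing.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[200:300, 100:250] = 1
print(f"SMI = {skeletal_muscle_index(mask, (0.8, 0.8), 1.70):.1f} cm^2/m^2")
```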
Affiliation(s)
- Sachiyo Onishi, Department of Endoscopy, Aichi Cancer Center, Nagoya, Aichi, Japan
- Takamichi Kuwahara, Department of Gastroenterology, Aichi Cancer Center, 1-1 Kanokoden, Chikusa-ku, Nagoya, Aichi 464-8681, Japan
- Masahiro Tajika, Department of Endoscopy, Aichi Cancer Center, Nagoya, Aichi, Japan
- Tsutomu Tanaka, Department of Endoscopy, Aichi Cancer Center, Nagoya, Aichi, Japan
- Keisaku Yamada, Department of Endoscopy, Aichi Cancer Center, Nagoya, Aichi, Japan
- Masahito Shimizu, Department of Gastroenterology/Internal Medicine, Gifu University Graduate School of Medicine, Gifu, Gifu, Japan
- Yasumasa Niwa, Department of Endoscopy, Aichi Cancer Center, Nagoya, Aichi, Japan
- Rui Yamaguchi, Division of Cancer Systems Biology, Aichi Cancer Center Research Institute, Nagoya, Aichi, Japan; Division of Cancer Informatics, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
6. Chen D, Yang X, Qin S, Li X, Dai J, Tang Y, Men K. Efficient strategy for magnetic resonance image-guided adaptive radiotherapy of rectal cancer using a library of reference plans. Phys Imaging Radiat Oncol 2025; 33:100747. [PMID: 40123773] [PMCID: PMC11926541] [DOI: 10.1016/j.phro.2025.100747]
Abstract
Background and purpose Adaptive radiotherapy for patients with rectal cancer using a magnetic resonance-guided linear accelerator has limitations in managing bladder shape variations. Conventional couch shifts may miss the target while requiring a large margin; conversely, a fully adaptive strategy is time-consuming. Therefore, a more efficient strategy for online adaptive radiotherapy is required. Materials and methods This retrospective study included 50 fractions from 10 patients with rectal cancer undergoing preoperative radiotherapy. The proposed method involved preparing a library of reference plans (LoRP) based on diverse bladder shapes. For each fraction, a plan from the LoRP was selected based on daily bladder filling. This plan was compared with those generated by the conventional couch-shift and fully adaptive strategies. The clinical acceptability of the plans (i.e., per protocol, variation-acceptable, or unacceptable) was assessed. Results The per-protocol criterion was met by 44%, 6%, and 100% of plans for the LoRP, conventional couch-shift, and fully adaptive strategies, respectively. The variation-acceptable criterion was met by 92% of LoRP plans and 74% of conventional couch-shift plans. Relative to the fully adaptive strategy, LoRP delivered 94% target coverage with the 100% prescription dose, compared with 91% for the conventional couch-shift strategy. The fully adaptive strategy performed best in sparing the intestine and colon. LoRP reduced the treatment session duration by more than a third (>20 min) compared with the fully adaptive strategy. Conclusion LoRP achieved adequate target coverage with a short treatment session duration, potentially increasing treatment efficiency and improving patient comfort.
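The core selection step of a plan-library approach can be summarised in a few lines: pick the reference plan whose bladder filling is closest to the bladder volume seen on the daily MRI. The library entries and volumes below are invented placeholders, not values from the study.

```python
# Minimal sketch of plan selection from a library of reference plans (LoRP).
# Library entries and the daily bladder volume are invented placeholders.
from dataclasses import dataclass

@dataclass
class ReferencePlan:
    name: str
    bladder_volume_cc: float   # bladder filling the plan was optimised for

library = [
    ReferencePlan("empty_bladder", 80.0),
    ReferencePlan("intermediate_bladder", 200.0),
    ReferencePlan("full_bladder", 350.0),
]

def select_plan(daily_bladder_volume_cc: float, plans=library) -> ReferencePlan:
    """Return the reference plan whose bladder filling best matches today's anatomy."""
    return min(plans, key=lambda p: abs(p.bladder_volume_cc - daily_bladder_volume_cc))

print(select_plan(240.0).name)   # -> intermediate_bladder
```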
Affiliation(s)
- Deqi Chen, Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xiongtao Yang, Department of Oncology, Beijing Changping Hospital, Beijing 102202, China
- Shirui Qin, Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xiufen Li, Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai, Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yuan Tang, Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Kuo Men, Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
7. Zhang Z, Han J, Ji W, Lou H, Li Z, Hu Y, Wang M, Qi B, Liu S. Improved deep learning for automatic localisation and segmentation of rectal cancer on T2-weighted MRI. J Med Radiat Sci 2024; 71:509-518. [PMID: 38654675] [PMCID: PMC11638361] [DOI: 10.1002/jmrs.794]
Abstract
INTRODUCTION Automatic segmentation of rectal cancer on magnetic resonance imaging (MRI) is valuable for relieving physicians of heavy workloads and enhancing working efficiency. This study aimed to compare the segmentation accuracy of a proposed model with that of three other models and with inter-observer consistency. METHODS A total of 65 patients with rectal cancer who underwent MRI examination were enrolled and randomly divided into a training cohort (n = 45) and a validation cohort (n = 20). Two experienced radiologists independently segmented the rectal cancer lesions. A novel segmentation model (AttSEResUNet) was trained on T2WI based on ResUNet and attention mechanisms. The segmentation performance of AttSEResUNet, U-Net, ResUNet and U-Net with Attention Gate (AttUNet) was compared using the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean distance to agreement (MDA) and Jaccard index. The variability of the automatic segmentation models and the inter-observer variability were also evaluated. RESULTS AttSEResUNet with post-processing showed a perfect lesion recognition rate (100%) and no false recognitions (0), and its evaluation metrics outperformed the other three models for both independent readers (observer 1: DSC = 0.839 ± 0.112, HD = 9.55 ± 6.68, MDA = 0.556 ± 0.722, Jaccard index = 0.736 ± 0.150; observer 2: DSC = 0.856 ± 0.099, HD = 11.0 ± 10.1, MDA = 0.789 ± 1.07, Jaccard index = 0.673 ± 0.130). The segmentation performance of AttSEResUNet was comparable to manual inter-observer variability (DSC = 0.857 ± 0.115, HD = 10.0 ± 10.0, MDA = 0.704 ± 1.17, Jaccard index = 0.666 ± 0.139). CONCLUSION Compared with the other three models, the proposed AttSEResUNet was more accurate for contouring rectal tumours on axial T2WI images, and its variability was similar to that between observers.
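For reference, overlap and distance metrics of the kind used above can be computed from two binary masks as in the sketch below; this is a generic distance-transform implementation of Dice, Jaccard, and a 95th percentile Hausdorff distance, not the evaluation code used in the study.

```python
# Minimal sketch of segmentation metrics: Dice, Jaccard, and 95th percentile Hausdorff.
# Generic implementation for 2D binary masks, not the study's evaluation code.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Symmetric 95th percentile Hausdorff distance between two 2D binary masks."""
    surf_a = a ^ ndimage.binary_erosion(a)            # boundary pixels of a
    surf_b = b ^ ndimage.binary_erosion(b)            # boundary pixels of b
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return float(np.percentile(d, 95))

pred = np.zeros((100, 100), dtype=bool)
ref = np.zeros((100, 100), dtype=bool)
pred[30:70, 30:70] = True
ref[32:72, 28:68] = True
print(dice(pred, ref), jaccard(pred, ref), hd95(pred, ref))
```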
Affiliation(s)
- Zaixian Zhang, Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Junqi Han, Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Weina Ji, Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Henan Lou, Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zhiming Li, Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yabin Hu, Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Mingjia Wang, College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China
- Baozhu Qi, College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China
- Shunli Liu, Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
8. Rhanoui M, Mikram M, Amazian K, Ait-Abderrahim A, Yousfi S, Toughrai I. Multimodal Machine Learning for Predicting Post-Surgery Quality of Life in Colorectal Cancer Patients. J Imaging 2024; 10:297. [PMID: 39728194] [DOI: 10.3390/jimaging10120297]
Abstract
Colorectal cancer is a major public health issue, causing significant morbidity and mortality worldwide. Treatment for colorectal cancer often has a significant impact on patients' quality of life, which can vary over time and across individuals. The application of artificial intelligence and machine learning techniques has great potential for optimizing patient outcomes by providing valuable insights. In this paper, we propose a multimodal machine learning framework for the prediction of quality of life indicators in colorectal cancer patients at various temporal stages, leveraging both clinical data and computed tomography scan images. Additionally, we identify key predictive factors for each quality of life indicator, thereby enabling clinicians to make more informed treatment decisions and ultimately enhance patient outcomes. Our approach integrates data from multiple sources, enhancing the performance of our predictive models. The analysis demonstrates a notable improvement in accuracy for some indicators, with results for the Wexner score increasing from 24% to 48% and for the Anorectal Ultrasound score from 88% to 96% after integrating data from different modalities. These results highlight the potential of multimodal learning to provide valuable insights and improve patient care in real-world applications.
Affiliation(s)
- Maryem Rhanoui, Laboratory Health Systemic Process (P2S), UR4129, University Claude Bernard Lyon 1, University of Lyon, 69008 Lyon, France
- Mounia Mikram, Meridian Team, LyRICA Laboratory, School of Information Sciences, Rabat 10100, Morocco
- Kamelia Amazian, Higher Institute of Nursing Professions and Health Technology, Fez 30050, Morocco; Human Pathology, Biomedicine and Environment Laboratory, Faculty of Medicine and Pharmacy, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco
- Siham Yousfi, Meridian Team, LyRICA Laboratory, School of Information Sciences, Rabat 10100, Morocco
- Imane Toughrai, General Surgery Department, Hassan II University Hospital, Fez 30050, Morocco
9. Shahadat N, Lama R, Nguyen A. Lung and Colon Cancer Detection Using a Deep AI Model. Cancers (Basel) 2024; 16:3879. [PMID: 39594834] [PMCID: PMC11592951] [DOI: 10.3390/cancers16223879]
Abstract
Lung and colon cancers are among the leading causes of cancer-related mortality worldwide. Early and accurate detection of these cancers is crucial for effective treatment and improved patient outcomes, and misclassified or falsely detected cancer can have very harmful consequences. While analyzing tissue samples is complicated and time-consuming, deep learning techniques have made it possible to complete this process more efficiently and accurately, allowing researchers to study more patients in a shorter amount of time and at a lower cost. Much research has investigated deep learning models that demand substantial computational ability and resources; however, none has achieved a 100% accurate detection rate for these life-threatening malignancies. This research proposes a new lightweight, parameter-efficient, and mobile-embedded deep learning model based on a 1D convolutional neural network with squeeze-and-excitation layers for efficient lung and colon cancer detection. The proposed model diagnoses and classifies lung squamous cell carcinoma and adenocarcinomas of the lung and colon from digital pathology images. Extensive experiments demonstrate that the proposed model achieves 100% accuracy for detecting lung, colon, and combined lung and colon cancers on the histopathological (LC25000) lung and colon dataset, which is considered the best accuracy for around 0.35 million trainable parameters and around 6.4 million FLOPs. Compared with existing results, the proposed architecture shows state-of-the-art performance in lung, colon, and combined lung and colon cancer detection.
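A minimal PyTorch sketch of the squeeze-and-excitation idea in a 1D convolutional setting is given below; channel counts, reduction ratio, and the surrounding layers are assumptions and do not reproduce the authors' architecture.

```python
# Minimal sketch of a squeeze-and-excitation (SE) block for a 1D CNN.
# Channel counts and reduction ratio are assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)             # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # excitation: per-channel weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, L)
        b, c, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1)
        return x * w                                    # re-weight channels

block = nn.Sequential(nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(), SEBlock1d(32))
print(block(torch.randn(2, 1, 256)).shape)              # torch.Size([2, 32, 256])
```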
Affiliation(s)
- Nazmul Shahadat, Department of Computer and Data Sciences, Truman State University, Kirksville, MO 63501, USA (R.L.; A.N.)
10. Ma T, Wang J, Ma F, Shi J, Li Z, Cui J, Wu G, Zhao G, An Q. Visualization analysis of research hotspots and trends in MRI-based artificial intelligence in rectal cancer. Heliyon 2024; 10:e38927. [PMID: 39524896] [PMCID: PMC11544045] [DOI: 10.1016/j.heliyon.2024.e38927]
Abstract
Background Rectal cancer (RC) is one of the most common types of cancer worldwide. With the development of artificial intelligence (AI), the application of AI to magnetic resonance imaging (MRI)-based preoperative evaluation and follow-up treatment of RC has become a focus of research in this field. This review was conducted to provide comprehensive insight into the current research progress, hotspots, and future trends of MRI-based AI in RC, which have not yet been systematically summarized. Methods Literature related to MRI-based AI and RC, published up to November 2023, was obtained from the Web of Science Core Collection database. Visualization and bibliometric analyses of publication quantity and content were conducted to explore temporal trends, spatial distribution, collaborative networks, influential articles, keyword co-occurrence, and research directions. Results A total of 177 papers (152 original articles and 25 reviews) were identified from 24 countries/regions, 351 institutions, and 81 journals. Since 2019, the number of studies on this topic has increased rapidly. China and the United States have contributed the largest numbers of publications and institutions and maintain the closest collaborative relationship. Sun Yat-sen University has produced the largest number of articles, while Frontiers in Oncology has published the most articles in this field. Research on MRI-based AI has mainly focused on preoperative diagnosis and on prediction of treatment efficacy and prognosis. Conclusions This study provides an objective and comprehensive overview of publications on MRI-based AI in RC and identifies the present research landscape, hotspots, and prospective trends in this field, which can provide valuable guidance for scholars worldwide.
Affiliation(s)
- Tianming Ma, Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
- Jiawen Wang, Department of Urology, Shengli Clinical Medical College of Fujian Medical University, Fujian Provincial Hospital, Fuzhou University Affiliated Provincial Hospital, Fuzhou 350001, China
- Fuhai Ma, Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
- Jinxin Shi, Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
- Zijian Li, Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
- Jian Cui, Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
- Guoju Wu, Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
- Gang Zhao, Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
- Qi An, Department of General Surgery, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
11. Lu H, Yuan Y, Liu M, Li Z, Ma X, Xia Y, Shi F, Lu Y, Lu J, Shen F. Predicting pathological complete response following neoadjuvant chemoradiotherapy (nCRT) in patients with locally advanced rectal cancer using merged model integrating MRI-based radiomics and deep learning data. BMC Med Imaging 2024; 24:289. [PMID: 39448917] [PMCID: PMC11515279] [DOI: 10.1186/s12880-024-01474-3]
Abstract
BACKGROUND To construct and compare merged models integrating clinical factors, MRI-based radiomics features and deep learning (DL) models for predicting pathological complete response (pCR) to neoadjuvant chemoradiotherapy (nCRT) in patients with locally advanced rectal cancer (LARC). METHODS In total, 197 patients with LARC who underwent surgical resection after nCRT were assigned to cohort 1 (training and test sets), and 52 cases were assigned to cohort 2 as a validation set. Radscore and DL models for predicting pCR were established using pre- and post-nCRT MRI data. Different merged models integrating clinical factors, the Radscore and the DL model were then constructed. Their predictive performances were validated and compared by receiver operating characteristic (ROC) and decision curve analyses (DCA). RESULTS Merged models integrating selected clinical factors, the Radscore and the DL model were established for pCR prediction. The areas under the ROC curves (AUCs) of the pre-nCRT merged model were 0.834 (95% CI: 0.737-0.931) and 0.742 (95% CI: 0.650-0.834) in the test and validation sets, respectively. The AUCs of the post-nCRT merged model were 0.746 (95% CI: 0.636-0.856) and 0.737 (95% CI: 0.646-0.828) in the test and validation sets, respectively. DCA showed that the pretreatment algorithm yielded greater clinical benefit than the post-nCRT approach. CONCLUSIONS The pre-nCRT merged model combining clinical factors, the Radscore and the DL model constitutes an effective non-invasive tool for pCR prediction in LARC.
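The general recipe for such a merged model can be sketched as follows: the radiomics score, the DL model's predicted probability, and clinical variables become inputs to a simple logistic regression. All feature names and data below are placeholders rather than the study's variables.

```python
# Minimal sketch of a "merged" pCR model: clinical factors + Radscore + DL probability.
# All variables and data are simulated placeholders, not the study's inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 250
radscore = rng.normal(size=n)          # radiomics signature output
dl_prob = rng.uniform(size=n)          # DL model's predicted pCR probability
clinical = rng.normal(size=(n, 3))     # hypothetical clinical variables (e.g. CEA, cT stage)
y = rng.integers(0, 2, size=n)         # pCR yes/no

X = np.column_stack([radscore, dl_prob, clinical])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

merged = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, merged.predict_proba(X_te)[:, 1]))
```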
Affiliation(s)
- Haidi Lu, Department of Radiology, Changhai Hospital, Naval Medical University, 168 Changhai Road, Shanghai 200433, China
- Yuan Yuan, Department of Radiology, Changhai Hospital, Naval Medical University, 168 Changhai Road, Shanghai 200433, China
- Minglu Liu, Department of Radiology, Changhai Hospital, Naval Medical University, 168 Changhai Road, Shanghai 200433, China
- Zhihui Li, Department of Radiology, RuiJin Hospital LuWan Branch, Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xiaolu Ma, Department of Radiology, Changhai Hospital, Naval Medical University, 168 Changhai Road, Shanghai 200433, China
- Yuwei Xia, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yong Lu, Department of Radiology, RuiJin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, Shanghai 200025, China
- Jianping Lu, Department of Radiology, Changhai Hospital, Naval Medical University, 168 Changhai Road, Shanghai 200433, China
- Fu Shen, Department of Radiology, Changhai Hospital, Naval Medical University, 168 Changhai Road, Shanghai 200433, China
12. Kensen CM, Simões R, Betgen A, Wiersema L, Lambregts DM, Peters FP, Marijnen CA, van der Heide UA, Janssen TM. Incorporating patient-specific information for the development of rectal tumor auto-segmentation models for online adaptive magnetic resonance image-guided radiotherapy. Phys Imaging Radiat Oncol 2024; 32:100648. [PMID: 39319094] [PMCID: PMC11421252] [DOI: 10.1016/j.phro.2024.100648]
Abstract
Background and purpose In online adaptive magnetic resonance image (MRI)-guided radiotherapy (MRIgRT), manual contouring of rectal tumors on daily images is labor-intensive and time-consuming. Automation of this task is complex due to substantial variation in tumor shape and location between patients. The aim of this work was to investigate different approaches of propagating patient-specific prior information to the online adaptive treatment fractions to improve deep-learning based auto-segmentation of rectal tumors. Materials and methods 243 T2-weighted MRI scans of 49 rectal cancer patients treated on the 1.5T MR-Linear accelerator (MR-Linac) were utilized to train models to segment rectal tumors. As a benchmark, an MRI_only auto-segmentation model was trained. Three approaches of including a patient-specific prior were studied: 1. including the segmentations of fraction 1 as an extra input channel for the auto-segmentation of subsequent fractions (MRI+prior), 2. fine-tuning the MRI_only model to fraction 1 (PSF_1), and 3. fine-tuning the MRI_only model on all earlier fractions (PSF_cumulative). Auto-segmentations were compared to the manual segmentation using geometric similarity metrics. Clinical impact was assessed by evaluating post-treatment target coverage. Results All patient-specific methods outperformed the MRI_only segmentation approach. The median 95th percentile Hausdorff distance (95HD) was 22.0 (range: 6.1-76.6) mm for MRI_only segmentation, 9.9 (range: 2.5-38.2) mm for MRI+prior segmentation, 6.4 (range: 2.4-17.8) mm for PSF_1 and 4.8 (range: 1.7-26.9) mm for PSF_cumulative. PSF_cumulative was found to be superior to PSF_1 from fraction 4 onward (p = 0.014). Conclusion Patient-specific fine-tuning of automatically segmented rectal tumors, using images and segmentations from all previous fractions, yields superior quality compared to other auto-segmentation approaches.
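The patient-specific fine-tuning idea can be expressed compactly: start from the population model's weights and continue training on the segmentations already available for that patient. The sketch below uses a generic segmentation network, loss, learning rate, and epoch count as assumptions, not the study's settings.

```python
# Minimal sketch of patient-specific fine-tuning of a pretrained segmentation network.
# Network, loss, learning rate, and epoch count are generic assumptions.
import copy
import torch
import torch.nn as nn

def fine_tune_for_patient(population_model: nn.Module, prior_fractions, epochs=10, lr=1e-5):
    """prior_fractions: iterable of (image, mask) tensor pairs, each shaped (1, C, H, W),
    taken from this patient's earlier fractions. Returns an adapted copy of the model."""
    model = copy.deepcopy(population_model)     # keep the population model untouched
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()          # binary tumor vs background
    for _ in range(epochs):
        for image, mask in prior_fractions:     # e.g. fraction 1..k images and contours
            optimizer.zero_grad()
            loss = criterion(model(image), mask)
            loss.backward()
            optimizer.step()
    return model

# Usage idea: after each treatment fraction, append its (image, contour) pair to
# prior_fractions and re-run fine_tune_for_patient, mirroring the cumulative strategy above.
```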
Affiliation(s)
- Chavelli M. Kensen, Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Rita Simões, Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Anja Betgen, Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Lisa Wiersema, Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Doenja M.J. Lambregts, Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Femke P. Peters, Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Corrie A.M. Marijnen, Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Uulke A. van der Heide, Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Tomas M. Janssen, Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
13. Schultz KS, Hughes ML, Akram WM, Mongiu AK. Artificial intelligence for the colorectal surgeon in 2024 – A narrative review of Prevalence, Policies, and (needed) Protections. Semin Colon Rectal Surg 2024; 35:101037. [DOI: 10.1016/j.scrs.2024.101037]
14. Yang M, Yang M, Yang L, Wang Z, Ye P, Chen C, Fu L, Xu S. Deep learning for MRI lesion segmentation in rectal cancer. Front Med (Lausanne) 2024; 11:1394262. [PMID: 38983364] [PMCID: PMC11231084] [DOI: 10.3389/fmed.2024.1394262]
Abstract
Rectal cancer (RC) is a globally prevalent malignant tumor that presents significant challenges in its management and treatment. Magnetic resonance imaging (MRI) currently offers superior soft-tissue contrast without ionizing radiation for RC patients, making it the most widely used and effective detection method. In early screening, radiologists rely on patients' radiological characteristics and their own extensive clinical experience for diagnosis. However, diagnostic accuracy may be hindered by factors such as limited expertise, visual fatigue, and image clarity issues, resulting in misdiagnosis or missed diagnosis. Moreover, the organs surrounding the rectum are widely distributed, and some have shapes similar to the tumor but unclear boundaries; these complexities greatly impede doctors' ability to diagnose RC accurately. With recent advancements in artificial intelligence, machine learning techniques such as deep learning (DL) have demonstrated immense potential and broad prospects in medical image analysis. The emergence of this approach has significantly enhanced research capabilities in medical image classification, detection, and segmentation, with particular emphasis on medical image segmentation. This review discusses the development of DL segmentation algorithms and their application to lesion segmentation on MRI of RC, to provide theoretical guidance and support for further advancements in this field.
Affiliation(s)
- Mingwei Yang, Department of General Surgery, Nanfang Hospital Zengcheng Campus, Guangzhou, Guangdong, China
- Miyang Yang, Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China; Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Lanlan Yang, Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Zhaochu Wang, Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Peiyun Ye, Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China; Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Chujie Chen, Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China; Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Liyuan Fu, Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Shangwen Xu, Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
15. Bangolo A, Wadhwani N, Nagesh VK, Dey S, Tran HHV, Aguilar IK, Auda A, Sidiqui A, Menon A, Daoud D, Liu J, Pulipaka SP, George B, Furman F, Khan N, Plumptre A, Sekhon I, Lo A, Weissman S. Impact of artificial intelligence in the management of esophageal, gastric and colorectal malignancies. Artif Intell Gastrointest Endosc 2024; 5:90704. [DOI: 10.37126/aige.v5.i2.90704]
Abstract
The incidence of gastrointestinal malignancies has increased at an alarming rate over the past decade. Colorectal and gastric cancers are the third and fifth most commonly diagnosed cancers worldwide but are cited as the second and third leading causes of cancer-related mortality, respectively. Timely diagnosis and early institution of appropriate therapy can optimize patient outcomes. Artificial intelligence (AI)-assisted diagnostic, prognostic, and therapeutic tools can assist in expeditious diagnosis, treatment planning/response prediction, and post-surgical prognostication. AI can intercept neoplastic lesions in their primordial stages, flag suspicious and/or inconspicuous lesions with greater accuracy on radiologic, histopathological, and/or endoscopic analyses, and reduce over-dependence on clinicians. AI-based models have been shown to be on par with, and sometimes even to outperform, experienced gastroenterologists and radiologists. Convolutional neural networks (state-of-the-art deep learning models) are powerful computational models that are invaluable to the field of precision oncology. These models not only reliably classify images but also accurately predict response to chemotherapy, tumor recurrence, metastasis, and survival rates post-treatment. In this systematic review, we analyze the available evidence on the diagnostic, prognostic, and therapeutic utility of artificial intelligence in gastrointestinal oncology.
Affiliation(s)
- Ayrton Bangolo, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Nikita Wadhwani, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Vignesh K Nagesh, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Shraboni Dey, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Hadrian Hoang-Vu Tran, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Izage Kianifar Aguilar, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Auda Auda, Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Aman Sidiqui, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Aiswarya Menon, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Deborah Daoud, Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- James Liu, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Sai Priyanka Pulipaka, Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Blessy George, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Flor Furman, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Nareeman Khan, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Adewale Plumptre, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Imranjot Sekhon, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Abraham Lo, Department of Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
- Simcha Weissman, Department of Internal Medicine, Palisades Medical Center, North Bergen, NJ 07047, United States
16. Ferreira Silvério N, van den Wollenberg W, Betgen A, Wiersema L, Marijnen C, Peters F, van der Heide UA, Simões R, Janssen T. Evaluation of Deep Learning Clinical Target Volumes Auto-Contouring for Magnetic Resonance Imaging-Guided Online Adaptive Treatment of Rectal Cancer. Adv Radiat Oncol 2024; 9:101483. [PMID: 38706833] [PMCID: PMC11066509] [DOI: 10.1016/j.adro.2024.101483]
Abstract
Purpose Segmentation of clinical target volumes (CTV) on medical images can be time-consuming and is prone to interobserver variation (IOV). This is a problem for online adaptive radiation therapy, where CTV segmentation must be performed every treatment fraction, leading to longer treatment times and logistic challenges. Deep learning (DL)-based auto-contouring has the potential to speed up CTV contouring, but its current clinical use is limited. One reason for this is that it can be time-consuming to verify the accuracy of CTV contours produced using auto-contouring, and there is a risk of bias being introduced. To be accepted by clinicians, auto-contouring must be trustworthy. Therefore, there is a need for a comprehensive commissioning framework when introducing DL-based auto-contouring in clinical practice. We present such a framework and apply it to an in-house developed DL model for auto-contouring of the CTV in rectal cancer patients treated with MRI-guided online adaptive radiation therapy. Methods and Materials The framework for evaluating DL-based auto-contouring consisted of 3 steps: (1) Quantitative evaluation of the model's performance and comparison with IOV; (2) Expert observations and corrections; and (3) Evaluation of the impact on expected volumetric target coverage. These steps were performed on independent data sets. The framework was applied to an in-house trained nnU-Net model, using the data of 44 rectal cancer patients treated at our institution. Results The framework established that the model's performance after expert corrections was comparable to IOV, and although the model introduced a bias, this had no relevant impact on clinical practice. Additionally, we found a substantial time gain without reducing quality as determined by volumetric target coverage. Conclusions Our framework provides a comprehensive evaluation of the performance and clinical usability of target auto-contouring models. Based on the results, we conclude that the model is eligible for clinical use.
Affiliation(s)
- Anja Betgen, Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Wiersema, Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Corrie Marijnen, Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Femke Peters, Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Uulke A. van der Heide, Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Rita Simões, Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Tomas Janssen, Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
17. Xu Z, Li W, Dong X, Chen Y, Zhang D, Wang J, Zhou L, He G. Precision medicine in colorectal cancer: Leveraging multi-omics, spatial omics, and artificial intelligence. Clin Chim Acta 2024; 559:119686. [PMID: 38663471] [DOI: 10.1016/j.cca.2024.119686]
Abstract
Colorectal cancer (CRC) is a leading cause of cancer-related deaths. Recent advancements in genomic technologies and analytical approaches have revolutionized CRC research, enabling precision medicine. This review highlights the integration of multi-omics, spatial omics, and artificial intelligence (AI) in advancing precision medicine for CRC. Multi-omics approaches have uncovered molecular mechanisms driving CRC progression, while spatial omics have provided insights into the spatial heterogeneity of gene expression in CRC tissues. AI techniques have been utilized to analyze complex datasets, identify new treatment targets, and enhance diagnosis and prognosis. Despite the tumor's heterogeneity and genetic and epigenetic complexity, the fusion of multi-omics, spatial omics, and AI shows the potential to overcome these challenges and advance precision medicine in CRC. The future lies in integrating these technologies to provide deeper insights and enable personalized therapies for CRC patients.
Affiliation(s)
- Zishan Xu, Department of Pathology, Xinxiang Medical University, Xinxiang 453000, China
- Wei Li, School of Forensic Medicine, Xinxiang Medical University, Xinxiang 453000, China
- Xiangyang Dong, Department of Pathology, Xinxiang Medical University, Xinxiang 453000, China
- Yingying Chen, School of Basic Medical Sciences, Xinxiang Medical University, Xinxiang 453000, China
- Dan Zhang, Department of Pathology, Xinxiang Medical University, Xinxiang 453000, China
- Jingnan Wang, Xinxiang Medical University SanQuan Medical College, Xinxiang 453003, China
- Lin Zhou, Department of Breast and Thyroid Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Guoyang He, Department of Pathology, Xinxiang Medical University, Xinxiang 453000, China
18. Ma Y, Ma D, Xu X, Li J, Guan Z. Progress of MRI in predicting the circumferential resection margin of rectal cancer: A narrative review. Asian J Surg 2024; 47:2122-2131. [PMID: 38331609] [DOI: 10.1016/j.asjsur.2024.01.131]
Abstract
Rectal cancer (RC) is the third most frequently diagnosed cancer worldwide, and the status of its circumferential resection margin (CRM) is of paramount significance for treatment strategy and prognosis. CRM involvement is defined as tumor touching, or lying within 1 mm of, the resection margin, whether from the outermost part of the primary tumor or from mesorectal or lymph node deposits. The reported incidence of CRM involvement varies from 5.4% to 36%, which may be associated with inconsistent definitions of the CRM, the quality of surgery, and the different examination modalities. Although T and N status are essential factors in determining whether a patient should receive neoadjuvant therapy before surgery, CRM status is a powerful predictor of local and distant recurrence as well as survival. This review explores the significance of the CRM, the various assessment methods, and the role of magnetic resonance imaging (MRI) and artificial intelligence-based MRI in predicting CRM status. MRI has shown a potential advantage over computed tomography (CT) in predicting CRM status, with high sensitivity and specificity. We also discuss advances in MRI of RC, including conventional MRI with a body coil, high-resolution MRI with a phased-array coil, and endorectal MRI, and conclude with a discussion of artificial intelligence-based MRI techniques for predicting the CRM status of RC before and after treatment.
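The 1 mm criterion above translates naturally into a distance computation on segmentation masks: measure the shortest distance from tumor voxels to the mesorectal fascia (the anticipated CRM) and flag involvement when it falls at or below 1 mm. The sketch below is a generic illustration on 2D masks with invented spacing, not a validated clinical tool.

```python
# Minimal sketch: flag a potentially threatened CRM when tumor lies within 1 mm of the
# mesorectal fascia on a segmented slice. Masks and spacing are invented; this only
# illustrates the 1 mm rule and is not a validated clinical tool.
import numpy as np
from scipy import ndimage

def min_distance_to_margin_mm(tumor: np.ndarray, fascia: np.ndarray, spacing_mm) -> float:
    """tumor, fascia: 2D binary masks; spacing_mm: (row, col) pixel spacing in mm."""
    dist_to_fascia = ndimage.distance_transform_edt(~fascia.astype(bool), sampling=spacing_mm)
    return float(dist_to_fascia[tumor.astype(bool)].min())

tumor = np.zeros((256, 256), dtype=np.uint8)
fascia = np.zeros((256, 256), dtype=np.uint8)
tumor[100:140, 100:140] = 1
fascia[:, 141] = 1                      # fascia drawn one column away from the tumor edge

d = min_distance_to_margin_mm(tumor, fascia, spacing_mm=(0.7, 0.7))
print(f"shortest tumor-to-fascia distance: {d:.2f} mm; CRM threatened: {d <= 1.0}")
```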
Collapse
Affiliation(s)
- Yanqing Ma
- Department of Radiology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, 310014, China.
| | - Dongnan Ma
- Yangming College of Ningbo University, Ningbo, Zhejiang, 315010, China.
| | - Xiren Xu
- Department of Radiology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, 310014, China.
| | - Jie Li
- Department of Radiology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, 310014, China.
| | - Zheng Guan
- Department of Radiology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, 310014, China.
| |
Collapse
|
19
|
Liu Y, Sun BJT, Zhang C, Li B, Yu XX, Du Y. Preoperative prediction of perineural invasion of rectal cancer based on a magnetic resonance imaging radiomics model: A dual-center study. World J Gastroenterol 2024; 30:2233-2248. [PMID: 38690027 PMCID: PMC11056922 DOI: 10.3748/wjg.v30.i16.2233] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/02/2024] [Revised: 02/08/2024] [Accepted: 03/20/2024] [Indexed: 04/26/2024] Open
Abstract
BACKGROUND Perineural invasion (PNI) is an important pathological indicator and independent prognostic factor for patients with rectal cancer (RC). Preoperative prediction of PNI status is helpful for individualized treatment of RC. Several recent radiomics studies have predicted PNI status in RC with good results, but the findings have lacked generalizability. Preoperative prediction of PNI status therefore remains challenging and needs further study. AIM To establish and validate an optimal radiomics model for preoperatively predicting PNI status in RC patients. METHODS This retrospective study enrolled 244 postoperative patients with pathologically confirmed RC from two independent centers. The patients underwent preoperative high-resolution magnetic resonance imaging (MRI) between May 2019 and August 2022. Quantitative radiomics features were extracted and selected from oblique axial T2-weighted imaging (T2WI) and contrast-enhanced T1WI (T1CE) sequences. Radiomics signatures were constructed using logistic regression analysis, and the predictive potential of the various sequences was compared (T2WI, T1CE, and T2WI + T1CE fusion sequences). A clinical-radiomics (CR) model was established by combining the radiomics features and clinical risk factors. Internal and external validation groups were used to validate the proposed models. The area under the receiver operating characteristic curve (AUC), DeLong test, net reclassification improvement (NRI), integrated discrimination improvement (IDI), calibration curve, and decision curve analysis (DCA) were used to evaluate model performance. RESULTS Among the radiomics models, the T2WI + T1CE fusion sequence model showed the best predictive performance: in the training and internal validation groups, its AUCs were 0.839 [95% confidence interval (CI): 0.757-0.921] and 0.787 (95%CI: 0.650-0.923), higher than those of the T2WI and T1CE sequence models. The CR model, which combined radiomics features with clinical risk factors, had the best overall predictive performance. In the training, internal validation, and external validation groups, the AUCs of the CR model were 0.889 (95%CI: 0.824-0.954), 0.889 (95%CI: 0.803-0.976), and 0.894 (95%CI: 0.814-0.974). The DeLong test, NRI, and IDI showed that the CR model differed significantly from the other models (P < 0.05). Calibration curves demonstrated good agreement, and DCA revealed significant benefits of the CR model. CONCLUSION The CR model based on preoperative MRI radiomics features and clinical risk factors can noninvasively predict PNI status in RC preoperatively, facilitating individualized treatment of RC patients.
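To make the modeling step in studies like this one concrete, the following is a minimal Python sketch, assuming a pre-extracted radiomics feature table (e.g., from PyRadiomics): features are standardized, filtered by univariate selection, and fed to a logistic regression signature whose discrimination is summarized by the AUC. The arrays, feature count, and selection threshold are hypothetical, not the study's actual pipeline.

```python
# Illustrative sketch (not the authors' code): a logistic-regression radiomics signature
# evaluated by AUC, assuming radiomics features were already extracted per patient.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_radiomics_signature(X_train, y_train, k=20):
    """Standardize features, keep the k most discriminative ones, fit logistic regression."""
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=min(k, X_train.shape[1])),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)
    return model

# Hypothetical arrays: rows = patients, columns = radiomics features; y = PNI status (0/1).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(170, 100)), rng.integers(0, 2, 170)
X_val, y_val = rng.normal(size=(74, 100)), rng.integers(0, 2, 74)

signature = fit_radiomics_signature(X_train, y_train)
auc = roc_auc_score(y_val, signature.predict_proba(X_val)[:, 1])
print(f"Validation AUC: {auc:.3f}")
```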
Collapse
Affiliation(s)
- Yan Liu
- Department of Radiology, The Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, Sichuan Province, China
| | - Bai-Jin-Tao Sun
- Department of Radiology, The Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, Sichuan Province, China
| | - Chuan Zhang
- Department of Radiology, The Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, Sichuan Province, China
| | - Bing Li
- Department of Radiology, The Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, Sichuan Province, China
| | - Xiao-Xuan Yu
- Department of Radiology, The Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, Sichuan Province, China
| | - Yong Du
- Department of Radiology, The Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, Sichuan Province, China.
| |
Collapse
|
20
|
Flory MN, Napel S, Tsai EB. Artificial Intelligence in Radiology: Opportunities and Challenges. Semin Ultrasound CT MR 2024; 45:152-160. [PMID: 38403128 DOI: 10.1053/j.sult.2024.02.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/27/2024]
Abstract
Artificial intelligence's (AI) emergence in radiology elicits both excitement and uncertainty. AI holds promise for improving radiology with regards to clinical practice, education, and research opportunities. Yet, AI systems are trained on select datasets that can contain bias and inaccuracies. Radiologists must understand these limitations and engage with AI developers at every step of the process - from algorithm initiation and design to development and implementation - to maximize benefit and minimize harm that can be enabled by this technology.
Collapse
Affiliation(s)
- Marta N Flory
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
| | - Sandy Napel
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
| | - Emily B Tsai
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA.
| |
Collapse
|
21
|
Ali H, Muzammil MA, Dahiya DS, Ali F, Yasin S, Hanif W, Gangwani MK, Aziz M, Khalaf M, Basuli D, Al-Haddad M. Artificial intelligence in gastrointestinal endoscopy: a comprehensive review. Ann Gastroenterol 2024; 37:133-141. [PMID: 38481787 PMCID: PMC10927620 DOI: 10.20524/aog.2024.0861] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 12/05/2023] [Indexed: 02/14/2025] Open
Abstract
Integrating artificial intelligence (AI) into gastrointestinal (GI) endoscopy heralds a significant leap forward in managing GI disorders. AI-enabled applications, such as computer-aided detection and computer-aided diagnosis, have significantly advanced GI endoscopy, improving early detection, diagnosis and personalized treatment planning. AI algorithms have shown promise in the analysis of endoscopic data, critical in conditions with traditionally low diagnostic sensitivity, such as indeterminate biliary strictures and pancreatic cancer. Convolutional neural networks can markedly improve the diagnostic process when integrated with cholangioscopy or endoscopic ultrasound, especially in the detection of malignant biliary strictures and cholangiocarcinoma. AI's capacity to analyze complex image data and offer real-time feedback can streamline endoscopic procedures, reduce the need for invasive biopsies, and decrease associated adverse events. However, the clinical implementation of AI faces challenges, including data quality issues and the risk of overfitting, underscoring the need for further research and validation. As the technology matures, AI is poised to become an indispensable tool in the gastroenterologist's arsenal, necessitating the integration of robust, validated AI applications into routine clinical practice. Despite remarkable advances, challenges such as operator-dependent accuracy and the need for intricate examinations persist. This review delves into the transformative role of AI in enhancing endoscopic diagnostic accuracy, particularly highlighting its utility in the early detection and personalized treatment of GI diseases.
Collapse
Affiliation(s)
- Hassam Ali
- Department of Gastroenterology and Hepatology, ECU Health Medical Center/Brody School of Medicine, Greenville, North Carolina, USA (Hassam Ali, Muhammad Khalaf)
| | - Muhammad Ali Muzammil
- Department of Internal Medicine, Dow University of Health Sciences, Sindh, PK (Muhammad Ali Muzammil)
| | - Dushyant Singh Dahiya
- Division of Gastroenterology, Hepatology & Motility, The University of Kansas School of Medicine, Kansas City, Kansas, USA (Dushyant Singh Dahiya)
| | - Farishta Ali
- Department of Internal Medicine, Khyber Girls Medical College, Peshawar, PK (Farishta Ali)
| | - Shafay Yasin
- Department of Internal Medicine, Quaid-e-Azam Medical College, Punjab, PK (Shafay Yasin, Waqar Hanif)
| | - Waqar Hanif
- Department of Internal Medicine, Quaid-e-Azam Medical College, Punjab, PK (Shafay Yasin, Waqar Hanif)
| | - Manesh Kumar Gangwani
- Department of Medicine, University of Toledo Medical Center, Toledo, OH, USA (Manesh Kumar Gangwani)
| | - Muhammad Aziz
- Department of Gastroenterology and Hepatology, The University of Toledo Medical Center, Toledo, OH, USA (Muhammad Aziz)
| | - Muhammad Khalaf
- Department of Gastroenterology and Hepatology, ECU Health Medical Center/Brody School of Medicine, Greenville, North Carolina, USA (Hassam Ali, Muhammad Khalaf)
| | - Debargha Basuli
- Department of Internal Medicine, East Carolina University/Brody School of Medicine, Greenville, North Carolina, USA (Debargha Basuli)
| | - Mohammad Al-Haddad
- Division of Gastroenterology and Hepatology, Indiana University School of Medicine, Indianapolis, IN, USA (Mohammad Al-Haddad)
| |
Collapse
|
22
|
Al-Hayali A, Komeili A, Azad A, Sathiadoss P, Schieda N, Ukwatta E. Machine learning based prediction of image quality in prostate MRI using rapid localizer images. J Med Imaging (Bellingham) 2024; 11:026001. [PMID: 38435711 PMCID: PMC10905647 DOI: 10.1117/1.jmi.11.2.026001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Revised: 10/17/2023] [Accepted: 01/29/2024] [Indexed: 03/05/2024] Open
Abstract
Purpose The diagnostic performance of prostate MRI depends on high-quality imaging. Prostate MRI quality is inversely proportional to the amount of rectal gas and distention. Early detection of poor-quality MRI may enable intervention to remove gas, or rescheduling of the exam, saving time. We developed a machine learning based approach that predicts the quality of yet-to-be-acquired MRI images solely from the MRI rapid localizer sequence, which can be acquired in a few seconds. Approach The dataset consists of 213 (147 for training and 64 for testing) prostate sagittal T2-weighted (T2W) MRI localizer images and rectal content, manually labeled by an expert radiologist. Each MRI localizer contains seven two-dimensional (2D) slices of the patient, accompanied by manual segmentations of the rectum for each slice. Cascaded and end-to-end deep learning models were used to predict the quality of yet-to-be-acquired T2W, DWI, and apparent diffusion coefficient (ADC) MRI images. Predictions were compared to quality scores determined by the experts using the area under the receiver operating characteristic curve and the intraclass correlation coefficient. Results In the test set of 64 patients, optimal versus suboptimal exams occurred in 95.3% (61/64) versus 4.7% (3/64) for T2W, 90.6% (58/64) versus 9.4% (6/64) for DWI, and 89.1% (57/64) versus 10.9% (7/64) for ADC. The best performing segmentation model was a 2D U-Net with a ResNet-34 encoder and ImageNet weights. The best performing classifier was the radiomics based classifier. Conclusions A radiomics based classifier applied to localizer images achieves accurate diagnosis of subsequent image quality for T2W, DWI, and ADC prostate MRI sequences.
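As an illustration of the segmentation backbone reported as best performing (a 2D U-Net with a ResNet-34 encoder and ImageNet weights), the sketch below shows one way such a model might be instantiated with the segmentation_models_pytorch package; the input size and single-channel configuration are assumptions, and this is not the authors' implementation.

```python
# Minimal sketch of a 2D U-Net with a ResNet-34 encoder initialized from ImageNet,
# as one plausible way to segment the rectum on localizer slices.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",      # ResNet-34 backbone
    encoder_weights="imagenet",   # initialize encoder with ImageNet weights
    in_channels=1,                # single-channel T2W localizer slices (assumed)
    classes=1,                    # binary rectum mask
)

# One hypothetical 2D localizer slice (batch=1, channel=1, 256x256).
slice_2d = torch.randn(1, 1, 256, 256)
with torch.no_grad():
    rectum_logits = model(slice_2d)             # raw logits
    rectum_prob = torch.sigmoid(rectum_logits)  # per-pixel rectum probability
print(rectum_prob.shape)  # torch.Size([1, 1, 256, 256])
```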
Collapse
Affiliation(s)
- Abdullah Al-Hayali
- University of Guelph, School of Engineering, Guelph Imaging AI Lab, Guelph, Ontario, Canada
| | - Amin Komeili
- University of Calgary, Department of Biomedical Engineering, Calgary, Alberta, Canada
| | - Azar Azad
- A.I. Vali Inc., Toronto, Ontario, Canada
| | - Paul Sathiadoss
- University of Ottawa, Department of Radiology, Ottawa, Ontario, Canada
| | - Nicola Schieda
- University of Ottawa, Department of Radiology, Ottawa, Ontario, Canada
| | - Eranga Ukwatta
- University of Guelph, School of Engineering, Guelph Imaging AI Lab, Guelph, Ontario, Canada
| |
Collapse
|
23
|
Zhao G, Chen X, Zhu M, Liu Y, Wang Y. Exploring the application and future outlook of Artificial intelligence in pancreatic cancer. Front Oncol 2024; 14:1345810. [PMID: 38450187 PMCID: PMC10915754 DOI: 10.3389/fonc.2024.1345810] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 01/29/2024] [Indexed: 03/08/2024] Open
Abstract
Pancreatic cancer, an exceptionally malignant tumor of the digestive system, presents a challenge due to its lack of typical early symptoms and highly invasive nature. The majority of pancreatic cancer patients are diagnosed when curative surgical resection is no longer possible, resulting in a poor overall prognosis. In recent years, the rapid progress of Artificial intelligence (AI) in the medical field has led to the extensive utilization of machine learning and deep learning as the prevailing approaches. Various models based on AI technology have been employed in the early screening, diagnosis, treatment, and prognostic prediction of pancreatic cancer patients. Furthermore, the development and application of three-dimensional visualization and augmented reality navigation techniques have also found their way into pancreatic cancer surgery. This article provides a concise summary of the current state of AI technology in pancreatic cancer and offers a promising outlook for its future applications.
Collapse
Affiliation(s)
- Guohua Zhao
- Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
| | - Xi Chen
- Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
- Department of Clinical integration of traditional Chinese and Western medicine, Liaoning University of Traditional Chinese Medicine, Liaoning, China
| | - Mengying Zhu
- Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
- Department of Clinical integration of traditional Chinese and Western medicine, Liaoning University of Traditional Chinese Medicine, Liaoning, China
| | - Yang Liu
- Department of Ophthalmology, First Hospital of China Medical University, Liaoning, China
| | - Yue Wang
- Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
| |
Collapse
|
24
|
Liu S, Peng S, Zhang M, Wang Z, Li L. Multimodal integration for Barrett's esophagus. iScience 2024; 27:108437. [PMID: 38292435 PMCID: PMC10827497 DOI: 10.1016/j.isci.2023.108437] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 07/10/2023] [Accepted: 11/09/2023] [Indexed: 02/01/2024] Open
Abstract
Esophageal adenocarcinoma poses a worldwide challenge: early prediction and risk assessment in clinical Barrett's esophagus (BE). In recent years, interest in prediction and risk assessment for clinical BE has grown. However, existing devices offer limited resolution and are large and expensive. Inspired by the collaboration between human visual perception and cortical processing, we propose a multimodal learning framework that tackles tasks from various modalities so that they can benefit from each other. Our experimental results indicate that low-level modalities can directly influence high-level modalities and contribute to the final risk grading, which can maximize the clinical performance of medical professionals.
Collapse
Affiliation(s)
- Shubin Liu
- School of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
| | - Shiyu Peng
- Department of Gastroenterology, First Affiliated Hospital of Shihezi University, Xinjiang 832061, China
| | - Mengxuan Zhang
- Faculty of Science, The University of Melbourne, Parkville, VIC 3010, Australia
| | - Ziyuan Wang
- School of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
| | - Lei Li
- School of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
| |
Collapse
|
25
|
Kim M, Park T, Oh BY, Kim MJ, Cho BJ, Son IT. Performance reporting design in artificial intelligence studies using image-based TNM staging and prognostic parameters in rectal cancer: a systematic review. Ann Coloproctol 2024; 40:13-26. [PMID: 38414120 PMCID: PMC10915525 DOI: 10.3393/ac.2023.00892.0127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/26/2023] [Revised: 01/15/2024] [Accepted: 01/16/2024] [Indexed: 02/29/2024] Open
Abstract
PURPOSE The integration of artificial intelligence (AI) and magnetic resonance imaging in rectal cancer has the potential to enhance diagnostic accuracy by identifying subtle patterns and aiding tumor delineation and lymph node assessment. According to our systematic review focusing on convolutional neural networks, AI-driven tumor staging and the prediction of treatment response facilitate tailored treatment strategies for patients with rectal cancer. METHODS This paper summarizes the current landscape of AI in the imaging field of rectal cancer, emphasizing the performance reporting design based on the quality of the dataset, model performance, and external validation. RESULTS AI-driven tumor segmentation has demonstrated promising results using various convolutional neural network models. AI-based predictions of staging and treatment response have exhibited potential as auxiliary tools for personalized treatment strategies. Some studies have indicated superior performance over conventional models in predicting microsatellite instability and KRAS status, offering noninvasive and cost-effective alternatives for identifying genetic mutations. CONCLUSION Image-based AI studies for rectal cancer have shown acceptable diagnostic performance but face several challenges, including limited dataset sizes with standardized data, the need for multicenter studies, and the absence of oncologic relevance and external validation for clinical implementation. Overcoming these pitfalls and hurdles is essential for the feasible integration of AI models in clinical settings for rectal cancer, warranting further research.
Collapse
Affiliation(s)
- Minsung Kim
- Department of Surgery, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Korea
| | - Taeyong Park
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang, Korea
| | - Bo Young Oh
- Department of Surgery, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Korea
| | - Min Jeong Kim
- Department of Radiology, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Korea
| | - Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang, Korea
| | - Il Tae Son
- Department of Surgery, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Korea
| |
Collapse
|
26
|
Bibault JE, Giraud P. Deep learning for automated segmentation in radiotherapy: a narrative review. Br J Radiol 2024; 97:13-20. [PMID: 38263838 PMCID: PMC11027240 DOI: 10.1093/bjr/tqad018] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 08/10/2023] [Accepted: 10/27/2023] [Indexed: 01/25/2024] Open
Abstract
The segmentation of organs and structures is a critical component of radiation therapy planning, with manual segmentation being a laborious and time-consuming task. Interobserver variability can also impact the outcomes of radiation therapy. Deep neural networks have recently gained attention for their ability to automate segmentation tasks, with convolutional neural networks (CNNs) being a popular approach. This article provides a descriptive review of the literature on deep learning (DL) techniques for segmentation in radiation therapy planning. The review focuses on five clinical sub-sites and finds that U-Net is the most commonly used CNN architecture. Studies using DL for image segmentation were included for brain, head and neck, lung, abdominal, and pelvic cancers. The majority of DL segmentation articles in radiation therapy planning have concentrated on normal tissue structures. N-fold cross-validation was commonly employed, without external validation. This research area is expanding quickly, and standardization of metrics and independent validation are critical to benchmarking and comparing proposed methods.
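Because nearly every segmentation study summarized in this list reports the Dice similarity coefficient, a short worked example may help fix the definition; the toy masks below are illustrative only.

```python
# Worked example of the Dice similarity coefficient (DSC) on binary masks:
# DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Compute DSC for two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy 2D masks: a predicted square overlapping a shifted ground-truth square.
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1
gt = np.zeros((10, 10)); gt[3:7, 3:7] = 1
print(f"DSC = {dice_coefficient(pred, gt):.3f}")  # 9 overlapping pixels of 16 + 16 -> 0.563
```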
Collapse
Affiliation(s)
- Jean-Emmanuel Bibault
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique—Hôpitaux de Paris, Université de Paris Cité, Paris, 75015, France
- INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
| | - Paul Giraud
- INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Radiation Oncology Department, Pitié Salpêtrière Hospital, Assistance Publique—Hôpitaux de Paris, Paris Sorbonne Universités, Paris, 75013, France
| |
Collapse
|
27
|
Yang H, Liu H, Lin J, Xiao H, Guo Y, Mei H, Ding Q, Yuan Y, Lai X, Wu K, Wu S. An automatic texture feature analysis framework of renal tumor: surgical, pathological, and molecular evaluation based on multi-phase abdominal CT. Eur Radiol 2024; 34:355-366. [PMID: 37528301 DOI: 10.1007/s00330-023-10016-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 06/06/2023] [Accepted: 06/12/2023] [Indexed: 08/03/2023]
Abstract
OBJECTIVES To determine whether texture feature analysis of multi-phase abdominal CT can provide robust prediction of benign versus malignant status, histological subtype, pathological stage, nephrectomy risk, pathological grade, and Ki67 index in renal tumors. METHODS A total of 1051 participants with renal tumors were split into an internal cohort (850 patients from four different hospitals) and an external testing cohort (201 patients from another local hospital). The proposed framework comprised a 3D kidney and tumor segmentation model based on 3D-UNet, a feature extractor for the regions of interest based on radiomics and image dimension reduction, and six XGBoost classifiers. A quantitative model interpretation method, SHAP, was used to explore the contribution of each feature. RESULTS The proposed multi-phase abdominal CT model provided robust prediction of benign versus malignant status, histological subtype, pathological stage, nephrectomy risk, pathological grade, and Ki67 index in the internal validation set, with AUROC values of 0.88 ± 0.1, 0.90 ± 0.1, 0.91 ± 0.1, 0.89 ± 0.1, 0.84 ± 0.1, and 0.88 ± 0.1, respectively. The external testing set also showed impressive results, with AUROC values of 0.83 ± 0.1, 0.83 ± 0.1, 0.85 ± 0.1, 0.81 ± 0.1, 0.79 ± 0.1, and 0.81 ± 0.1, respectively. The radiomics features, including first-order statistics, tumor size-related morphology, and shape-related tumor features, contributed most to the model predictions. CONCLUSIONS Automatic texture feature analysis of multi-phase abdominal CT provides reliable predictions for multiple tasks, suggesting its potential for clinical application. CLINICAL RELEVANCE STATEMENT The automatic texture feature analysis framework, based on multi-phase abdominal CT, provides robust and reliable predictions for multiple tasks. These insights can serve as a guiding tool for clinical diagnosis and treatment, making medical imaging an essential component of the process. KEY POINTS • The automatic texture feature analysis framework based on multi-phase abdominal CT can provide more accurate prediction of benign versus malignant status, histological subtype, pathological stage, nephrectomy risk, pathological grade, and Ki67 index in renal tumors. • A quantitative decomposition of the prediction model was conducted to explore the contribution of the extracted features. • The framework, developed on 1051 patients from 5 medical centers with a heterogeneous external data testing strategy, can be transferred to various tasks involving new datasets.
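The classifier-plus-interpretation stage described above (XGBoost models explained with SHAP) can be sketched as follows; the feature matrix, labels, and hyperparameters are synthetic placeholders, and only the general XGBoost + SHAP pattern mirrors the abstract.

```python
# Hedged sketch: an XGBoost classifier on extracted texture features, with SHAP values
# used to inspect per-feature contributions to the predictions.
import numpy as np
import shap
import xgboost as xgb
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X_train = rng.normal(size=(800, 50))   # 50 hypothetical radiomics / reduced features
y_train = rng.integers(0, 2, 800)      # e.g., benign (0) vs malignant (1)
X_test = rng.normal(size=(200, 50))
y_test = rng.integers(0, 2, 200)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                        eval_metric="logloss")
clf.fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))

# SHAP quantifies how much each feature pushes an individual prediction toward the
# positive class, which is the kind of model interpretation reported in the study.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)
print("Mean |SHAP| of first feature:", np.abs(shap_values[:, 0]).mean())
```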
Collapse
Affiliation(s)
- Huancheng Yang
- Luohu Clinical Institute, Shantou University Medical College, Shantou, 515000, China
- Department of Urology, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
- Shenzhen Following Precision Medical Research Institute, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
- Shantou University Medical College, Shantou University, Shantou, 515000, China
| | - Hanlin Liu
- Department of Radiology, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
| | - Jiashan Lin
- Luohu Clinical Institute, Shantou University Medical College, Shantou, 515000, China
- Shantou University Medical College, Shantou University, Shantou, 515000, China
- Department of Urology, Peking University Shenzhen Hospital, Shenzhen, 518036, China
| | - Hongwei Xiao
- Department of Urology, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
- Shenzhen Following Precision Medical Research Institute, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
| | - Yiqi Guo
- Luohu Clinical Institute, Shantou University Medical College, Shantou, 515000, China
- Shantou University Medical College, Shantou University, Shantou, 515000, China
| | - Hangru Mei
- Department of Urology, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
- Shenzhen Following Precision Medical Research Institute, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
| | - Qiuxia Ding
- Department of Urology, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
- Shenzhen Following Precision Medical Research Institute, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
| | - Yangguang Yuan
- Department of Radiology, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
| | - Xiaohui Lai
- Department of Radiology, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China
| | - Kai Wu
- Department of Urology, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China.
- Shenzhen Following Precision Medical Research Institute, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China.
| | - Song Wu
- Luohu Clinical Institute, Shantou University Medical College, Shantou, 515000, China.
- Shenzhen Following Precision Medical Research Institute, The Third Affiliated Hospital of Shenzhen University (Luohu Hospital Group), Shenzhen, 518000, China.
- Shantou University Medical College, Shantou University, Shantou, 515000, China.
- Department of Urology, Health Science Center, South China Hospital, Shenzhen University, Shenzhen, 518116, China.
| |
Collapse
|
28
|
Li D, Wang J, Yang J, Zhao J, Yang X, Cui Y, Zhang K. RTAU-Net: A novel 3D rectal tumor segmentation model based on dual path fusion and attentional guidance. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107842. [PMID: 37832426 DOI: 10.1016/j.cmpb.2023.107842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 09/18/2023] [Accepted: 10/01/2023] [Indexed: 10/15/2023]
Abstract
BACKGROUND AND OBJECTIVE According to the Global Cancer Statistics 2020, colorectal cancer has the third-highest diagnosis rate (10.0%) and the second-highest mortality rate (9.4%) among 36 cancer types. Rectal cancer accounts for a large proportion of colorectal cancer. The size and shape of a rectal tumor can directly affect diagnosis and treatment decisions. Existing rectal tumor segmentation methods are based on two-dimensional slices, so they cannot analyze a patient's tumor as a whole and lose the correlation between slices of the MRI volume, limiting their practical value. METHODS In this paper, a three-dimensional rectal tumor segmentation model is proposed. First, image preprocessing is performed to reduce the effect of the unbalanced proportion of background and target regions and to improve image quality. Second, a dual-path fusion network is designed to extract both global features and local detail features of rectal tumors. The network includes two encoders: a residual encoder that enhances the spatial detail information and feature representation of the tumor, and a transformer encoder that extracts the global contour information of the tumor. In the decoding stage, the information extracted from the dual paths is merged and decoded. In addition, to address the complex morphology and varying sizes of rectal tumors, a multi-scale fusion channel attention mechanism is designed to capture important contextual information at different scales. Finally, the 3D rectal tumor segmentation results are visualized. RESULTS RTAU-Net was evaluated on datasets provided by Shanxi Provincial Cancer Hospital and Xinhua Hospital. The experimental results showed that the Dice coefficient of tumor segmentation reached 0.7978 and 0.6792, respectively, improvements of 2.78% and 7.02% over the next-best model. CONCLUSIONS Although the morphology of rectal tumors varies, RTAU-Net can precisely localize rectal tumors and learn their contours and details, which can relieve physicians' workload and improve diagnostic accuracy.
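For readers unfamiliar with channel attention, the following is a minimal PyTorch sketch of a squeeze-and-excitation style block for 3D feature maps; it illustrates the general idea only and is not the multi-scale fusion module actually used in RTAU-Net.

```python
# Minimal sketch of channel attention on a 3D feature map: squeeze spatial dimensions
# into per-channel statistics, then learn per-channel weights to reweight the features.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Reweight the channels of a 3D feature map using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)      # squeeze: global context per channel
        self.fc = nn.Sequential(                 # excitation: per-channel weights in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                             # channel-wise reweighting

feat = torch.randn(2, 32, 16, 64, 64)            # hypothetical 3D feature map
print(ChannelAttention3D(32)(feat).shape)        # torch.Size([2, 32, 16, 64, 64])
```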
Collapse
Affiliation(s)
- Dengao Li
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China.
| | - Juan Wang
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
| | - Jicheng Yang
- Computer technology, Ocean University of China, Qingdao 266100, China
| | - Jumin Zhao
- College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
| | - Xiaotang Yang
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan 030013, China
| | - Yanfen Cui
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan 030013, China
| | - Kenan Zhang
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
| |
Collapse
|
29
|
Solbakken AM, Sellevold S, Spasojevic M, Julsrud L, Emblemsvåg HL, Reims HM, Sørensen O, Thorgersen EB, Fauske L, Ågren JSM, Brennhovd B, Ryder T, Larsen SG, Flatmark K. Navigation-Assisted Surgery for Locally Advanced Primary and Recurrent Rectal Cancer. Ann Surg Oncol 2023; 30:7602-7611. [PMID: 37481493 PMCID: PMC10562504 DOI: 10.1245/s10434-023-13964-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Accepted: 07/03/2023] [Indexed: 07/24/2023]
Abstract
BACKGROUND In some surgical disciplines, navigation-assisted surgery has become standard of care, but in rectal cancer, indications for navigation and the utility of different technologies remain undetermined. METHODS The NAVI-LARRC prospective study (NCT04512937; IDEAL Stage 2a) evaluated feasibility of navigation in patients with locally advanced primary (LARC) and recurrent rectal cancer (LRRC). Included patients had advanced tumours with high risk of incomplete (R1/R2) resection, and navigation was considered likely to improve the probability of complete resection (R0). Tumours were classified according to pelvic compartmental involvement, as suggested by the Royal Marsden group. The BrainlabTM navigation platform was used for preoperative segmentation of tumour and pelvic anatomy, and for intraoperative navigation with optical tracking. R0 resection rates, surgeons' experiences, and adherence to the preoperative resection plan were assessed. RESULTS Seventeen patients with tumours involving the posterior/lateral compartments underwent navigation-assisted procedures. Fifteen patients required abdominosacral resection, and 3 had resection of the sciatic nerve. R0 resection was obtained in 6/8 (75%) LARC and 6/9 (69%) LRRC cases. Preoperative segmentation was time-consuming (median 3.5 h), but intraoperative navigation was accurate. Surgeons reported navigation to be feasible, and adherence to the resection plan was satisfactory. CONCLUSIONS Navigation-assisted surgery using optical tracking was feasible. The preoperative planning was time-consuming, but intraoperative navigation was accurate and resulted in acceptable R0 resection rates. Selected patients are likely to benefit from navigation-assisted surgery.
Collapse
Affiliation(s)
- Arne M Solbakken
- Department of Gastroenterological Surgery, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway.
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway.
| | - Simen Sellevold
- Department of Orthopaedic Oncology, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Milan Spasojevic
- Department of Gastroenterological Surgery, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Lars Julsrud
- Department of Radiology, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Hanne-Line Emblemsvåg
- Department of Radiology, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Henrik M Reims
- Department of Pathology, Rikshospitalet, Oslo University Hospital, Oslo, Norway
| | - Olaf Sørensen
- Department of Gastroenterological Surgery, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Ebbe B Thorgersen
- Department of Gastroenterological Surgery, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Lena Fauske
- Department of Oncology, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
- Department of Interdisciplinary Health Sciences, Institute of Health and Society, University of Oslo, Oslo, Norway
| | | | - Bjørn Brennhovd
- Department of Urology, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Truls Ryder
- Department of Oncologic Plastic Surgery, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Stein G Larsen
- Department of Gastroenterological Surgery, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| | - Kjersti Flatmark
- Department of Gastroenterological Surgery, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Tumour Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Oslo, Norway
| |
Collapse
|
30
|
Rodríguez Outeiral R, Ferreira Silvério N, González PJ, Schaake EE, Janssen T, van der Heide UA, Simões R. A network score-based metric to optimize the quality assurance of automatic radiotherapy target segmentations. Phys Imaging Radiat Oncol 2023; 28:100500. [PMID: 37869474 PMCID: PMC10587515 DOI: 10.1016/j.phro.2023.100500] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Revised: 10/06/2023] [Accepted: 10/09/2023] [Indexed: 10/24/2023] Open
Abstract
Background and purpose Existing methods for quality assurance of the radiotherapy auto-segmentations focus on the correlation between the average model entropy and the Dice Similarity Coefficient (DSC) only. We identified a metric directly derived from the output of the network and correlated it with clinically relevant metrics for contour accuracy. Materials and Methods Magnetic Resonance Imaging auto-segmentations were available for the gross tumor volume for cervical cancer brachytherapy (106 segmentations) and for the clinical target volume for rectal cancer external-beam radiotherapy (77 segmentations). The nnU-Net's output before binarization was taken as a score map. We defined a metric as the mean of the voxels in the score map above a threshold (λ). Comparisons were made with the mean and standard deviation over the score map and with the mean over the entropy map. The DSC, the 95th Hausdorff distance, the mean surface distance (MSD) and the surface DSC were computed for segmentation quality. Correlations between the studied metrics and model quality were assessed with the Pearson correlation coefficient (r). The area under the curve (AUC) was determined for detecting segmentations that require reviewing. Results For both tasks, our metric (λ = 0.30) correlated more strongly with the segmentation quality than the mean over the entropy map (for surface DSC, r > 0.65 vs. r < 0.60). The AUC was above 0.84 for detecting MSD values above 2 mm. Conclusions Our metric correlated strongly with clinically relevant segmentation metrics and detected segmentations that required reviewing, indicating its potential for automatic quality assurance of radiotherapy target auto-segmentations.
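The proposed metric is simple enough to sketch directly from its definition: the mean of the pre-binarization score-map voxels above a threshold λ (0.30 in the study), correlated against a segmentation-quality measure. The synthetic data below are placeholders; only the metric definition follows the abstract.

```python
# Sketch of the score-map confidence metric: mean of voxels above a threshold lambda,
# correlated here with a (synthetic) surface DSC per case.
import numpy as np
from scipy.stats import pearsonr

def score_map_metric(score_map: np.ndarray, lam: float = 0.30) -> float:
    """Mean of the network's pre-binarization scores over voxels above lam."""
    voxels = score_map[score_map > lam]
    return float(voxels.mean()) if voxels.size else 0.0

rng = np.random.default_rng(1)
# Hypothetical cohort: 50 score maps and their (here random) surface DSC values.
metrics = np.array([score_map_metric(rng.uniform(size=(32, 64, 64))) for _ in range(50)])
surface_dsc = rng.uniform(0.5, 1.0, size=50)
r, p = pearsonr(metrics, surface_dsc)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```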
Collapse
Affiliation(s)
- Roque Rodríguez Outeiral
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
| | - Nicole Ferreira Silvério
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
| | - Patrick J. González
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
| | - Eva E. Schaake
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
| | - Tomas Janssen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
| | - Uulke A. van der Heide
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
| | - Rita Simões
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
| |
Collapse
|
31
|
den Boer R, Siang KNW, Yuen M, Borggreve A, Defize I, van Lier A, Ruurda J, van Hillegersberg R, Mook S, Meijer G. A robust semi-automatic delineation workflow using denoised diffusion weighted magnetic resonance imaging for response assessment of patients with esophageal cancer treated with neoadjuvant chemoradiotherapy. Phys Imaging Radiat Oncol 2023; 28:100489. [PMID: 37822533 PMCID: PMC10562188 DOI: 10.1016/j.phro.2023.100489] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 08/24/2023] [Accepted: 08/25/2023] [Indexed: 10/13/2023] Open
Abstract
Background and Purpose Diffusion weighted magnetic resonance imaging (DW-MRI) can be prognostic for response to neoadjuvant chemoradiotherapy (nCRT) in patients with esophageal cancer. However, manual tumor delineation is labor intensive and subjective. Furthermore, noise in DW-MRI images propagates into the corresponding apparent diffusion coefficient (ADC) signal. In this study, a workflow is investigated that combines a denoising algorithm with semi-automatic segmentation for quantifying ADC changes. Materials and Methods Twenty patients with esophageal cancer who underwent nCRT before esophagectomy were included. One baseline and five weekly DW-MRI scans were acquired for every patient during nCRT. A self-supervised learning denoising algorithm, Patch2Self, was used to denoise the DW-MRI images. A semi-automatic delineation workflow (SADW) was then developed and compared with a manually adjusted workflow (MAW). The agreement between workflows was determined using Dice coefficients and Bland-Altman plots. The prognostic value of the ADCmean increase (%/week) for pathologic complete response (pCR) was assessed using c-statistics. Results The median Dice coefficient between the SADW and MAW was 0.64 (interquartile range 0.20). For the MAW, the c-statistic for predicting pCR was 0.80 (95% confidence interval (CI): 0.56-1.00). The SADW showed a c-statistic of 0.84 (95%CI: 0.63-1.00) after denoising. No statistically significant differences in c-statistics were observed between the workflows or after applying denoising. Conclusions The SADW resulted in non-inferior prognostic value for pCR compared to the more laborious MAW, allowing broad-scale application. The effect of denoising on the prognostic value for pCR needs to be investigated in larger cohorts.
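For orientation, a hedged sketch of Patch2Self denoising as provided by the DIPY library is shown below; the file names are placeholders and the call is one plausible configuration rather than the study's actual processing pipeline.

```python
# Hedged sketch of self-supervised DW-MRI denoising with Patch2Self via DIPY.
import numpy as np
from dipy.io.image import load_nifti, save_nifti
from dipy.io.gradients import read_bvals_bvecs
from dipy.denoise.patch2self import patch2self

# Hypothetical input paths for one weekly DW-MRI scan.
data, affine = load_nifti("dwi_week1.nii.gz")            # 4D array: x, y, z, diffusion volumes
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")

# Patch2Self learns to predict each diffusion volume from all the others,
# suppressing noise that would otherwise propagate into the ADC maps.
denoised = patch2self(data, bvals, model="ols",
                      shift_intensity=True, clip_negative_vals=False)
save_nifti("dwi_week1_denoised.nii.gz", denoised, affine)
print("Denoised volume shape:", np.asarray(denoised).shape)
```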
Collapse
Affiliation(s)
- Robin den Boer
- University Medical Center Utrecht, Department of Radiation Oncology, Utrecht, The Netherlands
| | - Kelvin Ng Wei Siang
- Erasmus MC Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, Rotterdam, The Netherlands
- Holland Proton Therapy Center, Department of Medical Physics & Informatics, Delft, The Netherlands
| | - Mandy Yuen
- University Medical Center Utrecht, Department of Radiation Oncology, Utrecht, The Netherlands
| | - Alicia Borggreve
- University Medical Center Utrecht, Department of Radiation Oncology, Utrecht, The Netherlands
| | - Ingmar Defize
- University Medical Center Utrecht, Department of Radiation Oncology, Utrecht, The Netherlands
| | - Astrid van Lier
- University Medical Center Utrecht, Department of Radiation Oncology, Utrecht, The Netherlands
| | - Jelle Ruurda
- University Medical Center Utrecht, Department of Surgery, Utrecht, The Netherlands
| | | | - Stella Mook
- University Medical Center Utrecht, Department of Radiation Oncology, Utrecht, The Netherlands
| | - Gert Meijer
- University Medical Center Utrecht, Department of Radiation Oncology, Utrecht, The Netherlands
| |
Collapse
|
32
|
Said D, Carbonell G, Stocker D, Hectors S, Vietti-Violi N, Bane O, Chin X, Schwartz M, Tabrizian P, Lewis S, Greenspan H, Jégou S, Schiratti JB, Jehanno P, Taouli B. Semiautomated segmentation of hepatocellular carcinoma tumors with MRI using convolutional neural networks. Eur Radiol 2023; 33:6020-6032. [PMID: 37071167 DOI: 10.1007/s00330-023-09613-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 02/09/2023] [Accepted: 02/26/2023] [Indexed: 04/19/2023]
Abstract
OBJECTIVE To assess the performance of convolutional neural networks (CNNs) for semiautomated segmentation of hepatocellular carcinoma (HCC) tumors on MRI. METHODS This retrospective single-center study included 292 patients (237 M/55F, mean age 61 years) with pathologically confirmed HCC between 08/2015 and 06/2019 and who underwent MRI before surgery. The dataset was randomly divided into training (n = 195), validation (n = 66), and test sets (n = 31). Volumes of interest (VOIs) were manually placed on index lesions by 3 independent radiologists on different sequences (T2-weighted imaging [WI], T1WI pre-and post-contrast on arterial [AP], portal venous [PVP], delayed [DP, 3 min post-contrast] and hepatobiliary phases [HBP, when using gadoxetate], and diffusion-weighted imaging [DWI]). Manual segmentation was used as ground truth to train and validate a CNN-based pipeline. For semiautomated segmentation of tumors, we selected a random pixel inside the VOI, and the CNN provided two outputs: single slice and volumetric outputs. Segmentation performance and inter-observer agreement were analyzed using the 3D Dice similarity coefficient (DSC). RESULTS A total of 261 HCCs were segmented on the training/validation sets, and 31 on the test set. The median lesion size was 3.0 cm (IQR 2.0-5.2 cm). Mean DSC (test set) varied depending on the MRI sequence with a range between 0.442 (ADC) and 0.778 (high b-value DWI) for single-slice segmentation; and between 0.305 (ADC) and 0.667 (T1WI pre) for volumetric-segmentation. Comparison between the two models showed better performance in single-slice segmentation, with statistical significance on T2WI, T1WI-PVP, DWI, and ADC. Inter-observer reproducibility of segmentation analysis showed a mean DSC of 0.71 in lesions between 1 and 2 cm, 0.85 in lesions between 2 and 5 cm, and 0.82 in lesions > 5 cm. CONCLUSION CNN models have fair to good performance for semiautomated HCC segmentation, depending on the sequence and tumor size, with better performance for the single-slice approach. Refinement of volumetric approaches is needed in future studies. KEY POINTS • Semiautomated single-slice and volumetric segmentation using convolutional neural networks (CNNs) models provided fair to good performance for hepatocellular carcinoma segmentation on MRI. • CNN models' performance for HCC segmentation accuracy depends on the MRI sequence and tumor size, with the best results on diffusion-weighted imaging and T1-weighted imaging pre-contrast, and for larger lesions.
Collapse
Affiliation(s)
- Daniela Said
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Radiology, Clínica Universidad de los Andes, Santiago, Chile
| | - Guillermo Carbonell
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Radiology, University Hospital Virgen de La Arrixaca, Murcia, Spain
| | - Daniel Stocker
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
| | - Stefanie Hectors
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Naik Vietti-Violi
- Department of Radiology, Lausanne University Hospital, Lausanne, Switzerland
| | - Octavia Bane
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Xing Chin
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Myron Schwartz
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Parissa Tabrizian
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Sara Lewis
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, 1470 Madison Ave, New York, NY, 10029, USA
| | - Hayit Greenspan
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | | | | | | | - Bachir Taouli
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, 1470 Madison Ave, New York, NY, 10029, USA.
| |
Collapse
|
33
|
Lin YC, Lin G, Pandey S, Yeh CH, Wang JJ, Lin CY, Ho TY, Ko SF, Ng SH. Fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer on MRI using deep learning. Eur Radiol 2023; 33:6548-6556. [PMID: 37338554 PMCID: PMC10415433 DOI: 10.1007/s00330-023-09827-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2022] [Revised: 03/29/2023] [Accepted: 04/14/2023] [Indexed: 06/21/2023]
Abstract
OBJECTIVES To use a convolutional neural network for fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer (HPC) tumors on MRI. METHODS MR images were collected from 222 HPC patients; 178 patients were used for training, and another 44 patients were recruited for testing. U-Net and DeepLab V3+ architectures were used to train the models. Model performance was evaluated using the Dice similarity coefficient (DSC), Jaccard index, and average surface distance. The reliability of the radiomics parameters of the tumor extracted by the models was assessed using the intraclass correlation coefficient (ICC). RESULTS The tumor volumes predicted by the DeepLab V3+ and U-Net models were highly correlated with those delineated manually (p < 0.001). The DSC of the DeepLab V3+ model was significantly higher than that of the U-Net model (0.77 vs 0.75, p < 0.05), particularly for small tumor volumes of < 10 cm3 (0.74 vs 0.70, p < 0.001). For radiomics extraction of first-order features, both models exhibited high agreement (ICC: 0.71-0.91) with manual delineation. The radiomics extracted by the DeepLab V3+ model had significantly higher ICCs than those extracted by the U-Net model for 7 of 19 first-order features and for 8 of 17 shape-based features (p < 0.05). CONCLUSION Both the DeepLab V3+ and U-Net models produced reasonable results in automated segmentation and radiomics feature extraction of HPC on MR images, with DeepLab V3+ performing better than U-Net. CLINICAL RELEVANCE STATEMENT The deep learning model DeepLab V3+ exhibited promising performance in automated tumor segmentation and radiomics extraction for hypopharyngeal cancer on MRI. This approach holds great potential for enhancing the radiotherapy workflow and facilitating prediction of treatment outcomes. KEY POINTS • DeepLab V3+ and U-Net models produced reasonable results in automated segmentation and radiomics feature extraction of HPC on MR images. • The DeepLab V3+ model was more accurate than U-Net in automated segmentation, especially for small tumors. • DeepLab V3+ exhibited higher agreement than U-Net for about half of the first-order and shape-based radiomics features.
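The reliability analysis described above compares radiomics features from automated and manual contours using the ICC; the snippet below sketches one way to compute it with the pingouin package on synthetic feature values (the data and the ICC form chosen are assumptions, not the study's).

```python
# Illustration of an ICC analysis for radiomics feature reliability between
# automated and manual segmentations, using synthetic feature values.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(7)
n_patients = 44
manual = rng.normal(100, 10, n_patients)          # e.g., a first-order intensity feature
auto = manual + rng.normal(0, 3, n_patients)      # automated contour, small deviations

df = pd.DataFrame({
    "patient": np.tile(np.arange(n_patients), 2),
    "segmentation": ["manual"] * n_patients + ["auto"] * n_patients,
    "feature_value": np.concatenate([manual, auto]),
})

icc = pg.intraclass_corr(data=df, targets="patient", raters="segmentation",
                         ratings="feature_value")
print(icc[["Type", "ICC", "CI95%"]])  # inspect, e.g., the two-way absolute-agreement ICC
```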
Collapse
Affiliation(s)
- Yu-Chun Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan
- Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
| | - Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
| | - Sumit Pandey
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Chih-Hua Yeh
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Jiun-Jie Wang
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan
| | - Chien-Yu Lin
- Department of Radiation Oncology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, Taoyuan, Taiwan
| | - Tsung-Ying Ho
- Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan
| | - Sheung-Fat Ko
- Department of Radiology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung, Taiwan
| | - Shu-Hang Ng
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan.
| |
Collapse
|
34
|
Hagiwara A, Fujita S, Kurokawa R, Andica C, Kamagata K, Aoki S. Multiparametric MRI: From Simultaneous Rapid Acquisition Methods and Analysis Techniques Using Scoring, Machine Learning, Radiomics, and Deep Learning to the Generation of Novel Metrics. Invest Radiol 2023; 58:548-560. [PMID: 36822661 PMCID: PMC10332659 DOI: 10.1097/rli.0000000000000962] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 01/10/2023] [Indexed: 02/25/2023]
Abstract
ABSTRACT With the recent advancements in rapid imaging methods, higher numbers of contrasts and quantitative parameters can be acquired in less and less time. Some acquisition models simultaneously obtain multiparametric images and quantitative maps to reduce scan times and avoid potential issues associated with the registration of different images. Multiparametric magnetic resonance imaging (MRI) has the potential to provide complementary information on a target lesion and thus overcome the limitations of individual techniques. In this review, we introduce methods to acquire multiparametric MRI data in a clinically feasible scan time with a particular focus on simultaneous acquisition techniques, and we discuss how multiparametric MRI data can be analyzed as a whole rather than each parameter separately. Such data analysis approaches include clinical scoring systems, machine learning, radiomics, and deep learning. Other techniques combine multiple images to create new quantitative maps associated with meaningful aspects of human biology. They include the magnetic resonance g-ratio, the ratio of the inner to the outer diameter of a nerve fiber, and the aerobic glycolytic index, which captures the metabolic status of tumor tissues.
Collapse
Affiliation(s)
- Akifumi Hagiwara
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
| | - Shohei Fujita
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Ryo Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Division of Neuroradiology, Department of Radiology, University of Michigan, Ann Arbor, Michigan
| | - Christina Andica
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
| | - Koji Kamagata
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
| | - Shigeki Aoki
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
| |
Collapse
|
35
|
Panico A, Gatta G, Salvia A, Grezia GD, Fico N, Cuccurullo V. Radiomics in Breast Imaging: Future Development. J Pers Med 2023; 13:jpm13050862. [PMID: 37241032 DOI: 10.3390/jpm13050862] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 05/02/2023] [Accepted: 05/16/2023] [Indexed: 05/28/2023] Open
Abstract
Breast cancer is the most commonly diagnosed non-skin cancer in women. Several risk factors are related to habits and heredity, and regular screening is essential to reduce mortality. Thanks to screening and increased awareness among women, most breast cancers are diagnosed at an early stage, increasing the chances of cure and survival. Mammography is currently the gold standard for breast cancer diagnosis, but its sensitivity is limited: in dense glandular tissue, the ability to detect small masses is reduced, lesions may be subtle or obscured, and false negatives can occur when partial details escape the radiologist's eye. The problem is therefore substantial, and techniques that can improve diagnostic quality are worth pursuing. In recent years, innovative techniques based on artificial intelligence have been applied to this task, as they can detect patterns beyond the reach of the human eye. In this paper, we review the application of radiomics in mammography.
Collapse
Affiliation(s)
- Alessandra Panico
- Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
| | - Gianluca Gatta
- Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
| | - Antonio Salvia
- Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
| | | | - Noemi Fico
- Department of Physics "Ettore Pancini", Università di Napoli Federico II, 80126 Naples, Italy
| | - Vincenzo Cuccurullo
- Nuclear Medicine Unit, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
| |
Collapse
|
36
|
Sha X, Wang H, Sha H, Xie L, Zhou Q, Zhang W, Yin Y. Clinical target volume and organs at risk segmentation for rectal cancer radiotherapy using the Flex U-Net network. Front Oncol 2023; 13:1172424. [PMID: 37324028 PMCID: PMC10266488 DOI: 10.3389/fonc.2023.1172424] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Accepted: 05/05/2023] [Indexed: 06/17/2023] Open
Abstract
Purpose/Objectives The aim of this study was to improve the accuracy of clinical target volume (CTV) and organ-at-risk (OAR) segmentation for preoperative radiotherapy of rectal cancer. Materials/Methods Computed tomography (CT) scans from 265 rectal cancer patients treated at our institution were collected to train and validate automatic contouring models. The CTV and OAR regions delineated by experienced radiologists served as the ground truth. We improved the conventional U-Net and proposed Flex U-Net, which uses a registration model to correct the noise caused by manual annotation and thus refines the performance of the automatic segmentation model. We then compared its performance with that of U-Net and V-Net. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD) were calculated for quantitative evaluation, and a Wilcoxon signed-rank test showed that the differences between our method and the baseline were statistically significant (P < 0.05). Results Our proposed framework achieved DSC values of 0.817 ± 0.071, 0.930 ± 0.076, 0.927 ± 0.03, and 0.925 ± 0.03 for the CTV, bladder, left femoral head, and right femoral head, respectively, whereas the baseline results were 0.803 ± 0.082, 0.917 ± 0.105, 0.923 ± 0.03, and 0.917 ± 0.03, respectively. Conclusion Our proposed Flex U-Net enables satisfactory CTV and OAR segmentation for rectal cancer and yields superior performance compared to conventional methods. It provides an automatic, fast, and consistent solution for CTV and OAR segmentation and has the potential to be widely applied in radiation therapy planning for a variety of cancers.
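Beyond Dice overlap, the evaluation above uses surface-distance metrics. Below is a minimal sketch, assuming binary masks and a known voxel spacing, of how the ASSD and Hausdorff distance can be computed with SciPy distance transforms; it is an illustrative implementation, not the authors' evaluation code.

```python
import numpy as np
from scipy import ndimage

def surface_distances(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Distances (in mm, given voxel spacing) from the surface voxels of mask `a`
    to the nearest surface voxel of mask `b`."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def assd_and_hd(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance and Hausdorff distance of two masks."""
    d_pt = surface_distances(pred, truth, spacing)
    d_tp = surface_distances(truth, pred, spacing)
    assd = (d_pt.sum() + d_tp.sum()) / (len(d_pt) + len(d_tp))
    hd = max(d_pt.max(), d_tp.max())
    return assd, hd

# toy example: two slightly offset boxes in a 3D volume with anisotropic voxels
a = np.zeros((40, 40, 40), bool); a[10:30, 10:30, 10:30] = True
b = np.zeros_like(a); b[12:32, 10:30, 10:30] = True
print(assd_and_hd(a, b, spacing=(3.0, 0.8, 0.8)))
```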
Collapse
Affiliation(s)
- Xue Sha
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
| | - Hui Wang
- Department of Radiation Oncology, Qingdao Central Hospital, Qingdao, Shandong, China
| | - Hui Sha
- Hunan Cancer Hospital, Xiangya School of Medicine, Central South University, Changsha, Hunan, China
| | - Lu Xie
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
| | - Qichao Zhou
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
| | - Wei Zhang
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
| | - Yong Yin
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
| |
Collapse
|
37
|
DeSilvio T, Antunes JT, Bera K, Chirra P, Le H, Liska D, Stein SL, Marderstein E, Hall W, Paspulati R, Gollamudi J, Purysko AS, Viswanath SE. Region-specific deep learning models for accurate segmentation of rectal structures on post-chemoradiation T2w MRI: a multi-institutional, multi-reader study. Front Med (Lausanne) 2023; 10:1149056. [PMID: 37250635 PMCID: PMC10213753 DOI: 10.3389/fmed.2023.1149056] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 03/27/2023] [Indexed: 05/31/2023] Open
Abstract
Introduction For locally advanced rectal cancers, in vivo radiological evaluation of tumor extent and regression after neoadjuvant therapy involves implicit visual identification of rectal structures on magnetic resonance imaging (MRI). Additionally, newer image-based, computational approaches (e.g., radiomics) require more detailed and precise annotations of regions such as the outer rectal wall, lumen, and perirectal fat. Manual annotation of these regions, however, is highly laborious and time-consuming, as well as subject to inter-reader variability because tissue boundaries are obscured by treatment-related changes (e.g., fibrosis, edema). Methods This study presents the application of U-Net deep learning models that have been uniquely developed with region-specific context to automatically segment each of the outer rectal wall, lumen, and perirectal fat regions on post-treatment, T2-weighted MRI scans. Results In multi-institutional evaluation, region-specific U-Nets (wall Dice = 0.920, lumen Dice = 0.895) were found to perform comparably to multiple readers (wall inter-reader Dice = 0.946, lumen inter-reader Dice = 0.873). Additionally, when compared to a multi-class U-Net, region-specific U-Nets yielded an average 20% improvement in Dice scores for segmenting the wall, lumen, and fat, even when tested on T2-weighted MRI scans that exhibited poorer image quality, were acquired in a different plane, or were accrued from an external institution. Discussion Developing deep learning segmentation models with region-specific context may thus enable highly accurate, detailed annotations for multiple rectal structures on post-chemoradiation T2-weighted MRI scans, which is critical for improving evaluation of tumor extent in vivo and building accurate image-based analytic tools for rectal cancers.
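One way to picture the region-specific design described above is to train one independent binary network per rectal structure instead of a single multi-class network. The sketch below is purely schematic: the tiny UNet class is a placeholder for any standard 2D U-Net implementation, and the region names, shapes, and channel counts are illustrative assumptions.

```python
import torch
from torch import nn

class UNet(nn.Module):
    """Tiny placeholder for a real 2D U-Net; only the output interface matters here."""
    def __init__(self, out_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, out_channels, 1))
    def forward(self, x):
        return self.net(x)

regions = ["outer_wall", "lumen", "perirectal_fat"]
region_models = {r: UNet(out_channels=1) for r in regions}   # region-specific models
multiclass_model = UNet(out_channels=len(regions) + 1)       # single shared alternative

x = torch.randn(2, 1, 128, 128)                              # toy post-treatment T2w patches
region_probs = {r: torch.sigmoid(m(x)) for r, m in region_models.items()}
print({r: tuple(p.shape) for r, p in region_probs.items()})
```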
Collapse
Affiliation(s)
- Thomas DeSilvio
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
| | - Jacob T. Antunes
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
| | - Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
| | - Prathyush Chirra
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
| | - Hoa Le
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
| | - David Liska
- Department of Colorectal Surgery, Cleveland Clinic, Cleveland, OH, United States
| | - Sharon L. Stein
- Department of Surgery, University Hospitals Cleveland Medical Center, Cleveland, OH, United States
| | - Eric Marderstein
- Northeast Ohio Veterans Affairs Medical Center, Cleveland, OH, United States
| | - William Hall
- Department of Radiation Oncology and Surgery, Medical College of Wisconsin, Milwaukee, WI, United States
| | - Rajmohan Paspulati
- Department of Diagnostic Imaging and Interventional Radiology, Moffitt Cancer Center, Tampa, FL, United States
| | | | - Andrei S. Purysko
- Section of Abdominal Imaging and Nuclear Radiology Department, Cleveland Clinic, Cleveland, OH, United States
| | - Satish E. Viswanath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
| |
Collapse
|
38
|
Shen J, Lu S, Qu R, Zhao H, Zhang L, Chang A, Zhang Y, Fu W, Zhang Z. A boundary-guided transformer for measuring distance from rectal tumor to anal verge on magnetic resonance images. PATTERNS (NEW YORK, N.Y.) 2023; 4:100711. [PMID: 37123445 PMCID: PMC10140608 DOI: 10.1016/j.patter.2023.100711] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 10/17/2022] [Accepted: 02/24/2023] [Indexed: 03/29/2023]
Abstract
Accurate measurement of the distance from the tumor's lowest boundary to the anal verge (DTAV) provides an important reference value for treatment of rectal cancer, but the standard measurement method (colonoscopy) causes substantial pain. Therefore, we propose a method for automatically measuring the DTAV on sagittal magnetic resonance (MR) images. We designed a boundary-guided transformer that can accurately segment the rectum and tumor. From the segmentation results, we estimated the DTAV by automatically extracting the anterior rectal wall from the tumor's lowest point to the anal verge and then calculating its physical length. Experiments were conducted on a rectal tumor MR imaging (MRI) dataset to evaluate the efficacy of our method. The results showed that our method outperformed surgeons with 6 years of experience (p < 0.001). Furthermore, by referring to our segmentation results, attending and resident surgeons could improve their measurement precision and efficiency.
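Once the rectum and tumor are segmented, the DTAV reduces to the physical length of a path traced along the anterior rectal wall from the tumor's lowest point to the anal verge. The snippet below sketches only that final length computation under an assumed in-plane pixel spacing; the path coordinates are hypothetical, and the paper's automatic path extraction is not reproduced.

```python
import numpy as np

def polyline_length_mm(points_rc: np.ndarray, pixel_spacing_mm=(0.8, 0.8)) -> float:
    """Physical length (mm) of an ordered polyline given as (row, col) pixel
    coordinates on a single sagittal slice, scaled by the in-plane pixel spacing."""
    pts = np.asarray(points_rc, dtype=float) * np.asarray(pixel_spacing_mm, dtype=float)
    steps = np.diff(pts, axis=0)
    return float(np.sqrt((steps ** 2).sum(axis=1)).sum())

# hypothetical path from the tumor's lowest point down to the anal verge
path = np.array([[120, 60], [124, 61], [129, 63], [135, 64], [142, 66], [150, 67]])
print(f"DTAV estimate: {polyline_length_mm(path, pixel_spacing_mm=(0.7, 0.7)):.1f} mm")
```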
Collapse
Affiliation(s)
- Jianjun Shen
- Department of Electronics, Tsinghua University, Beijing 100084, China
| | - Siyi Lu
- Department of General Surgery, Peking University Third Hospital, Beijing 100191, China
- Cancer Center, Peking University Third Hospital, Beijing 100191 China
| | - Ruize Qu
- Department of General Surgery, Peking University Third Hospital, Beijing 100191, China
- Cancer Center, Peking University Third Hospital, Beijing 100191 China
| | - Hao Zhao
- Intel Labs, Beijing 100190, China
| | - Li Zhang
- Department of Electronics, Tsinghua University, Beijing 100084, China
| | - An Chang
- Department of Electronics, Tsinghua University, Beijing 100084, China
| | - Yu Zhang
- School of Astronautics, Beihang University, Beijing 102206, China
| | - Wei Fu
- Department of General Surgery, Peking University Third Hospital, Beijing 100191, China
- Cancer Center, Peking University Third Hospital, Beijing 100191 China
| | - Zhipeng Zhang
- Department of General Surgery, Peking University Third Hospital, Beijing 100191, China
- Cancer Center, Peking University Third Hospital, Beijing 100191 China
| |
Collapse
|
39
|
Liu Y, Wei X, Feng X, Liu Y, Feng G, Du Y. Repeatability of radiomics studies in colorectal cancer: a systematic review. BMC Gastroenterol 2023; 23:125. [PMID: 37059990 PMCID: PMC10105401 DOI: 10.1186/s12876-023-02743-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 03/22/2023] [Indexed: 04/16/2023] Open
Abstract
BACKGROUND Radiomics has recently been widely used in colorectal cancer, but many variable factors affect the repeatability of radiomics research. This review aims to analyze the repeatability of radiomics studies in colorectal cancer and to evaluate the current status of radiomics in this field. METHODS Studies included in this review were identified by searching the PubMed and Embase databases. Each included study was then evaluated using the Radiomics Quality Score (RQS). We analyzed the factors that may affect repeatability in the radiomics workflow and discussed the repeatability of the included studies. RESULTS A total of 188 studies were included in this review, of which only two (2/188, 1.06%) controlled for the influence of individual factors. The median RQS was 11 (out of 36; range, -1 to 27). CONCLUSIONS The RQS scores were moderately low, and most studies did not consider the repeatability of radiomics features, especially with respect to intra-individual variation, scanners, and scanning parameters. To improve the generalizability of radiomics models, the variable factors affecting repeatability need to be further controlled.
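Feature repeatability in such studies is usually quantified with an intraclass correlation coefficient across repeated measurements (test-retest scans, repeated segmentations, or different scanners). Below is a minimal sketch of one common variant, ICC(3,1) from the Shrout and Fleiss taxonomy, computed directly with NumPy; the toy ratings matrix is invented for illustration.

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, single measurement, consistency
    (Shrout & Fleiss). `ratings` has shape (n_subjects, k_measurements),
    e.g. one radiomics feature measured on the same lesions twice."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subject MS
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-measurement MS
    ss_total = ((x - grand) ** 2).sum()
    ms_err = (ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# toy data: one feature value for 6 lesions under two repeated segmentations
ratings = np.array([[10.2, 10.5], [8.1, 8.4], [12.0, 11.7],
                    [9.3, 9.1], [11.1, 11.4], [7.8, 8.0]])
print(round(icc_3_1(ratings), 3))
```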
Collapse
Affiliation(s)
- Ying Liu
- School of Medical Imaging, North Sichuan Medical College, Sichuan Province, Nanchong City, 637000, China
| | - Xiaoqin Wei
- School of Medical Imaging, North Sichuan Medical College, Sichuan Province, Nanchong City, 637000, China
| | | | - Yan Liu
- Department of Radiology, the Affiliated Hospital of North Sichuan Medical College, 1 Maoyuannan Road, Sichuan Province, 637000, Nanchong City, China
| | - Guiling Feng
- Department of Radiology, the Affiliated Hospital of North Sichuan Medical College, 1 Maoyuannan Road, Sichuan Province, 637000, Nanchong City, China
| | - Yong Du
- Department of Radiology, the Affiliated Hospital of North Sichuan Medical College, 1 Maoyuannan Road, Sichuan Province, 637000, Nanchong City, China.
| |
Collapse
|
40
|
Takemasa I, Hamabe A, Miyo M, Akizuki E, Okuya K. Essential updates 2020/2021: Advancing precision medicine for comprehensive rectal cancer treatment. Ann Gastroenterol Surg 2023; 7:198-215. [PMID: 36998300 PMCID: PMC10043777 DOI: 10.1002/ags3.12646] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 11/16/2022] [Accepted: 11/23/2022] [Indexed: 12/28/2022] Open
Abstract
Amid the ongoing paradigm shift in rectal cancer treatment, clinicians must understand a variety of newly emerging topics to provide appropriate, precision-medicine-based treatment for individual patients. However, information on surgery, genomic medicine, and pharmacotherapy is highly specialized and subdivided, creating a barrier to achieving thorough knowledge. In this review, we summarize perspectives on rectal cancer treatment and management, from the current standard of care to the latest findings, to help optimize treatment strategy.
Collapse
Affiliation(s)
- Ichiro Takemasa
- Department of Surgery, Surgical Oncology and Science, Sapporo Medical University, Sapporo, Japan
| | - Atsushi Hamabe
- Department of Surgery, Surgical Oncology and Science, Sapporo Medical University, Sapporo, Japan
- Department of Gastroenterological Surgery, Graduate School of Medicine, Osaka University, Osaka, Japan
| | - Masaaki Miyo
- Department of Surgery, Surgical Oncology and Science, Sapporo Medical University, Sapporo, Japan
| | - Emi Akizuki
- Department of Surgery, Surgical Oncology and Science, Sapporo Medical University, Sapporo, Japan
| | - Koichi Okuya
- Department of Surgery, Surgical Oncology and Science, Sapporo Medical University, Sapporo, Japan
| |
Collapse
|
41
|
Dual parallel net: A novel deep learning model for rectal tumor segmentation via CNN and transformer with Gaussian Mixture prior. J Biomed Inform 2023; 139:104304. [PMID: 36736447 DOI: 10.1016/j.jbi.2023.104304] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 12/27/2022] [Accepted: 01/29/2023] [Indexed: 02/05/2023]
Abstract
Segmentation of rectal cancerous regions from magnetic resonance (MR) images can help doctors define the extent of rectal cancer and judge its severity, so rectal tumor segmentation is crucial to improving the accuracy of rectal cancer diagnosis. However, accurate segmentation of rectal cancerous regions remains a challenging task because rectal tumor shape varies considerably and the tumor can be difficult to distinguish from surrounding tissue. In addition, early research on rectal tumor segmentation relied mostly on deep learning methods based on convolutional neural networks (CNNs), and traditional CNNs have small receptive fields that capture only local information while ignoring the global information in the image. Because global information plays a crucial role in rectal tumor segmentation, traditional CNN-based methods usually cannot achieve satisfactory segmentation results. In this paper, we propose an encoder-decoder network named Dual Parallel Net (DuPNet), which fuses a transformer with a classical CNN to capture both global and local information. To capture features at different scales while avoiding accuracy loss and reducing the number of parameters, we design a feature adaptive block (FAB) in the skip connection between the encoder and decoder. Furthermore, to effectively use a priori information about rectal tumor shape, we design a Gaussian Mixture prior and embed it in the self-attention mechanism of the transformer, leading to robust feature representations and accurate segmentation results. We performed extensive ablation experiments to verify the effectiveness of our proposed dual parallel encoder, FAB, and Gaussian Mixture prior on the dataset from the Shanxi Cancer Hospital. In an experimental comparison with state-of-the-art methods, our method achieved a Mean Intersection over Union (MIoU) of 89.34% on the test set. In addition, we evaluated the generalizability of our method on the dataset from Xinhua Hospital, and the promising results verify the superiority of our method.
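To make the idea of a spatial prior inside self-attention concrete, the toy PyTorch sketch below adds an additive log-Gaussian bias to scaled dot-product attention logits. It is only a simplified illustration of biasing attention toward a spatial prior under invented shapes and parameters; the paper's actual Gaussian Mixture formulation and DuPNet architecture are not reproduced here.

```python
import torch
import torch.nn.functional as F

def gaussian_biased_attention(q, k, v, coords, center, sigma):
    """Scaled dot-product attention with an additive log-Gaussian spatial bias.
    q, k, v: (tokens, dim); coords: (tokens, 2) token grid positions;
    center: (2,) location of the prior; sigma: prior width."""
    d = q.shape[-1]
    logits = q @ k.transpose(-1, -2) / d ** 0.5          # (tokens, tokens)
    dist2 = ((coords - center) ** 2).sum(-1)             # squared distance to the prior
    log_prior = -dist2 / (2 * sigma ** 2)                # log of an unnormalized Gaussian
    logits = logits + log_prior.unsqueeze(0)             # up-weight keys near the prior
    return F.softmax(logits, dim=-1) @ v

# toy usage: an 8x8 token grid with 16-dimensional embeddings
tokens, dim = 64, 16
q, k, v = (torch.randn(tokens, dim) for _ in range(3))
ys, xs = torch.meshgrid(torch.arange(8), torch.arange(8), indexing="ij")
coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()
out = gaussian_biased_attention(q, k, v, coords, center=torch.tensor([3.5, 3.5]), sigma=2.0)
print(out.shape)  # torch.Size([64, 16])
```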
Collapse
|
42
|
Yang S, Li A, Li P, Yun Z, Lin G, Cheng J, Xu S, Qiu B. Automatic segmentation of inferior alveolar canal with ambiguity classification in panoramic images using deep learning. Heliyon 2023; 9:e13694. [PMID: 36852021 PMCID: PMC9957750 DOI: 10.1016/j.heliyon.2023.e13694] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Revised: 02/01/2023] [Accepted: 02/08/2023] [Indexed: 02/13/2023] Open
Abstract
Background Manual segmentation of the inferior alveolar canal (IAC) in panoramic images requires considerable time and labor even for dental experts having extensive experience. The objective of this study was to evaluate the performance of automatic segmentation of IAC with ambiguity classification in panoramic images using a deep learning method. Methods Among 1366 panoramic images, 1000 were selected as the training dataset and the remaining 336 were assigned to the testing dataset. The radiologists divided the testing dataset into four groups according to the quality of the visible segments of IAC. The segmentation time, dice similarity coefficient (DSC), precision, and recall rate were calculated to evaluate the efficiency and segmentation performance of deep learning-based automatic segmentation. Results Automatic segmentation achieved a DSC of 85.7% (95% confidence interval [CI] 75.4%-90.3%), precision of 84.1% (95% CI 78.4%-89.3%), and recall of 87.7% (95% CI 77.7%-93.4%). Compared with manual annotation (5.9s per image), automatic segmentation significantly increased the efficiency of IAC segmentation (33 ms per image). The DSC and precision values of group 4 (most visible) were significantly better than those of group 1 (least visible). The recall values of groups 3 and 4 were significantly better than those of group 1. Conclusions The deep learning-based method achieved high performance for IAC segmentation in panoramic images under different visibilities and was positively correlated with IAC image clarity.
Collapse
Affiliation(s)
- Shuo Yang
- Center of Oral Implantology, Stomatological Hospital, Southern Medical University, Guangzhou, China
| | - An Li
- Center of Oral Implantology, Stomatological Hospital, Southern Medical University, Guangzhou, China
| | - Ping Li
- Center of Oral Implantology, Stomatological Hospital, Southern Medical University, Guangzhou, China
| | - Zhaoqiang Yun
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
| | - Guoye Lin
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
| | - Jun Cheng
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Shulan Xu
- Center of Oral Implantology, Stomatological Hospital, Southern Medical University, Guangzhou, China
| | - Bingjiang Qiu
- Department of Radiology & Guangdong Cardiovascular Institute & Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Data Science Center in Health (DASH) & 3D Lab, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| |
Collapse
|
43
|
Lin YC, Lin Y, Huang YL, Ho CY, Chiang HJ, Lu HY, Wang CC, Wang JJ, Ng SH, Lai CH, Lin G. Generalizable transfer learning of automated tumor segmentation from cervical cancers toward a universal model for uterine malignancies in diffusion-weighted MRI. Insights Imaging 2023; 14:14. [PMID: 36690870 PMCID: PMC9871146 DOI: 10.1186/s13244-022-01356-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 12/04/2022] [Indexed: 01/25/2023] Open
Abstract
PURPOSE To investigate the generalizability of transfer learning (TL) for automated tumor segmentation, from cervical cancers toward a universal model for cervical and uterine malignancies, in diffusion-weighted magnetic resonance imaging (DWI). METHODS In this retrospective multicenter study, we analyzed pelvic DWI data from 169 and 320 patients with cervical and uterine malignancies and divided them into training (144 and 256) and testing (25 and 64) datasets, respectively. A pretrained model was established using DeepLab V3+ on the cervical cancer dataset, followed by TL experiments that varied the training data size and the layers selected for fine-tuning. Model performance was evaluated using the Dice similarity coefficient (DSC). RESULTS In predicting tumor segmentation for all cervical and uterine malignancies, TL models improved upon the pretrained cervical model (DSC 0.43) when 5, 13, 26, and 51 uterine cases were added for training (DSC 0.57, 0.62, 0.68, and 0.70, respectively; p < 0.001). Following the crossover at 128 added cases (DSC 0.71), the model trained on data combining all 256 added patients exhibited the highest DSCs for the combined cervical and uterine datasets (DSC 0.81) and the cervical-only dataset (DSC 0.91). CONCLUSIONS TL may improve the generalizability of automated tumor segmentation on DWI from a specific cancer type toward multiple types of uterine malignancies, especially when case numbers are limited.
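The transfer-learning recipe above amounts to loading source-task weights, freezing part of the network, and fine-tuning the rest on a handful of target cases. The sketch below shows that pattern with a tiny stand-in encoder-decoder in PyTorch; the checkpoint name, layer split, and toy tensors are assumptions, and the authors' DeepLab V3+ pipeline is not reproduced.

```python
import torch
from torch import nn

class TinySegNet(nn.Module):
    """Minimal stand-in for a DeepLab-style segmentation network."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(32, n_classes, 1)
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
# model.load_state_dict(torch.load("cervical_pretrained.pt"))  # hypothetical source-task checkpoint

# transfer learning: freeze the encoder, fine-tune only the decoder on a few target cases;
# which layers to freeze is exactly the knob varied in the experiments above
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 1, 64, 64)                 # toy DWI patches from the target cancer type
y = torch.randint(0, 2, (4, 64, 64))          # toy tumor masks
for _ in range(3):                            # a few illustrative fine-tuning steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print(float(loss))
```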
Collapse
Affiliation(s)
- Yu-Chun Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, 33302, Taiwan
- Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Yenpo Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Yen-Ling Huang
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Chih-Yi Ho
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Hsin-Ju Chiang
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Hsin-Ying Lu
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Chun-Chieh Wang
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, 33302, Taiwan
- Department of Radiation Oncology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Jiun-Jie Wang
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, 33302, Taiwan
| | - Shu-Hang Ng
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Chyong-Huey Lai
- Gynecologic Cancer Research Center, Department of Obstetrics and Gynecology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| | - Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Gynecologic Cancer Research Center, Department of Obstetrics and Gynecology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
| |
Collapse
|
44
|
Akai H, Yasaka K, Sugawara H, Tajima T, Kamitani M, Furuta T, Akahane M, Yoshioka N, Ohtomo K, Abe O, Kiryu S. Acceleration of knee magnetic resonance imaging using a combination of compressed sensing and commercially available deep learning reconstruction: a preliminary study. BMC Med Imaging 2023; 23:5. [PMID: 36624404 PMCID: PMC9827641 DOI: 10.1186/s12880-023-00962-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 01/04/2023] [Indexed: 01/10/2023] Open
Abstract
PURPOSE To evaluate whether deep learning reconstruction (DLR) accelerates the acquisition of 1.5-T magnetic resonance imaging (MRI) knee data without image deterioration. MATERIALS AND METHODS Twenty-one healthy volunteers underwent MRI of the right knee on a 1.5-T MRI scanner. Proton-density-weighted images with one or four numbers of signal averages (NSAs) were obtained via compressed sensing, and DLR was applied to the images with 1 NSA to obtain 1NSA-DLR images. The 1NSA-DLR and 4NSA images were compared objectively (by deriving the signal-to-noise ratios of the lateral and the medial menisci and the contrast-to-noise ratios of the lateral and the medial menisci and articular cartilages) and subjectively (in terms of the visibility of the anterior cruciate ligament, the medial collateral ligament, the medial and lateral menisci, and bone) and in terms of image noise, artifacts, and overall diagnostic acceptability. The paired t-test and Wilcoxon signed-rank test were used for statistical analyses. RESULTS The 1NSA-DLR images were obtained within 100 s. The signal-to-noise ratios (lateral: 3.27 ± 0.30 vs. 1.90 ± 0.13, medial: 2.71 ± 0.24 vs. 1.80 ± 0.15, both p < 0.001) and contrast-to-noise ratios (lateral: 2.61 ± 0.51 vs. 2.18 ± 0.58, medial 2.19 ± 0.32 vs. 1.97 ± 0.36, both p < 0.001) were significantly higher for 1NSA-DLR than 4NSA images. Subjectively, all anatomical structures (except bone) were significantly clearer on the 1NSA-DLR than on the 4NSA images. Also, in the former images, the noise was lower, and the overall diagnostic acceptability was higher. CONCLUSION Compared with the 4NSA images, the 1NSA-DLR images exhibited less noise, higher overall image quality, and allowed more precise visualization of the menisci and ligaments.
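The objective comparison above reduces to ROI-based SNR/CNR measurements followed by paired statistics. The sketch below uses one simplified SNR/CNR definition and runs the paired t-test and Wilcoxon signed-rank test with SciPy; all ROI values and per-volunteer numbers are synthetic placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

def snr(tissue_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """One simplified SNR definition: mean tissue signal over background noise SD."""
    return tissue_roi.mean() / noise_roi.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissue ROIs."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(0)
meniscus = rng.normal(120, 5, size=200)        # toy pixel intensities in a meniscus ROI
background = rng.normal(0, 40, size=200)       # toy background/noise ROI
print("toy SNR:", round(snr(meniscus, background), 2))

# hypothetical per-volunteer SNR values for the two reconstructions (n = 21)
snr_dlr = rng.normal(3.2, 0.3, size=21)
snr_4nsa = rng.normal(1.9, 0.15, size=21)
t, p = stats.ttest_rel(snr_dlr, snr_4nsa)      # paired t-test for continuous metrics
w, p_w = stats.wilcoxon(snr_dlr, snr_4nsa)     # Wilcoxon signed-rank for ordinal scores
print(f"paired t-test p = {p:.3g}, Wilcoxon p = {p_w:.3g}")
```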
Collapse
Affiliation(s)
- Hiroyuki Akai
- Department of Radiology, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- Present Address: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
| | - Koichiro Yasaka
- Present Address: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Haruto Sugawara
- Department of Radiology, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Taku Tajima
- Present Address: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo, 108-8329, Japan
| | - Masaru Kamitani
- Department of Radiology, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Toshihiro Furuta
- Department of Radiology, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Masaaki Akahane
- Present Address: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
| | - Naoki Yoshioka
- Present Address: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
| | - Kuni Ohtomo
- International University of Health and Welfare, 2600-1 Kiakanemaru, Ohtawara, Tochigi, 324-8501, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Shigeru Kiryu
- Present Address: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
| |
Collapse
|
45
|
Wong C, Fu Y, Li M, Mu S, Chu X, Fu J, Lin C, Zhang H. MRI-Based Artificial Intelligence in Rectal Cancer. J Magn Reson Imaging 2023; 57:45-56. [PMID: 35993550 DOI: 10.1002/jmri.28381] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Revised: 07/19/2022] [Accepted: 07/20/2022] [Indexed: 02/03/2023] Open
Abstract
Rectal cancer (RC) accounts for approximately one-third of colorectal cancer (CRC), with death rates increasing in patients younger than 50 years old. Magnetic resonance imaging (MRI) is routinely performed for tumor evaluation. However, the semantic features from images alone remain insufficient to guide treatment decisions. Functional MRI techniques are useful for revealing microstructural and functional abnormalities but nevertheless have low to modest repeatability and reproducibility. Therefore, during the preoperative evaluation and follow-up treatment of patients with RC, novel noninvasive imaging markers are needed to describe tumor characteristics, guide treatment strategies, and achieve individualized diagnosis and treatment. In recent years, the development of artificial intelligence (AI) has created new tools for MRI-based RC evaluation. In this review, we summarize the research progress of AI in RC for the evaluation of staging, prediction of high-risk factors, genotyping, response to therapy, recurrence, metastasis, prognosis, and segmentation. We further discuss the challenges of clinical application, including improvements in imaging, model performance, and the biological meaning of features, which may also be major development directions in the future. EVIDENCE LEVEL: 5. TECHNICAL EFFICACY: Stage 2.
Collapse
Affiliation(s)
- Chinting Wong
- Department of Nuclear Medicine, The First Hospital of Jilin University, Changchun, China
| | - Yu Fu
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
| | - Mingyang Li
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
| | - Shengnan Mu
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
| | - Xiaotong Chu
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
| | - Jiahui Fu
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
| | - Chenghe Lin
- Department of Nuclear Medicine, The First Hospital of Jilin University, Changchun, China
| | - Huimao Zhang
- Department of Radiology, The First Hospital of Jilin University, Jilin Provincial Key Laboratory of Medical Imaging and Big Data, Changchun, China
| |
Collapse
|
46
|
Goodburn RJ, Philippens MEP, Lefebvre TL, Khalifa A, Bruijnen T, Freedman JN, Waddington DEJ, Younus E, Aliotta E, Meliadò G, Stanescu T, Bano W, Fatemi‐Ardekani A, Wetscherek A, Oelfke U, van den Berg N, Mason RP, van Houdt PJ, Balter JM, Gurney‐Champion OJ. The future of MRI in radiation therapy: Challenges and opportunities for the MR community. Magn Reson Med 2022; 88:2592-2608. [PMID: 36128894 PMCID: PMC9529952 DOI: 10.1002/mrm.29450] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 08/17/2022] [Accepted: 08/22/2022] [Indexed: 01/11/2023]
Abstract
Radiation therapy is a major component of cancer treatment pathways worldwide. The main aim of this treatment is to achieve tumor control through the delivery of ionizing radiation while preserving healthy tissues for minimal radiation toxicity. Because radiation therapy relies on accurate localization of the target and surrounding tissues, imaging plays a crucial role throughout the treatment chain. In the treatment planning phase, radiological images are essential for defining target volumes and organs-at-risk, as well as providing elemental composition (e.g., electron density) information for radiation dose calculations. At treatment, onboard imaging informs patient setup and could be used to guide radiation dose placement for sites affected by motion. Imaging is also an important tool for treatment response assessment and treatment plan adaptation. MRI, with its excellent soft tissue contrast and capacity to probe functional tissue properties, holds great untapped potential for transforming treatment paradigms in radiation therapy. The MR in Radiation Therapy ISMRM Study Group was established to provide a forum within the MR community to discuss the unmet needs and fuel opportunities for further advancement of MRI for radiation therapy applications. During the summer of 2021, the study group organized its first virtual workshop, attended by a diverse international group of clinicians, scientists, and clinical physicists, to explore our predictions for the future of MRI in radiation therapy for the next 25 years. This article reviews the main findings from the event and considers the opportunities and challenges of reaching our vision for the future in this expanding field.
Collapse
Affiliation(s)
- Rosie J. Goodburn
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
| | | | - Thierry L. Lefebvre
- Department of Physics, University of Cambridge, Cambridge, United Kingdom
- Cancer Research UK Cambridge Research Institute, University of Cambridge, Cambridge, United Kingdom
| | - Aly Khalifa
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
| | - Tom Bruijnen
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, Netherlands
| | | | - David E. J. Waddington
- Faculty of Medicine and Health, Sydney School of Health Sciences, ACRF Image X Institute, The University of Sydney, Sydney, New South Wales, Australia
| | - Eyesha Younus
- Department of Medical Physics, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
| | - Eric Aliotta
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Gabriele Meliadò
- Unità Operativa Complessa di Fisica Sanitaria, Azienda Ospedaliera Universitaria Integrata Verona, Verona, Italy
| | - Teo Stanescu
- Department of Radiation Oncology, University of Toronto and Medical Physics, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
| | - Wajiha Bano
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
| | - Ali Fatemi‐Ardekani
- Department of Physics, Jackson State University (JSU), Jackson, Mississippi, USA
- SpinTecx, Jackson, Mississippi, USA
- Department of Radiation Oncology, Community Health Systems (CHS) Cancer Network, Jackson, Mississippi, USA
| | - Andreas Wetscherek
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
| | - Uwe Oelfke
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
| | - Nico van den Berg
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, Netherlands
| | - Ralph P. Mason
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
| | - Petra J. van Houdt
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
| | - James M. Balter
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
| | - Oliver J. Gurney‐Champion
- Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
| |
Collapse
|
47
|
Chen R, Fu Y, Yi X, Pei Q, Zai H, Chen BT. Application of Radiomics in Predicting Treatment Response to Neoadjuvant Chemoradiotherapy in Locally Advanced Rectal Cancer: Strategies and Challenges. JOURNAL OF ONCOLOGY 2022; 2022:1590620. [PMID: 36471884 PMCID: PMC9719428 DOI: 10.1155/2022/1590620] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/20/2022] [Revised: 10/30/2022] [Accepted: 11/09/2022] [Indexed: 08/01/2023]
Abstract
Neoadjuvant chemoradiotherapy (nCRT) followed by total mesorectal excision is the standard treatment for locally advanced rectal cancer (LARC). A noninvasive preoperative prediction method would greatly assist in evaluating response to nCRT and in developing a personalized strategy for patients with LARC. Assessment of response to nCRT relies on imaging, and radiomics can extract valuable quantitative data from medical images. In this review, we examined the status of radiomics applications for assessing response to nCRT in patients with LARC and indicated potential directions for future research.
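The radiomics step these studies share, extracting quantitative features from an image and a tumor mask, is commonly done with the open-source pyradiomics package. The sketch below shows that generic pattern; the file names, settings, and chosen feature classes are illustrative assumptions, not the protocol of any study discussed here.

```python
import SimpleITK as sitk
from radiomics import featureextractor

settings = {"binWidth": 25, "resampledPixelSpacing": [1, 1, 1], "interpolator": "sitkBSpline"}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("shape")
extractor.enableFeatureClassByName("glcm")

image = sitk.ReadImage("t2w_baseline.nii.gz")   # hypothetical pre-treatment T2w volume
mask = sitk.ReadImage("tumor_mask.nii.gz")      # hypothetical tumor delineation
features = extractor.execute(image, mask)
# drop the diagnostic metadata that pyradiomics returns alongside the features
radiomic_features = {k: v for k, v in features.items() if not k.startswith("diagnostics_")}
print(len(radiomic_features), "radiomic features extracted")
```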
Collapse
Affiliation(s)
- Rui Chen
- Department of Radiology, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Yan Fu
- Department of Radiology, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Xiaoping Yi
- Department of Radiology, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Qian Pei
- Department of Radiology, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Hongyan Zai
- Department of Radiology, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Bihong T. Chen
- Department of Diagnostic Radiology, City of Hope National Medical Center, Duarte, CA, USA
| |
Collapse
|
48
|
Yi S, Wei Y, Luo X, Chen D. Diagnosis of rectal cancer based on the Xception-MS network. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8f11] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Accepted: 09/01/2022] [Indexed: 11/11/2022]
Abstract
Objective. Accurate T staging of rectal cancer based on ultrasound images helps doctors determine the appropriate treatment. To address the low efficiency and accuracy of traditional methods for T-staging diagnosis of rectal cancer, a deep-learning-based Xception-MS diagnostic model is proposed in this paper. Approach. The proposed diagnostic model consists of three steps. First, the model preprocesses rectal cancer images to address class imbalance and the limited sample size and to reduce the risk of overfitting. Second, a new Xception-MS network with stronger feature-extraction capability is proposed, combining the Xception network with an MS module, a new functional module that extracts multi-scale information from rectal cancer images more effectively. In addition, to compensate for the small rectal cancer sample size, the proposed network is combined with transfer learning. Finally, the output layer of the network is modified: global average pooling and a fully connected softmax layer replace the original output layers, and the network outputs a four-class prediction (T1, T2, T3, or T4 stage). Main results. T-staging experiments are conducted on a dataset of 1078 rectal cancer images in four categories collected from the Department of Colorectal Surgery of the Third Affiliated Hospital of Kunming Medical University. The accuracy, precision, recall, and F1 values obtained by the model are 94.66%, 94.70%, 94.65%, and 94.67%, respectively. Significance. The experimental results show that our model is superior to existing classification models, can effectively and automatically classify ultrasound images of rectal cancer, and can better assist doctors in the diagnosis of rectal cancer.
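The output-layer modification described above corresponds to a standard Keras pattern: a pretrained Xception backbone followed by global average pooling and a 4-way softmax. The sketch below shows only that generic pattern under assumed settings (ImageNet weights, 299x299 inputs, frozen backbone, Adam at 1e-4); the MS module and the paper's preprocessing and transfer-learning schedule are not reproduced.

```python
import tensorflow as tf

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False                                  # freeze the backbone initially

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)              # replaces the original top
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)  # T1, T2, T3, T4
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```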
Collapse
|
49
|
Gurney-Champion OJ, Landry G, Redalen KR, Thorwarth D. Potential of Deep Learning in Quantitative Magnetic Resonance Imaging for Personalized Radiotherapy. Semin Radiat Oncol 2022; 32:377-388. [DOI: 10.1016/j.semradonc.2022.06.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
50
|
Rasheed K, Qayyum A, Ghaly M, Al-Fuqaha A, Razi A, Qadir J. Explainable, trustworthy, and ethical machine learning for healthcare: A survey. Comput Biol Med 2022; 149:106043. [PMID: 36115302 DOI: 10.1016/j.compbiomed.2022.106043] [Citation(s) in RCA: 72] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 08/15/2022] [Accepted: 08/20/2022] [Indexed: 12/18/2022]
Abstract
With the advent of machine learning (ML)- and deep learning (DL)-empowered applications in critical domains such as healthcare, questions about the liability, trust, and interpretability of their outputs are being raised. The black-box nature of various DL models is a roadblock to clinical utilization. Therefore, to gain the trust of clinicians and patients, we need to provide explanations for the decisions of models. With the promise of enhancing the trust and transparency of black-box models, researchers are working to mature the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for various healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues arising from the use of ML/DL in healthcare and describe how explainable and trustworthy ML can help resolve them. Finally, we elaborate on the limitations of existing approaches and highlight various open research problems that require further development.
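As a small, self-contained illustration of one model-agnostic explanation technique of the kind such surveys cover, the snippet below computes permutation feature importance for a black-box classifier with scikit-learn. It is a generic example on synthetic tabular data, not a method proposed in the paper, and the dataset and model choices are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# synthetic stand-in for a clinical tabular dataset
X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# permutation importance: how much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```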
Collapse
Affiliation(s)
- Khansa Rasheed
- IHSAN Lab, Information Technology University of the Punjab (ITU), Lahore, Pakistan.
| | - Adnan Qayyum
- IHSAN Lab, Information Technology University of the Punjab (ITU), Lahore, Pakistan.
| | - Mohammed Ghaly
- Research Center for Islamic Legislation and Ethics (CILE), College of Islamic Studies, Hamad Bin Khalifa University (HBKU), Doha, Qatar.
| | - Ala Al-Fuqaha
- Information and Computing Technology Division, College of Science and Engineering, Hamad Bin Khalifa University (HBKU), Doha, Qatar.
| | - Adeel Razi
- Turner Institute for Brain and Mental Health, Monash University, Clayton, Australia; Monash Biomedical Imaging, Monash University, Clayton, Australia; Wellcome Centre for Human Neuroimaging, UCL, London, United Kingdom; CIFAR Azrieli Global Scholars program, CIFAR, Toronto, Canada.
| | - Junaid Qadir
- Department of Computer Science and Engineering, College of Engineering, Qatar University, Doha, Qatar.
| |
Collapse
|