1. Delmoral JC, Tavares JMRS. Semantic Segmentation of CT Liver Structures: A Systematic Review of Recent Trends and Bibliometric Analysis: Neural Network-based Methods for Liver Semantic Segmentation. J Med Syst 2024; 48:97. [PMID: 39400739] [PMCID: PMC11473507] [DOI: 10.1007/s10916-024-02115-6]
Abstract
The use of artificial intelligence (AI) in the segmentation of liver structures in medical images has become a popular research focus in the past half-decade. The performance of AI tools for this task varies widely and has been tested in the literature on various datasets. However, no scientometric report has provided a systematic overview of this scientific area. This article presents a systematic and bibliometric review of recent advances in neural network modeling approaches, mainly deep learning, to outline the multiple research directions of the field in terms of algorithmic features. It therefore provides a detailed systematic review of the most relevant publications addressing fully automatic semantic segmentation of liver structures in Computed Tomography (CT) images in terms of algorithm modeling objective, performance benchmark, and model complexity. The review suggests that fully automatic hybrid 2D and 3D networks are the top performers in the semantic segmentation of the liver. In the case of liver tumor and vasculature segmentation, fully automatic generative approaches perform best. However, the reported performance benchmarks indicate that there is still much to be improved in segmenting such small structures in high-resolution abdominal CT scans.
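Segmentation benchmarks of the kind this review compares are most commonly reported as Dice similarity coefficients. As an illustrative sketch (not taken from the reviewed paper; the function name and toy masks are hypothetical), the metric can be computed from two binary masks as:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two 4x4 toy masks that overlap on 2 of their 3 foreground pixels each
a = np.zeros((4, 4)); a[0, 0] = a[0, 1] = a[1, 1] = 1
b = np.zeros((4, 4)); b[0, 1] = b[1, 1] = b[2, 2] = 1
print(round(dice_coefficient(a, b), 3))  # 2*2/(3+3) -> 0.667
```

The same formula extends unchanged to 3D CT volumes, since it only sums voxel overlaps.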
Affiliation(s)
- Jessica C Delmoral
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s/n, 4200-465, Porto, Portugal
- João Manuel R S Tavares
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s/n, 4200-465, Porto, Portugal.
2. Deeb M, Gangadhar A, Rabindranath M, Rao K, Brudno M, Sidhu A, Wang B, Bhat M. The emerging role of generative artificial intelligence in transplant medicine. Am J Transplant 2024; 24:1724-1730. [PMID: 38901561] [DOI: 10.1016/j.ajt.2024.06.009]
Abstract
Generative artificial intelligence (AI), a subset of machine learning that creates new content based on training data, has witnessed tremendous advances in recent years. Practical applications have been identified in health care in general, and there is significant opportunity in transplant medicine for generative AI to simplify tasks in research, medical education, and clinical practice. In addition, patients stand to benefit from patient education that can be more readily provided by generative AI applications. This review aims to catalyze the development and adoption of generative AI in transplantation by introducing basic AI and generative AI concepts to the transplant clinician and summarizing its current and potential applications within the field. We provide an overview of applications for the clinician, researcher, educator, and patient. We also highlight the challenges involved in bringing these applications to the bedside and the need for ongoing refinement of generative AI applications to sustainably augment the transplantation field.
Affiliation(s)
- Maya Deeb
- Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada; Division of Gastroenterology and Hepatology, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Anirudh Gangadhar
- Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada
- Khyathi Rao
- Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada
- Michael Brudno
- DATA Team, University Health Network, Toronto, Ontario, Canada
- Aman Sidhu
- Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada
- Bo Wang
- DATA Team, University Health Network, Toronto, Ontario, Canada
- Mamatha Bhat
- Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada; Division of Gastroenterology and Hepatology, Department of Medicine, University of Toronto, Toronto, Ontario, Canada.
3. Majumder S, Katz S, Kontos D, Roshkovan L. State of the art: radiomics and radiomics-related artificial intelligence on the road to clinical translation. BJR Open 2024; 6:tzad004. [PMID: 38352179] [PMCID: PMC10860524] [DOI: 10.1093/bjro/tzad004]
Abstract
Radiomics and artificial intelligence carry the promise of increased precision in oncologic imaging assessments due to their ability to harness thousands of occult digital imaging features embedded in conventional medical imaging data. While powerful, these technologies suffer from a number of sources of variability that currently impede clinical translation. To overcome this impediment, these sources of variability need to be controlled through harmonization of imaging data acquisition across institutions, construction of standardized imaging protocols that maximize the acquisition of these features, harmonization of post-processing techniques, and big-data resources to properly power studies for hypothesis testing. For this to be accomplished, multidisciplinary and multi-institutional collaboration will be critical.
Affiliation(s)
- Shweta Majumder
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States
- Sharyn Katz
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States
- Despina Kontos
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States
- Leonid Roshkovan
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States
4. Paladugu PS, Ong J, Nelson N, Kamran SA, Waisberg E, Zaman N, Kumar R, Dias RD, Lee AG, Tavakkoli A. Generative Adversarial Networks in Medicine: Important Considerations for this Emerging Innovation in Artificial Intelligence. Ann Biomed Eng 2023; 51:2130-2142. [PMID: 37488468] [DOI: 10.1007/s10439-023-03304-z]
Abstract
The advent of artificial intelligence (AI) and machine learning (ML) has revolutionized the field of medicine. Although highly effective, the rapid expansion of this technology has created some anticipated and unanticipated bioethical considerations. With these powerful applications comes the need for regulatory frameworks to ensure the equitable and safe deployment of the technology. Generative Adversarial Networks (GANs) are emerging ML techniques with immense applications in medical imaging due to their ability to produce synthetic medical images and aid in medical AI training. Producing accurate synthetic images with GANs can address current limitations in AI development for medical imaging and overcome current constraints on dataset type and size. Offsetting these constraints can dramatically improve the development and implementation of AI medical imaging and restructure the practice of medicine. As with earlier AI technologies, safeguards must be put in place to help regulate its development for clinical use. In this paper, we discuss the legal, ethical, and technical challenges for the future safe integration of this technology into the healthcare sector.
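For readers new to GANs, the adversarial objective the abstract alludes to can be illustrated with a toy sketch (an assumption-laden illustration, not the authors' implementation; the scores and function names are hypothetical): the discriminator is trained to assign high probability to real images and low probability to synthetic ones, while the generator is trained to fool it.

```python
import numpy as np

def discriminator_loss(d_real: np.ndarray, d_fake: np.ndarray) -> float:
    # Binary cross-entropy form of the GAN value function:
    # the discriminator maximizes log D(x) + log(1 - D(G(z))).
    return float(-(np.log(d_real) + np.log(1.0 - d_fake)).mean())

def generator_loss(d_fake: np.ndarray) -> float:
    # Non-saturating generator objective: maximize log D(G(z)).
    return float(-np.log(d_fake).mean())

# Toy discriminator outputs (probabilities that a sample is real)
d_real = np.array([0.9, 0.8])   # confident scores on real images
d_fake = np.array([0.2, 0.1])   # low scores on generated images

# A well-separating discriminator has a lower loss than a 50/50 guesser
print(discriminator_loss(d_real, d_fake) < discriminator_loss(
    np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # True
```

In practice both players are neural networks updated alternately by gradient descent; this sketch only shows the losses each side optimizes.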
Affiliation(s)
- Phani Srivatsav Paladugu
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, MI, USA
- Nicolas Nelson
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Roger Daglius Dias
- Department of Emergency Medicine, Harvard Medical School, Boston, MA, USA
- STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, MA, USA
- Andrew Go Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M College of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA.
5. Rezaei M, Näppi JJ, Bischl B, Yoshida H. Bayesian uncertainty estimation for detection of long-tailed and unseen conditions in medical images. J Med Imaging (Bellingham) 2023; 10:054501. [PMID: 37818179] [PMCID: PMC10560997] [DOI: 10.1117/1.jmi.10.5.054501]
Abstract
PURPOSE Deep supervised learning provides an effective approach for developing robust models for various computer-aided diagnosis tasks. However, there is often an underlying assumption that the frequencies of the samples between the different classes of the training dataset are similar or balanced. In real-world medical data, the samples of positive classes often occur too infrequently to satisfy this assumption. Thus, there is an unmet need for deep-learning systems that can automatically identify and adapt to the real-world conditions of imbalanced data. APPROACH We propose a deep Bayesian ensemble learning framework to address the representation learning problem of long-tailed and out-of-distribution (OOD) samples when training from medical images. By estimating the relative uncertainties of the input data, our framework can adapt to imbalanced data for learning generalizable classifiers. We trained and tested our framework on four public medical imaging datasets with various imbalance ratios and imaging modalities across three different learning tasks: semantic medical image segmentation, OOD detection, and in-domain generalization. We compared the performance of our framework with those of state-of-the-art comparator methods. RESULTS Our proposed framework outperformed the comparator models significantly across all performance metrics (pairwise t-test: p < 0.01) in the semantic segmentation of high-resolution CT and MR images as well as in the detection of OOD samples (p < 0.01), thereby showing significant improvement in handling the associated long-tailed data distribution. The results of the in-domain generalization also indicated that our framework can enhance the prediction of retinal glaucoma, contributing to clinical decision-making processes.
CONCLUSIONS Training of the proposed deep Bayesian ensemble learning framework with dynamic Monte-Carlo dropout and a combination of losses yielded the best generalization to unseen samples from imbalanced medical imaging datasets across different learning tasks.
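The Monte-Carlo dropout ingredient named in the conclusions can be sketched on a toy one-unit model (an illustration under stated assumptions, not the authors' framework; the function and variable names are hypothetical): dropout stays active at prediction time, and the spread of repeated stochastic forward passes serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_dropout_predict(x: np.ndarray, w: np.ndarray, T: int = 100, p_drop: float = 0.5):
    """Toy Monte-Carlo dropout: average T stochastic forward passes;
    the standard deviation across passes estimates predictive uncertainty."""
    preds = []
    for _ in range(T):
        mask = rng.random(w.shape) > p_drop           # randomly drop weights
        logits = x @ (w * mask) / (1.0 - p_drop)      # inverted-dropout scaling
        preds.append(1.0 / (1.0 + np.exp(-logits)))   # sigmoid output
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)      # prediction, uncertainty

x = np.array([0.5, -1.0, 2.0])   # one toy input
w = np.array([1.0, 0.5, -0.2])   # toy weights of a single-unit "model"
mean, std = mc_dropout_predict(x, w)
print(mean, std)  # inputs the model is unsure about show a larger std
```

In a real segmentation network the same loop runs per voxel, and high-std regions flag samples (e.g. long-tailed or OOD cases) for closer review.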
Affiliation(s)
- Mina Rezaei
- LMU Munich, Department of Statistics, Munich, Germany
- Munich Center for Machine Learning, Munich, Germany
- Janne J. Näppi
- Massachusetts General Hospital, Harvard Medical School, 3D Imaging Research, Department of Radiology, Boston, Massachusetts, United States
- Bernd Bischl
- LMU Munich, Department of Statistics, Munich, Germany
- Munich Center for Machine Learning, Munich, Germany
- Hiroyuki Yoshida
- Massachusetts General Hospital, Harvard Medical School, 3D Imaging Research, Department of Radiology, Boston, Massachusetts, United States
6. Li J, Liao G, Sun W, Sun J, Sheng T, Zhu K, von Deneen KM, Zhang Y. A 2.5D semantic segmentation of the pancreas using attention guided dual context embedded U-Net. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.044]
7. Cheng R, Crouzier M, Hug F, Tucker K, Juneau P, McCreedy E, Gandler W, McAuliffe MJ, Sheehan FT. Automatic quadriceps and patellae segmentation of MRI with cascaded U2-Net and SASSNet deep learning model. Med Phys 2022; 49:443-460. [PMID: 34755359] [PMCID: PMC8758556] [DOI: 10.1002/mp.15335]
Abstract
PURPOSE Automatic muscle segmentation is critical for advancing our understanding of human physiology, biomechanics, and musculoskeletal pathologies, as it allows for timely exploration of large multi-dimensional image sets. Segmentation models are rarely developed or validated for pediatric populations. As such, autosegmentation is not available to explore how muscle architecture changes during development and how disease or pathology affects the developing musculoskeletal system. Thus, we aimed to develop and validate an end-to-end, fully automated, deep learning model for accurate segmentation of the rectus femoris and vastus lateralis, medialis, and intermedialis using a pediatric database. METHODS We developed a two-stage cascaded deep learning model in a coarse-to-fine manner. In the first stage, the U2-Net roughly detects the muscle subcompartment region. Then, in the second stage, the shape-aware 3D semantic segmentation method SASSNet refines the cropped target regions to generate finer and more accurate segmentation masks. We utilized multifeature image maps in both stages to stabilize performance and validated their use with an ablation study. The second-stage SASSNet was independently run and evaluated with three different cropped-region resolutions: the original image resolution, and images downsampled 2× and 4× (high, mid, and low). The relationship between image resolution and segmentation accuracy was explored. In addition, the patella was included as a comparator to past work. We evaluated segmentation accuracy using leave-one-out testing on a database of 3D MR images (0.43 × 0.43 × 2 mm) from 40 pediatric participants (age 15.3 ± 1.9 years, 55.8 ± 11.8 kg, 164.2 ± 7.9 cm, 38 F/2 M).
RESULTS The mid-resolution second stage produced the best results for the vastus medialis, rectus femoris, and patella (Dice similarity coefficient, DSC = 95.0%, 95.1%, 93.7%), whereas the low-resolution second stage produced the best results for the vastus lateralis and vastus intermedialis (DSC = 94.5% and 93.7%). In comparing the low- to mid-resolution cases, the vastus intermedialis, vastus medialis, rectus femoris, and patella produced significant differences (p = 0.0015, p = 0.0101, p < 0.0001, p = 0.0003) and the vastus lateralis did not (p = 0.2177). The high-resolution stage 2 had significantly lower accuracy (by 1.0 to 4.4 Dice percentage points) compared to both the mid- and low-resolution routines (p values ranged from < 0.001 to 0.04). The one exception was the rectus femoris, where there was no difference between the low- and high-resolution cases. The ablation study demonstrated that the multifeature input is more reliable than a single feature. CONCLUSIONS Our successful implementation of this two-stage segmentation pipeline provides a critical tool for expanding pediatric muscle physiology and clinical research. With a relatively small and variable dataset, our fully automatic segmentation technique produced accuracies that matched or exceeded the current state of the art. The two-stage approach avoids memory issues and excessive run times by using a first stage focused on cropping out unnecessary data. The excellent Dice similarity coefficients improve upon previous template-based automatic and semiautomatic methodologies targeting the leg musculature. More importantly, with a naturally variable dataset (size, shape, etc.), the proposed model demonstrates slightly improved accuracies compared to previous neural network methods.
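The coarse-to-fine cropping step described in the METHODS above can be sketched as follows (a hypothetical helper, not the authors' code): the stage-1 mask defines a bounding box, the volume is cropped with a small margin, and that crop is what stage 2 refines.

```python
import numpy as np

def crop_to_roi(volume: np.ndarray, mask_coarse: np.ndarray, margin: int = 2):
    """Stage-1 output -> bounding box -> cropped region for stage-2 refinement."""
    idx = np.argwhere(mask_coarse)                       # voxels flagged by the coarse stage
    lo = np.maximum(idx.min(axis=0) - margin, 0)         # clamp margin to volume bounds
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slices], slices                        # crop + where to paste results back

# Toy 3D "MRI" with a small target region flagged by the coarse stage
vol = np.zeros((16, 16, 16))
coarse = np.zeros_like(vol, dtype=bool)
coarse[6:9, 6:9, 6:9] = True
crop, slices = crop_to_roi(vol, coarse, margin=2)
print(crop.shape)  # (7, 7, 7): the 3-voxel ROI plus a 2-voxel margin per side
```

Cropping before refinement is what keeps the second-stage network's memory footprint and run time bounded, as the conclusions note.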
Affiliation(s)
- Ruida Cheng
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Marion Crouzier
- University of Nantes, Movement, Interactions, Performance, MIP, EA 4334, F-44000 Nantes, France; The University of Queensland, School of Biomedical Sciences, Brisbane
- François Hug
- Institut Universitaire de France (IUF), Paris, France; Université Côte d'Azur, LAMHESS, Nice, France
- Kylie Tucker
- The University of Queensland, School of Biomedical Sciences, Brisbane
- Paul Juneau
- NIH Library, Office of Research Services, National Institutes of Health, Bethesda, MD, USA
- Evan McCreedy
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- William Gandler
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Matthew J. McAuliffe
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Frances T. Sheehan
- Rehabilitation Medicine Department, National Institutes of Health Clinical Center, Bethesda, MD, USA
8. Li Y, Zhou D, Liu TT, Shen XZ. Application of deep learning in image recognition and diagnosis of gastric cancer. Artif Intell Gastrointest Endosc 2021; 2:12-24. [DOI: 10.37126/aige.v2.i2.12]
Abstract
In recent years, artificial intelligence has been extensively applied to the diagnosis of gastric cancer based on medical imaging, and deep learning in particular, as one of the mainstream approaches in image processing, has made remarkable progress. In this paper, we provide a comprehensive literature survey using four electronic databases: PubMed, EMBASE, Web of Science, and Cochrane. The literature search was performed up to November 2020. This article summarizes the existing image-recognition algorithms, reviews the datasets available for gastric cancer diagnosis and the current trends in applying deep learning to image recognition of gastric cancer, and covers the theory of deep learning for endoscopic image recognition. We further evaluate the advantages and disadvantages of the current algorithms, summarize the characteristics of the existing image datasets, and, drawing on the latest progress in deep learning theory, propose suggestions for applying optimization algorithms. Based on existing research and applications, the labels, quantity, size, resolution, and other aspects of the image datasets are also discussed. The future development of this field is analyzed from two perspectives, algorithm optimization and data support, with the aim of improving diagnostic accuracy and reducing the risk of misdiagnosis.
Affiliation(s)
- Yu Li
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Da Zhou
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Tao-Tao Liu
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Xi-Zhong Shen
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China