Observational Study
Copyright ©The Author(s) 2023. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Radiol. Dec 28, 2023; 15(12): 359-369
Published online Dec 28, 2023. doi: 10.4329/wjr.v15.i12.359
Methods for improving colorectal cancer annotation efficiency for artificial intelligence-observer training
Matthew Grudza, Brandon Salinel, Sarah Zeien, Matthew Murphy, Jake Adkins, Corey T Jensen, Curtis Bay, Vikram Kodibagkar, Phillip Koo, Tomislav Dragovich, Michael A Choti, Madappa Kundranda, Tanveer Syeda-Mahmood, Hong-Zhi Wang, John Chang
Matthew Grudza, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ 85287, United States
Brandon Salinel, Phillip Koo, John Chang, Department of Radiology, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
Sarah Zeien, Matthew Murphy, School of Osteopathic Medicine, A.T. Still University, Mesa, AZ 85206, United States
Jake Adkins, Department of Abdominal Imaging, MD Anderson Cancer Center, Houston, TX 77030, United States
Corey T Jensen, Department of Abdominal Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, United States
Curtis Bay, Department of Interdisciplinary Sciences, A.T. Still University, Mesa, AZ 85206, United States
Vikram Kodibagkar, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ 85287, United States
Tomislav Dragovich, Madappa Kundranda, Division of Cancer Medicine, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
Michael A Choti, Department of Surgical Oncology, Banner MD Anderson Cancer Center, Gilbert, AZ 85234, United States
Tanveer Syeda-Mahmood, Hong-Zhi Wang, IBM Almaden Research Center, IBM, San Jose, CA 95120, United States
Author contributions: Grudza M and Kodibagkar V contributed to the data analysis and the initial write-up of the manuscript; Salinel B, Zeien S, and Murphy M contributed to the ground-truth annotation of the training and testing datasets; Adkins J and Jensen CT contributed to the data curation from MD Anderson; Syeda-Mahmood T and Wang HZ contributed to the AI model development and training; Bay C contributed to the statistical analysis of the data; Koo P, Dragovich T, Choti MA, and Kundranda M contributed to the Banner MD Anderson data collection and manuscript revision; Chang J conceived and oversaw the entire project and the manuscript write-up.
Institutional review board statement: The study was reviewed and approved by the Banner MD Anderson Cancer Center IRB.
Informed consent statement: The study was approved by the Banner MD Anderson Cancer Center IRB with an exemption from individual consent due to the retrospective nature of the data collection.
Conflict-of-interest statement: All the authors report no relevant conflicts of interest for this article.
Data sharing statement: The dataset can be made available by contacting the corresponding author at john.chang@bannerhealth.com.
STROBE statement: The authors have read the STROBE Statement-checklist of items, and the manuscript was prepared and revised according to the STROBE Statement-checklist of items.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: John Chang, MD, PhD, Doctor, Department of Radiology, Banner MD Anderson Cancer Center, 2940 E. Banner Gateway Drive, Suite 315, Gilbert, AZ 85234, United States. changresearch1@gmail.com
Received: October 3, 2023
Peer-review started: October 3, 2023
First decision: October 9, 2023
Revised: November 13, 2023
Accepted: December 5, 2023
Article in press: December 5, 2023
Published online: December 28, 2023
Processing time: 83 Days and 5.1 Hours
Abstract
BACKGROUND

Missing occult cancer lesions accounts for the largest share of diagnostic errors in retrospective radiology reviews, as early cancers can be small or subtle and therefore difficult to detect. A second observer is the most effective technique for reducing these errors and can be implemented economically with the advent of artificial intelligence (AI).

AIM

Appropriate AI model training requires a large annotated dataset. Our goal in this research was to compare two methods for decreasing the annotation time needed to establish ground truth: Skip-slice annotation and AI-initiated annotation.

METHODS

We developed a 2D U-Net as an AI second observer for detecting colorectal cancer (CRC), as well as an ensemble of five differently initialized 2D U-Nets for the ensemble technique. Each model was trained with 51 annotated CRC computed tomography scans of the abdomen and pelvis, tested with 7 cases, and validated with 20 cases from The Cancer Imaging Archive. The sensitivity, false positives per case, and estimated Dice coefficient were obtained for each training method. We compared the two annotation methods and the time reduction associated with each. The time differences were tested using Friedman's two-way analysis of variance.
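As an illustration of the statistical comparison, the minimal sketch below applies Friedman's two-way analysis of variance by ranks to per-case annotation times under different annotation schemes; the scipy call is a standard implementation of this test, but the variable names and timing values are hypothetical placeholders, not the authors' data or code.

```python
# Illustrative sketch (not the authors' code): Friedman's two-way ANOVA by ranks
# comparing per-case annotation times across annotation schemes, via scipy.
from scipy.stats import friedmanchisquare

# Hypothetical annotation times (minutes) for the same cases under each scheme:
# full annotation, skipping 1 slice at a time, and skipping 2 slices at a time.
full_slice = [30, 42, 27, 35, 48, 33]
skip_one = [18, 25, 16, 21, 29, 20]
skip_two = [12, 17, 11, 14, 19, 13]

statistic, p_value = friedmanchisquare(full_slice, skip_one, skip_two)
print(f"Friedman chi-square = {statistic:.2f}, P = {p_value:.4f}")
```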

RESULTS

Sparse annotation significantly reduces the annotation time, particularly when skipping 2 slices at a time (P < 0.001). Reducing the annotation by up to two-thirds does not reduce AI model sensitivity or increase the false positives per case. Although initializing human annotation with AI reduces the annotation time, the reduction is minimal, even when an ensemble AI is used to decrease false positives.
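For concreteness, the following minimal sketch shows one way a skip-slice scheme could be realized, keeping one hand-annotated slice out of every three so roughly two-thirds of the annotation effort is saved; the function name, volume size, and mask contents are hypothetical illustrations, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): build a sparse training mask
# by keeping only every (skip + 1)-th annotated axial slice of a CT volume.
import numpy as np

def sparse_slice_selection(num_slices: int, skip: int) -> np.ndarray:
    """Return indices of the axial slices that would still be hand-annotated."""
    return np.arange(0, num_slices, skip + 1)

# Hypothetical 90-slice abdomen/pelvis CT volume with a full annotation mask.
volume_mask = np.random.randint(0, 2, size=(90, 512, 512), dtype=np.uint8)
keep = sparse_slice_selection(volume_mask.shape[0], skip=2)  # skip 2 slices at a time

sparse_mask = np.zeros_like(volume_mask)
sparse_mask[keep] = volume_mask[keep]  # unannotated slices contribute no labels

fraction_annotated = len(keep) / volume_mask.shape[0]
print(f"Slices annotated: {len(keep)}/{volume_mask.shape[0]} "
      f"({fraction_annotated:.0%} of the full annotation effort)")
```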

CONCLUSION

Our data support sparse annotation as an efficient technique for reducing the time needed to establish the ground truth.

Keywords: Artificial intelligence; Colorectal cancer; Detection

Core Tip: Minimizing diagnostic errors for colorectal cancer may be most effectively accomplished with an artificial intelligence (AI) second observer. Supervised training of an AI observer requires a high volume of annotated training cases. Comparing skip-slice annotation with AI-initiated annotation shows that skipping slices does not affect the training outcome, whereas AI-initiated annotation does not significantly improve annotation time.