Copyright
©The Author(s) 2019.
World J Gastrointest Oncol. Dec 15, 2019; 11(12): 1218-1230
Published online Dec 15, 2019. doi: 10.4251/wjgo.v11.i12.1218
Ref. | Research question/Purpose | Method used | Key findings | Conclusions |
--- | --- | --- | --- | --- |
Yasaka et al[31], 2018 | Can deep learning techniques enable the classification of liver masses into categories? | A retrospective study. A CNN was applied to dynamic contrast-enhanced CT image sets of liver masses acquired over 3 phases (55 536 images from 460 patients were used for training and 100 liver mass image sets for testing); a schematic sketch of a multi-phase CNN classifier of this kind follows the table. | The CNN model enabled classification of liver masses into 5 categories. The median area under the receiver operating characteristic curve for differentiating between categories was 0.92. | Deep learning with a CNN enabled classification of liver masses and differentiation between 5 categories at dynamic CT with high accuracy. |
Ben-Cohen et al[32], 2018 | Can a proposed CNN enable the detection of liver metastases in CT examinations of the liver? | The method combined a global context, provided by a fully convolutional network, with a local patch-level analysis using superpixel sparse-based classification. The training set comprised CT examinations from 20 patients with 68 lesions, and the testing set comprised CT examinations from 14 patients with 55 lesions. | The true positive rate was 94.6%, with 2.9 false positives per case. | The system enhanced the detection of small liver metastases and is clinically promising. |
Trivizakis et al[33], 2019 | Evaluate the use of a novel 3D CNN model for tissue classification of medical images and its application in discriminating between primary and metastatic liver tumours. | The proposed model consisted of 4 consecutive strided 3D convolutional layers with 3 × 3 × 3 kernels and ReLU activation functions (a schematic sketch of such a 3D CNN follows this table). | The classification performance was 83% for the 3D model vs 69.6% and 65.2% for two 2D CNN models, demonstrating a significant improvement in tissue classification accuracy over the two 2D CNNs. | The proposed 3D CNN architecture can differentiate between primary and secondary liver tumours. |
Bharti et al[34], 2018 | Test a proposed model to differentiate between chronic liver disease, cirrhosis and the presence of hepatocellular carcinoma (HCC). | The system is based on higher-order features, a hierarchical organisation and multi-resolution analysis, which enabled characterisation of the echotexture and the roughness of the liver surface. | The proposed CNN feature was able to differentiate four liver stages from ultrasound images. The classification accuracy of the model in differentiating between the four groups was 96.6%. | The model is able to differentiate between normal liver, chronic liver disease, cirrhosis and HCC evolved on top of cirrhosis. |
Li et al[35], 2017 | Can a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture be used for hepatocellular carcinoma (HCC) nuclei grading? | A centre-proliferation segmentation (CPS) method was used, and the labels of the grayscale image patches were marked under the guidance of three pathologists. A multiple fully connected convolutional neural network (MFC-CNN) was designed to extract the multi-form feature vectors of each input image automatically. Finally, the CNN-ELM model was used to grade HCC nuclei. | External validation using the ICPR 2014 HEp-2 cell dataset showed the good generalisation of the MFC-CNN-ELM architecture. | The proposed MFC-CNN-ELM has superior performance in grading HCC nuclei compared with related works. |
Li et al[36], 2018 | Can a proposed structured convolutional extreme learning machine (SC-ELM) and case-based shape template (CBST) method be used for HCC nucleus segmentation? | The SC-ELM model was developed for global segmentation of pathology images and was used for coarse segmentation. | The model was evaluated experimentally on 12.7 liver pathology images. | The proposed model demonstrated higher performance compared with related published work. |
Frid-Adar et al[37], 2018 | Can generated medical images be used for synthetic data augmentation and to improve the performance of CNN-based medical image classification? | Using a limited dataset of CT images of 182 liver lesions (53 cysts, 64 metastases and 65 haemangiomas), synthetic medical images were generated using generative adversarial networks (GANs), and a CNN classifier was then used to compare performance with classic data augmentation vs synthetic data augmentation (a schematic sketch of GAN-based augmentation follows this table). | Classification using classic data augmentation yielded 78.6% sensitivity and 88.4% specificity. Adding synthetic data augmentation increased these to 85.7% sensitivity and 92.4% specificity. | The approach of synthetic data augmentation can be generalised to other classification applications. |
Vivanti et al[38], 2018 | Can a CNN-based method enable robust automatic liver tumour delineation in longitudinal CT studies? | The inputs are the baseline scan with its tumour delineation, a follow-up scan, and a global liver tumour CNN. | Results from 222 tumours in 31 patients yielded an average overlap error of 17% (SD = 11.2) and an average surface distance of 2.1 mm (SD = 1.8), which is far better than stand-alone segmentation. | The method shows that the follow-up framework yields accurate tumour tracking with a small training dataset. |
Vivanti et al[39], 2017 | Automatic detection and segmentation of new tumours in longitudinal CT studies and quantification of liver tumour burden. | The inputs were the baseline and follow-up CT scans. The outputs were the segmentations of new tumours in the follow-up scans, together with quantification of tumour burden and of tumour burden change. | A dataset of 246 tumours, of which 97 were new tumours, from 37 longitudinal liver CT studies yielded a new-tumour detection rate of 86% vs 72% with stand-alone detection, and a tumour burden volume overlap error of 16%. | Compared to existing methods, this method enables accurate and reliable detection of known tumours and of new tumours in the follow-up scans. |
Todoroki et al[40], 2018 | Can a deep CNN method detect liver masses regardless of their type and phase (stage) on CT images? | The proposed method is based on deep learning using a multi-layered CNN. The tumour detection method comprised two steps: (1) Segmentation of the liver from CT images; and (2) Calculation of the probability that each pixel within the segmented liver belongs to a lesion (a schematic sketch of this two-step pipeline follows this table). | 3D multi-phase contrast-enhanced CT liver images of 75 cases were used in the study. The cases comprised 5 types of lesions (cysts, focal nodular hyperplasia, hepatocellular carcinoma, haemangioma and metastases), each represented by 15 cases. The detection results for each type were superior to the boundaries detected by doctors and outperformed other methods. | The proposed deep CNN tumour detection method demonstrated an ability to discriminate between the 5 different lesion types, and its performance outperformed that of other convolutional methods. |
Zhang et al[41], 2018 | Can a proposed method using 3D multi-parameter magnetic resonance images and a novel deep CNN classify different types of liver tissue in patients with hepatocellular carcinoma? | A novel deep CNN incorporating auto-context elements into a U-net-like architecture was developed. The model uses a multi-level hierarchical architecture and a multi-phase training procedure. | Liver magnetic resonance images from 20 patients yielded promising results in classifying liver tissues. The method was compared with a benchmark method, with multi-resolution-input, single-phase training and with single-resolution-input, single-phase training, and demonstrated higher discriminatory performance. | The multi-resolution input, the auto-context design and the multi-phase training procedure helped improve the performance of the model. |
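The multi-phase CT classification summarised for Yasaka et al[31] can be illustrated with a minimal sketch in which the three contrast phases are stacked as input channels of a small CNN with five output classes; the layer widths, pooling scheme and crop size are assumptions made for illustration, not the authors' published architecture.

```python
# Minimal sketch (PyTorch): 3 contrast phases stacked as channels -> 5 lesion categories.
# Layer sizes and the crop size are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class PhaseStackedCNN(nn.Module):
    def __init__(self, n_phases: int = 3, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_phases, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # global pooling keeps the head independent of crop size
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, phases, H, W)
        return self.classifier(self.features(x).flatten(1))

model = PhaseStackedCNN()
logits = model(torch.randn(4, 3, 64, 64))   # 4 liver-mass crops, 3 phases each
print(logits.shape)                          # torch.Size([4, 5])
```

Per-category areas under the receiver operating characteristic curve, as reported in the study, could then be computed one-vs-rest on the softmax outputs (e.g., with sklearn.metrics.roc_auc_score).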
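The 3D CNN described by Trivizakis et al[33] (four consecutive strided 3D convolutional layers with 3 × 3 × 3 kernels and ReLU activations) can be sketched as below; the channel widths, the stride of 2 and the classification head are assumptions added to make the sketch self-contained.

```python
# Minimal sketch (PyTorch) of a 4-layer strided 3D CNN with 3x3x3 kernels and ReLU,
# in the spirit of Trivizakis et al[33]; channel widths, strides and the binary head
# (primary vs metastatic tumour) are illustrative assumptions.
import torch
import torch.nn as nn

def conv3d_block(c_in, c_out):
    # a strided convolution downsamples the volume instead of pooling
    return nn.Sequential(nn.Conv3d(c_in, c_out, kernel_size=3, stride=2, padding=1), nn.ReLU())

class Liver3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            conv3d_block(1, 16),
            conv3d_block(16, 32),
            conv3d_block(32, 64),
            conv3d_block(64, 128),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                      # x: (batch, 1, D, H, W) tumour volume
        return self.head(self.encoder(x).flatten(1))

model = Liver3DCNN()
print(model(torch.randn(2, 1, 32, 64, 64)).shape)   # torch.Size([2, 2])
```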
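Frid-Adar et al[37] improved lesion classification by adding GAN-generated lesion patches to the training pool. A minimal sketch of the idea follows: a DCGAN-style generator maps noise vectors to synthetic lesion regions of interest, which are appended to the real, classically augmented training data; the generator layout, latent dimension and 64 × 64 patch size are assumptions, not the authors' exact network.

```python
# Minimal sketch (PyTorch) of GAN-based synthetic augmentation in the spirit of
# Frid-Adar et al[37]: a DCGAN-style generator produces synthetic lesion ROIs that
# are added to the real training pool. Layer layout, 64x64 patch size and latent
# dimension are illustrative assumptions.
import torch
import torch.nn as nn

class LesionGenerator(nn.Module):
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),      # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),       # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),       # 32x32
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                            # 64x64
        )

    def forward(self, z):              # z: (batch, z_dim, 1, 1)
        return self.net(z)

# After adversarial training (omitted), sample synthetic lesions and extend the dataset.
generator = LesionGenerator()
synthetic = generator(torch.randn(16, 100, 1, 1))       # 16 synthetic 64x64 lesion ROIs
real = torch.randn(32, 1, 64, 64)                       # placeholder for real, classically augmented ROIs
augmented_pool = torch.cat([real, synthetic.detach()])  # training pool for the lesion classifier
print(augmented_pool.shape)                             # torch.Size([48, 1, 64, 64])
```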
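The two-step detection pipeline summarised for Todoroki et al[40] (liver segmentation followed by a per-pixel mass probability within the segmented liver) can be outlined as below; both networks are hypothetical placeholders standing in for the study's segmentation and detection CNNs.

```python
# Minimal sketch (PyTorch) of a two-step pipeline in the spirit of Todoroki et al[40]:
# step 1 segments the liver, step 2 assigns each pixel inside the liver a probability
# of belonging to a mass. Both networks are hypothetical placeholders.
import torch
import torch.nn as nn

def tiny_fcn():
    # stand-in fully convolutional network producing one logit per pixel
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),
    )

liver_segmenter = tiny_fcn()     # step 1: liver vs background
mass_detector = tiny_fcn()       # step 2: mass probability per pixel

def detect_masses(ct_slice: torch.Tensor) -> torch.Tensor:
    """ct_slice: (batch, 1, H, W) -> per-pixel mass probability restricted to the liver."""
    liver_mask = (torch.sigmoid(liver_segmenter(ct_slice)) > 0.5).float()
    mass_prob = torch.sigmoid(mass_detector(ct_slice))
    return mass_prob * liver_mask    # zero out probabilities outside the segmented liver

print(detect_masses(torch.randn(1, 1, 128, 128)).shape)   # torch.Size([1, 1, 128, 128])
```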
- Citation: Azer SA. Deep learning with convolutional neural networks for identification of liver masses and hepatocellular carcinoma: A systematic review. World J Gastrointest Oncol 2019; 11(12): 1218-1230
- URL: https://www.wjgnet.com/1948-5204/full/v11/i12/1218.htm
- DOI: https://dx.doi.org/10.4251/wjgo.v11.i12.1218