Body part recognition is essential in automatic medical image analysis, as it is a prerequisite for anatomy identification and organ segmentation[i],[ii]. Accurate body part classification facilitates organ detection and segmentation by reducing the search range for an organ of interest. As a result, an organ of interest can be identified quickly, efficiently, and with higher accuracy than by relying on the text-based body part information in DICOM (Digital Imaging and Communications in Medicine) headers[iii].
Multiple techniques using multi-class random regression and decision forests have been developed to classify 6-10 anatomical structures on computed tomography (CT) scans[iv],[v]. These classifiers can discriminate even between similar structures such as the aortic arch and heart. However, these prior works focus on general anatomical body part classification; organ-specific classification has not been studied for applications such as organ dose estimation.
Accordingly, we present a machine-learning-powered automatic organ classifier for CT datasets, using a deep convolutional neural network (CNN) followed by an organ dose calculation. We labeled 16 different organs on axial views of CT images. A 22-layer deep CNN was trained and validated on a 646-scan CT dataset using the NVIDIA Deep Learning GPU Training System (DIGITS). Each classified organ was automatically mapped to the slab number of a mathematical hermaphrodite phantom to determine the scan range for the ImPACT CT dose calculator[vi]. Unlike other simulation-based methods, this technique can be used for patient-specific organ dose estimation, since the locations and sizes of each patient's organs are calculated independently.
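The organ-to-phantom mapping step can be sketched as a simple lookup: each organ label predicted by the CNN is translated to a slab interval of the phantom, and the union of intervals becomes the scan range passed to the dose calculator. The organ names and slab numbers below are illustrative placeholders, not the actual ImPACT phantom values.

```python
# Hypothetical mapping from predicted organ label to (first, last) phantom slab.
# Slab numbers are illustrative, not the real ImPACT phantom indices.
ORGAN_TO_SLABS = {
    "liver":   (32, 41),
    "lungs":   (40, 54),
    "kidneys": (28, 36),
}

def scan_range(predicted_organs):
    """Union of slab intervals covering every organ predicted on the series."""
    starts, ends = zip(*(ORGAN_TO_SLABS[o] for o in predicted_organs))
    return min(starts), max(ends)
```

Given per-slice organ predictions, such a lookup yields a patient-specific scan range without any Monte Carlo simulation of organ locations.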
[i] Yan, Zhennan, Yiqiang Zhan, Zhigang Peng, Shu Liao, Yoshihisa Shinagawa, Shaoting Zhang, Dimitris N. Metaxas, and Xiang Sean Zhou. "Multi-Instance Deep Learning: Discover Discriminative Local Anatomies for Bodypart Recognition." IEEE transactions on medical imaging 35, no. 5 (2016): 1332-1343.
[ii] Roth, Holger R., Christopher T. Lee, Hoo-Chang Shin, Ari Seff, Lauren Kim, Jianhua Yao, Le Lu, and Ronald M. Summers. "Anatomy-specific classification of medical images using deep convolutional nets." In 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), pp. 101-104. IEEE, 2015.
[iii] Gueld, Mark O., Michael Kohnen, Daniel Keysers, Henning Schubert, Berthold B. Wein, Joerg Bredno, and Thomas M. Lehmann. "Quality of DICOM header information for image categorization." In Medical Imaging 2002, pp. 280-287. International Society for Optics and Photonics, 2002.
[iv] Criminisi, Antonio, Jamie Shotton, and Stefano Bucciarelli. "Decision forests with long-range spatial context for organ localization in CT volumes." In Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 69-80. 2009.
[v] Criminisi, Antonio, Duncan Robertson, Ender Konukoglu, Jamie Shotton, Sayan Pathak, Steve White, and Khan Siddiqui. "Regression forests for efficient anatomy detection and localization in computed tomography scans." Medical image analysis 17, no. 8 (2013): 1293-1303.
[vi] ImPACT, C. T. "Patient Dosimetry Calculator, version 1.0.5." National Radiation Protection Board (2011).
[vii] Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang et al. "Imagenet large scale visual recognition challenge." International Journal of Computer Vision 115, no. 3 (2015): 211-252.
A peripherally inserted central catheter (PICC) is a thin, flexible plastic tube that provides medium-term intravenous access for medicine, fluid, and chemotherapy administration. PICCs are inserted into arm veins and threaded through the patient until the catheter tip reaches a large vein near the heart. Because malpositioned PICCs can have potentially serious complications, the final position of every PICC is confirmed with a chest radiograph immediately after insertion. This radiograph requires timely and accurate interpretation by a highly trained expert in medical imaging interpretation: a radiologist. Although the rate at which radiologists misinterpret PICC location is likely extremely low, delays in interpretation can be substantial, particularly when this radiograph is one of many to be interpreted alongside imaging studies from different modalities and different patients also requiring diagnostic attention. Machine intelligence techniques can help prioritize and triage such radiographs to the top of a radiologist's queue, improving workflow and turn-around time (TAT). Such prioritization does not require high specificity, but rather high sensitivity: the system should alert the radiologist to all potentially important radiographs requiring immediate attention, with a low false negative rate.
Computer-Aided Detection (CAD) is the current FDA-approved approach to aiding radiologists in the interpretation of medical images and decreasing misses. Recent advances in deep learning applied to medical imaging have shown much promise for new image interpretation tools, including improving CAD performance with deep convolutional neural networks (DCNNs). DCNNs can automatically extract salient features from vast datasets and classify data into output classes using the extracted features. They have been applied to many medical image analysis tasks, including automatic pulmonary nodule detection, cerebral microhemorrhage detection, and brain tumor segmentation. However, a system for PICC line detection has not previously been emphasized in the literature.
In this paper, we propose a deep-learning-driven platform to assist radiologists in rapidly detecting and confirming PICC placement, with emphasis on incorrect placement, accelerating its recognition and helping avoid serious complications. We first developed a preprocessing pipeline that isolates the region of interest and reduces the numerous false positives caused by the inherent noise in radiographs, while keeping the false negative rate low. We then applied a patch-based approach that splits an image into smaller patches, classifies them with a trained model, and produces a result image annotated with the trajectory of the PICC and the location of its tip.
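The patch-based step above can be sketched in a few lines of numpy: tile the radiograph into patches, run each through a classifier, and build a mask of positive patches from which the catheter trajectory and tip can later be traced. The patch size, stride, and the `classify` callable are placeholders standing in for the trained DCNN.

```python
import numpy as np

def extract_patches(image, patch=32, stride=32):
    """Tile a 2-D radiograph into (row, col, patch) tuples."""
    H, W = image.shape
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            yield r, c, image[r:r + patch, c:c + patch]

def annotate(image, classify, patch=32, stride=32):
    """Mark every patch the classifier flags as containing the PICC."""
    mask = np.zeros_like(image, dtype=bool)
    for r, c, p in extract_patches(image, patch, stride):
        if classify(p):  # stand-in for the trained model's prediction
            mask[r:r + patch, c:c + patch] = True
    return mask
```

The resulting boolean mask approximates the PICC trajectory; the tip can then be estimated as an extremal positive location along that trajectory.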
OBJECTIVE. The purpose of this study was to determine whether use of iterative image reconstruction algorithms improves the accuracy of coronary CT angiography (CCTA) compared with intravascular ultrasound (IVUS) in semiautomated plaque burden assessment.
MATERIALS AND METHODS. CCTA and IVUS images of seven coronary arteries were acquired ex vivo. CT images were reconstructed with filtered back projection (FBP) and adaptive statistical (ASIR) and model-based (MBIR) iterative reconstruction algorithms. Cross-sectional images of the arteries were coregistered between CCTA and IVUS in 1-mm increments. In CCTA, fully automated (without manual corrections) and semiautomated (allowing manual corrections of vessel wall boundaries) plaque burden assessments were performed for each of the reconstruction algorithms with commercially available software. In IVUS, plaque burden was measured manually. Agreement between CCTA and IVUS was determined with Pearson correlation.
RESULTS. A total of 173 corresponding cross sections were included. The mean plaque burden measured with IVUS was 63.39% ± 10.63%. With CCTA and the fully automated technique, it was 54.90% ± 11.70% with FBP, 53.34% ± 13.11% with ASIR, and 55.35% ± 12.22% with MBIR. With CCTA and the semiautomated technique, it was 54.90% ± 11.76% with FBP, 53.40% ± 12.85% with ASIR, and 57.09% ± 11.05% with MBIR. Manual correction of the semiautomated assessments was performed in 39% of all cross sections and improved plaque burden correlation with the IVUS assessment independently of reconstruction algorithm (p < 0.0001). Furthermore, MBIR was superior to FBP and ASIR independently of assessment method (semiautomated, r = 0.59 for FBP, r = 0.52 for ASIR, r = 0.78 for MBIR, all p < 0.001; fully automated, r = 0.40 for FBP, r = 0.37 for ASIR, r = 0.53 for MBIR, all p < 0.001).
CONCLUSION. For the quantification of plaque burden with CCTA, MBIR led to better correlation with IVUS than did traditional reconstruction algorithms such as FBP, independently of the use of a fully automated or semiautomated assessment approach. The highest accuracy for quantifying plaque burden with CCTA can be achieved by using MBIR data with semiautomated assessment.
Skeletal maturity progresses through discrete phases, a fact used routinely in pediatrics, where bone age assessments (BAAs) are compared with chronological age in the evaluation of endocrine and metabolic disorders. While central to many disease evaluations, this tedious process has changed little since its introduction in 1950. In this study, we propose a fully automated deep learning pipeline that segments a region of interest, standardizes and preprocesses input radiographs, and performs BAA. Our models use an ImageNet-pretrained, fine-tuned convolutional neural network (CNN) to achieve accuracies of 57.32% and 61.40% for the female and male cohorts on our held-out test images. Female test radiographs were assigned a BAA within 1 year 90.39% of the time and within 2 years 98.11% of the time; male test radiographs, 94.18% within 1 year and 99.00% within 2 years. Using the input occlusion method, we created attention maps that reveal which features the trained model uses to perform BAA; these correspond to the features human experts examine when performing BAA manually. Finally, the fully automated BAA system was deployed in the clinical environment as a decision support system for more accurate and efficient BAAs, at a much faster interpretation time (<2 s) than the conventional method.
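The input occlusion method referenced above is simple to sketch: slide an occluding patch over the radiograph and record how much the model's score for the target class drops at each position; large drops mark regions the model relies on. The patch size, stride, occluder value, and `predict` callable below are illustrative assumptions, not the study's exact settings.

```python
import numpy as np

def occlusion_map(image, predict, target, patch=8, stride=8):
    """Input occlusion: occlude each region with the image mean and
    record the drop in the target-class score (big drop = important)."""
    base = predict(image)[target]
    H, W = image.shape
    rows = range(0, H - patch + 1, stride)
    cols = range(0, W - patch + 1, stride)
    heat = np.zeros((len(rows), len(cols)))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = image.mean()
            heat[i, j] = base - predict(occluded)[target]
    return heat
```

Upsampling the resulting heat map back to image resolution yields the attention maps that can be compared against the regions human experts examine.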
To evaluate the importance of clinical and imaging features for machine learning predictions of multidisciplinary hepatocellular carcinoma (HCC) tumor board recommendations at a large academic center.
We created a HIPAA-compliant tumor board registry containing clinical and imaging characteristics of cases presented to the multidisciplinary team, which included diagnostic radiology, interventional radiology, radiation oncology, surgical oncology, medical oncology, and the transplant team. We then evaluated 50 consecutive cases with machine learning algorithms to identify highly predictive imaging and clinical features, including: number of enhancing lesions, largest lesion size, OPTN classification, MELD score, Child-Pugh score, and tumor board treatment recommendation. Machine learning analysis was conducted using models based on Breiman's CART decision tree, logistic regression, and several ensemble models.
Factors that were considered highly predictive using our machine learning algorithmic representation (>10%) of the multidisciplinary HCC tumor board included: largest lesion size (16.68%), segments involved (14.56%), patient age (13.72%), enhancing lesions (11.96%), and number of OPTN 5 lesions (11.64%). Factors that were not highly predictive (<5%) included OPTN 3 lesions, MELD score, gender, and Child-Pugh score.
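The per-feature percentages above correspond to impurity-based feature importances of a CART-style tree, which can be sketched with scikit-learn. The feature names, synthetic data, and outcome rule below are placeholders, not the registry data or the study's model.

```python
# Hypothetical sketch: rank features by impurity-based importance with a
# CART decision tree (scikit-learn). Data and outcome rule are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
features = ["largest_lesion_size", "segments_involved", "age", "meld_score"]
X = rng.normal(size=(200, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by first two

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
ranking = sorted(zip(features, tree.feature_importances_),
                 key=lambda t: -t[1])
```

`feature_importances_` is normalized to sum to 1, so multiplying by 100 gives percentages directly comparable to those reported above.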
Specific imaging-derived and clinical parameters show a high predictive impact on multidisciplinary tumor board recommendations. Machine learning may represent a viable tool for identifying important trends in tumor board recommendations, increasing awareness of problem areas, and improving the objectivity and transparency of decisions. It could also expedite crucial decision-making. Continuous accumulation of data improves prediction confidence and accuracy without overfitting. Implementation of this type of model can help interventional oncologists predict the likelihood that a specific recommendation would be supported by an expert HCC tumor board. Moreover, additional features may emerge as the dataset expands, and depending on institutional preferences for specific treatment modalities, expertise, and/or patient population.
Artificial Intelligence (AI) to support the medical decision-making process has long been both an interest and concern of physicians and the public. However, the introduction of open source software, supercomputers, and a variety of industry innovations has accelerated the progress of the development of AI in clinical decision support systems. This article summarizes the current trends and challenges in the medical field, and presents how AI can improve healthcare systems by increasing efficiency and decreasing costs. At the same time, it emphasizes the centrality of the role of physicians in utilizing AI as a tool to supplement their decisions as they provide patient-oriented care.
Purpose: To compare standard of care and reduced dose (RD) abdominal computed tomography (CT) images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques.
Materials and Methods: In an Institutional Review Board-approved, prospective clinical study, 28 patients (mean age 59 ± 13 years), undergoing clinically indicated routine abdominal CT on a 64-channel multidetector CT scanner, gave written informed consent for acquisition of an additional RD (<1 millisievert) abdominal CT series. Sinogram data of the RD series were reconstructed with FBP, ASIR, and MBIR and compared with FBP images of the standard dose abdominal CT. Two radiologists performed randomized, independent, and blinded comparison for lesion detection, lesion margin, visibility of normal structures, and diagnostic confidence.
Results: Mean CT dose index volume was 10 ± 3.4 mGy and 1.3 ± 0.3 mGy for standard and RD CT, respectively. There were 73 "true positive" lesions detected on standard of care CT. Nine lesions (<8 mm in size) were missed on RD abdominal CT images, including liver lesions, liver cysts, kidney cysts, and a paracolonic abscess. These lesions were missed regardless of patient size and the iterative reconstruction technique used for reconstruction of the RD data sets. The visibility of lesion margins was suboptimal in 23 of 28 patients with RD FBP, 15 of 28 with RD ASIR, and 14 of 28 with RD MBIR, compared with standard of care FBP images (P < 0.001). Diagnostic confidence for the assessment of lesions on RD images was suboptimal in most patients regardless of iterative reconstruction technique.
Conclusions: Clinically significant lesions (<8 mm) can be missed on abdominal CT examinations acquired at a CT dose index volume of 1.3 mGy, regardless of patient size and reconstruction technique (FBP, ASIR, and MBIR).
The use of Convolutional Neural Networks (CNNs) in natural image classification systems has produced very impressive results. Combined with the inherent nature of medical images that makes them well suited to deep learning, further application of such systems to medical image classification holds much promise. However, the usefulness and potential impact of such a system can be completely negated if it does not reach a target accuracy. In this paper, we present a study on determining the optimum size of the training data set necessary to achieve high classification accuracy with low variance in medical image classification systems. The CNN was applied to classify axial Computed Tomography (CT) images into six anatomical classes. We trained the CNN using six different training data set sizes (5, 10, 20, 50, 100, and 200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts General Hospital (MGH) Picture Archiving and Communication System (PACS). Using these data, we employ the learning curve approach to predict classification accuracy at a given training sample size. We present a general methodology for determining the training data set size necessary to achieve a given target classification accuracy, which can be easily applied to other problems within such systems.
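A common form of the learning curve approach fits an inverse power law to observed error rates and extrapolates to larger training sets. The sketch below, in plain numpy, illustrates the idea; the error values are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical learning-curve fit: err(n) = b * n**(-c), extrapolated to
# predict accuracy at larger training-set sizes. Error values are synthetic.
sizes = np.array([5, 10, 20, 50, 100, 200])
err = np.array([0.42, 0.31, 0.22, 0.14, 0.10, 0.07])

# Linear fit in log-log space: log err = log b - c * log n
slope, intercept = np.polyfit(np.log(sizes), np.log(err), 1)
b, c = np.exp(intercept), -slope

def predicted_accuracy(n):
    """Extrapolated classification accuracy at training-set size n."""
    return 1.0 - b * n ** (-c)
```

Inverting the fitted curve then gives the training-set size needed to reach a chosen target accuracy.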
To assess lesion detection and image quality parameters of a knowledge-based Iterative Model Reconstruction (IMR) in reduced dose (RD) abdominal CT examinations.
Materials and methods
This IRB-approved prospective study included 82 abdominal CT examinations performed for 41 consecutive patients (mean age, 62 ± 12 years; F:M 28:13) who underwent a RD CT (SSDE, 1.5 mGy ± 0.4 [∼0.9 mSv] at 120 kV with 17–20 mAs/slice) immediately after their standard dose (SD) CT exam (10 mGy ± 3 [∼6 mSv] at 120 kV with automatic exposure control) on a 256-slice MDCT scanner (iCT, Philips Healthcare). SD data were reconstructed using filtered back projection (FBP). RD data were reconstructed with FBP and IMR. Four radiologists used a five-point scale (1 = image quality better than SD CT; 5 = unacceptable image quality) to assess both subjective image quality and artifacts. Lesions were first detected on RD FBP images. RD IMR and RD FBP images were then compared side-by-side to SD FBP images in an independent, randomized, and blinded fashion. Friedman's test and the intraclass correlation coefficient were used for data analysis. Objective measurements included image noise and attenuation, as well as noise spectral density (NSD) curves to assess noise in the frequency domain. In addition, a low-contrast phantom study was performed.
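An NSD curve of the kind used above can be computed from a uniform-region ROI by taking the 2-D power spectrum of the noise and averaging it radially. The numpy sketch below shows one common way to do this; normalization conventions vary, and the pixel size is a placeholder rather than the study's acquisition parameter.

```python
import numpy as np

def noise_spectral_density(roi, pixel_mm=0.7):
    """Radially averaged noise power spectrum of a square, uniform-region ROI.
    Returns (spatial frequency in 1/mm, noise power per radial bin)."""
    n = roi.shape[0]                        # assume a square ROI
    noise = roi - roi.mean()                # remove the DC component
    nps = np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
    nps *= pixel_mm ** 2 / (n * n)          # normalize (convention-dependent)
    yy, xx = np.indices(nps.shape) - n // 2
    radius = np.hypot(xx, yy).astype(int)   # radial bin index per pixel
    power = (np.bincount(radius.ravel(), nps.ravel())
             / np.bincount(radius.ravel()))
    freq = np.arange(power.size) / (n * pixel_mm)
    return freq, power
```

Plotting `power` against `freq` for matched ROIs on FBP and IMR reconstructions gives the frequency-domain noise comparison described above.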
All true lesions (ranging from 32 to 55) on SD FBP images were detected on RD IMR images across all patients. RD FBP images were unacceptable for subjective image quality. Subjective ratings showed acceptable image quality for IMR for organ margins, soft-tissue structures, and retroperitoneal lymphadenopathy, compared to RD FBP, in patients with a BMI ≤25 kg/m2 (median range, 2–3). Irrespective of patient BMI, subjective ratings for hepatic/renal cysts, stones, and colonic diverticula were significantly better with RD IMR images (P < 0.01). Objective image noise for RD FBP was 57–66% higher, and for RD IMR was 8–56% lower, than that for SD FBP (P < 0.01). NSD showed significantly lower noise in the frequency domain with IMR in all patients compared to FBP.
IMR considerably improved both objective and subjective image quality parameters of RD abdominal CT images compared to FBP in patients with BMI less than or equal to 25 kg/m2.
Purpose: To assess lesion detection and image quality of ultralow-dose (ULD) abdominal computed tomography (CT) reconstructed with filtered back projection (FBP) and 2 iterative reconstruction techniques: hybrid-based iDose and image-based SafeCT.
Materials and Methods: In this institutional review board-approved, ongoing prospective clinical study, 41 adult patients provided written informed consent for an additional ULD abdominal CT examination immediately after a standard dose (SD) CT exam on a 256-slice multidetector CT scanner (iCT, Philips Healthcare). The SD examination (size-specific dose estimate, 10 ± 3 mGy) was performed at 120 kV with automatic exposure control and reconstructed with FBP. The ULD examination (1.5 ± 0.4 mGy) was performed at 120 kV and a fixed tube current of 17 to 20 mAs/slice to achieve the ULD radiation dose, with the rest of the scan parameters the same as the SD examination. The ULD data were reconstructed with (a) FBP, (b) iDose, and (c) SafeCT. Lesions were detected on the ULD FBP series and compared with the SD FBP "reference standard" series. True lesions, pseudolesions, and missed lesions were recorded. Four abdominal radiologists independently and blindly assessed subjective image quality. Objective image quality assessment included image noise calculation and noise spectral density plots.
Results: All true lesions (n = 52: liver metastases, renal cysts, diverticulosis) in SD FBP images were detected in ULD images. Although there were no missed lesions or pseudolesions on ULD iDose and ULD SafeCT images, the appearance of small low-contrast hepatic lesions was suboptimal. The ULD FBP images were unacceptable across all patients for both lesion detection and image quality. In patients with a body mass index (BMI) of 25 kg/m2 or less, ULD iDose and ULD SafeCT images were acceptable, with image quality close to SD FBP for both normal and abnormal abdominal and pelvic structures. With increasing BMI, the image quality of ULD images was deemed unacceptable due to photon starvation. Evaluation of kidney stones with ULD iDose/SafeCT images was found acceptable regardless of patient size. Image noise levels were significantly lower in ULD iDose and ULD SafeCT images compared to ULD FBP (P < 0.01).
Conclusions: Preliminary results show that ULD abdominal CT reconstructed with iterative reconstruction techniques is achievable in smaller patients (BMI ≤ 25 kg/m2) but remains a challenge for overweight to obese patients. Lesion detection is similar in full-dose SD FBP and ULD iDose/SafeCT images, with suboptimal visibility of low-contrast lesions in ULD images.