Publications

2022
Kim D, Chung JW, Choi J, Succi MD, Conklin J, Figueiro Longo MG, Ackman JB, Little BP, Petranovic M, Kalra MK, et al. Accurate auto-labeling of chest X-ray images based on quantitative similarity to an explainable AI model. Nature Communications. 2022;13:1867.
The inability to accurately and efficiently label large, open-access medical imaging datasets limits the widespread implementation of artificial intelligence models in healthcare. There have been few attempts, however, to automate the annotation of such public databases; prior efforts have instead relied on labor-intensive, manual labeling of subsets of these datasets to train new models. In this study, we describe a method for standardized, automated labeling based on similarity to a previously validated, explainable AI (xAI) model-derived atlas, for which the user can specify a quantitative threshold for a desired level of accuracy (the probability-of-similarity, or pSim, metric). We show that our xAI model, by calculating the pSim values for each clinical output label based on comparison to its training-set-derived reference atlas, can automatically label external datasets to a user-selected, high level of accuracy, equaling or exceeding that of human experts. We additionally show that, by fine-tuning the original model using the automatically labeled exams for retraining, performance can be preserved or improved, resulting in a highly accurate, more generalized model.
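As a rough illustration of the thresholded auto-labeling idea (a sketch only; the paper's exact pSim definition and atlas construction are not reproduced here, and the embedding structure, similarity proxy, and all names below are assumptions):

```python
import numpy as np

def psim_autolabel(exam_embedding, atlas, threshold=0.95):
    """Assign labels whose atlas similarity clears a user-chosen threshold.

    `atlas` maps each label to an array of reference embeddings drawn from a
    validated model's training set (hypothetical structure). The pSim proxy
    used here is mean cosine similarity, not the paper's actual metric.
    """
    labels = {}
    v = exam_embedding / np.linalg.norm(exam_embedding)
    for label, refs in atlas.items():
        refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
        psim = float(np.mean(refs @ v))  # stand-in for probability-of-similarity
        if psim >= threshold:
            labels[label] = psim
    return labels

# Usage with random stand-in embeddings:
rng = np.random.default_rng(0)
atlas = {"effusion": rng.normal(size=(100, 128)),
         "normal": rng.normal(size=(100, 128))}
print(psim_autolabel(rng.normal(size=128), atlas, threshold=0.0))
```

Raising the threshold trades coverage for label accuracy, which is the user-selectable accuracy knob the abstract describes.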
Choi J, Jeon S, Kim D, Chua M, Do S. A scalable artificial intelligence platform that automatically finds copy number variations (CNVs) in journal articles and transforms them into a database: CNV extraction, transformation, and loading AI (CNV-ETLAI). Computers in Biology and Medicine. 2022;144.

Background

Although copy number variations (CNVs) are infrequent, each anomaly is unique, and multiple CNVs can appear simultaneously. Growing evidence suggests that CNVs contribute to a wide range of diseases. When CNVs are detected, assessment of their clinical significance requires a thorough literature review. This process can be extremely time-consuming and may delay disease diagnosis. Therefore, we have developed CNV Extraction, Transformation, and Loading Artificial Intelligence (CNV-ETLAI), an innovative tool that allows experts to classify and interpret CNVs accurately and efficiently.

Methods

We combined text, table, and image processing algorithms to develop an artificial intelligence platform that automatically extracts, transforms, and organizes CNV information into a database. To validate CNV-ETLAI, we compared its performance to ground truth datasets labeled by a human expert. In addition, we analyzed the CNV data, which was collected using CNV-ETLAI via a crowdsourcing approach.
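CNV-ETLAI's actual text, table, and image algorithms are not described in enough detail here to reproduce; as a loose illustration of the extract-transform-load pattern the abstract names, the sketch below pulls ISCN-style CNV strings out of free text with a regular expression and loads them into a small SQLite table. The pattern, field names, and schema are all assumptions.

```python
import re
import sqlite3

# Toy pattern for mentions such as "del(22)(q11.2)" or "dup(X)(q28)".
CNV_PATTERN = re.compile(
    r"(?P<type>del|dup)\s*\(\s*(?P<chrom>\d{1,2}|X|Y)\s*\)\s*\((?P<band>[pq][\d.]+)\)",
    re.IGNORECASE,
)

def extract_cnvs(text):
    """Extract step: find CNV mentions in free text (text-only stand-in for
    CNV-ETLAI's combined text/table/image pipeline)."""
    return [m.groupdict() for m in CNV_PATTERN.finditer(text)]

def load(records, db=":memory:"):
    """Transform + load step: store normalized fields in a relational table."""
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS cnv (type TEXT, chrom TEXT, band TEXT)")
    con.executemany("INSERT INTO cnv VALUES (:type, :chrom, :band)", records)
    con.commit()
    return con

con = load(extract_cnvs("Patient carries del(22)(q11.2) and dup(X)(q28)."))
print(con.execute("SELECT * FROM cnv").fetchall())
```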

Results

In comparison to a human expert, CNV-ETLAI improved CNV detection accuracy by 4% and performed the analysis 60 times faster. This performance can improve even further as usage increases and the CNV-ETLAI database grows. In total, 5,800 CNVs from 2,313 journal articles were collected. Total CNV frequency for the whole chromosome was highest for chromosome X, whereas CNV frequency per 1 Mb of genomic length was highest for chromosome 22.

Conclusions

We have developed, tested, and shared CNV-ETLAI for research and clinical purposes (https://lmic.mgh.harvard.edu/CNV-ETLAI). Use of CNV-ETLAI is expected to ease and accelerate diagnostic classification and interpretation of CNVs.

2021
Fourman M, Ghaednia H, Lans A, Lloyd S, Sweeney A, Detels K, Dijkstra H, Oosterhoff JHF, Ramsey DC, Do S, et al. Applications of augmented and virtual reality in spine surgery and education: A review. Seminars in Spine Surgery. 2021;33(2):100875.
As the complexity and minimally invasive nature of spine surgery continues to grow, so must the surgeon's ability to “view” and interact with the surgical field. Augmented reality (AR) provides a digital overlay of a real-world environment, helping the surgeon to visualize deep anatomic landmarks and surgical trajectory, such as for an osteotomy cut or pedicle screw. In contrast, virtual reality (VR) is an entirely digital environment that can be used for simulated surgeries or technical trainings without the need for a physical patient. Here we review the current clinical applications of AR and VR in spine surgery and education.
2020
Do S, Song KD, Chung JW. Basics of Deep Learning: A Radiologist's Guide to Understanding Published Radiology Articles on Deep Learning. Korean Journal of Radiology. 2020;21(1):33-41.
Artificial intelligence has been applied to many industries, including medicine. Among the various techniques in artificial intelligence, deep learning has attained the highest popularity in medical imaging in recent years. Many articles on deep learning have been published in radiologic journals. However, radiologists may have difficulty in understanding and interpreting these studies because the study methods of deep learning differ from those of traditional radiology. This review article aims to explain the concepts and terms that are frequently used in deep learning radiology articles, facilitating general radiologists' understanding.
2019
Song KD, Kim M, Do S. The Latest Trends in the Use of Deep Learning in Radiology Illustrated Through the Stages of Deep Learning Algorithm Development. Journal of the Korean Society of Radiology. 2019;80(2):202-212.
Recently, considerable progress has been made in interpreting perceptual information through artificial intelligence, allowing better interpretation of highly complex data by machines. Furthermore, the applications of artificial intelligence, represented by deep learning technology, to the fields of medical and biomedical research are increasing exponentially. In this article, we will explain the stages of deep learning algorithm development in the field of medical imaging, namely topic selection, data collection, data exploration and refinement, algorithm development, algorithm evaluation, and clinical application; we will also discuss the latest trends for each stage.
Yune S, Lee H, Kim M, Tajmir SH, Gee MS, Do S. Beyond Human Perception: Sexual Dimorphism in Hand and Wrist Radiographs Is Discernible by a Deep Learning Model. Journal of Digital Imaging. 2019;32(4).
Despite the well-established impact of sex and sex hormones on bone structure and density, there has been limited description of sexual dimorphism in the hand and wrist in the literature. We developed a deep convolutional neural network (CNN) model to predict sex based on hand radiographs of children and adults aged between 5 and 70 years. Of the 1531 radiographs tested, the algorithm predicted sex correctly in 95.9% (κ = 0.92) of the cases. Two human radiologists achieved 58% (κ = 0.15) and 46% (κ = −0.07) accuracy. The class activation maps (CAM) showed that the model mostly focused on the 2nd and 3rd metacarpal base or thumb sesamoid in women, and the distal radioulnar joint, distal radial physis and epiphysis, or 3rd metacarpophalangeal joint in men. The radiologists reviewed 70 cases (35 female and 35 male) labeled with sex along with heat maps generated by CAM, but they could not find any patterns that distinguish the two sexes. A small sample of patients (n = 44) with sexual developmental disorders or transgender identity was selected for a preliminary exploration of the model's application. The model prediction agreed with phenotypic sex in only 77.8% (κ = 0.54) of these cases. To the best of our knowledge, this is the first study to demonstrate a machine learning model performing a task that human experts could not.
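The heat maps described here follow the standard class-activation-map construction (weighting the final convolutional feature maps by the classifier weights of the predicted class); a minimal numpy version, with toy shapes standing in for this model's final layers:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Standard CAM: weight the last conv block's feature maps by the final
    fully connected layer's weights for one class.

    feature_maps: (C, H, W) activations before global average pooling
    fc_weights:   (num_classes, C) weights of the final linear layer
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for heat-map overlay

# Toy shapes (hypothetical) standing in for the sex-prediction CNN:
fmaps = np.random.rand(512, 7, 7)
weights = np.random.rand(2, 512)  # two classes: female, male
heatmap = class_activation_map(fmaps, weights, class_idx=1)
print(heatmap.shape)  # (7, 7); upsampled to radiograph size in practice
```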
Parakh A, Lee H, Lee JH, Elsner BH, Sahani DV, Do S. Urinary Stone Detection on CT Images Using Deep Convolutional Neural Networks: Evaluation of Model Performance and Generalization. Radiology: Artificial Intelligence. 2019;1(4).

Purpose

To investigate the diagnostic accuracy of a cascading convolutional neural network (CNN) for urinary stone detection on unenhanced CT images and to evaluate the performance of pretrained models enriched with labeled CT images across different scanners.

Materials and Methods

This HIPAA-compliant, institutional review board–approved, retrospective clinical study used unenhanced abdominopelvic CT scans from 535 adults suspected of having urolithiasis. The scans were obtained on two scanners (scanner 1 [hereafter S1] and scanner 2 [hereafter S2]). A radiologist reviewed clinical reports and labeled cases for determination of the reference standard. Stones were present on 279 (S1, 131; S2, 148) and absent on 256 (S1, 158; S2, 98) scans. One hundred scans (50 from each scanner) were randomly reserved as the test dataset, and the rest were used for developing a cascade of two CNNs: the first CNN identified the extent of the urinary tract, and the second CNN detected the presence of stones. Nine model variations were developed by combining different training data sources (S1, S2, or both [hereafter SB]) with pretrained CNNs (ImageNet, GrayNet) or without pretraining (Random). First, models were compared for generalizability at the section level. Second, models were assessed by using area under the receiver operating characteristic curve (AUC) and accuracy at the patient level with the test dataset from both scanners (n = 100).
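As a schematic of the two-stage cascade described above (the model internals, thresholds, and patient-level aggregation rule are all assumptions, not the study's implementation):

```python
def cascade_stone_detect(ct_slices, urinary_tract_cnn, stone_cnn,
                         tract_thresh=0.5, stone_thresh=0.5):
    """Two-stage cascade sketch: CNN 1 gates slices to the urinary tract,
    CNN 2 flags stones only on the gated slices. The patient-level call is
    positive if any retained slice scores above the stone threshold."""
    tract_slices = [s for s in ct_slices if urinary_tract_cnn(s) >= tract_thresh]
    stone_scores = [stone_cnn(s) for s in tract_slices]
    return max(stone_scores, default=0.0) >= stone_thresh

# Usage with stand-in scorers instead of trained networks:
is_positive = cascade_stone_detect(
    ct_slices=range(10),
    urinary_tract_cnn=lambda s: 1.0 if 3 <= s <= 7 else 0.0,
    stone_cnn=lambda s: 0.9 if s == 5 else 0.1,
)
print(is_positive)  # True
```

Gating with the first network is what shrinks the search space, so the stone classifier never scores sections outside the urinary tract.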

Results

The GrayNet-pretrained model showed higher classification accuracy than did ImageNet-pretrained or Random-initialized models when tested by using data from the same or different scanners at the section level. At the patient level, the AUC for stone detection was 0.92–0.95, depending on the model. Accuracy of GrayNet-SB (95%) was higher than that of ImageNet-SB (91%) and Random-SB (88%). For stones larger than 4 mm, all models showed similar performance (false-negative results: two of 34). For stones smaller than 4 mm, the number of false-negative results for GrayNet-SB, ImageNet-SB, and Random-SB were one of 16, three of 16, and five of 16, respectively. GrayNet-SB identified stones in all 22 test cases that had obstructive uropathy.

Conclusion

A cascading model of CNNs can detect urinary tract stones on unenhanced CT scans with a high accuracy (AUC, 0.954). Performance and generalization of CNNs across scanners can be enhanced by using transfer learning with datasets enriched with labeled medical images.

Lee H, Huang C, Yune S, Tajmir SH, Kim M, Do S. Machine Friendly Machine Learning: Interpretation of Computed Tomography Without Image Reconstruction. Scientific Reports. 2019;9(1):1-9.
Recent advancements in deep learning for automated image processing and classification have accelerated many new applications for medical image analysis. However, most deep learning algorithms have been developed using reconstructed, human-interpretable medical images. While image reconstruction from raw sensor data is required for the creation of medical images, the reconstruction process only uses a partial representation of all the data acquired. Here, we report the development of a system to directly process raw computed tomography (CT) data in sinogram-space, bypassing the intermediary step of image reconstruction. Two classification tasks were evaluated for their feasibility of sinogram-space machine learning: body region identification and intracranial hemorrhage (ICH) detection. Our proposed SinoNet, a convolutional neural network optimized for interpreting sinograms, performed favorably compared to conventional reconstructed image-space-based systems for both tasks, regardless of scanning geometries in terms of projections or detectors. Further, SinoNet performed significantly better when using sparsely sampled sinograms than conventional networks operating in image-space. As a result, sinogram-space algorithms could be used in field settings for triage (presence of ICH), especially where low radiation dose is desired. These findings also demonstrate another strength of deep learning: it can analyze and interpret sinograms, which are virtually impossible for human experts to read.
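For readers who want to experiment with sinogram-space inputs, the sketch below generates a stand-in sinogram from an image slice with a forward Radon transform (scikit-image) and simulates sparse angular sampling; SinoNet itself and the study's scanner geometries are not reproduced, and the shapes and sampling factor are arbitrary choices:

```python
import numpy as np
from skimage.transform import radon  # pip install scikit-image

# A forward Radon transform of a CT slice is a reasonable stand-in for raw
# projection data when true scanner sinograms are unavailable.
image = np.zeros((128, 128))
image[40:60, 50:90] = 1.0  # synthetic high-attenuation region

angles = np.linspace(0.0, 180.0, 360, endpoint=False)  # dense sampling
sinogram = radon(image, theta=angles)                  # (detector bins, projections)

sparse = sinogram[:, ::8]  # sparsely sampled view, as in a low-dose setting
print(sinogram.shape, sparse.shape)
```

A network trained on arrays like `sparse` sees the data the abstract calls "virtually impossible for human experts" to read, since no reconstruction step is applied.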
Tajmir SH, Lee H, Shailam R, Gale HI, Nguyen JC, Westra SJ, Lim R, Yune S, Gee MS, Do S. Artificial intelligence-assisted interpretation of bone age radiographs improves accuracy and decreases variability. Skeletal Radiology. 2019;48:275-283.

Objective

Radiographic bone age assessment (BAA) is used in the evaluation of pediatric endocrine and metabolic disorders. We previously developed an automated artificial intelligence (AI) deep learning algorithm to perform BAA using convolutional neural networks. We compared the BAA performance of a cohort of pediatric radiologists with and without AI assistance.

Materials and methods

Six board-certified, subspecialty-trained pediatric radiologists interpreted 280 age- and gender-matched bone age radiographs ranging from 5 to 18 years. Three of those radiologists then performed BAA with AI assistance. Bone age accuracy and root mean squared error (RMSE) were used as measures of accuracy. The intraclass correlation coefficient (ICC) was used to evaluate inter-rater variation.

Results

AI BAA accuracy was 68.2% overall and 98.6% within 1 year, and the mean six-reader cohort accuracy was 63.6% overall and 97.4% within 1 year. AI RMSE was 0.601 years, while mean single-reader RMSE was 0.661 years. Pooled RMSE decreased from 0.661 to 0.508 years with AI assistance, with each reader's RMSE decreasing individually. The ICC was 0.9914 without AI and 0.9951 with AI.
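The two accuracy measures reported here are straightforward to compute; a minimal sketch of the metric definitions only, with made-up numbers:

```python
import numpy as np

def bone_age_metrics(predicted, reference, tolerance_years=1.0):
    """Accuracy within a tolerance band and RMSE, the two measures above."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    err = predicted - reference
    within = float(np.mean(np.abs(err) <= tolerance_years))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return within, rmse

# Toy predictions vs. reference bone ages (years):
within1, rmse = bone_age_metrics([10.0, 12.5, 8.0], [10.5, 12.0, 9.5])
print(f"within 1 yr: {within1:.2%}, RMSE: {rmse:.3f} yr")
```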

Conclusions

AI improves radiologists' bone age assessment by increasing accuracy and decreasing variability and RMSE. The use of AI by radiologists improves performance compared to AI alone, a radiologist alone, or a pooled cohort of experts. This suggests that AI may optimally be utilized as an adjunct to radiologist interpretation of imaging studies to improve performance.

Sim Y, Chung MJ, Kotter E, Yune S, Kim M, Do S, Han K, Kim H, Yang S, Lee D-jae, et al. Deep Convolutional Neural Network–based Software Improves Radiologist Detection of Malignant Lung Nodules on Chest Radiographs. Radiology. 2019;294(1):199-209.

Background

Multicenter studies are required to validate the added benefit of using deep convolutional neural network (DCNN) software for detecting malignant pulmonary nodules on chest radiographs.

Purpose

To compare the performance of radiologists in detecting malignant pulmonary nodules on chest radiographs when assisted by deep learning–based DCNN software with that of radiologists or DCNN software alone in a multicenter setting.

Materials and Methods

Investigators at four medical centers retrospectively identified 600 lung cancer–containing chest radiographs and 200 normal chest radiographs. Each radiograph with a lung cancer had at least one malignant nodule confirmed by CT and pathologic examination. Twelve radiologists from the four centers independently analyzed the chest radiographs and marked regions of interest. Commercially available deep learning–based computer-aided detection software separately trained, tested, and validated with 19 330 radiographs was used to find suspicious nodules. The radiologists then reviewed the images with the assistance of DCNN software. The sensitivity and number of false-positive findings per image of DCNN software, radiologists alone, and radiologists with the use of DCNN software were analyzed by using logistic regression and Poisson regression.

Results

The average sensitivity of radiologists improved (from 65.1% [1375 of 2112; 95% confidence interval {CI}: 62.0%, 68.1%] to 70.3% [1484 of 2112; 95% CI: 67.2%, 73.1%], P < .001) and the number of false-positive findings per radiograph declined (from 0.2 [488 of 2400; 95% CI: 0.18, 0.22] to 0.18 [422 of 2400; 95% CI: 0.16, 0.2], P < .001) when the radiologists re-reviewed radiographs with the DCNN software. For the 12 radiologists in this study, 104 of 2400 radiographs were positively changed (from false-negative to true-positive or from false-positive to true-negative) using the DCNN, while 56 of 2400 radiographs were changed negatively.
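A counting sketch of the two endpoints (per-reader sensitivity and false positives per image); the study's logistic and Poisson regression modeling is not reproduced, and the data structures below are assumptions for illustration:

```python
def reader_metrics(marks, truths):
    """Sensitivity and false positives per image from per-radiograph sets.

    marks:  list of sets of reader-marked nodule IDs per radiograph (marks
            matching no true nodule count as false positives)
    truths: list of sets of true nodule IDs per radiograph (empty if normal)
    """
    tp = sum(len(m & t) for m, t in zip(marks, truths))
    fp = sum(len(m - t) for m, t in zip(marks, truths))
    total_lesions = sum(len(t) for t in truths)
    return tp / total_lesions, fp / len(marks)

# Three toy radiographs: one hit, one false mark, one miss.
sens, fppi = reader_metrics(
    marks=[{"n1"}, {"x"}, set()],
    truths=[{"n1"}, set(), {"n2"}],
)
print(f"sensitivity {sens:.2f}, FPs/image {fppi:.2f}")
```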

Conclusion

Radiologists had better performance with deep convolutional network software for the detection of malignant pulmonary nodules on chest radiographs than without.

2018
Lee H, Mansouri M, Tajmir S, Lev MH, Do S. A Deep-Learning System for Fully-Automated Peripherally Inserted Central Catheter (PICC) Tip Detection. Journal of Digital Imaging. 2018;31:393-402.

A peripherally inserted central catheter (PICC) is a thin catheter that is inserted via arm veins and threaded near the heart, providing intravenous access. The final catheter tip position is always confirmed on a chest radiograph (CXR) immediately after insertion, since malpositioned PICCs can cause potentially life-threatening complications. Although radiologists interpret PICC tip location with high accuracy, delays in interpretation can be significant. In this study, we proposed a fully automated deep-learning system built on a cascading segmentation architecture containing two fully convolutional neural networks for detecting a PICC line and its tip location. A preprocessing module performed image quality and dimension normalization, and a post-processing module found the PICC tip accurately by pruning false positives. Our best model, trained on 400 training cases and selectively tuned on 50 validation cases, obtained absolute distances from ground truth with a mean of 3.10 mm, a standard deviation of 2.03 mm, and a root mean square error (RMSE) of 3.71 mm on 150 held-out test cases. This system could help speed confirmation of PICC position and could further be generalized to other types of vascular access and therapeutic support devices.
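As an illustration of the post-processing idea (the system's actual pruning rules are more involved, and the pixel spacing and sizes here are assumptions), the sketch below keeps the largest connected component of a segmentation mask and reports its most inferior pixel as the tip, in millimeters:

```python
import numpy as np
from scipy.ndimage import label  # pip install scipy

def picc_tip_from_mask(mask, pixel_spacing_mm=0.14):
    """Prune false positives by keeping the largest connected component,
    then take its most caudal (lowest) pixel as the catheter tip."""
    components, n = label(mask > 0)
    if n == 0:
        return None
    sizes = np.bincount(components.ravel())[1:]        # component sizes
    largest = components == (1 + int(np.argmax(sizes)))
    rows, cols = np.nonzero(largest)
    i = int(np.argmax(rows))                           # most inferior pixel
    return rows[i] * pixel_spacing_mm, cols[i] * pixel_spacing_mm

mask = np.zeros((2048, 2048), dtype=np.uint8)
mask[100:900, 1000] = 1   # simulated catheter course
mask[50:60, 200] = 1      # small false-positive blob, pruned by size
print(picc_tip_from_mask(mask))  # (125.86, 140.0) in mm
```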

Lee H, Yune S, Mansouri M, Kim M, Tajmir SH, Guerrier CE, Ebert SA, Pomerantz SR, Romero JM, Kamalian S, et al. An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets. Nature Biomedical Engineering. 2018;3:173-182.
Owing to improvements in image recognition via deep learning, machine-learning algorithms could eventually be applied to automated medical diagnoses that can guide clinical decision-making. However, these algorithms remain a ‘black box’ in terms of how they generate the predictions from the input data. Also, high-performance deep learning requires large, high-quality training datasets. Here, we report the development of an understandable deep-learning system that detects acute intracranial haemorrhage (ICH) and classifies five ICH subtypes from unenhanced head computed-tomography scans. By using a dataset of only 904 cases for algorithm training, the system achieved a performance similar to that of expert radiologists in two independent test datasets containing 200 cases (sensitivity of 98% and specificity of 95%) and 196 cases (sensitivity of 92% and specificity of 95%). The system includes an attention map and a prediction basis retrieved from training data to enhance explainability, and an iterative process that mimics the workflow of radiologists. Our approach to algorithm development can facilitate the development of deep-learning systems for a variety of clinical applications and accelerate their adoption into clinical practice.
Thrall JH, Li X, Li Q, Cruz C, Do S, Dreyer K, Brink J. Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success. Journal of the American College of Radiology. 2018;15(3):504-508.
Worldwide interest in artificial intelligence (AI) applications, including imaging, is high and growing rapidly, fueled by availability of large datasets (“big data”), substantial advances in computing power, and new deep-learning algorithms. Apart from developing new AI methods per se, there are many opportunities and challenges for the imaging community, including the development of a common nomenclature, better ways to share image data, and standards for validating AI program use across different imaging platforms and patient populations. AI surveillance programs may help radiologists prioritize work lists by identifying suspicious or positive cases for early review. AI programs can be used to extract “radiomic” information from images not discernible by visual inspection, potentially increasing the diagnostic and prognostic value derived from image datasets. Predictions have been made that suggest AI will put radiologists out of business. This issue has been overstated, and it is much more likely that radiologists will beneficially incorporate AI methods into their practices. Current limitations in availability of technical expertise and even computing power will be resolved over time and can also be addressed by remote access solutions. Success for AI in imaging will be measured by value created: increased diagnostic certainty, faster turnaround, better outcomes for patients, and better quality of work life for radiologists. AI offers a new and promising set of methods for analyzing image data. Radiologists will explore these new pathways and are likely to play a leading role in medical applications of AI.
Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, Geis JR, Pandharipande PV, Brink JA, Dreyer KJ. Current Applications and Future Impact of Machine Learning in Radiology. Radiology. 2018;288(2).
Recent advances and future perspectives of machine learning techniques offer promising applications in medical imaging. Machine learning has the potential to improve different steps of the radiology workflow including order scheduling and triage, clinical decision support systems, detection and interpretation of findings, postprocessing and dose estimation, examination quality control, and radiology reporting. In this article, the authors review examples of current applications of machine learning and artificial intelligence techniques in diagnostic radiology. In addition, the future impact and natural extension of these techniques in radiology practice are discussed.
2017
Cho J, Lee E, Lee H, Liu B, Li X, Tajmir S, Sahani D, Do S. Machine Learning Powered Automatic Organ Classification for Patient Specific Organ Dose Estimation, in Society for Imaging Informatics in Medicine. Vol 2017; 2017.

Body part recognition is essential in automatic medical image analysis, as it is a prerequisite step for anatomy identification and organ segmentation [i],[ii]. Accurate body part classification facilitates organ detection and segmentation by reducing the search range for an organ of interest. As a result, we can quickly and efficiently identify an organ of interest with higher accuracy than is possible with the current text-based body part information in DICOM (Digital Imaging and Communications in Medicine) headers [iii].

Multiple techniques have been developed using multi-class random regression and decision forests to classify multiple anatomical structures, ranging from 6 to 10 organs, on computed tomographic (CT) scans [iv],[v]. These classifiers can discriminate even between similar structures such as the aortic arch and heart. However, these prior works focus on general anatomical body part classification; organ-specific classification has not been studied for applications such as organ dose estimation.

Accordingly, we present a machine learning powered automatic organ classifier for CT datasets, consisting of a deep convolutional neural network (CNN) followed by an organ dose calculation. We labeled 16 different organs on axial views of CT images. A 22-layer deep CNN was trained and validated on a dataset of 646 CT scans using the NVIDIA Deep Learning GPU Training System (DIGITS). The resulting organ classification was automatically mapped to the slab numbers of a mathematical hermaphrodite phantom to determine the scan range for the ImPACT CT dose calculator [vi]. This technique can be used for patient-specific organ dose estimation, since the locations and sizes of organs are calculated independently for each patient rather than relying on simulation-based methods.
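A toy sketch of the organ-to-slab mapping step; the slab indices and organ labels below are hypothetical, since the abstract does not give the phantom's actual mapping:

```python
# Hypothetical mapping from classified organ labels to slab numbers of the
# mathematical phantom used by the dose calculator.
ORGAN_TO_SLABS = {
    "lung":    range(15, 25),
    "liver":   range(24, 31),
    "kidney":  range(27, 33),
    "bladder": range(40, 44),
}

def scan_range(classified_organs):
    """Convert per-slice organ predictions into the contiguous slab range
    handed to the dose calculator."""
    slabs = sorted(s for organ in classified_organs for s in ORGAN_TO_SLABS[organ])
    return slabs[0], slabs[-1]

print(scan_range(["liver", "kidney"]))  # (24, 32)
```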

[i]  Yan, Zhennan, Yiqiang Zhan, Zhigang Peng, Shu Liao, Yoshihisa Shinagawa, Shaoting Zhang, Dimitris N. Metaxas, and Xiang Sean Zhou. "Multi-Instance Deep Learning: Discover Discriminative Local Anatomies for Bodypart Recognition." IEEE transactions on medical imaging 35, no. 5 (2016): 1332-1343.

[ii]  Roth, Holger R., Christopher T. Lee, Hoo-Chang Shin, Ari Seff, Lauren Kim, Jianhua Yao, Le Lu, and Ronald M. Summers. "Anatomy-specific classification of medical images using deep convolutional nets." In 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), pp. 101-104. IEEE, 2015.

[iii]  Gueld, Mark O., Michael Kohnen, Daniel Keysers, Henning Schubert, Berthold B. Wein, Joerg Bredno, and Thomas M. Lehmann. "Quality of DICOM header information for image categorization." In Medical Imaging 2002, pp. 280-287. International Society for Optics and Photonics, 2002.

[iv]  Criminisi, Antonio, Jamie Shotton, and Stefano Bucciarelli. "Decision forests with long-range spatial context for organ localization in CT volumes." In Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 69-80. 2009.

[v]  Criminisi, Antonio, Duncan Robertson, Ender Konukoglu, Jamie Shotton, Sayan Pathak, Steve White, and Khan Siddiqui. "Regression forests for efficient anatomy detection and localization in computed tomography scans." Medical image analysis 17, no. 8 (2013): 1293-1303.

[vi]  ImPACT, C. T. "Patient Dosimetry Calculator, version 1.0.5." National Radiation Protection Board (2011).

[vii] Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang et al. "Imagenet large scale visual recognition challenge." International Journal of Computer Vision 115, no. 3 (2015): 211-252.

Lee H, Rogers J, Cho J, Daye D, Mishra V, Choy G, Tajmir S, Lev M, Do S. Machine Intelligence for Accurate X-ray Screening and Read-out Prioritization: PICC line Detection Study, in Society for Imaging Informatics in Medicine. Vol 2017. Pittsburgh, PA; 2017.

A peripherally inserted central catheter (PICC) is a thin, flexible plastic tube that provides medium-term intravenous access for medicine, fluid, and chemotherapy administration. These are inserted into arm veins and threaded into the patient until the catheter tip reaches a large vein near the heart. As malpositioned PICCs can have potentially serious complications, the final position of every PICC is always confirmed with a chest radiograph immediately after insertion. This radiograph requires timely and accurate interpretation by a highly trained domain expert in medical imaging interpretation: a radiologist. Although the error rate for radiologists misinterpreting PICC location is likely extremely low, delays in interpretation can be substantial, particularly when this radiograph is one of many to be interpreted alongside imaging studies from different modalities and different patients also requiring diagnostic attention. However, machine intelligence techniques can help prioritize and triage the review of radiographs to the top of a radiologist's queue, improving workflow and turn-around time (TAT). Such prioritization does not require high specificity, but rather high sensitivity; it should alert the radiologist to all potentially important radiographs requiring immediate attention with a low false-negative rate.

Computer Aided Detection (CAD) is the current FDA-approved approach to aid radiologists in the interpretation of medical images and decrease misses. Recently, new advances in deep learning technology applied to medical imaging have shown much promise in the development of new tools to aid in image interpretation [3], including improving the performance of CAD with deep convolutional neural networks (DCNN). DCNNs can automatically extract salient features from vast datasets and classify data into output classes with the extracted features. DCNNs have been applied to many medical image analyses, including automatic pulmonary nodule detection [4], cerebral microhemorrhage detection [5], and brain tumor segmentation [6]. However, a system for PICC line detection has not been previously emphasized in the literature.

In this paper, we propose a deep learning driven platform to assist radiologists in rapidly detecting and confirming PICC placement, with emphasis on accelerating the recognition of incorrect placement to avoid serious complications. We first developed a preprocessing pipeline that isolates the region of interest and reduces the numerous false positives caused by the inherent noise in radiographs, while keeping a low false-negative rate. We then utilized a patch-based approach that splices an image into smaller image patches, classifies them with a trained model, and creates a result image annotated with the trajectory of the PICC and the tip location.
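A minimal sketch of the patch-based scoring described above, using an arbitrary stand-in classifier; the patch size, stride, and overlap-averaging scheme are assumptions rather than the study's settings:

```python
import numpy as np

def patch_probability_map(image, classify, patch=64, stride=32):
    """Splice the radiograph into overlapping patches, score each with a
    trained classifier, and average scores back into a full-size map that
    can be thresholded to trace the PICC and locate its tip."""
    heat = np.zeros(image.shape, dtype=float)
    hits = np.zeros(image.shape, dtype=float)
    for r in range(0, image.shape[0] - patch + 1, stride):
        for c in range(0, image.shape[1] - patch + 1, stride):
            p = classify(image[r:r + patch, c:c + patch])  # P(patch has PICC)
            heat[r:r + patch, c:c + patch] += p
            hits[r:r + patch, c:c + patch] += 1
    return heat / np.maximum(hits, 1)

# Stand-in classifier: mean intensity as a mock probability.
img = np.random.rand(256, 256)
print(patch_probability_map(img, classify=lambda p: float(p.mean())).shape)
```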

Puchner SB, Ferencik M, Maehara A, Stolzmann P, Ma S, Do S, Kauczor H-U, Mintz GS, Hoffmann U, Schlett CL. Iterative Image Reconstruction Improves the Accuracy of Automated Plaque Burden Assessment in Coronary CT Angiography: A Comparison With Intravascular Ultrasound. American Journal of Roentgenology. 2017;2018:1-8.

OBJECTIVE. The purpose of this study was to determine whether use of iterative image reconstruction algorithms improves the accuracy of coronary CT angiography (CCTA) compared with intravascular ultrasound (IVUS) in semiautomated plaque burden assessment.

MATERIALS AND METHODS. CCTA and IVUS images of seven coronary arteries were acquired ex vivo. CT images were reconstructed with filtered back projection (FBP) and adaptive statistical (ASIR) and model-based (MBIR) iterative reconstruction algorithms. Cross-sectional images of the arteries were coregistered between CCTA and IVUS in 1-mm increments. In CCTA, fully automated (without manual corrections) and semiautomated (allowing manual corrections of vessel wall boundaries) plaque burden assessments were performed for each of the reconstruction algorithms with commercially available software. In IVUS, plaque burden was measured manually. Agreement between CCTA and IVUS was determined with Pearson correlation.

RESULTS. A total of 173 corresponding cross sections were included. The mean plaque burden measured with IVUS was 63.39% ± 10.63%. With CCTA and the fully automated technique, it was 54.90% ± 11.70% with FBP, 53.34% ± 13.11% with ASIR, and 55.35% ± 12.22% with MBIR. With CCTA and the semiautomated technique, mean plaque burden was 54.90% ± 11.76% with FBP, 53.40% ± 12.85% with ASIR, and 57.09% ± 11.05% with MBIR. Manual correction of the semiautomated assessments was performed in 39% of all cross sections and improved plaque burden correlation with the IVUS assessment independently of reconstruction algorithm (p < 0.0001). Furthermore, MBIR was superior to FBP and ASIR independently of assessment method (semiautomated: r = 0.59 for FBP, r = 0.52 for ASIR, r = 0.78 for MBIR, all p < 0.001; fully automated: r = 0.40 for FBP, r = 0.37 for ASIR, r = 0.53 for MBIR, all p < 0.001).
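The agreement statistic used above is a plain Pearson correlation over co-registered cross sections; a minimal example with made-up paired measurements (the study compared 173 such pairs per reconstruction algorithm):

```python
from scipy.stats import pearsonr  # pip install scipy

# Toy paired plaque-burden measurements (%) for co-registered cross sections.
ivus = [63.4, 58.1, 70.2, 55.0, 66.7]
ccta_mbir = [57.1, 52.3, 64.8, 50.2, 60.9]

r, p = pearsonr(ivus, ccta_mbir)
print(f"r = {r:.2f}, p = {p:.4f}")
```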

CONCLUSION. For the quantification of plaque burden with CCTA, MBIR led to better correlation with IVUS than did traditional reconstruction algorithms such as FBP, independently of the use of a fully automated or semiautomated assessment approach. The highest accuracy for quantifying plaque burden with CCTA can be achieved by using MBIR data with semiautomated assessment.

Lee H, Tajmir S, Lee J, Zissen M, Yeshiwas BA, Alkasab TK, Choy G, Do S. Fully Automated Deep Learning System for Bone Age Assessment. Journal of Digital Imaging. 2017;2017:1-15.

Skeletal maturity progresses through discrete phases, a fact used routinely in pediatrics, where bone age assessments (BAAs) are compared to chronological age in the evaluation of endocrine and metabolic disorders. While BAA is central to many disease evaluations, little has changed to improve the tedious process since its introduction in 1950. In this study, we propose a fully automated deep learning pipeline to segment a region of interest, standardize and preprocess input radiographs, and perform BAA. Our models use an ImageNet-pretrained, fine-tuned convolutional neural network (CNN) to achieve 57.32% and 61.40% accuracies for the female and male cohorts on our held-out test images. Female test radiographs were assigned a BAA within 1 year 90.39% and within 2 years 98.11% of the time. Male test radiographs were assigned a BAA within 1 year 94.18% and within 2 years 99.00% of the time. Using the input occlusion method, attention maps were created that reveal which features the trained model uses to perform BAA; these correspond to what human experts look at when manually performing BAA. Finally, the fully automated BAA system was deployed in the clinical environment as a decision-support system, enabling more accurate and efficient BAAs at a much faster interpretation time (<2 s) than the conventional method.
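A generic version of the input-occlusion method mentioned above; the window size, stride, and fill value are assumptions, and the study's exact settings are not reproduced:

```python
import numpy as np

def occlusion_map(image, predict, window=16, stride=8, fill=0.0):
    """Slide a masked window over the radiograph and record how much the
    model's score drops; large drops mark regions that drive the estimate."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros(((h - window) // stride + 1, (w - window) // stride + 1))
    for i, r in enumerate(range(0, h - window + 1, stride)):
        for j, c in enumerate(range(0, w - window + 1, stride)):
            occluded = image.copy()
            occluded[r:r + window, c:c + window] = fill
            heat[i, j] = base - predict(occluded)  # importance of this region
    return heat

# Stand-in scorer: mean intensity as a mock bone-age output.
img = np.random.rand(64, 64)
print(occlusion_map(img, predict=lambda x: float(x.mean())).shape)  # (7, 7)
```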

Valentin LI, McCarthy C, Do S, Flores E, Uppot R. Predicting multidisciplinary tumor board recommendations: Initial experience with machine learning in interventional oncology. Journal of Vascular and Interventional Radiology. 2017;28(2):S19-S20.

To evaluate the importance of clinical and imaging features for machine learning predictions of multidisciplinary hepatocellular carcinoma (HCC) tumor board recommendations at a large academic center.

We created a HIPAA-compliant tumor board registry containing clinical and imaging characteristics of the cases presented to the multidisciplinary team, which included diagnostic radiology, interventional radiology, radiation oncology, surgical oncology, medical oncology, and the transplant team. We then evaluated 50 consecutive cases using machine learning algorithms for highly predictive imaging and clinical features, including number of enhancing lesions, largest lesion size, OPTN classification, MELD score, Child-Pugh score, and tumor board treatment recommendation. Machine learning analysis was conducted using models based on Leo Breiman's CART decision tree, logistic regression, and several ensemble models.
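A small sketch of how CART-style feature importances of the kind reported below can be read off with scikit-learn; the data, feature names, and label rule are synthetic stand-ins, not the registry:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # pip install scikit-learn

# Synthetic stand-ins for the registry features named above.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # mock recommendation label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
names = ["largest_lesion_size", "segments_involved",
         "enhancing_lesions", "age", "optn5_count"]
for name, imp in zip(names, tree.feature_importances_):
    print(f"{name}: {imp:.1%}")  # compare against a >10% 'highly predictive' cutoff
```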

Factors that were considered highly predictive (>10%) by our machine learning algorithmic representation of the multidisciplinary HCC tumor board included largest lesion size (16.68%), segments involved (14.56%), enhancing lesions (11.96%), patient age (13.72%), and number of OPTN 5 lesions (11.64%). Factors that were not highly predictive (<5%) included OPTN 3 lesions, MELD score, gender, and Child-Pugh score.

Specific imaging-derived and clinical parameters show a high predictive impact for multidisciplinary tumor board recommendations. Machine learning may represent a viable tool for identifying important trends in tumor board recommendations, increasing awareness of problem areas, and improving decision objectivity and transparency. It could also expedite crucial decision-making. Continuous accumulation of data improves prediction confidence and accuracy without overfitting. Implementation of this type of model can aid interventional oncologists in predicting the likelihood that a specific recommendation would be supported by an expert HCC tumor board. Moreover, additional features may emerge with the expansion of the dataset and based on institutional preferences for specific treatment modalities, expertise, and/or patient population.

2016
Do S. The future of artificial intelligence for physicians (인공지능과 의사의 미래). J Korean Med Assoc. 2016;59(6):410-412.

Artificial Intelligence (AI) to support the medical decision-making process has long been both an interest and concern of physicians and the public. However, the introduction of open source software, supercomputers, and a variety of industry innovations has accelerated the progress of the development of AI in clinical decision support systems. This article summarizes the current trends and challenges in the medical field, and presents how AI can improve healthcare systems by increasing efficiency and decreasing costs. At the same time, it emphasizes the centrality of the role of physicians in utilizing AI as a tool to supplement their decisions as they provide patient-oriented care.
