Joseph Lo

Overview:

My research uses computer vision and machine learning to improve medical imaging, with a focus on breast imaging and CT. The work spans three specific projects:

(1) We design deep learning models to diagnose breast cancer from mammograms. We perform single-shot lesion detection, multi-task segmentation/classification, and image synthesis; a minimal sketch of the multi-task design appears after this project list. Our goal is to improve radiologist diagnostic performance and empower patients to make personalized treatment decisions. This work is funded by the NIH, the Department of Defense, Cancer Research UK, and other agencies.

(2) We create "digital twin" anatomical models that are based on actual patient data and thus contain highly realistic anatomy. With customized 3D printing, these virtual phantoms can also be rendered into physical form to be scanned on actual imaging devices, which allows us to assess image quality in new ways that are clinically relevant.

(3) We are building a computer-aided triage platform to classify multiple diseases across multiple organs in chest-abdomen-pelvis CT scans. Our hospital-scale datasets span hundreds of thousands of patients. This work includes natural language processing to analyze radiology reports as well as deep learning models for organ segmentation and disease classification.
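
As a concrete illustration of the multi-task design in project (1), here is a minimal sketch of a shared encoder feeding separate segmentation and classification heads. The architecture, layer sizes, and two-class output are illustrative assumptions, not the group's actual models.

```python
import torch
import torch.nn as nn

class MultiTaskMammoNet(nn.Module):
    """Hypothetical shared-encoder model: one backbone, two task heads."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Shared convolutional encoder (downsamples by 4x).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Segmentation head: upsample back to input resolution, one mask channel.
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        # Classification head: pooled features -> benign/malignant logits.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)                  # features shared by both tasks
        return self.seg_head(feats), self.cls_head(feats)

x = torch.randn(2, 1, 256, 256)                  # two single-channel image crops
mask_logits, class_logits = MultiTaskMammoNet()(x)
print(mask_logits.shape, class_logits.shape)     # (2, 1, 256, 256), (2, 2)
```

Training such a model typically sums a per-pixel mask loss and an image-level classification loss over the two heads, so the shared encoder learns features useful for both tasks.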

Positions:

Professor in Radiology, School of Medicine
Professor of Biomedical Engineering, Pratt School of Engineering
Professor in the Department of Electrical and Computer Engineering, Pratt School of Engineering
Member of the Duke Cancer Institute, School of Medicine

Education:

B.S.E.E., Duke University, 1988
Ph.D., Duke University, 1993
Research Associate, Radiology, Duke University

Grants:

Predicting Breast Cancer With Ultrasound and Mammography
Administered By: Radiology
Awarded By: National Institutes of Health
Role: Principal Investigator

Improved Diagnosis of Breast Microcalcification Clusters
Administered By: Radiology
Awarded By: National Institutes of Health
Role: Principal Investigator

Accurate Models for Predicting Radiation-Induced Injury
Administered By: Radiation Oncology
Awarded By: National Institutes of Health
Role: Investigator

Computer Aid for the Decision to Biopsy Breast Lesions
Administered By: Radiology
Awarded By: US Army Medical Research
Role: Co-Investigator

Computer Aid for the Decision to Biopsy Breast Lesions
Administered By: Radiology
Awarded By: National Institutes of Health
Role: Investigator

Publications:

Mask Embedding in conditional GAN for Guided Synthesis of High Resolution Images

Recent advancements in conditional Generative Adversarial Networks (cGANs) have shown promise in label-guided image synthesis. Semantic masks, such as sketches and label maps, are another intuitive and effective form of guidance in image synthesis. Directly incorporating the semantic masks as constraints, however, dramatically reduces the variability and quality of the synthesized results. We observe this is caused by the incompatibility of features from different inputs (such as the mask image and latent vector) of the generator. To use semantic masks as guidance whilst producing realistic synthesized results with fine details, we propose a mask embedding mechanism that allows for a more efficient initial feature projection in the generator. We validate the effectiveness of our approach by training a mask-guided face generator on the CELEBA-HQ dataset. We can generate realistic, high-resolution facial images up to 512×512 with mask guidance. Our code is publicly available.
Authors: Ren, Y; Zhu, Z; Li, Y; Lo, J
URI: https://scholars.duke.edu/individual/pub1395859
Source: arXiv
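
The abstract attributes the loss of quality to incompatible generator inputs and proposes embedding the mask before the generator's initial feature projection. Below is a minimal sketch of that idea; the encoder, layer sizes, and truncated upsampling trunk are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MaskEmbeddingGenerator(nn.Module):
    """Hypothetical generator: latent vector fused with a learned mask embedding."""
    def __init__(self, z_dim: int = 128, mask_embed_dim: int = 64):
        super().__init__()
        # Encode the semantic mask into a compact embedding vector.
        self.mask_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, mask_embed_dim),
        )
        # Initial projection: latent + mask embedding -> 4x4 feature map.
        self.project = nn.Linear(z_dim + mask_embed_dim, 256 * 4 * 4)
        # Upsampling trunk (truncated; a real generator stacks more blocks).
        self.trunk = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, mask):
        m = self.mask_encoder(mask)                   # mask -> embedding vector
        h = self.project(torch.cat([z, m], dim=1))    # fuse at initial projection
        return self.trunk(h.view(-1, 256, 4, 4))

z, mask = torch.randn(2, 128), torch.zeros(2, 1, 64, 64)
print(MaskEmbeddingGenerator()(z, mask).shape)        # (2, 1, 16, 16)
```

Fusing the mask as a compact embedding, rather than injecting it as a raw spatial constraint, lets the initial projection treat both inputs as compatible features.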

IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography

Interpretability in machine learning models is important in high-stakes decisions, such as whether to order a biopsy based on a mammographic exam. Mammography poses important challenges that are not present in other computer vision tasks: datasets are small, confounding information is present, and it can be difficult even for a radiologist to decide between watchful waiting and biopsy based on a mammogram alone. In this work, we present a framework for interpretable machine learning-based mammography. In addition to predicting whether a lesion is malignant or benign, our work aims to follow the reasoning processes of radiologists in detecting clinically relevant semantic features of each image, such as the characteristics of the mass margins. The framework includes a novel interpretable neural network algorithm that uses case-based reasoning for mammography. Our algorithm can incorporate a combination of data with whole image labelling and data with pixel-wise annotations, leading to better accuracy and interpretability even with a small number of images. Our interpretable models are able to highlight the classification-relevant parts of the image, whereas other methods highlight healthy tissue and confounding information. Our models are decision aids, rather than decision makers, aimed at better overall human-machine collaboration. We do not observe a loss in mass margin classification accuracy over a black box neural network trained on the same data.
Authors: Barnett, AJ; Schwartz, FR; Tao, C; Chen, C; Ren, Y; Lo, JY; Rudin, C
URI: https://scholars.duke.edu/individual/pub1486024
Source: arXiv
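
The case-based reasoning described above compares image regions against learned prototypes. The sketch below follows the general ProtoPNet-style recipe such models build on; the prototype count, feature size, and log-similarity form are illustrative assumptions, not the exact IAIA-BL formulation.

```python
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Hypothetical prototype head: classify via similarity to learned cases."""
    def __init__(self, n_prototypes: int = 10, dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, feats):
        # feats: (batch, dim, H, W) feature map from a CNN backbone.
        b, d, h, w = feats.shape
        patches = feats.permute(0, 2, 3, 1).reshape(b, h * w, d)
        protos = self.prototypes.unsqueeze(0).expand(b, -1, -1)
        dists = torch.cdist(patches, protos) ** 2   # (b, h*w, n_prototypes)
        # A prototype's activation is its similarity to the best-matching patch,
        # so each piece of evidence traces back to a specific image region.
        min_dists = dists.min(dim=1).values
        sims = torch.log((min_dists + 1) / (min_dists + 1e-4))
        return self.classifier(sims), sims          # logits + per-prototype evidence

feats = torch.randn(2, 64, 7, 7)                    # backbone feature map
logits, sims = PrototypeLayer()(feats)
print(logits.shape, sims.shape)                     # (2, 2), (2, 10)
```

Because each logit is a weighted sum of prototype similarities, the model can show which prototype case, and which region of the image, drove its decision.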

Retina-Match: Ipsilateral Mammography Lesion Matching in a Single Shot Detection Pipeline

In mammography and tomosynthesis, radiologists use the geometric relationship of the four standard screening views to detect breast abnormalities. To date, computer-aided detection methods have focused on formulations based only on a single view. Recent multi-view methods are either black-box approaches using techniques such as relation blocks, or perform extensive, case-level feature aggregation requiring large data redundancy. In this study, we propose Retina-Match, an end-to-end trainable pipeline for detection, matching, and refinement that can effectively perform ipsilateral lesion matching in paired screening mammography images. We demonstrate its effectiveness on a private digital mammography dataset with 1,016 biopsied lesions and 2,000 negative cases.
Authors: Ren, Y; Lu, J; Liang, Z; Grimm, LJ; Kim, C; Taylor-Cho, M; Yoon, S; Marks, JR; Lo, JY
MLA Citation: Ren, Y., et al. “Retina-Match: Ipsilateral Mammography Lesion Matching in a Single Shot Detection Pipeline.” Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12905 LNCS, 2021, pp. 345–54. Scopus, doi:10.1007/978-3-030-87240-3_33.
URI: https://scholars.duke.edu/individual/pub1499563
Source: Scopus
Published In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12905 LNCS
Pages: 345–354
DOI: 10.1007/978-3-030-87240-3_33
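
The matching stage pairs detections between the two ipsilateral views (craniocaudal and mediolateral oblique). Below is a minimal sketch of matching by embedding cosine similarity; the embedding network, feature size, and threshold are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical embedding net: maps a detection's ROI features to a vector.
embed = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))

def match_detections(feats_cc, feats_mlo, threshold=0.8):
    """Pair detections between CC and MLO views by cosine similarity."""
    e_cc = F.normalize(embed(feats_cc), dim=1)      # (n_cc, 64) unit vectors
    e_mlo = F.normalize(embed(feats_mlo), dim=1)    # (n_mlo, 64) unit vectors
    sim = e_cc @ e_mlo.T                            # (n_cc, n_mlo) similarities
    pairs = [(i, j)
             for i in range(sim.shape[0])
             for j in range(sim.shape[1])
             if sim[i, j] > threshold]              # same-lesion candidates
    return sim, pairs

sim, pairs = match_detections(torch.randn(3, 256), torch.randn(4, 256))
print(sim.shape, pairs)
```

In an end-to-end pipeline like the one described, the matching scores would feed a refinement step rather than a hard threshold.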

iPhantom: a framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry

Objective: This study aims to develop and validate a novel framework, iPhantom, for automated creation of patient-specific phantoms or digital twins (DTs) from patient medical images. The framework is applied to assess radiation dose to radiosensitive organs in CT imaging of individual patients. Method: From patient CT images, iPhantom segments selected anchor organs (e.g. liver, bones, pancreas) using a learning-based model developed for multi-organ CT segmentation. Organs challenging to segment (e.g. intestines) are incorporated from a matched phantom template, using a diffeomorphic registration model developed for multi-organ phantom voxels. The resulting full-patient phantoms are used to assess organ doses during routine CT exams. Result: iPhantom was validated on both the XCAT (n=50) and an independent clinical (n=10) dataset with similar accuracy. iPhantom localized all organs accurately, with Dice Similarity Coefficients (DSC) >0.6 for anchor organs and DSC of 0.3-0.9 for all other organs. iPhantom showed less than 10% dose errors for the majority of organs, notably superior to the state-of-the-art baseline method (20-35% dose errors). Conclusion: iPhantom enables automated and accurate creation of patient-specific phantoms and, for the first time, provides sufficient and automated patient-specific dose estimates for CT dosimetry. Significance: The new framework brings the creation and application of computational human phantoms (CHPs) to the level of individual patients through automation, achieving wider and more precise organ localization and paving the way for clinical monitoring, personalized optimization, and large-scale research.
Authors: Fu, W; Sharma, S; Abadi, E; Iliopoulos, A-S; Wang, Q; Lo, JY; Sun, X; Segars, WP; Samei, E
URI: https://scholars.duke.edu/individual/pub1486025
Source: arXiv
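
The abstract scores organ localization with the Dice Similarity Coefficient (DSC). For reference, a minimal implementation for binary organ masks (the example arrays are illustrative):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # 16-pixel square
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True   # shifted 16-pixel square
print(dice_similarity(a, b))                      # 2*9 / (16+16) = 0.5625
```

A DSC of 1 means perfect overlap, so the abstract's anchor-organ threshold of 0.6 corresponds to majority overlap between predicted and true organ masks.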

Multi-Label Annotation of Chest Abdomen Pelvis Computed Tomography Text Reports Using Deep Learning.

Authors: D'Anniballe, VM; Tushar, FI; Faryna, K; Han, S; Mazurowski, MA; Rubin, GD; Lo, JY
MLA Citation: D’Anniballe, Vincent M., et al. “Multi-Label Annotation of Chest Abdomen Pelvis Computed Tomography Text Reports Using Deep Learning.” CoRR, vol. abs/2102.02959, 2021.
URI: https://scholars.duke.edu/individual/pub1475011
Source: DBLP
Published In: CoRR
Volume: abs/2102.02959
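
In multi-label annotation, each report can carry several disease labels at once, so the model emits an independent sigmoid probability per label rather than one softmax over mutually exclusive classes. Below is a minimal sketch with a hypothetical vocabulary, label set, and bag-of-words encoder standing in for the paper's deep learning pipeline.

```python
import torch
import torch.nn as nn

LABELS = ["atelectasis", "pancreatitis", "fracture"]   # hypothetical label set
VOCAB = {"atelectasis": 0, "pancreatitis": 1, "fracture": 2, "acute": 3, "no": 4}

def encode(report: str) -> torch.Tensor:
    """Hypothetical bag-of-words encoding of a report's free text."""
    vec = torch.zeros(len(VOCAB))
    for tok in report.lower().split():
        if tok in VOCAB:
            vec[VOCAB[tok]] += 1.0
    return vec

model = nn.Sequential(nn.Linear(len(VOCAB), 16), nn.ReLU(),
                      nn.Linear(16, len(LABELS)))

logits = model(encode("acute pancreatitis no fracture").unsqueeze(0))
probs = torch.sigmoid(logits)        # independent probability per disease label
print(dict(zip(LABELS, probs.squeeze(0).tolist())))
# Training would use nn.BCEWithLogitsLoss against multi-hot target vectors.
```

Labels extracted from report text this way can then serve as weak supervision for the image-side organ segmentation and disease classification models described in the overview.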

Research Areas:

Breast Neoplasms
Clinical Trials as Topic
Computer Simulation
Decision Making, Computer-Assisted
Decision Support Systems, Clinical
Decision Support Techniques
Image Processing, Computer-Assisted
Imaging, Three-Dimensional
Machine Learning
Mammography
Models, Structural
Pattern Recognition, Automated
Radiographic Image Interpretation, Computer-Assisted
Radiology
Technology Assessment, Biomedical
Tomosynthesis