Chunhao Wang
Overview:
- Deep learning methods for image-based radiotherapy outcome prediction and assessment
- Machine learning in outcome modeling
- Automation in radiotherapy planning and delivery
Positions:
- Assistant Professor of Radiation Oncology, Department of Radiation Oncology, School of Medicine
- Member, Duke Cancer Institute, School of Medicine
Education:
- Ph.D., Duke University, 2016
- Medical Physics Resident, Radiation Oncology Physics Division, Duke University
Publications:
A neural ordinary differential equation model for visualizing deep neural network behaviors in multi-parametric MRI-based glioma segmentation.
PURPOSE: To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability. METHODS: By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction was governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interactions with the deep neural network and (2) segmentation formation can thus be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results. The proposed Neural ODE model was demonstrated using 369 glioma patients with a 4-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by deep neural networks were identified based on ACC analysis. Segmentation results by deep neural networks using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity. RESULTS: All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficient of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using the key modalities only had minimal differences without significance. Accuracy, sensitivity, and specificity results demonstrated the same patterns. CONCLUSION: The Neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep-learning applications.
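The core mechanism lends itself to a compact illustration. Below is a minimal sketch, assuming a PyTorch-style setup (the class names, architecture, and fixed-step Euler solver are hypothetical choices for illustration, not taken from the paper): feature extraction is treated as a continuous-time process dh/dt = f_θ(h, t), so integrating the ODE exposes the intermediate states h(t) that make the image dynamics visualizable. The paper's ACC analysis, which attributes the final segmentation to each input modality, is omitted here.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterizes dh/dt = f_theta(h, t) with a small conv net (hypothetical architecture)."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Tanh(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, h: torch.Tensor, t: float) -> torch.Tensor:
        # Autonomous ODE: t is accepted but unused by this toy network.
        return self.net(h)

def integrate(func: ODEFunc, h0: torch.Tensor, n_steps: int = 10):
    """Fixed-step Euler integration over t in [0, 1].
    Returns every intermediate state so the feature dynamics can be visualized."""
    h, dt = h0, 1.0 / n_steps
    states = [h]
    for k in range(n_steps):
        h = h + dt * func(h, k * dt)   # Euler step: h(t+dt) = h(t) + dt * f(h, t)
        states.append(h)
    return states

# x: stacked multi-parametric MRI, e.g. 4 channels (T1, T1-Ce, T2, FLAIR)
x = torch.randn(1, 4, 128, 128)
func = ODEFunc(channels=4)
trajectory = integrate(func, x)               # intermediate "image dynamics"
segmentation = torch.sigmoid(trajectory[-1])  # a task head would map the final state to a mask
```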
MLA Citation
Yang, Zhenyu, et al. “A neural ordinary differential equation model for visualizing deep neural network behaviors in multi-parametric MRI-based glioma segmentation.” Med Phys, 2023. Pubmed, doi:10.1002/mp.16286.
URI
https://scholars.duke.edu/individual/pub1526964
PMID
36840621
Source
pubmed
Published In
Med Phys
Published Date
2023
DOI
10.1002/mp.16286
Editorial: Interpretable and explainable machine learning models in oncology
Authors
Hrinivich, WT; Wang, T; Wang, C
MLA Citation
Hrinivich, W. T., et al. “Editorial: Interpretable and explainable machine learning models in oncology.” Frontiers in Oncology, vol. 13, Jan. 2023. Scopus, doi:10.3389/fonc.2023.1184428.
URI
https://scholars.duke.edu/individual/pub1573188
Source
scopus
Published In
Frontiers in Oncology
Volume
13
Published Date
Jan. 2023
DOI
10.3389/fonc.2023.1184428
Input feature design and its impact on the performance of deep learning models for predicting fluence maps in intensity-modulated radiation therapy.
Objective. Deep learning (DL) models for fluence map prediction (FMP) have great potential to reduce treatment planning time in intensity-modulated radiation therapy (IMRT) by avoiding the lengthy inverse optimization process. This study aims to improve the rigor of input feature design in a DL-FMP model by examining how different designs of input features influence model prediction performance. Approach. This study included 231 head-and-neck IMRT patients. Three input feature designs were investigated. The first design (D1) assumed that information of all critical structures from all beam angles should be combined to predict fluence maps. The second design (D2) assumed that local anatomical information was sufficient for predicting radiation intensity of a beamlet at a respective beam angle. The third design (D3) assumed the need for both local anatomical information and inter-beam modulation to predict radiation intensity values of the beamlets that intersect at a voxel. For each input design, we tailored the DL model accordingly. All models were trained using the same set of ground truth plans (GT plans). The plans generated by DL models (DL plans) were analyzed using key dose-volume metrics. One-way ANOVA with multiple comparisons correction (Bonferroni method) was performed (significance level = 0.05). Main results. For PTV-related metrics, all DL plans had significantly higher maximum dose (p < 0.001), conformity index (p < 0.001), and heterogeneity index (p < 0.001) compared to GT plans, with D2 being the worst performer. Meanwhile, except for cord+5 mm (p < 0.001), DL plans of all designs resulted in OAR dose metrics that were comparable to those of GT plans. Significance. Local anatomical information contains most of the information that DL models need to predict fluence maps for clinically acceptable OAR sparing. Input features from beam angles are needed to achieve the best PTV coverage. These results provide valuable insights for further improvement of DL-FMP models and DL models in general.
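As a rough illustration of what separates the three designs, the sketch below (Python/NumPy; the function names, the rotate-and-sum projection shortcut, and the channel choices are assumptions for illustration, not the paper's pipeline) builds a per-beam "local anatomy" stack in the spirit of D2 and a concatenated all-angle stack in the spirit of D1; D3 would additionally share information across the per-beam branches.

```python
import numpy as np
from scipy.ndimage import rotate

def beam_eye_view(volume: np.ndarray, angle_deg: float) -> np.ndarray:
    """Hypothetical stand-in for a beam's-eye-view projection at one gantry angle:
    rotate the volume about the axial axis, then sum along the beam direction."""
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=1)  # collapse along the (rotated) beam axis

def build_inputs(ct, ptv_mask, oar_mask, angles):
    """Assemble per-beam input stacks.
    D2-style: each beam angle sees only its own local anatomy projections.
    D1-style: concatenate all angles into a single joint input."""
    per_beam = []
    for a in angles:
        channels = [beam_eye_view(ct, a),
                    beam_eye_view(ptv_mask.astype(float), a),
                    beam_eye_view(oar_mask.astype(float), a)]
        per_beam.append(np.stack(channels))      # shape (3, H, W): D2 input for angle a
    d1_input = np.concatenate(per_beam, axis=0)  # shape (3 * n_beams, H, W): D1 input
    return per_beam, d1_input

# Toy volumes standing in for CT and structure masks
ct = np.random.rand(64, 64, 64)
ptv = np.zeros_like(ct, dtype=bool); ptv[24:40, 24:40, 24:40] = True
oar = np.zeros_like(ct, dtype=bool); oar[10:20, 10:20, 10:20] = True
d2_inputs, d1_input = build_inputs(ct, ptv, oar, angles=range(0, 360, 40))
```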
MLA Citation
Li, Xinyi, et al. “Input feature design and its impact on the performance of deep learning models for predicting fluence maps in intensity-modulated radiation therapy.” Phys Med Biol, vol. 67, no. 21, Oct. 2022. Pubmed, doi:10.1088/1361-6560/ac9882.
URI
https://scholars.duke.edu/individual/pub1553015
PMID
36206747
Source
pubmed
Published In
Phys Med Biol
Volume
67
Published Date
Oct. 2022
DOI
10.1088/1361-6560/ac9882
Quantification of lung function on CT images based on pulmonary radiomic filtering.
PURPOSE: To develop a radiomics filtering technique for characterizing spatial-encoded regional pulmonary ventilation information on lung computed tomography (CT). METHODS: The lung volume was segmented on 46 CT images, and a 3D sliding window kernel was implemented across the lung volume to capture the spatial-encoded image information. Fifty-three radiomic features were extracted within the kernel, resulting in a fourth-order tensor object. As such, each voxel coordinate of the original lung was represented as a 53-dimensional feature vector, such that radiomic features could be viewed as feature maps within the lungs. To test the technique as a potential pulmonary ventilation biomarker, the radiomic feature maps were compared to paired functional images (Galligas PET or DTPA-SPECT) based on the Spearman correlation (ρ) analysis. RESULTS: The radiomic feature maps GLRLM-based Run-Length Non-Uniformity and GLCOM-based Sum Average are found to be highly correlated with the functional imaging. The achieved ρ (median [range]) for the two features are 0.46 [0.05, 0.67] and 0.45 [0.21, 0.65] across 46 patients and 2 functional imaging modalities, respectively. CONCLUSIONS: The results provide evidence that local regions of sparsely encoded heterogeneous lung parenchyma on CT are associated with diminished radiotracer uptake and measured lung ventilation defects on PET/SPECT imaging. These findings demonstrate the potential of radiomics to serve as a complementary tool to the current lung quantification techniques and provide hypothesis-generating data for future studies.
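A minimal sketch of the filtering idea follows, with one simple texture feature (local intensity entropy) standing in for the paper's 53 radiomic features; the function names, kernel size, and toy data are illustrative assumptions. A kernel slides across the lung mask, a feature is computed per voxel to form a feature map, and the map is then correlated voxel-wise against the paired ventilation image.

```python
import numpy as np
from scipy.stats import spearmanr

def feature_map(lung_ct: np.ndarray, lung_mask: np.ndarray, half: int = 2) -> np.ndarray:
    """Slide a (2*half+1)^3 kernel over the lung and compute one texture feature
    per voxel. Local Shannon entropy is used as a simple stand-in for the paper's
    53 radiomic features (GLCM-, GLRLM-based, etc.)."""
    fmap = np.zeros_like(lung_ct, dtype=float)
    for z, y, x in np.argwhere(lung_mask):
        patch = lung_ct[max(z - half, 0):z + half + 1,
                        max(y - half, 0):y + half + 1,
                        max(x - half, 0):x + half + 1]
        hist, _ = np.histogram(patch, bins=16)
        p = hist[hist > 0] / hist.sum()
        fmap[z, y, x] = -(p * np.log2(p)).sum()   # entropy of the local patch
    return fmap

# Compare the radiomic feature map to a paired ventilation image inside the lung
ct = np.random.rand(16, 16, 16)
mask = np.ones_like(ct, dtype=bool)
ventilation = np.random.rand(16, 16, 16)           # stand-in for Galligas PET / DTPA-SPECT
fmap = feature_map(ct, mask)
rho, pval = spearmanr(fmap[mask], ventilation[mask])  # voxel-wise Spearman correlation
```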
MLA Citation
Yang, Zhenyu, et al. “Quantification of lung function on CT images based on pulmonary radiomic filtering.” Med Phys, vol. 49, no. 11, Nov. 2022, pp. 7278–86. Pubmed, doi:10.1002/mp.15837.
URI
https://scholars.duke.edu/individual/pub1525478
PMID
35770964
Source
pubmed
Published In
Med Phys
Volume
49
Published Date
Nov. 2022
Start Page
7278
End Page
7286
DOI
10.1002/mp.15837
Post-Radiotherapy PET Image Outcome Prediction by Deep Learning Under Biological Model Guidance: A Feasibility Study of Oropharyngeal Cancer Application.
PURPOSE: To develop a method of biologically guided deep learning for post-radiation 18FDG-PET image outcome prediction based on pre-radiation images and radiotherapy dose information. METHODS: Based on the classic reaction-diffusion mechanism, a novel biological model was proposed using a partial differential equation that incorporates spatial radiation dose distribution as a patient-specific treatment information variable. A 7-layer encoder-decoder-based convolutional neural network (CNN) was designed and trained to learn the proposed biological model. As such, the model could generate post-radiation 18FDG-PET image outcome predictions with a breakdown of biological components for enhanced explainability. The proposed method was developed using 64 oropharyngeal cancer patients with paired 18FDG-PET studies before and after 20-Gy delivery (2 Gy/day fraction) by intensity-modulated radiotherapy (IMRT). In a two-branch deep learning execution, the proposed CNN learns specific terms in the biological model from paired 18FDG-PET images and spatial dose distribution in one branch, and the biological model generates post-20-Gy 18FDG-PET image prediction in the other branch. In 2D execution, 718/233/230 axial slices from 38/13/13 patients were used for training/validation/independent test. The prediction image results in test cases were compared with the ground-truth results quantitatively. RESULTS: The proposed method successfully generated post-20-Gy 18FDG-PET image outcome prediction with breakdown illustrations of biological model components. Standardized uptake value (SUV) mean values in 18FDG high-uptake regions of predicted images (2.45 ± 0.25) were similar to ground-truth results (2.51 ± 0.33). In 2D-based Gamma analysis, the median/mean Gamma Index (<1) passing rate of test images was 96.5%/92.8% using the 5%/5 mm criterion; this result improved to 99.9%/99.6% when 10%/10 mm was adopted. CONCLUSION: The developed biologically guided deep learning method achieved post-20-Gy 18FDG-PET image outcome predictions in good agreement with ground-truth results. With the breakdown of biological modeling components, the outcome image predictions could be used in adaptive radiotherapy decision-making to optimize personalized plans for the best outcome in the future.
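The paper's exact PDE and the CNN that learns its terms are not reproduced here, but the forward mechanism can be sketched. Below is a hedged NumPy illustration assuming a generic reaction-diffusion form with an added dose-response term, du/dt = D∇²u + ρu(1 − u) − αd(x)u; the equation, coefficients, and toy data are assumptions for illustration only, not the published model.

```python
import numpy as np

def laplacian(u: np.ndarray) -> np.ndarray:
    """5-point finite-difference Laplacian with edge-replicated (zero-flux) boundaries."""
    up = np.pad(u, 1, mode="edge")
    return up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u

def evolve(u0, dose, D=0.1, rho=0.05, alpha=0.02, dt=0.1, n_steps=200):
    """Explicit Euler integration of an assumed reaction-diffusion-dose PDE:
        du/dt = D * lap(u) + rho * u * (1 - u) - alpha * dose * u
    Diffusion/proliferation follow the classic reaction-diffusion form; the dose
    term is one plausible way to inject the spatial dose distribution."""
    u = u0.copy()
    for _ in range(n_steps):
        u = u + dt * (D * laplacian(u) + rho * u * (1 - u) - alpha * dose * u)
        u = np.clip(u, 0.0, None)  # tracer uptake stays non-negative
    return u

# Toy example: an FDG-avid region treated with a focal 20-Gy dose
pet_pre = np.zeros((64, 64)); pet_pre[28:36, 28:36] = 2.5   # SUV-like uptake
dose = np.zeros((64, 64));    dose[24:40, 24:40] = 20.0     # Gy
pet_post_pred = evolve(pet_pre, dose)
```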
MLA Citation
Ji, Hangjie, et al. “Post-Radiotherapy PET Image Outcome Prediction by Deep Learning Under Biological Model Guidance: A Feasibility Study of Oropharyngeal Cancer Application.” Front Oncol, vol. 12, 2022, p. 895544. Pubmed, doi:10.3389/fonc.2022.895544.
URI
https://scholars.duke.edu/individual/pub1523882
PMID
35646643
Source
pubmed
Published In
Frontiers in Oncology
Volume
12
Published Date
2022
Start Page
895544
DOI
10.3389/fonc.2022.895544

Contact:
04207 Red Zone, Morris Bldg, Duke South, DUMC, Durham, NC 27710