Image Analysis Tools for mpMRI Prostate Cancer Diagnosis Using PI-RADS
Project Summary
Prostate cancer is one of the most common forms of cancer, accounting for 21% of all cancers in men.
The Prostate Imaging Reporting and Data System (PI-RADS) aims to standardize reporting of prostate cancer
using multi-parametric magnetic resonance imaging (mpMRI). However, the in-depth analysis that PI-RADS
demands remains challenging due to the complexity and heterogeneity of the disease, and it is a clinically
burdensome task subject to significant intra- and inter-reader variability. Auxiliary tools based on machine
learning methods such as deep learning can reduce diagnostic variability and increase workload efficiency by
automatically performing tasks and presenting results to a radiologist for the purpose of decision support. In
particular, automated identification and classification of lesion candidates using imaging data can be performed
with respect to PI-RADS scoring. In Phase I of this project, we developed two automated methods to reduce
intra- and inter-observer variability in interpreting mpMRI with the PI-RADS protocol: (i) a method
to co-register mpMRI data, and (ii) a method to geometrically segment the prostate gland into the PI-RADS
protocol sector map. The overarching goal of this Phase II project is to develop machine learning algorithms that
incorporate both co-registered multi-modal imaging biomarkers and PI-RADS sector map information into an
automated clinical diagnostic aid. The innovation in this project lies in the use of deep learning to automatically
predict PI-RADS classification. This project is significant in that it has the potential to improve clinical efficiency
and reduce diagnostic variation in prostate cancer diagnosis. In Aim 1 of this project, we will develop a deep
learning approach to localize and classify lesions in mpMRI. In Aim 2, we will integrate this diagnostic tool into the
ProFuseCAD system and perform rigorous multi-site validation to quantify PI-RADS classification performance.
Both aims will utilize a database of over 1,000 existing mpMRI images from multiple clinical sites to develop and
validate the algorithms. Ultimately, enhancements from this project will create a novel feature for Eigen's (the
applicant company's) FDA 510(k)-cleared imaging product, ProFuseCAD, in order to improve the diagnosis and
reporting of prostate cancer.
Project Narrative
Radiological interpretation of multimodal prostate imaging data is challenging and subject to high levels of
variability. To address this problem, auxiliary tools based on machine learning methods such as deep learning can
increase workload efficiency by automatically performing tasks and presenting results to a radiologist for the
purpose of decision support. In particular, automated identification of lesion candidates and assessment of
potentially benign or malignant lesions with respect to specific PI-RADS categories from clinical imaging data can
improve prostate cancer reporting and reduce variation in radiological interpretation.
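As a rough illustration of the kind of classifier described in Aim 1, the sketch below shows a minimal 3D convolutional network that maps a co-registered multi-channel mpMRI patch to a PI-RADS category. The architecture, patch size, and choice of input channels (here assumed to be T2-weighted, ADC, and high-b-value DWI) are illustrative assumptions only, not the network developed in this project.

```python
# Minimal, illustrative sketch (PyTorch): a small 3D CNN that maps a
# co-registered multi-channel mpMRI patch to a PI-RADS category (1-5).
# Architecture, patch size, and input channels are assumptions for
# illustration, not the model used in this project.
import torch
import torch.nn as nn


class PiradsPatchClassifier(nn.Module):
    def __init__(self, in_channels: int = 3, num_classes: int = 5):
        super().__init__()
        # Three convolutional blocks downsample the 3D patch while
        # increasing the number of feature channels.
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global average pooling to a 64-vector
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)  # logits over PI-RADS categories 1-5


if __name__ == "__main__":
    # One hypothetical 32x32x32 patch with 3 co-registered channels
    # (e.g., T2w, ADC, high-b-value DWI).
    patch = torch.randn(1, 3, 32, 32, 32)
    model = PiradsPatchClassifier()
    model.eval()
    with torch.no_grad():
        logits = model(patch)
    print("Predicted PI-RADS category:", int(logits.argmax(dim=1)) + 1)
```

In practice, such a classifier would sit downstream of the Phase I components, consuming patches sampled from the co-registered mpMRI volumes and reporting predictions within the PI-RADS sector map; the snippet above only illustrates the basic input-to-score mapping.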