Bounding generalization risk for Deep Neural Networks

Award Information
Agency: Department of Defense
Branch: National Geospatial-Intelligence Agency
Contract: HM0476C220003
Agency Tracking Number: M2-0150
Amount: $999,544.57
Phase: Phase II
Program: STTR
Solicitation Topic Code: NGA20A-001
Solicitation Number: N/A
Timeline
Solicitation Year: 2020
Award Year: 2022
Award Start Date (Proposal Award Date): 2021-12-20
Award End Date (Contract End Date): 2023-07-04
Small Business Information
5452 SONOMA PLACE
SAN DIEGO, CA 92130-5754
United States
DUNS: 081267253
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
 Jonathan Miller
 (630) 229-4056
 jon@euler-sci.com
Business Contact
 Steve Campbell
Phone: (208) 859-5238
Email: Steve@Euler-sci.com
Research Institution
 Fermi National Accelerator Laboratory
 Mary Jo Lyke
Pine St
Batavia, IL 60510-0000
United States
 (630) 840-8976
 Federally Funded R&D Center (FFRDC)
Abstract

Deep Convolutional Neural Networks (DCNNs) have become ubiquitous in the analysis of large datasets with geometric symmetries. These datasets are common in medicine, science, intelligence, autonomous driving, and industry. While analyses based on DCNNs have proven powerful, uncertainty estimation for such analyses has required sophisticated empirical studies. This has negatively impacted the effectiveness of DCNNs, motivating the development of a bound on generalization risk.

Uncertainty estimation is a crucial component of physical science. Models can be trusted only to the degree that their limitations and potential inaccuracies are fully understood and accurately characterized. Best estimates can be wrong. Accordingly, the scientific community has invested a great deal of effort into understanding and benchmarking methods of uncertainty quantification, and we will bring a deep knowledge of those tools and traditions to bear on the prediction of uncertainty for DCNNs. There are parallels between astrophysics and computer vision in that many of the ground-truth labels in real-world data are established by human inspection. We will bring the tools and expertise of science to bear on benchmark data in computer vision as well, both for rigor and to create points of reference for a broad community of practitioners and researchers.

Generalization risk, or generalization error, is the difference between the error found in training or validation and the error that exists in application. In empirical studies, this risk is estimated using a blind test sample. However, doing so is costly when data is limited, and such studies are necessarily incomplete since the blind test sample does not include all the data that the DCNN will be applied to. This motivates the ascertainment and study of a mathematical bound on generalization risk.
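The empirical estimate described above can be illustrated with a minimal sketch. The function names and toy data here are hypothetical, not part of the project's deliverable: the generalization gap is simply the blind-test error minus the training error.

```python
def error_rate(predictions, labels):
    """Fraction of misclassified examples."""
    return sum(p != y for p, y in zip(predictions, labels)) / len(labels)

def generalization_gap(train_preds, train_labels, blind_preds, blind_labels):
    """Empirical estimate of generalization risk:
    error on the blind test sample minus error on the training sample."""
    return (error_rate(blind_preds, blind_labels)
            - error_rate(train_preds, train_labels))

# Toy example: a model that fits its training data perfectly
# but misclassifies half of the blind test sample.
train_preds, train_labels = [0, 1, 1, 0], [0, 1, 1, 0]   # 0% training error
blind_preds, blind_labels = [0, 1, 0, 0], [0, 1, 1, 1]   # 50% blind error
print(generalization_gap(train_preds, train_labels,
                         blind_preds, blind_labels))      # prints 0.5
```

The limitation the abstract notes is visible here: this estimate is only as representative as the blind sample itself, which is exactly what a mathematical bound would avoid.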
In the norm-based approach, we have found that the interplay between the frequency behavior of the target function and the NN depth determines its approximation error. In this project we will develop a toolbox that returns an a priori bound on generalization risk when provided a DCNN topology and example data or some functional description. (This bound will be independent of training on the sample.) The bound will be applied to ResNet101 and similar DCNNs on tasks in computer vision and astrophysics, and the impact of the bound on astrophysics uncertainty analyses will be evaluated. In particular, we will evaluate this uncertainty for classification and regression tasks on images of strong lenses and galaxy mergers. This toolbox will allow improved uncertainty estimation in the domains where DCNNs are used, such as astrophysics, particle physics, and computer vision.
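As context for the norm-based approach, one family of a priori generalization bounds in the literature scales with the product of the layers' spectral norms, which depends only on the network's weights and topology, not on any test sample. The sketch below computes that capacity measure for a stack of hypothetical weight matrices; it illustrates the style of bound, not the specific bound developed in this project.

```python
import numpy as np

def spectral_norm_capacity(weight_matrices):
    """Product of per-layer spectral norms (largest singular values),
    a capacity measure that appears in norm-based generalization bounds:
    a larger product corresponds to a looser bound."""
    capacity = 1.0
    for W in weight_matrices:
        capacity *= np.linalg.norm(W, ord=2)  # largest singular value of W
    return capacity

# Hypothetical three-layer network with small random weights.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) * 0.1 for _ in range(3)]
print(spectral_norm_capacity(layers))
```

Because the measure is computed from the weights alone, it can be evaluated before any blind-test data is collected, which is the sense in which such bounds are "a priori."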

* Information listed above is at the time of submission. *
