Secure Private Neural Network (SPNN)/Charles River Analytics Inc.
Title: Scientist
Phone: (617) 491-3474
Email: jdruce@cra.com
Phone: (617) 491-3474
Email: yfuller@cra.com
Deep Neural Networks (DNNs) are becoming widely used in the DoD for image classification, but recent research has shown that DNNs are vulnerable to adversarial attacks. An adversary who can monitor the DNN training and classification processes can learn attributes of the training data and of the DNN itself. With this information, the adversary can gain valuable insight into the potentially sensitive data used to train the DNN (e.g., identify a theater of interest from the training set images) and can even craft images designed to fool the DNN into misclassification. To address these concerns, we propose to design and demonstrate the feasibility of a Secure Private Neural Network (SPNN), a secure neural network that preserves the privacy of training and testing data via efficient end-to-end homomorphic encryption (HE), while providing additional defense against black-box adversarial and membership inference attacks through intelligent network stochasticity and training set confounding. HE enables the DNN to perform training and classification on an untrusted platform while the data remain encrypted, preserving their privacy. The additional obfuscation defenses thwart black-box attacks by adversaries who gain unencrypted access to the DNN through subversion or misuse of the client and conduct chosen-plaintext attacks.
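To illustrate the core idea of evaluating a network layer on encrypted data, the following sketch uses a toy Paillier cryptosystem, which is additively homomorphic: multiplying ciphertexts adds the underlying plaintexts, and raising a ciphertext to a power scales the plaintext. This is not the SPNN scheme itself, only a minimal illustration of HE-based inference; the tiny primes, the single linear neuron, and the integer weights are all assumptions for readability, and real deployments would use a production HE library with secure parameters.

```python
import random
from math import gcd

# Toy Paillier keypair -- tiny primes chosen for illustration only (insecure).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                          # valid because g = n + 1

def encrypt(m):
    """Encrypt integer m (0 <= m < n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Decrypt ciphertext c with the private key (lam, mu)."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Homomorphic evaluation of one linear neuron, y = w1*x1 + w2*x2,
# on encrypted inputs: the untrusted server never sees x1 or x2.
x1, x2 = 5, 7          # hypothetical private inputs (client side)
w1, w2 = 3, 2          # hypothetical integer model weights (server side)
c1, c2 = encrypt(x1), encrypt(x2)
c_out = (pow(c1, w1, n2) * pow(c2, w2, n2)) % n2  # Enc(w1*x1 + w2*x2)
assert decrypt(c_out) == w1 * x1 + w2 * x2        # client decrypts: 29
```

Note the design constraint this exposes: additive HE handles the linear parts of a network directly, but nonlinear activations require either fully homomorphic schemes or polynomial approximations, which is part of why efficient end-to-end HE for DNNs is a research problem rather than a drop-in capability.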
* Information listed above is at the time of submission. *