
Resisting Adversarial Attacks with Quantum Machine Learning

Description:

TECHNOLOGY AREA(S): Info Systems 

OBJECTIVE: Develop methods for machine learning that leverage quantum information science for increased robustness to adversarial attacks. 

DESCRIPTION: Machine learning has demonstrated near-human-level performance on a variety of tasks in recent years; however, it is also proving brittle and susceptible to adversarial attacks. Many examples in the literature have demonstrated (1) that small perturbations applied to inputs can yield large changes in output, and (2) that information about a model can be leaked, potentially exposing sensitive data used to train it. Recent work has shown that adversarial examples can be detected with statistical tests by analyzing them against the underlying statistical distribution of the original dataset. Thermometer encodings of input vectors have been shown to improve robustness against the strongest known white-box attacks, and Wasserstein generative adversarial networks (GANs) have been used to "denoise" input images to further reduce the risk of adversarial attacks. The commonality among these approaches is that they represent the data as distributions rather than as single values. Casting the training of GANs in the context of quantum machine learning may provide an additional benefit, since quantum models natively represent and sample from probability distributions; both ideas are sketched below.
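The thermometer defense can be stated concretely: each input value is discretized into a vector of threshold indicators, so a small input perturbation can flip at most a bit or two of the encoding instead of moving a continuous value smoothly through the classifier. Below is a minimal sketch in Python/NumPy; the function name, the choice of 16 levels, and the [0, 1] input scaling are illustrative assumptions, not details from the Buckman et al. implementation.

    import numpy as np

    def thermometer_encode(x, levels=16):
        """Thermometer-encode values in [0, 1] into `levels` binary channels.

        Each scalar x becomes a vector t where t[i] = 1 if x > i / levels.
        Nearby values share most bits, but the encoding is discrete, which
        blunts small gradient-based perturbations.
        """
        x = np.asarray(x, dtype=np.float64)
        thresholds = np.arange(levels) / levels       # 0, 1/k, ..., (k-1)/k
        # Broadcast: output shape is x.shape + (levels,)
        return (x[..., None] > thresholds).astype(np.float32)

    # Example: a 0.01 input perturbation flips at most one encoding bit.
    clean = thermometer_encode(0.50)
    perturbed = thermometer_encode(0.51)
    print(int(np.abs(clean - perturbed).sum()))  # prints 1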
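The quantum angle can likewise be made concrete. Measuring a parameterized quantum circuit samples bit strings according to the Born rule, so the circuit itself is a native model of a probability distribution (sometimes called a Born machine), which is the property a quantum GAN would exploit. The sketch below uses the open-source PennyLane simulator; the library choice, the two-qubit circuit shape, and the weights are assumptions for illustration, not a mandated toolkit or architecture.

    import pennylane as qml
    import numpy as np

    # Two-qubit simulator; measuring the circuit samples from a
    # distribution over the basis states |00>, |01>, |10>, |11>.
    dev = qml.device("default.qubit", wires=2)

    @qml.qnode(dev)
    def born_machine(weights):
        # Parameterized rotations followed by entanglement; training a
        # quantum GAN would adjust `weights` so this distribution matches
        # (or denoises toward) the data distribution.
        qml.RY(weights[0], wires=0)
        qml.RY(weights[1], wires=1)
        qml.CNOT(wires=[0, 1])
        return qml.probs(wires=[0, 1])

    print(born_machine(np.array([0.3, 1.2])))  # four probabilities summing to 1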

PHASE I: Design and develop a quantum machine learning framework that demonstrates robustness against adversarial attacks. Perform and document a preliminary analysis that quantifies the degree of robustness against adversarial attacks, and deliver a report with a full-scale test plan, including the test metrics to be used during Phase II as part of a complete assessment. Deliverables will include a system architecture design; a block diagram identifying data flows and interfaces; and a technical report describing the approach, evaluation results, and proposed future research directions. 

PHASE II: Develop a quantum machine learning framework robust to adversarial attacks based on the lessons learned in Phase I. Test and fully characterize the performance of the framework with respect to multiple types of adversarial attacks. 

PHASE III: Machine learning, particularly computer vision, has become an increasingly important component of many systems within DoD, and techniques that ensure robustness will only grow in importance in the coming years. Commercial markets offer many applications where the same degree of robustness is desired: preventing deception of computer vision algorithms in autonomous vehicles, securing individual medical records aggregated in health care analytics tools, and detecting fake news within social media streams. 

REFERENCES: 

1: Yuan, X., et al., "Adversarial Examples: Attacks and Defenses for Deep Learning," 5 Jan 2018, arXiv:1712.07107v2.

2: Grosse, K., et al., "On the (Statistical) Detection of Adversarial Examples," 17 Oct 2017, arXiv:1702.06280v2.

3: Buckman, J., et al., "Thermometer Encoding: One Hot Way to Resist Adversarial Examples," to appear at ICLR 2018. Pre-publication available at: https://openreview.net/pdf?id=S18Su--CW

4: Samangouei, P., et al., "Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models," to appear at ICLR 2018. Pre-publication available at: https://openreview.net/pdf?id=BkJ3ibb0-

5: Ciliberto, C., et al., "Quantum machine learning: a classical perspective," 13 Feb 2018, arXiv:1707.08561v3.

6: Wiebe, N., et al., "Hardening Quantum Machine Learning Against Adversaries," 17 Nov 2017, arXiv:1711.06652v1.

KEYWORDS: Adversarial Attacks, Quantum Machine Learning, Neural Networks, Generative Adversarial Networks, Quantum Information Science 

CONTACT(S): 

sbir@sco.mil 
