Computer Learning Obfuscating Adversarial Kit (CLOAK)
Title: Senior Software Engineer
Phone: (703) 414-5022
Phone: (703) 682-1532
In 2013, Szegedy et al. identified a series of “intriguing properties” of neural networks. One property introduced in the paper, “adversarial examples,” describes the instability of neural network classification when a small perturbation is added to the input. Specifically, the authors found that a small, strategically chosen perturbation to an input image could cause a classifier to misclassify an object. Given the increasing presence of computer vision and its impact on human safety, the area of adversarial imagery has garnered considerable attention. Researchers have developed efficient techniques for generating adversarial candidates and have applied them to problems with obvious safety concerns, such as road sign classification. On the battlefield, the ramifications are no less serious. In a world of automated tracking and identification, the cost of misclassification can be catastrophic. Further research in adversarial attacks can be leveraged by the Navy to create “digital camouflage” capable of cloaking Navy assets from adversaries. Under this effort, DECISIVE ANALYTICS Corporation (DAC) seeks to advance the state of the art in adversarial imagery through physical-world manipulation and data synthesis to provide a portable camouflaging kit to the Navy. To accomplish this goal, DAC proposes the Computer Learning Obfuscating Adversarial Kit (CLOAK) system.
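The core idea behind adversarial examples can be illustrated with the fast gradient sign method (FGSM), a standard technique from the follow-on literature for generating such perturbations. The sketch below is illustrative only: the linear classifier, its weights, and the epsilon value are assumptions chosen for the demonstration, not part of the CLOAK system. It shows how a small L-infinity perturbation, aligned with the sign of the loss gradient, flips a classifier's decision.

```python
import numpy as np

def classify(w, b, x):
    """Binary decision of a toy linear classifier: 1 if w.x + b > 0 else 0."""
    return int(np.dot(w, x) + b > 0)

def fgsm(w, x, label, eps):
    """FGSM step: move x by eps in the direction that increases the loss.

    For a linear score s = w.x + b with signed label t in {-1, +1}, the
    gradient of the logistic loss w.r.t. x points along -t * w, so the
    perturbed input is x + eps * sign(-t * w).
    """
    t = 1 if label == 1 else -1
    return x + eps * np.sign(-t * w)

# Illustrative weights and input (assumptions, not a trained model).
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([1.0, 1.0, 1.0])

y = classify(w, b, x)            # clean prediction
x_adv = fgsm(w, x, y, eps=0.45)  # each pixel moves by at most 0.45
y_adv = classify(w, b, x_adv)    # prediction on the perturbed input
print(y, y_adv)                  # the small perturbation flips the class
```

In deep networks the same attack applies, with the gradient of the loss with respect to the input image computed by backpropagation; the perturbation is often imperceptible to a human observer even though it changes the predicted class.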
* Information listed above is at the time of submission. *