
Algorithm-Based People Detection and Threat Determination from Passive Infrared and Visible Cameras

Description:

TECHNOLOGY AREA(S): Electronics

OBJECTIVE: The U.S. Army requests algorithms to automatically detect people who pose a threat to Warfighters or civilians from real-time images acquired by passive cameras operating in the mid-wave infrared, long-wave infrared, and visible bands. Indicators of a threat situation are complex and may include scenarios where a person carries a visually identifiable weapon or military equipment, or a person abandons a bag in a crowded space. The proposed algorithm shall detect, recognize, and identify the threat level of personnel from sensor images.

DESCRIPTION: In combat, a Soldier needs to visually scan the battlefield quickly and detect, recognize, and identify threats. The accuracy and speed of detection, recognition, and identification are of critical importance to the Soldier’s ability to make tactical decisions. Computer vision algorithms can perform threat detection from visual data with speed unmatched by a human, and can therefore greatly aid the Warfighter on the battlefield by cueing the Soldier. In a military setting, optical or infrared cameras can be worn by a Soldier or mounted on a vehicle to provide real-time data. In the civilian world, there exists an urgent need to detect persons with an intent to harm from surveillance videos to protect public safety. During recent mass shooting events, such as at Marjory Stoneman Douglas High School in Parkland, FL in Feb. 2018 and the Tree of Life Synagogue in Pittsburgh, PA in Oct. 2018, the shooter in each event carried at least one visually identifiable weapon as he left his vehicle and walked to the building where the shooting took place. Algorithms designed to detect and recognize a person carrying a weapon in live surveillance video could mean earlier notification of the threat situation to the authorities, potentially stopping the person before the crime can be carried out. Detection refers to an algorithm’s ability to distinguish the object of interest from background elements of the scene. At large distances the object may occupy only a few pixels, which may not provide enough information to confirm what the object is. Recognition refers to the algorithm’s ability to determine the object’s class; in this case, class refers to person or non-person labels. Identification refers to an algorithm’s ability to differentiate between objects within a class, for example, whether the person is a Soldier or a civilian, whether the person is armed or unarmed, and whether an armed person is carrying a large or small weapon.
In addition, algorithms can provide contextual information about a person: whether the person is with a group, and whether the person is waiting idly, walking, or running, and at what speed and in what direction. All of these elements inform a Soldier of the threat level of the detected object. Algorithms developed for personnel detection and threat determination should address the complexity of the scenes in real-life scenarios. Threat targets may be occluded partially or fully from the sensor’s field of view, intermittently in time, by clutter sources such as trees, buildings, bright light, or moving crowds. Target visibility may also vary based on environmental factors such as lighting and time of day. Training and testing data shall be collected to mimic scenes in which a person acts dangerously according to threat definitions. The algorithms shall provide accurate detection and low false positives on targets who fit these threat definitions.
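The detect, recognize, and identify hierarchy described above can be thought of as a staged decision pipeline, where each stage refines the label only when the previous stage succeeds. The sketch below illustrates this structure; all field names and thresholds are illustrative placeholders standing in for trained models, not anything specified by the topic:

```python
def classify_track(obj):
    """Run a candidate object through detect -> recognize -> identify
    stages and return the most specific label available.

    `obj` is a dict of illustrative per-track features; each threshold
    below is a placeholder for a trained model's decision boundary.
    """
    # Detection: is there enough signal to separate the object from clutter?
    if obj["pixels_on_target"] < 10:
        return "no detection"
    # Recognition: person vs. non-person class decision.
    if not obj["person_like_shape"]:
        return "detected: non-person"
    # Identification: differentiate within the person class.
    if obj["carrying_weapon"]:
        size = "large" if obj["weapon_pixels"] > 50 else "small"
        return f"person: armed ({size} weapon)"
    return "person: unarmed"

# Illustrative candidate at close range carrying a rifle-sized object.
candidate = {"pixels_on_target": 400, "person_like_shape": True,
             "carrying_weapon": True, "weapon_pixels": 120}
print(classify_track(candidate))  # person: armed (large weapon)
```

Staging the decision this way mirrors the solicitation's definitions: a distant object that occupies too few pixels stops at the detection stage, and finer labels (armed vs. unarmed, weapon size) are only attempted once a person-class recognition is made.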

PHASE I: The performer shall conduct a trade study of existing algorithms for personnel detection and threat determination using passive sensors. The performer shall collect preliminary data in at least two threat scenarios, one urban and one natural. These data shall be used to demonstrate the algorithms and show a capability to distinguish threats (i.e., armed individuals) from non-threats.

PHASE II: The performer shall further develop algorithms that detect, recognize, and identify personnel who pose a threat. These algorithms shall be applied to additional scenario data collected by the performer and the Government. The new scenarios shall have greater complexity, occlusion, and clutter. These scenarios should include realistic urban scenes containing urban objects and street-level activities typical of this environment, e.g., unarmed civilians and commercial vehicles. The rural environments will assume a larger field of view with fewer pixels on target; implicit in this environment is vegetation ranging in scale from grass and shrubs up through forests. In both scenarios, the scenes shall include static and dynamic clutter representing bystander human and non-human activity. The performer shall quantify detection results in terms of detection probability, false-positive probability, and confusion matrices.
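The Phase II metrics named above can be computed directly from paired per-frame ground truth and detector output. A minimal sketch in pure Python; the label set ("armed", "unarmed", "none") and the sample data are illustrative assumptions, not part of the topic:

```python
from collections import Counter

def detection_metrics(ground_truth, predictions):
    """Compute detection probability (P_d), false-positive probability
    (P_fa), and a confusion matrix from paired per-frame labels.

    ground_truth / predictions: parallel lists of class labels, where
    "none" means no target present / no detection reported.
    """
    # Detection probability: fraction of true targets that were detected at all.
    targets = [(g, p) for g, p in zip(ground_truth, predictions) if g != "none"]
    detected = sum(1 for g, p in targets if p != "none")
    p_d = detected / len(targets) if targets else 0.0

    # False-positive probability: fraction of target-free frames with a detection.
    clutter = [(g, p) for g, p in zip(ground_truth, predictions) if g == "none"]
    false_alarms = sum(1 for g, p in clutter if p != "none")
    p_fa = false_alarms / len(clutter) if clutter else 0.0

    # Confusion matrix: counts of every (truth, prediction) label pair.
    confusion = Counter(zip(ground_truth, predictions))
    return p_d, p_fa, confusion

# Illustrative frame-level labels.
truth = ["armed", "unarmed", "none", "armed", "none", "unarmed"]
preds = ["armed", "armed",   "none", "none",  "armed", "unarmed"]
p_d, p_fa, cm = detection_metrics(truth, preds)
print(p_d)   # 3 of 4 true targets detected -> 0.75
print(p_fa)  # 1 of 2 target-free frames flagged -> 0.5
print(cm[("unarmed", "armed")])  # one unarmed person misidentified as armed
```

Note that the confusion matrix captures identification errors (e.g., unarmed labeled as armed) that the scalar P_d and P_fa values alone do not distinguish, which is why the topic asks for all three.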

PHASE III: Further develop the demonstrator algorithms to meet the detection performance targets set by the Army. Demonstrate real-time feasibility of the demonstrator algorithms. Field the demonstrator algorithms on a system of cameras, implementing them on field hardware in scenarios similar to those previously described.

REFERENCES: 

1: Mahajan, R., & Padha, D. (2019, May). Detection of Concealed Weapons Using Image Processing Techniques: A Review. In 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC) (pp. 375-378). IEEE.

KEYWORDS: Target Detection And Recognition, Image Processing, Public Security, Machine Learning, Deep Learning, Sensor Fusion, Information Processing 
