Adaptive camera to display mappings using computer vision

Award Information
Agency: Department of Homeland Security
Branch: N/A
Contract: NBCHC060140
Agency Tracking Number: 613017
Amount: $99,953.00
Phase: Phase I
Program: STTR
Solicitation Topic Code: N/A
Solicitation Number: N/A
Timeline
Solicitation Year: N/A
Award Year: 2006
Award Start Date (Proposal Award Date): N/A
Award End Date (Contract End Date): N/A
Small Business Information
465 Fairchild Drive, Suite 226
Mountain View, CA 94043
United States
DUNS: N/A
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
 Ismail Haritaoglu
 President
 (650) 641-7111
 ismail.haritaoglu@polarrain.com
Business Contact
 Ismail Haritaoglu
Title: President
Phone: (650) 641-7111
Email: ismail.haritaoglu@polarrain.com
Research Institution
 University of Maryland
 Larry Davis
 
Institute for Advanced Computer Studies
College Park, MD 20742
United States

 (301) 405-4526
 Nonprofit College or University
Abstract

The video surveillance industry is experiencing dramatic change with the move from analog to digital video. Command centers need coordinated viewing of multiple camera feeds at one time, along with the ability to switch automatically between feeds and display relevant patterns. Conventional security control rooms include a bank of monitors connected through a switch to an array of security cameras. Fixed protocols cycle the cameras through the monitors, with provisions for human override. Advances in display technology and high-speed networks motivate us to propose a radically new model of the human/display interface in the control room. We propose general techniques, based on computer vision algorithms for measuring the saliency of surveillance videos, for mapping video cameras to display space (resulting in variable amounts of display space per camera), and for visualizing the information in each video stream. The computer vision techniques involve statistical characterization of patterns of movement to develop measures of movement saliency (to control the camera-to-display-space mapping), and perceptual modeling of video content to drive the visualization of an individual video stream. We describe a pilot user study to evaluate these ideas.
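The abstract's core idea, scoring each feed's movement saliency and allotting display space in proportion to it, can be sketched roughly as follows. This is a minimal illustration, not the proposal's actual method: the frame-differencing score and the `floor` minimum-share parameter are assumptions introduced here for clarity.

```python
import numpy as np

def motion_saliency(frames):
    # Crude movement-saliency score (an assumed stand-in for the proposal's
    # statistical characterization): mean absolute frame-to-frame difference.
    diffs = [np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).mean()
             for i in range(len(frames) - 1)]
    return float(np.mean(diffs)) if diffs else 0.0

def allocate_display(saliencies, total_area, floor=0.05):
    # Split the display area among cameras in proportion to their saliency,
    # while guaranteeing each feed a small minimum share (`floor` of the total)
    # so that no camera disappears from the control-room display entirely.
    n = len(saliencies)
    base = floor * total_area
    remaining = total_area - n * base
    total = sum(saliencies)
    if total == 0:
        return [total_area / n] * n  # no motion anywhere: split evenly
    return [base + remaining * s / total for s in saliencies]
```

For example, three cameras scoring 0.0, 2.0, and 6.0 on a 100-unit display would receive roughly 5, 26.25, and 68.75 units, so the quiet feed stays visible while the active one dominates the wall.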

* Information listed above is at the time of submission. *
