Computational Methods for Dynamic Scene Reconstruction
Title: Sr. Member Technical Staff
Phone: (781) 503-3287 | Email: tom.pollard@STResearch.com
Phone: (781) 305-4054 | Email: m.joyce@STResearch.com
Contact: Takeia Bradley
Phone: (301) 405-8061
Type: Nonprofit College or University
Mobile video sensors have become cheap and easy to integrate into cell phones, UAVs, body cameras, and security cameras. The possibility of dense networks of video sensors enables a wide range of security-related applications for ONR and other government agencies, including seaport monitoring, battlefield situational awareness, and forensic crime-scene analysis. The density and overlap of sensors make it possible to track all activity in a large area of interest, but also generate more data than a human monitor can process. Algorithms for automated detection and tracking of vehicles and people in video have advanced greatly in recent years, but they are generally usable on only one video source at a time. Systems & Technology Research has teamed with the University of Maryland and Vision Systems Inc. to develop a 4D semantic reconstruction framework that integrates geometric and human/vehicle detection information derived from multiple simultaneous video streams into a common time-varying 3D representation. This representation contains dense 3D geometry for both static and moving objects as well as a high-level semantic understanding of the scene, i.e., the position, orientation, and size of all moving objects.
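To make the described representation concrete, a minimal sketch of such a time-varying 3D scene structure might look like the following. All class and field names here are illustrative assumptions, not part of the actual framework: the abstract specifies only that the representation holds dense static/moving geometry plus each moving object's position, orientation, and size over time.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SceneObject:
    """One tracked moving object (person or vehicle) at a single time step.

    Fields mirror the semantic attributes named in the abstract:
    position, orientation, and size.
    """
    object_id: int
    label: str                              # e.g. "person" or "vehicle"
    position: Tuple[float, float, float]    # world coordinates (hypothetical units: meters)
    orientation: float                      # heading angle in radians
    size: Tuple[float, float, float]        # bounding-box extents

@dataclass
class Scene4D:
    """Time-varying 3D scene: static geometry plus per-timestamp object states."""
    static_points: List[Tuple[float, float, float]] = field(default_factory=list)
    frames: Dict[float, List[SceneObject]] = field(default_factory=dict)

    def add_detection(self, t: float, obj: SceneObject) -> None:
        # Detections fused from multiple video streams land in a common timeline.
        self.frames.setdefault(t, []).append(obj)

    def objects_at(self, t: float) -> List[SceneObject]:
        # Query the semantic state of the scene at one instant.
        return self.frames.get(t, [])

# Usage: two cameras contribute detections of the same scene at t = 0.5 s.
scene = Scene4D()
scene.add_detection(0.5, SceneObject(1, "vehicle", (10.0, 2.0, 0.0), 1.57, (4.5, 1.8, 1.5)))
scene.add_detection(0.5, SceneObject(2, "person", (12.0, 3.5, 0.0), 0.0, (0.5, 0.5, 1.8)))
```

The key design point this sketch illustrates is that detections from many simultaneous streams are keyed by time into one shared world frame, so a downstream consumer queries the scene rather than any individual camera.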
* Information listed above is at the time of submission. *