
Analyst-centric multimedia analysis scattered system (AMASS)

Award Information
Agency: Department of Defense
Branch: N/A
Contract: N00014-15-P-1096
Agency Tracking Number: N142-122-0024
Amount: $79,973.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: N142-122
Solicitation Number: 2014.2
Timeline
Solicitation Year: 2014
Award Year: 2015
Award Start Date (Proposal Award Date): 2014-10-27
Award End Date (Contract End Date): 2015-08-27
Small Business Information
5266 Hollister Avenue, Suite 229
Santa Barbara, CA 93111
United States
DUNS: 000000000
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
Name: Jelena Tesic
Title: Research Staff Member
Phone: (646) 379-6042
Email: tesic@mayachitra.com
Business Contact
Name: B.S. Manjunath
Title: Technical Point of Contact
Phone: (805) 448-8227
Email: manj@mayachitra.com
Research Institution
N/A
Abstract

Intelligence cues in multimedia data are the result of multiple sources: the inherent knowledge of multiple end users (analysts), feature-rich digital data content (co-occurrence of specific behaviors and scenes in video, audio, and other sensors), and intelligence context (where, when, why, how). Analysts need to fully access video and acoustic data content (when multiple constraints are present), formulate complex queries across feature modalities, and visualize patterns from retrieved results in multiple contextual spaces. Doing this in real time requires a sophisticated back end: storage, common representation, search, annotation, and tagging schemes to manage the rich and diverse information contained in sensor feeds (video metadata, acoustic files, analyst comments, spatial and temporal localization, context). Doing it accurately requires sophisticated data retrieval that relies on the fusion of information from multiple sources. Analysts expect the system to produce time-critical actionable intelligence and insights beyond simple querying. Single-domain techniques are not applicable here, as they solve only part of the problem (high-dimensional descriptor search for video and audio content, or text search for transcripts). To do this effectively, the project will explore deep learning techniques to capture the underlying dynamics of useful insights. The project will develop an end-to-end solution that supports (a) back-end development and integration of a wide range of video and audio descriptions at different semantic levels through a unified representation of content description, and inference of the stored semantic knowledge at retrieval time; (b) fast and versatile access (in terms of security and bandwidth) and addition of rich semantic descriptions in collaborative environments (back-end and front-end feature annotation and tagging); and (c) sequencing and discovery of information contained in distributed networked sensor data files at the frame level.
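The abstract describes unifying video and audio descriptors into a common representation so that retrieval can fuse information across modalities. As a minimal illustrative sketch (not the project's actual design), one way such fusion-based ranking could look is weighted concatenation of L2-normalized per-clip descriptors followed by cosine-similarity ranking; the function names, modality weights, and descriptor dimensions below are assumptions chosen for the example.

    # Illustrative sketch: fuse per-clip video and audio descriptors into one
    # normalized vector and rank stored clips against a multimodal query.
    import numpy as np

    def fuse_descriptors(video_feat, audio_feat, w_video=0.5, w_audio=0.5):
        """L2-normalize each modality, then concatenate with modality weights."""
        v = video_feat / (np.linalg.norm(video_feat) + 1e-12)
        a = audio_feat / (np.linalg.norm(audio_feat) + 1e-12)
        return np.concatenate([w_video * v, w_audio * a])

    def rank_clips(query, index):
        """Return clip ids sorted by cosine similarity to the fused query vector."""
        q = query / (np.linalg.norm(query) + 1e-12)
        scores = {cid: float(np.dot(q, vec / (np.linalg.norm(vec) + 1e-12)))
                  for cid, vec in index.items()}
        return sorted(scores, key=scores.get, reverse=True)

    # Toy index of three clips with 128-d video and 64-d audio descriptors.
    rng = np.random.default_rng(0)
    index = {f"clip_{i}": fuse_descriptors(rng.normal(size=128), rng.normal(size=64))
             for i in range(3)}
    query = fuse_descriptors(rng.normal(size=128), rng.normal(size=64))
    print(rank_clips(query, index))

A real system of the kind the abstract outlines would replace the random vectors with learned (e.g., deep) descriptors and back the index with a scalable store, but the weighted-fusion-then-rank pattern is the core idea being illustrated.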

* Information listed above is at the time of submission. *
