
Cloud computing architecture for next generation video

Description:

TECHNOLOGY AREA(S): Info Systems 

OBJECTIVE: Engineer solutions that address enterprise cloud adoption and distributed computer processing challenges for current and next-generation video content, supporting mixed media formats and a wide range of human-viewer and machine applications. 

DESCRIPTION: Consistent with recent direction from the Deputy Secretary of Defense, "Accelerating Enterprise Cloud Adoption" (issued 2017/11/07), the National Geospatial-Intelligence Agency seeks next-generation technologies to integrate full motion video (FMV) and other media into a scalable, distributed framework with ingest, storage, and streaming content delivery capabilities. The technical solution should enable deep integration with distributed processing applications, including processing by both human analysts and emerging machine automation technologies, while maintaining link integrity between the data source(s) and processed results. The technology should be agnostic to media formats in order to support both legacy and emerging media formats, as well as voice, text, non-video sensor data, and other annotations. The approach should capture the benefits of modern commercial networks, such as Netflix-style storage, streaming, and distribution, while avoiding their pitfalls when applied to the government solution space. Example features that go beyond consumer requirements include: frame-accurate scrubbing and annotation; distributed storage of a single asset based on network topology constraints; delayed data fusion, allowing multiple bitrates and alternative data types to auto-fuse at any time; globally unique marker points, independent of asset start time, that work with partial assets; and parallel distributed computer processing for live or video-on-demand data streams. 
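One of the features above, globally unique marker points independent of asset start time, can be illustrated with a small sketch. The snippet below is a hypothetical illustration only (the class and field names are assumptions, not part of this topic): each marker is keyed to an absolute capture timestamp rather than an offset from a clip's start, so the same marker resolves to the correct frame in any partial clip or bitrate variant of the asset.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Marker:
    """A globally unique marker keyed to absolute capture time
    (microseconds since epoch), independent of any clip's start time."""
    marker_id: str
    capture_time_us: int

@dataclass
class Clip:
    """A partial asset: a contiguous slice of the original capture."""
    start_time_us: int   # absolute capture time of the clip's first frame
    frame_rate: float    # frames per second
    frame_count: int

    def frame_for(self, marker: Marker):
        """Resolve a marker to a frame index within this clip, or None
        if the marker falls outside the portion this clip holds."""
        offset_us = marker.capture_time_us - self.start_time_us
        frame = round(offset_us * self.frame_rate / 1_000_000)
        if 0 <= frame < self.frame_count:
            return frame
        return None

# The same marker resolves correctly in two different partial assets:
marker = Marker("obj-123", capture_time_us=5_000_000)
full = Clip(start_time_us=0, frame_rate=30.0, frame_count=900)            # 30 s clip
partial = Clip(start_time_us=4_000_000, frame_rate=30.0, frame_count=300) # 10 s slice
print(full.frame_for(marker))     # frame 150
print(partial.frame_for(marker))  # frame 30
```

Because resolution happens at read time against each clip's own start, the marker remains valid even when only part of the asset has been distributed, and it applies equally to every bitrate rendition of the same capture.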

PHASE I: Provide a report containing a specific technical and architectural approach that solves the challenges described in this topic. The report should include specific examples of how the approach matches the feature set of the most recent advanced streaming media networks, as well as specifics on how it will solve the other challenges described in this topic. 

PHASE II: Implement the proposed approach and run real-world simulations demonstrating each of the features. The solution must show how it works in hybrid networking models, such as when distribution of a real-time asset from one location to another is delayed, allowing only parts of the asset, such as a low-bitrate version, to be distributed and the other parts to be fused later. Show how distributed processing can take place on one element, such as the audio or metadata tracks, while the resulting data is automatically fused to the other elements of the asset. Provide simulations showing how parallel processing would enable algorithms to process a single asset across a compute cluster that is accessible over a network and not directly connected to the asset storage system. 
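The delayed-fusion behavior required above can be sketched minimally. This is an illustration under assumed names (AssetStore and its methods are not from the topic): elements of a single asset arrive and are processed independently, possibly at different times and locations, and each auto-fuses to the asset record under a shared asset ID whenever it becomes available.

```python
from collections import defaultdict

class AssetStore:
    """Minimal sketch of delayed data fusion: elements of one asset
    (video at various bitrates, audio, metadata, processed results)
    arrive at different times and fuse under a shared asset ID."""
    def __init__(self):
        self._assets = defaultdict(dict)  # asset_id -> {element_name: payload}

    def fuse(self, asset_id, element_name, payload):
        """Attach an element to the asset whenever it arrives."""
        self._assets[asset_id][element_name] = payload

    def elements(self, asset_id):
        """List the element names fused to the asset so far."""
        return sorted(self._assets[asset_id])

store = AssetStore()
# A low-bitrate proxy is distributed first over a constrained link ...
store.fuse("asset-42", "video_480p", b"...")
# ... the audio track is processed remotely and fused later ...
store.fuse("asset-42", "audio", b"...")
# ... and the full-bitrate rendition and derived results fuse later still.
store.fuse("asset-42", "video_1080p", b"...")
store.fuse("asset-42", "detections", [{"frame": 150, "label": "vehicle"}])
print(store.elements("asset-42"))
# ['audio', 'detections', 'video_1080p', 'video_480p']
```

Keying every element to the asset ID rather than to a monolithic file is what allows processing on one element (e.g., the audio track) to proceed on a remote cluster while its results fuse back to the same asset record automatically.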

PHASE III: Network and commercial video content providers, including coverage of sports, special events, and “breaking news” requiring remote sensing are examples of cases where video capture can use dynamic collection strategies. As consumer use of “video on demand” increases and local storage is increasingly replaced by cloud services, commercial video content distribution can and will depend increasingly on efficient mechanisms for transmitting and caching video data, and can make use of technologies that are inspired by the defense problem space. 

REFERENCES: 

1: Zomaya, Albert Y. (2014). "Advanced Content Delivery, Streaming and Cloud Services". Wiley

2: "Adaptive bitrate streaming". In Wikipedia. en.wikipedia.org/wiki/Adaptive_bitrate_streaming

KEYWORDS: FMV, Streaming, Sensor Data, Big Data, Data Fusion, Distributed Computing, Adaptive Streaming 

CONTACT(S): 

John Harvie 

(571) 557-3079 

John.B.Harvie@nga.mil 
