Generative Immersive Scenario Testbed (GIST)
Phone: (415) 601-7021
Phone: (415) 377-8051
An artificial intelligence/machine learning approach is applied to automatically and rapidly generate immersive training material from 2D video and image capture. Several small proofs of concept produced during Phase I validate a machine learning neural network solution for transforming existing 2D video footage into 3D (360°) immersive content. Approved human-subjects research testing will be applied to the full prototype build in Phase II. While many disparate algorithms claim to modify 2D images and video for a 3D experience, the end-to-end algorithmic pipeline demonstrated by the Phase I proof of concept provides a complete path to automating the conversion of many types of existing 2D video for almost immediate use, giving warfighters, operators, first responders, and others in private enterprise a way to rapidly create immersive training that has the potential to reduce conflict and save lives. Pervasive, rapid access to immersive training material for adults working in hazardous environments would fundamentally change the response to, and success rate of, missions and daily routines that place American troops and workers in potentially lethal conditions. Rapid, immersive, experiential contact with such dangers, without actual harm, has already proven a gateway to stronger, safer, more successful outcomes from rapidly delivered training material. The proposed algorithm would eliminate the roadblocks of hand-created immersive footage, including cost, development time, and limited realism, allowing a wealth of 3D material to be rapidly assembled into relevant training scenarios. And when such 3D immersive training reaches schools and universities (the latter may already be introducing such classroom training in certain fields), the landscape of learning and conceptualization may change forever.
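The abstract does not disclose the pipeline's internals, but one common approach to turning 2D footage into 360° viewable content is to estimate per-pixel depth, back-project pixels into 3D space, and reproject the resulting points onto an equirectangular (longitude/latitude) panorama. The sketch below illustrates only those two geometric steps with NumPy; the function names, the pinhole-camera field of view, and the output grid size are all hypothetical assumptions, not details from the Phase I work, and the depth map itself would come from a separate (unspecified) neural network.

```python
import numpy as np

def unproject_to_points(depth, fov_deg=60.0):
    """Back-project a per-pixel depth map into camera-space 3D points
    using a simple pinhole model (hypothetical parameters)."""
    h, w = depth.shape
    # Focal length in pixels from the assumed horizontal field of view.
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    xs, ys = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    z = depth
    x = xs * z / f
    y = ys * z / f
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def points_to_equirect(points, out_w=64, out_h=32):
    """Project 3D points onto a 360-degree equirectangular grid,
    keeping the nearest point (smallest radius) per cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    lon = np.arctan2(x, z)                                     # [-pi, pi]
    lat = np.arcsin(np.clip(y / np.maximum(r, 1e-9), -1, 1))   # [-pi/2, pi/2]
    u = ((lon + np.pi) / (2 * np.pi) * (out_w - 1)).astype(int)
    v = ((lat + np.pi / 2) / np.pi * (out_h - 1)).astype(int)
    canvas = np.full((out_h, out_w), np.inf)
    np.minimum.at(canvas, (v, u), r)  # nearest-depth point wins per cell
    return canvas

# Toy usage: a flat depth map fills only the cells the camera frustum covers;
# the rest of the 360-degree canvas remains empty (inf).
pano = points_to_equirect(unproject_to_points(np.ones((8, 8))))
```

A real system would additionally carry pixel colors through the projection, fill the unobserved portions of the panorama (e.g. via inpainting), and enforce temporal consistency across video frames; none of that is shown here.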
Additionally, Phase I research has demonstrated that the product has more than a single use: beyond providing rapid immersive training footage, the algorithm incidentally provides automatic figure rigging, meshing, and texturing for assets used in commercial immersive electronic games. Generating such assets currently takes highly skilled graphic designers hours, days, or months to create. While such artists would still be needed to add the final polish these games require, the product would reduce that preparatory work from weeks to a day or two. Finally, it would not be difficult, at a later product stage, to add interactive content, such as controls and/or buttons, to rapidly create full simulation environments. Such environments currently take costly months, even years, to fully develop.
* Information listed above is at the time of submission. *