Rad-Hard Adaptive Dual-Mode Event-Based Vision and Perception for Autonomous Robot Operations

Award Information
Agency: National Aeronautics and Space Administration
Branch: N/A
Contract: 80NSSC22PA940
Agency Tracking Number: 222229
Amount: $156,266.00
Phase: Phase I
Program: STTR
Solicitation Topic Code: T4
Solicitation Number: STTR_22_P1
Timeline
Solicitation Year: 2022
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-07-22
Award End Date (Contract End Date): 2023-08-25
Small Business Information
Alphacore, Inc.
304 South Rockford Drive
Tempe, AZ 85281-3052
United States
DUNS: 078602532
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
 Phaneendra Bikkina
 (480) 321-6758
 phani.bikkina@alphacoreinc.com
Business Contact
 Dave Johnson
Phone: (602) 510-7754
Email: dave.johnson@alphacoreinc.com
Research Institution
 Arizona State University-Polytechnic
 
P.O. Box 876011
Tempe, AZ 85287-6011
United States
Abstract

In response to NASA STTR topic T4.01, Information Technologies for Intelligent and Adaptive Space Robotics, Alphacore Inc., in partnership with the Arizona State University (ASU) School of Earth and Space Exploration, will develop a low-SWaP-C, high-performance extreme perception and vision system for autonomous robot operations. Our novel radiation-hard, adaptive dual-mode neuromorphic (event-based) vision system is designed to provide 3D object detection, depth estimation, mapping, and tracking functionality comparable to terrestrial autonomous robots, for future use on lunar and planetary surfaces.

Our approach recognizes that the ultra-high-performance, state-of-the-art image processing hardware used by terrestrial systems will likely not be available in space. To address this constraint, our solution selectively reduces, or throttles, the image data flowing from the image sensors to the downstream image processing electronics. By selectively (and significantly) reducing the image data rate from the camera(s), lower-performance space-qualifiable image processing electronics can provide the needed functionality on the now much sparser image data. The challenge is to reduce the image data rate while still providing the terrestrial-comparable state (pose and velocity) estimation, 3D object detection, depth estimation, mapping, and tracking functionality needed for autonomous operations.

We propose to solve this challenge by replacing existing conventional space-grade CMOS frame-based cameras with a novel radiation-hardened version of the Dynamic and Active Pixel Vision Sensor (DAVIS), a dual-mode image sensor that combines a conventional global-shutter CMOS camera with an event-based image sensor (EBS) in the same pixel array. In practice, event-based image sensors can reduce the downstream computational burden by an estimated one to two orders of magnitude.
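The data-rate reduction follows from how event-based sensors read out: rather than transmitting every pixel of every frame, each pixel independently reports an event only when its log intensity changes by more than a contrast threshold. The sketch below is purely illustrative and is not the project's implementation; it approximates an event stream by thresholding the log-intensity difference between two frames (the threshold value and scene are arbitrary assumptions) to show how a mostly static scene yields orders of magnitude fewer values than a full frame readout.

```python
# Illustrative sketch only: approximate an event-camera readout from two
# frames by thresholding log-intensity changes, then compare the number of
# emitted events with the number of pixel values a frame-based readout sends.

import numpy as np


def events_from_frames(prev_frame, curr_frame, threshold=0.2):
    """Return (x, y, polarity) events where log intensity changed enough.

    prev_frame, curr_frame: 2-D arrays of intensities in [0, 1].
    threshold: contrast threshold (assumed value; real sensors are tunable).
    """
    eps = 1e-3  # avoid log(0) on dark pixels
    d_log = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(d_log) >= threshold)
    polarity = np.sign(d_log[ys, xs]).astype(np.int8)  # +1 brighter, -1 darker
    return np.stack([xs, ys, polarity], axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 480, 640
    prev = rng.random((h, w))
    curr = prev.copy()
    # Only a small moving region changes between frames; the rest is static.
    curr[100:140, 200:260] = rng.random((40, 60))

    events = events_from_frames(prev, curr)
    frame_pixels = h * w
    print(f"frame readout: {frame_pixels} pixel values")
    print(f"event readout: {len(events)} events "
          f"(~{frame_pixels / max(len(events), 1):.0f}x fewer values)")
```

In this toy scene the event readout is roughly two orders of magnitude smaller than the frame readout, which is the mechanism the abstract relies on to make lower-performance, space-qualifiable processing electronics sufficient.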

* Information listed above is at the time of submission. *