Sign Understanding in Support of Autonomous Navigation (SUSAN)
Small Business Information
625 Mount Auburn Street, Cambridge, MA, 02138
Abstract

Mobile robots currently cannot detect and read arbitrary signs. Existing work on road sign recognition is template based, limiting the detectable signs to a specific set. This is a major hindrance to mobile robot usability, since robots cannot be tasked using directions that are intuitive to humans. It also limits their ability to report their position relative to intuitive landmarks.

We propose to develop a system for Sign Understanding in Support of Autonomous Navigation (SUSAN) that detects signs from cues common to signs (vivid colors, compact shape, text or other symbols) and from contextual cues (placement with respect to roads, walls, doors, poles, etc.). In Phase II, we will enhance our Phase I color- and text-based sign detection and add adaptation to a variety of lighting and background conditions, both indoors and outdoors. Our partial 3D scene modeling module will bias detectors toward specific parts of the scene based on shape from motion, 3D line extraction, and vertical surface and vanishing point detection. Our tracking module will accumulate evidence by tracking potential signs over time. We will demonstrate performance on an indoor mobile robot and, in collaboration with PercepTek, we will demonstrate SUSAN on the MARS unmanned ground vehicle.
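As an illustrative sketch only (the proposal does not specify the fusion rule), accumulating evidence from a sign candidate tracked across frames could be done with naive-Bayes log-odds updates over per-frame detector confidences. The function name, prior, and confidence values below are assumptions for illustration.

```python
import math

def accumulate_evidence(prior: float, frame_probs: list[float]) -> float:
    """Fuse per-frame sign-detection confidences for one tracked
    candidate into a single posterior probability.

    prior: initial probability that the candidate is a sign.
    frame_probs: per-frame detector confidences, each strictly in (0, 1).
    """
    # Work in log-odds space so independent per-frame updates are additive.
    log_odds = math.log(prior / (1.0 - prior))
    for p in frame_probs:
        # Confidences above 0.5 raise the running log-odds; below 0.5 lower it.
        log_odds += math.log(p / (1.0 - p))
    # Convert back to a probability.
    return 1.0 / (1.0 + math.exp(-log_odds))

# A hypothetical candidate tracked over five frames with mostly-confident
# detections accumulates a high posterior:
posterior = accumulate_evidence(0.5, [0.7, 0.8, 0.6, 0.9, 0.75])
```

A scheme like this lets a weak single-frame detection become a confident result after several consistent observations, while occasional low-confidence frames pull the estimate back down.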
* information listed above is at the time of submission.