Multi-Modal, Configurable User Interface for Persons with Disabilities
Individuals with disabilities need better access to computing technology. Recent research and development integrating multiple input modalities for general business and consumer applications can also address the special access needs of persons with disabilities. The proposing firm has recently developed integrated eye-tracking and voice-recognition technology for hands-free operation of a graphical user interface; it assembles a single message of user intent by fusing the information in the separate voice and eye-gaze data streams. The innovation in this approach is that combining the two inherently ambiguous data streams makes the user's intent easier to determine than either stream would allow alone. This core technology can be augmented to create a Multi-Modal Configurable User Interface (MMCUI) for persons with disabilities. The MMCUI would be an add-on module working with the basic eye/voice integration software layer that has already been developed. The proposed Phase I research will produce a prototype MMCUI. Importantly, requirements gathering and evaluation of the MMCUI will be conducted with the support of peer counselors from a local Center for Independent Living, so input from the disabled community will directly shape the MMCUI feature set as the technology is developed.
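To illustrate the fusion idea described above, the following is a minimal sketch of how two individually ambiguous confidence streams might be combined to disambiguate user intent. The target names, scores, and the multiplicative fusion rule are hypothetical illustrations, not the firm's actual algorithm.

```python
def fuse_intent(gaze_scores, voice_scores):
    """Combine per-target confidence from gaze and voice streams.

    Each argument maps on-screen target names to a confidence in [0, 1].
    Targets missing from one modality get a small floor score, so a single
    stream cannot veto the other outright.
    """
    floor = 0.05
    targets = set(gaze_scores) | set(voice_scores)
    # Multiplicative fusion: a target must be plausible to BOTH streams
    # to score highly, which is what resolves the ambiguity.
    fused = {
        t: gaze_scores.get(t, floor) * voice_scores.get(t, floor)
        for t in targets
    }
    return max(fused, key=fused.get)

# Gaze alone cannot separate two adjacent buttons ("Save" vs. "Save As");
# voice alone confuses similar-sounding commands ("Save As" vs. "Status").
# Fused, the intent is unambiguous.
gaze = {"Save": 0.45, "Save As": 0.44, "Close": 0.02}
voice = {"Save As": 0.50, "Status": 0.45}
print(fuse_intent(gaze, voice))  # → Save As
```

The design choice worth noting is that fusion happens at the level of candidate hypotheses rather than raw signals: each modality proposes ranked interpretations, and agreement between them selects the winner.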