
Clarity in Motion: A Motion-Tolerant Aid for Selectively Hearing Acoustic Sources

Award Information
Agency: Department of Health and Human Services
Branch: National Institutes of Health
Contract: 1R43DC020690-01A1
Agency Tracking Number: R43DC020690
Amount: $275,389.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: NIDCD
Solicitation Number: PA21-260
Solicitation Year: 2021
Award Year: 2023
Award Start Date (Proposal Award Date): 2022-12-15
Award End Date (Contract End Date): 2023-11-30
Small Business Information
4 Militia Drive, Suite 6
Lexington, MA 02421
United States
DUNS: 837257039
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
 (781) 861-7827
Business Contact
Phone: (781) 861-7827
Research Institution

Noisy rooms with multiple moving sound sources create problems for hearing-impaired listeners.
Unwanted masking sounds reduce the intelligibility of speech and other sounds listeners want to hear. “Source
Separation” signal processing methods are known that extract important sources and “scrub” unwanted noise,
but these methods typically require the acoustic sensors (microphones) and sources they process to be fixed
in space—the optimal separation solutions computed by such methods are position dependent. Movement
degrades the quality of separation (QoS) of these solutions, and reconvergence following a
change of position takes time—often tens of seconds. This constraint limits the practical utility of traditional
separation methods. We propose a novel assistive listening system called CIM (“Clarity in Motion”) which is
capable of maintaining an optimal separation of acoustic sources in real-world environments changing at
“human” speeds. CIM dramatically shortens the time required to reconverge separation solutions. CIM is
designed for integration into NIH’s Open Speech Platform (OSP) initiative for hearing aids and personal audio
devices. CIM leverages STAR’s Multiple Algorithm Source Separation (MASS) application framework of
“pluggable” acoustic separation modules. MASS is compatible with OSP and is publicly available on GitHub.

CIM is room-centric, sensor image-based, and listener-specific. Important system components are
embedded in the room itself, rather than in the user’s ear (e.g. hearing aid). CIM delivers listener-specific audio
to one or more users through their smartphones. CIM employs multiple microphones distributed around a room
and connected to a CIM Room Server (a signal processing device) supporting all listeners. This Server processes the audio signals from these shared Room Mics to scrub unwanted sounds from private Listener
Mics, which are typically hearing aid, cochlear implant, or other head-mounted mics specific to each listener.
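The abstract does not specify how the Room Server scrubs interference from a Listener Mic; CIM's actual techniques are proprietary. As one hedged illustration of the underlying idea, classic adaptive noise cancellation can subtract from a Listener Mic whatever is linearly predictable from a Room Mic that mostly captures an unwanted source. The function name, filter length, and step size below are illustrative assumptions, not part of CIM:

```python
import numpy as np

def nlms_scrub(listener_mic, noise_ref, taps=64, mu=0.2, eps=1e-8):
    """Adaptive noise cancellation sketch: estimate the interference
    component of listener_mic from a Room Mic that mostly captures the
    unwanted source (noise_ref), and subtract it via a normalized LMS
    (NLMS) adaptive filter."""
    w = np.zeros(taps)
    scrubbed = np.zeros_like(listener_mic)
    for n in range(taps - 1, len(listener_mic)):
        x = noise_ref[n - taps + 1:n + 1][::-1]   # most recent sample first
        y = w @ x                                  # interference estimate
        e = listener_mic[n] - y                    # scrubbed output sample
        w += mu * e * x / (x @ x + eps)            # NLMS weight update
        scrubbed[n] = e
    return scrubbed
```

This sketch only shows the room-mic-as-reference concept; it also exhibits the position dependence the abstract describes, since the converged weights `w` encode the acoustic path between the interferer and the Listener Mic and must re-adapt whenever either moves.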
Each listener uses a CIM mobile device app to register their Listener Mic and specify which acoustic sources to
scrub. The Room Server computes an individualized scrubbed audio stream for each listener and transmits it
wirelessly to their Listener App. The Listener App outputs this stream to the listener’s hearing aid, cochlear
implant, or earbuds as a standard line-level or current-loop audio signal.

The heart of CIM’s innovation resides in two separate proprietary techniques, described herein, for
reducing the separation solution deconvergence (ΔQ) associated with source or sensor movements.

In Phase I, we will characterize the relationship between ΔQ and relevant objective parameters of
acoustic scenes; implement and quantitatively evaluate the contribution of our novel methods for reducing
motion-induced deconvergence; and carry out a perceptual study of the relationship between movement-induced solution deconvergence and both listening effort and intelligibility judgements.

The CIM system will help hearing-impaired listeners hear clearly in noisy rooms with moving sources.
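The abstract quantifies motion-induced degradation as ΔQ without defining the underlying quality metric. As a hedged sketch, scale-invariant SDR (SI-SDR) is one standard way to score a separated output against a clean reference, and ΔQ could then be read off as the SI-SDR drop immediately after a source or sensor moves. The metric choice and the `delta_q` helper below are assumptions for illustration, not CIM's definition:

```python
import numpy as np

def si_sdr(reference, estimate, eps=1e-12):
    """Scale-invariant signal-to-distortion ratio (dB), a common
    quality-of-separation (QoS) score for a separated signal."""
    ref = reference - reference.mean()
    est = estimate - estimate.mean()
    # Project the estimate onto the reference; the remainder is distortion.
    target = (est @ ref) / (ref @ ref + eps) * ref
    distortion = est - target
    return 10 * np.log10((target @ target) / (distortion @ distortion + eps))

def delta_q(reference, est_before_move, est_after_move):
    """Hypothetical ΔQ: the QoS drop between a converged separation
    output and the same (now stale) solution applied after movement."""
    return si_sdr(reference, est_before_move) - si_sdr(reference, est_after_move)
```

Under this reading, a Phase I characterization would sweep scene parameters (source velocity, displacement, reverberation) and record ΔQ and the time for the metric to recover to its pre-movement level.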

* Information listed above is at the time of submission. *
