SBIR Phase I: A language learning app based on sound and mouth movements

Award Information
Agency: National Science Foundation
Branch: N/A
Contract: 2323040
Agency Tracking Number: 2323040
Amount: $274,660.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: LC
Solicitation Number: NSF 23-515
Timeline
Solicitation Year: 2023
Award Year: 2023
Award Start Date (Proposal Award Date): 2023-10-01
Award End Date (Contract End Date): 2024-09-30
Small Business Information
2664 West Park Drive
Baltimore, MD 21207
United States
DUNS: N/A
HUBZone Owned: No
Woman Owned: Yes
Socially and Economically Disadvantaged: Yes
Principal Investigator
 Margaret Smith
 (443) 831-0657
 margaretcsmith@womtinc.com
Business Contact
 Margaret Smith
Phone: (443) 831-0657
Email: margaretcsmith@womtinc.com
Research Institution
N/A
Abstract

The broader impact/commercial potential of this Small Business Innovation Research (SBIR) Phase I project is advancing new language learning by incorporating facial and lip recognition along with sound analysis. This visual aspect of producing sounds is vital for mastering pronunciation, one of the significant hurdles in learning a foreign language and even in improving one's native language. Current language learning methods often fall short in helping learners achieve speaking proficiency and fail to provide real-life language usage experiences. This language learning platform aims to change this by addressing the growing need for multi-language proficiency in workplaces and academic settings, providing an effective and engaging language learning experience.

Current language learning methods and apps often fail to develop speaking and writing proficiency, focusing instead on memorization and standardized tests. This language trainer addresses this gap by offering insights into the science of speech production. By combining visual cues of oral shapes with auditory input, learners can master pronunciation, a significant challenge in language acquisition. This research will include obtaining near-perfect voice files for machine learning model training, signal processing of the voice and video files, development and comparison of machine learning models, data visualization development, incorporation into the mobile test suite, and preliminary testing. The machine learning algorithms will use the insights extracted from students' voice data to provide learners with highly targeted, fine-tuned activities.

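The abstract stops at this high-level plan, so the sketch below is purely illustrative: none of the libraries, file formats, or landmark choices are named in the award. Assuming Python with librosa on the audio side and OpenCV plus MediaPipe Face Mesh on the video side, it shows one conventional way to pair auditory features (MFCCs from a learner's recording) with a visual cue of mouth shape (a per-frame lip-aperture signal), in the spirit of the "signal processing of the voice and video files" step described above.

```python
# Illustrative sketch only: the award does not specify tools, formats, or landmarks.
import cv2
import librosa
import mediapipe as mp
import numpy as np


def audio_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Return MFCC frames (n_mfcc x time) from a learner's voice recording."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)


def mouth_opening_series(video_path: str) -> list[float]:
    """Rough per-frame lip aperture from MediaPipe Face Mesh landmarks.

    Landmark indices 13/14 (inner upper/lower lip) are a common approximation;
    the project's own pipeline may use a different visual representation.
    """
    openings: list[float] = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                lm = result.multi_face_landmarks[0].landmark
                openings.append(abs(lm[13].y - lm[14].y))  # normalized image units
            else:
                openings.append(0.0)  # no face detected in this frame
    cap.release()
    return openings
```

Feature pairs of this kind could then feed the model comparison, data visualization, and mobile test suite steps listed above, though the award itself does not prescribe any particular representation.
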
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

* Information listed above is at the time of submission. *
