SBIR Phase I: Providing Tools for Richer eLearning Assessment

Award Information
Agency: National Science Foundation
Branch: N/A
Contract: 0441519
Agency Tracking Number: 0441519
Amount: $100,000.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: IT
Solicitation Number: NSF 04-551
Timeline
Solicitation Year: 2004
Award Year: 2005
Award Start Date (Proposal Award Date): N/A
Award End Date (Contract End Date): N/A
Small Business Information
1100 South Main Street
Grapevine, TX 76051
United States
DUNS: N/A
HUBZone Owned: Yes
Woman Owned: Yes
Socially and Economically Disadvantaged: No
Principal Investigator
 Adele Goldberg
Title: Dr
Phone: (650) 856-8720
Email: adele@thinkfive.com
Business Contact
 Linda Chaput
Title: Ms
Phone: (415) 385-4632
Email: lchaput@thinkfive.com
Research Institution
N/A
Abstract

This Small Business Innovation Research (SBIR) Phase I project will study the feasibility of creating test construction tools that allow school educators to conveniently produce and deliver tests, ranging from informal assessments of mastery that can be given and taken on the fly to tests that benchmark the progress of instruction against goals. The key innovations are (1) the capability to define answer analyses for stored question items, so that the test constructor knows in advance what the test can report about what test-takers likely know and do not know, and (2) the capability to represent question items in a form in which actual experience can be used to improve the assessment corpus. The objective is to move beyond the current broadly accepted applications that consist entirely of multiple-choice questions, toward tests that include varied and even game-like question types incorporating motivational and pedagogically effective feedback; that is, question types that teach while they assess. Question types such as drag-and-drop, matching, fill-in-the-blank sentences, and table builders may have multiple correct answers, lack broadly agreed-upon wrong-answer distractors, and typically require more experience to define what errors imply about a test-taker's knowledge and skills. The aim of the project is not to compete with high-stakes tests; rather, it is a first step in determining whether a strategy focused on improving "low-stakes" assessments has merit commercially as well as intellectually.

Multiple-choice and constructed-answer exams have long proven highly efficient tools for state and national high-stakes exams. A problem with multiple-choice questions, however, is that many do not assess what students know but only what students demonstrate they know, and certain types of students typically perform better than others on multiple-choice tests. In a period of heightened accountability, the difficulty of designing fair test items that can withstand legal challenge has made multiple-choice, by consensus, the only efficient, reliable form of high-stakes assessment in states representing most of the school-age population. Because many students in environments where learning is measured almost solely by multiple-choice tests are not well served, a significant contribution to exploratory learning can be made by increasing what learners can experience: making assessments more intrinsically interesting and improving the kinds of formative feedback available to students, teachers, and administrators.

* Information listed above is at the time of submission. *
