Evaluation-Based Effectiveness Testing
Small Business Information
TANGLEWOOD RESEARCH, INC., 420-A GALLIMORE DAIRY RD, GREENSBORO, NC, 27409
Abstract
DESCRIPTION (provided by applicant): Schools, communities, and service providers are beginning to implement programs with proven or promising potential. As part of this dissemination effort, local implementers are often required to conduct local evaluations. These evaluations, however, are often underpowered, lack an appropriate comparison group (or any comparison group at all), and are subject to attenuated effectiveness estimates due to unreliable measures or sample invariance. The goal of this SBIR is to develop a tool to help local evaluators and policy makers obtain valid, reliable, and robust evidence for the effectiveness of their intervention practices without the burden of having to identify, convene, maintain, and measure a local comparison group. The system will meta-analytically combine archived data from prior evaluations, including evaluation data from intervention groups of relevant programs and from control groups, to establish appropriate counterfactuals for local evaluators. Strategies will be developed that will allow users to complete analyses useful for interpreting local evaluation outcomes. Thus, rather than relying on a typically unobtainable gold-standard method for assessing impact (randomized controlled trials), the goal of this approach is to provide a statistically reliable method for determining local program performance and to provide programs useful benchmarks against which local evaluation findings can be interpreted. This project will take advantage of an existing resource, previously developed with SBIR funds, that assists schools and communities in performing local evaluations.
This existing system, Evaluation Lizard, is now an active service and has, during the past several years, amassed a large database that includes not only survey data on behavioral outcomes, mediators related to outcomes, and demographic information about survey participants, but also ties these data to the programs being implemented and the evaluation design that was adopted. In this Phase I project we will: 1) assemble extant datasets that are appropriate for completing a pilot test of the proposed methods and ensure that data are coded to provide appropriate pools of results for completing meta-analytic analyses; 2) refine meta-analytic statistical methods for completing comparisons using referent data; and 3) test these methods using data from five to ten local evaluations to develop and then demonstrate the utility of the proposed methods for local program evaluators, service providers, and funders.
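The core statistical idea described above, pooling effect estimates from archived evaluations into a benchmark against which a local program's result can be judged, can be illustrated with a standard random-effects meta-analysis. The sketch below uses the DerSimonian-Laird estimator; all numbers, names, and the comparison step are hypothetical and are not taken from the Evaluation Lizard system itself.

```python
import math

def random_effects_pool(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model.

    effects: per-study effect sizes (e.g., standardized mean differences)
    variances: corresponding sampling variances
    Returns (pooled effect, standard error of the pooled effect).
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q statistic measures between-study heterogeneity
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, floored at zero
    # Re-weight each study by total (within + between) variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical effect sizes and variances from archived evaluations
effects = [0.30, 0.15, 0.22, 0.40]
variances = [0.02, 0.03, 0.025, 0.04]
pooled, se = random_effects_pool(effects, variances)

# A local program's observed effect compared against the pooled benchmark
local_effect = 0.35
z = (local_effect - pooled) / se
```

A z-score like the one computed at the end is one simple way a local evaluator could gauge whether a program's result is consistent with, above, or below the archived benchmark; the actual Phase I methods would need to address study comparability and measurement reliability as well.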
* information listed above is at the time of submission.