All ATE proposals, except for planning grants, are required to include a budget line for an independent evaluator. But the solicitation offers no guidance on what a small-scale project evaluation should look like or what kinds of data to collect. The Common Guidelines for Education Research and Development—issued jointly by the National Science Foundation and the Institute of Education Sciences—specify that evidence of impact requires randomized controlled trials, while evidence of promise can be generated by correlational and quasi-experimental studies.
The Common Guidelines aren’t well aligned with the work done by many ATE projects and centers, especially projects awarded through the “Small Grants” track. Small ATE projects are funded to do things like create new degree programs, offer summer camps, expand recruitment, provide compensatory education, and develop industry partnerships. These sorts of endeavors are quite distinct from the research and development work to which the Common Guidelines are oriented.
NSF expects small ATE projects to be grounded in research and utilize materials developed by ATE centers. Generally speaking, the charge of small projects is to do, not necessarily to innovate or prove. Therefore, the charge for small project evaluations is to gather and convey evidence about how well this work is being done and how the project contributes to the improvement of technician education. Evaluators of small projects should seek empirical evidence about the extent to which…
The project’s activities are grounded in established practices, policies, frameworks, standards, etc. If small projects are not generating their own evidence of promise or impact, then they should be leveraging the existing evidence base to select and use strategies and materials that have been shown to be effective. Look to authoritative, trusted sources such as the National Academies Press (for example, see the recent report, Reaching Students: What Research Says About Effective Instruction in Undergraduate Science and Engineering) and top-tier education research journals.
The target audience is engaged. All projects should document who is participating in the project (students, faculty, partners, advisors, etc.) and how much they participate. A simple tracking spreadsheet can go a long way toward evaluating this aspect of a project. Showing sustained engagement by a diverse set of stakeholders is important for demonstrating the project’s perceived relevance and quality.
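For projects that prefer a script to a spreadsheet, the same tracking idea can be sketched in a few lines of code. The example below is a minimal illustration, not a prescribed tool; the events, names, and stakeholder categories are all hypothetical. It tallies how many events each stakeholder group was reached through and how often each individual participated—exactly the kind of simple record that can demonstrate sustained, diverse engagement.

```python
from collections import defaultdict

# Hypothetical attendance log: (event, participant, stakeholder group)
attendance = [
    ("Fall workshop", "A. Rivera", "student"),
    ("Fall workshop", "B. Chen", "faculty"),
    ("Spring camp", "A. Rivera", "student"),
    ("Spring camp", "C. Okafor", "industry partner"),
    ("Advisory meeting", "C. Okafor", "industry partner"),
]

def engagement_summary(log):
    """Summarize participation by stakeholder group and by individual."""
    events_by_group = defaultdict(set)   # group -> set of events reached
    events_per_person = defaultdict(int) # participant -> number of events
    for event, person, group in log:
        events_by_group[group].add(event)
        events_per_person[person] += 1
    return (
        {group: len(events) for group, events in events_by_group.items()},
        dict(events_per_person),
    )

groups, people = engagement_summary(attendance)
print(groups)  # events reached, per stakeholder group
print(people)  # events attended, per participant
```

A summary like this makes it easy to see at a glance whether engagement is concentrated in one group or spread across students, faculty, and partners over the life of the project.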
The project contributes to changes in knowledge, skill, attitude, or behavior among the target audience. For any project that progresses beyond development to piloting or implementation, there is presumably some change being sought among those affected. What do they know that they didn’t know before? What new/improved skills do they have? Did their attitudes change? Are they doing anything differently? Even without experimental and quasi-experimental designs, it’s possible to establish empirical and logical linkages between the project’s activities and outcomes.
The ATE program solicitation notes that some projects funded through its Small Grants track “will serve as a prototype or pilot” for a subsequent project. As such, ATE small grant recipients should ensure their evaluations generate evidence that their approaches to improving technician education are worth the next level of investment by NSF.
To learn more about…
‒ the Common Guidelines, see EvaluATE’s Evaluation and Research in the ATE Program webinar recording and materials
‒ evaluation of small projects, see EvaluATE’s Low-Cost, High-Impact Evaluation for Small Projects webinar recording and materials
‒ alternative means for establishing causation, see Jane Davidson’s Understand Causes of Outcomes and Impacts webinar recording and slides