Imagine: You’re an evaluator who has compiled lots of data about an ATE project. You’re preparing to present the results to stakeholders. You have many beautiful charts and compelling stories to share.  

You’re confident you’ll be able to answer the stakeholders’ questions about data collection and analysis. But you get queasy at the prospect of questions like “What does this mean? Is this good? Has our investment been worthwhile?”  

It seems like the project is on track and the project team is doing good work, but you know your hunch is not a sound basis for a conclusion. You know you should have planned ahead for how findings would be interpreted in order to reach conclusions, and you regret that the task got lost in the shuffle.  

What is a sound basis for interpreting findings to make an evaluative conclusion?  

Interpretation requires comparison. Consider how you make judgments in daily life: If you declare, “this pizza is just so-so,” you are comparing that pizza with other pizza you’ve had, or maybe with your imagined ideal pizza. When you judge something, you’re comparing that thing with something else, even if you’re not fully conscious of that comparison.

The same thing happens in program evaluation, and it’s essential for evaluators to be fully conscious and transparent about what they’re comparing evaluative evidence against. When evaluators don’t make their comparison points explicit, their evaluative conclusions may seem arbitrary, and stakeholders may dismiss them as unfounded.

Here are some sources and strategies for comparison that can inform interpretation. Evaluators can use them to reach clear, well-reasoned conclusions about a project’s performance:  

Performance Targets: Review the project proposal to see if any performance targets were established (e.g., “The number of nanotechnology certificates awarded will increase by 10 percent per year”). When you compare the project’s results with those targets, keep in mind that the original targets may have been either under- or overambitious. Talk with stakeholders to determine whether the original targets are still appropriate or need adjustment. Performance targets usually follow the SMART structure (specific, measurable, achievable, relevant, and time-bound). 
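
As a quick illustration, here is a minimal sketch of checking actual results against a compounding 10-percent-per-year target like the example above. The certificate counts are hypothetical, and you should confirm with stakeholders whether “per year” was meant to compound or to be measured against the original baseline.

```python
# Minimal sketch: compare actual results against a "10% increase per year" target.
# All numbers below are hypothetical, purely for illustration.

baseline = 50          # certificates awarded in the year before the project
annual_growth = 0.10   # target: 10 percent increase per year, treated as compounding
actuals = {1: 54, 2: 61, 3: 65}  # hypothetical results for project years 1-3

for year, actual in actuals.items():
    target = baseline * (1 + annual_growth) ** year
    status = "met" if actual >= target else "not met"
    print(f"Year {year}: target {target:.1f}, actual {actual} -> target {status}")
```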

Project Goals: Goals may be more general than specific performance targets (e.g., “Meet industry demands for qualified CNC technicians”). To make lofty or vague goals more concrete, you can borrow a technique called Goal Attainment Scaling (GAS). GAS was developed to measure individuals’ progress toward desired psychosocial outcomes. The GAS resource from BetterEvaluation will give you a sense of how to use this technique to assess program goal attainment. 
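
If you do quantify goal attainment with GAS, the sketch below shows how the commonly used Kiresuk-Sherman summary score works. The goals, weights, and ratings are hypothetical, and many evaluators simply report the raw -2 to +2 ratings instead of a summary score.

```python
import math

# Sketch of Goal Attainment Scaling (GAS) summary scoring using the common
# Kiresuk-Sherman T-score. Each goal is rated on a 5-point scale from -2
# (much less than expected) to +2 (much better than expected); 0 means the
# expected level of attainment was reached. Goals and weights are hypothetical.

goals = [
    {"name": "CNC program enrollment", "weight": 2, "score": +1},
    {"name": "Employer advisory board engagement", "weight": 1, "score": 0},
    {"name": "Graduate job placement", "weight": 1, "score": -1},
]

rho = 0.3  # conventional assumed correlation among goal scales

weighted_sum = sum(g["weight"] * g["score"] for g in goals)
sum_w_sq = sum(g["weight"] ** 2 for g in goals)
sum_w = sum(g["weight"] for g in goals)

t_score = 50 + (10 * weighted_sum) / math.sqrt((1 - rho) * sum_w_sq + rho * sum_w ** 2)
print(f"GAS T-score: {t_score:.1f}  (50 = expected level of attainment overall)")
```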

Project Logic Model: If the project has a logic model, map your data points onto its components to compare the project’s actual achievements with the planned activities and outcomes expressed in the model. No logic model? Work with project staff to create one using EvaluATE’s logic model template. 

Similar Programs: Look online or ask colleagues to find evaluations of projects that serve purposes similar to the one you are evaluating. Compare those projects’ evaluation results with your own. The comparison can inform your conclusions about relative performance.  

Historical Data: Look for historical project data against which you can compare the project’s current performance. Enrollment numbers and student demographics are common data points for STEM education programs. Find out if baseline data were included in the project’s proposal or can be reconstructed from institutional data. Be sure to capture several years of pre-project data so that year-to-year fluctuations can be accounted for. See the Towards Data Science website for practical guidance on this interrupted time series approach to assessing change related to an intervention. 
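
If you have several years of pre-project data, a simple segmented regression is one way to operationalize this interrupted time series comparison. The sketch below uses hypothetical enrollment counts and estimates the baseline trend plus the level and slope changes after the project began.

```python
import numpy as np

# Minimal interrupted time series sketch (segmented regression) comparing
# pre-project and post-project enrollment. All numbers are hypothetical.

enrollment = np.array([88, 92, 90, 95, 93,    # 5 years before the project
                       104, 110, 115])        # 3 years after the project began
n_pre = 5

time = np.arange(len(enrollment))                        # overall time trend
post = (time >= n_pre).astype(float)                     # 1 in project years, else 0
time_since = np.where(post == 1, time - n_pre + 1, 0)    # years since project start

# Design matrix: intercept, baseline trend, level change, slope change
X = np.column_stack([np.ones_like(time), time, post, time_since])
coefs, *_ = np.linalg.lstsq(X, enrollment, rcond=None)

print(f"Baseline trend: {coefs[1]:+.1f} students/year")
print(f"Level change at project start: {coefs[2]:+.1f} students")
print(f"Slope change after project start: {coefs[3]:+.1f} students/year")
```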

Stakeholder Perspectives: Ask stakeholders for their opinions about the status of the project. You can work with stakeholders in person or online by holding a data party to engage them directly in interpreting findings. 


Whatever sources or strategies you use, it’s critical that you explain your process in your evaluation reports so it is transparent to stakeholders. Clearly documenting the interpretation process will also help you replicate the steps in the future. 

About the Authors

Lori Wingate


Executive Director, The Evaluation Center, Western Michigan University

Lori has a Ph.D. in evaluation and more than 20 years of experience in the field of program evaluation. She is co-principal investigator of EvaluATE and leads a variety of evaluation projects at WMU focused on STEM education, health, and higher education initiatives. Dr. Wingate has led numerous webinars and workshops on evaluation in a variety of contexts, including CDC University and the American Evaluation Association Summer Evaluation Institute. She is an associate member of the graduate faculty at WMU.



EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.