EvaluATE’s external evaluators at The Rucks Group reviewed the evaluation plans in a random sample of 169 ATE proposals spanning 14 years. They found that, over time, grant seekers provided more detail about how their evaluation data would be analyzed and interpreted. In this blog post, we share tips about what to do and what not to do when describing how your evaluation data will be analyzed or interpreted.

Describing Your Data Analysis and Interpretation Plan: What to Do

The current NSF ATE program solicitation advises proposers to describe “how data will be analyzed and interpreted to answer the evaluation questions and reach conclusions about the quality of the project’s implementation and outcomes.”

As suggested by this statement, analysis and interpretation are distinct activities, although the terms are sometimes used interchangeably. Analysis involves transforming raw quantitative or qualitative data into usable information, such as statistics or qualitative themes. Interpretation goes a step further to assign meaning or value to findings from data.

For example, analysis may indicate that 72% of students pass a critical gatekeeping course. Determining whether that is a good result requires interpretation.
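To make the distinction concrete, here is a minimal sketch in Python using entirely hypothetical numbers (a roster of 100 students, a 65% baseline, and a 78% national benchmark) of how a single statistic is first produced by analysis and then given meaning by interpretation:

# Hypothetical roster: one record per student, True if the student passed.
course_records = [True] * 72 + [False] * 28

# Analysis: transform raw records into usable information (a descriptive statistic).
pass_rate = sum(course_records) / len(course_records)
print(f"Pass rate: {pass_rate:.0%}")  # Pass rate: 72%

# Interpretation: assign meaning by comparing the statistic to reference points.
baseline = 0.65            # hypothetical pre-project pass rate
national_benchmark = 0.78  # hypothetical national benchmark

print(f"Improved over baseline: {pass_rate > baseline}")               # True
print(f"Meets national benchmark: {pass_rate >= national_benchmark}")  # False

The 72% figure alone says nothing about quality; only the comparisons give it meaning.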

While analysis and interpretation are distinct, the study of ATE evaluation plans rated these elements together; the focus was on the extent to which a plan was in place for making sense of data. EvaluATE recommends that ATE proposals include the following details related to data analysis and interpretation:

  • Analysis: Identify procedures that will be used to summarize quantitative and qualitative data (e.g., descriptive statistics, inferential statistics, regression, deductive or inductive coding); a simple coding sketch follows this list.
  • Interpretation: Identify sources of comparative information (e.g., baseline data, benchmarks, group comparisons, performance rubrics, program community members’ opinions), and explain how they will be used to answer the evaluation questions.
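
As a rough illustration of the qualitative side, here is a minimal deductive-coding sketch in Python. The codebook, survey responses, and keyword-matching approach are all hypothetical simplifications; in practice, coding is done by trained analysts (often with qualitative analysis software), not by keyword lookup:

# Hypothetical codebook: theme -> keywords that indicate the theme.
codebook = {
    "hands-on learning": ["lab", "hands-on", "equipment"],
    "industry relevance": ["employer", "industry", "job"],
}

# Hypothetical open-ended survey responses.
responses = [
    "The lab equipment made the concepts click.",
    "I liked hearing from employers about industry needs.",
    "More hands-on time would help.",
]

# Analysis: count how many responses touch each predefined theme.
theme_counts = {theme: 0 for theme in codebook}
for response in responses:
    text = response.lower()
    for theme, keywords in codebook.items():
        if any(keyword in text for keyword in keywords):
            theme_counts[theme] += 1

print(theme_counts)  # {'hands-on learning': 2, 'industry relevance': 1}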

Including these details demonstrates that your team has thought ahead to the sense-making process and helps ensure the evaluation data will be usable and useful to the project.

Proposal space is limited, so you may not be able to include a lot of detail about both analysis and interpretation. Most important is to show you have a plan for making sense of your evaluation data. Here’s an example from a proposal that emphasizes the interpretation piece:

Data will be analyzed and compared to grant targets, past performance, and national benchmarks to draw conclusions about the effectiveness and impact of project efforts.

Describing Your Data Analysis and Interpretation Plan: What Not to Do

The most common mistake in describing how evaluation data will be analyzed and interpreted is omitting this information entirely. That is, proposers discuss the methods for gathering data, and often the data sources, but they don’t say what will be done to analyze or interpret the data.

Or they may mention analysis without any supporting details, such as in this example:

Dr. Richards will collect and analyze data on the various components of the project to determine its success.

Thinking through the sense-making piece of your evaluation is critical to ensuring your plan will generate usable and useful information. Communicating this part of your plan to reviewers helps them understand your evaluation and the rationale for gathering certain kinds of data.

Resources

For more tips about describing evaluation plans in ATE proposals, check out the Evaluation Plan Checklist for ATE Proposals and related resources in EvaluATE’s Evaluation Plan Toolkit for ATE Proposals. To learn more about EvaluATE’s review of ATE proposal evaluation plans, view the overview of findings, the scoring rubric, or our article in the American Journal of Evaluation.

About the Authors

Kelly Robertson

Principal Research Associate, The Evaluation Center, Western Michigan University

Kelly has a Ph.D. in evaluation and more than eight years of experience in the field. Dr. Robertson has worked on evaluations at the local, regional, national, and international levels, spanning a wide variety of sectors (e.g., STEM education, adult education, career and technical education, and evaluation capacity development). Her research interests focus primarily on evaluation as it relates to equity, cultural competence, and making evaluation more user-friendly. She, along with Dr. Lori Wingate, led the development of The Evaluation Center's online training program, Valeo (valeoeval.com).

Lori Wingate

Executive Director, The Evaluation Center, Western Michigan University

Lori has a Ph.D. in evaluation and more than 20 years of experience in the field of program evaluation. She is co-principal investigator of EvaluATE and leads a variety of evaluation projects at WMU focused on STEM education, health, and higher education initiatives. Dr. Wingate has led numerous webinars and workshops on evaluation in a variety of contexts, including CDC University and the American Evaluation Association Summer Evaluation Institute. She is an associate member of the graduate faculty at WMU. She, along with Dr. Kelly Robertson, led the development of The Evaluation Center's online training program, Valeo (valeoeval.com).

EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.