Trust is integral to valuing evaluation results and findings. If stakeholders don’t trust your evaluation work, they will neither value it nor accept and use it.

At the same time, we know that trust is often misplaced and not warranted by the findings presented. Actions based on trusted but poorly done evaluation work lead to problematic results: misdirected effort, cost overruns, and many other problems in areas such as equity, diversity, and inclusion.

So it is incumbent on us as evaluators to build and maintain trust in sound ways. Not only do we want our evaluation findings to be believed and used, but we also want them built on sound work and principles.

Since its inception in 1975, the Joint Committee on Standards for Educational Evaluation has actively worked to guide evaluators and evaluations to produce sound, trustworthy evaluation findings. The program evaluation standards published by this committee provide guidelines with examples of strong use as well as errors often made in evaluation practice. These guidelines set forth five key principles for trusted evaluation practice:

  • Utility means ensuring that the usefulness of evaluation findings is apparent.
  • Propriety includes treating stakeholders safely and appropriately in the evaluation process and ensuring that findings are applied appropriately to stakeholders individually and as groups.
  • Feasibility focuses most directly on work effort and costs. It encompasses ensuring that the evaluation work can be completed in a timely way and within the cost constraints budgeted.
  • Accuracy addresses validity matters, including providing sound, testable practices that substantiate the value of instruments, data gathering, analyses, and other important attributes of evaluation reports.
  • Evaluation accountability standards focus on the evaluation process itself. These guidelines are important because an evaluator’s work shows stakeholders how trust in the evaluation’s results is being built.

The accountability standards include one of the strongest ways evaluators can build trust: metaevaluating their evaluation work and sharing the findings with stakeholders. By evaluating your own work and sharing that information, you provide transparency, show your strengths and weaknesses, and offer clear evidence of your evaluation’s trustworthiness. No evaluation should be completed without a metaevaluation.

In all, these standards build trust when you not only know them but regularly engage them in your evaluation practice. Are they an important part of yours? Ask yourself these questions:

  1. Are these standards on your desk or close at hand—not pristine but smudged with use?
  2. Do you reference them regularly in your evaluation design and work efforts?
  3. Do you justify your evaluation plans, work, and findings by referencing specific applicable standards?

We hope you can answer each with a firm yes. If not, begin to use the standards regularly; the trust you earn will be worth your time and effort.

About the Authors

Arlen Gullickson


Emeritus Researcher, The Evaluation Center, Western Michigan University

One of four children, Arlen Gullickson was born and raised in a farming family in the state of Iowa. His education includes baccalaureate, master’s, and Ph.D. degrees in mathematics, physics, and education, respectively. He has 30 years of teaching experience at the high school and college levels and altogether more than 40 years of experience working in education. In the past, Arlen was the director of The Evaluation Center and chair of the Joint Committee on Standards for Educational Evaluation. Currently, he is supposed to be retired, but he serves as a co-principal investigator for EvaluATE (after serving as the PI) and fishes whenever he can.

Creative Commons

Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.