This week I am in Atlanta at the American Evaluation Association (AEA) Summer Evaluation Institute, presenting a workshop on Translating Evaluation Findings into Actionable Recommendations. Although the art of crafting practical, evidence-based recommendations is not covered in depth in either evaluation textbooks or academic courses, most evaluators (86%, according to Fleischer and Christie’s survey of AEA members) believe that making recommendations is part of an evaluator’s job. By reading as much as I can on this topic[1] and reflecting on my own practice, I have assembled 14 tips for how to develop, present, and follow up on evaluation recommendations:

DEVELOP

  1. Determine the nature of recommendations needed or expected.  At the design stage, ask stakeholders: What do you hope to learn from the evaluation? What decisions will be influenced by the results? Should the evaluation include recommendations?
  2. Generate possible recommendations throughout the evaluation. Keep a log of ideas as you collect data and observe the program. I like Roberts-Gray, Buller, and Sparkman’s (1987) evaluation question-driven framework.
  3. Base recommendations on evaluation findings and other credible sources. Findings are important, but they’re often not sufficient for formulating recommendations.  Look to other credible sources, such as program goals, stakeholders/program participants, published research, experts, and the program’s logic model.
  4. Engage stakeholders in developing and/or reviewing recommendations prior to their finalization. Clients should not be surprised by anything in an evaluation report, including the recommendations. If you can engage stakeholders directly in developing recommendations, they will feel more ownership. (Read Adrienne Adams’s article about a great process for this.)
  5. Focus recommendations on actions within the control of intended users. If the evaluation client doesn’t have control over the policy governing their programs, don’t bother recommending changes at that level.
  6. Provide multiple options for achieving desired results.  Balance consideration of the cost and difficulty of implementing recommendations with the degree of improvement expected; if possible, offer alternatives so stakeholders can select what is most feasible and important to do.

PRESENT

  1. Clearly distinguish between findings and recommendations. Evaluation findings reflect what is; recommendations are a prediction about what could be. Developing recommendations requires a separate reasoning process.
  2. Write recommendations in clear, action-oriented language. I often see words like consider, attend to, recognize, and acknowledge in recommendations. Those call the clients’ attention to an issue, but don’t provide guidance as to what to do.
  3. Specify the justification sources for each recommendation. It may not be necessary to include this information in an evaluation report, but be prepared to explain how and why you came up with the recommendations.
  4. Explain the costs, benefits, and challenges associated with implementing recommendations. Provide realistic forecasts of these matters so clients can make informed decisions about whether to implement the recommendations.
  5. Be considerate—exercise political and interpersonal sensitivity. Avoid “red flag” words like fail and lack, don’t blame or embarrass, and be respectful of cultural and organizational values.
  6. Organize recommendations, for example by type, focus, timing, audience, and/or priority. If many recommendations are provided, organize them to help the client digest the information and prioritize their actions.

FOLLOW-UP

  1. Meet with stakeholders to review and discuss recommendations in their final form.  This is an opportunity to make sure they fully understand the recommendations as well as to lay the groundwork for action.
  2. Facilitate decision making and action planning around recommendations. I like the United Nations Development Programme’s “Management Response Template” as an action planning tool.

See also my handy one-pager of these tips for evaluation recommendations.

[1] See especially Hendricks & Papagiannis (1990) and Utilization-Focused Evaluation (4th ed.) by Michael Quinn Patton.

About the Author

Lori Wingate


Executive Director, The Evaluation Center, Western Michigan University

Lori has a Ph.D. in evaluation and more than 20 years of experience in program evaluation. She is co-principal investigator of EvaluATE and leads a variety of evaluation projects at WMU focused on STEM education, health, and higher education initiatives. Dr. Wingate has led numerous webinars and workshops on evaluation in a range of contexts, including CDC University and the American Evaluation Association Summer Evaluation Institute. She is an associate member of the graduate faculty at WMU. She, along with Dr. Kelly Robertson, led the development of The Evaluation Center's online training program, Valeo (valeoeval.com).



EvaluATE is supported by the National Science Foundation under grant number 2332143. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.