As seasoned evaluators committed to utilization-focused evaluation, we partner with clients to create questions and data analysis connected to continuous improvement. We stress developmental evaluation to help link implementation and outcome evaluation. Sounds good, right? Well, not so fast.
Confession time. At times a client’s attention to data wanes as a project progresses, and our efforts to actively engage clients in using data for continuous improvement do not generate the desired enthusiasm. Although interest in data re-emerges as the project concludes, that enthusiasm seems driven more by answering “How did we do?” than by exploring “What did we learn?” This phenomenon, depicted in the U-shaped curve in Figure 1, suggests that precisely when data have the greatest potential to drive continuous improvement (“the Messy Middle”), clients may be least curious about them.
To address this issue, we revisit Stufflebeam’s guiding principle: the purpose of evaluation is to improve, not prove. Generally, clients have good intentions to use data for improvement and are interested in doing so. However, as Bryk points out in his work with networked improvement communities (NICs), practitioners sometimes need help learning to improve. Borrowing from NIC concepts, we developed the Thought Partner Group (TPG) and incorporated it into our evaluations. This group’s purpose is to assist with data interpretation, sharing, and use. To achieve these goals, we invite practitioners and stakeholders from across the project who have a passion for the work, an interest in learning, and an eagerness to explore data. We ask this group to go beyond passive data conversations and address questions such as:
- What issues are getting in the way of progress and what can be done to address them?
- What data and actions are needed to support sustaining or scaling?
- What gaps exist in the evaluation?
The TPG’s focus on improvement and data analysis breathes life into the evaluation and improvement processes. Group members are carefully selected for their deep understanding of local context and their willingness to support the transfer of knowledge gained during the evaluation. Evaluation data have a story to tell, and the TPG helps clients give voice to their data.
Although not a silver bullet, the TPG has helped improve our clients’ use of evaluation data and has helped them get better at getting better. The TPG model supports the evaluation process and mirrors Engelbart’s C-level activity by helping shed light on the evaluator’s and the client’s understanding of the Messy Middle.
Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.

Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education.

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Press.

Bryk, A. S., Gomez, L. M., & Grunow, A. (2010). Getting ideas into action: Building networked improvement communities in education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching. See also McKay, S. (2017, February 23). Quality improvement approaches: The networked improvement model [Blog post].

Engelbart, D. C. (2003, September). Improving our ability to improve: A call for investment in a new future. IBM Co-Evolution Symposium.