From my experience evaluating different types of programs, including NSF-funded Advanced Technological Education (ATE) projects, I have learned firsthand how much it matters to pay careful attention to how we develop the tools we use to gather evaluation data.
It is important to ensure that stakeholders not only see themselves in the data collected but also participate in, understand, and agree with the methods used to gather the data. This practice, rooted in the principles of utilization-focused evaluation, has profoundly impacted the successful application of evaluation findings in all programs I have been involved with.
Below are some key lessons I have learned about creating data collection instruments to evaluate ATE programs. (These lessons apply broadly to many other evaluations as well.)
Lesson 1: Co-create with Stakeholders
Involving stakeholders from the outset in (i) the design of evaluation instruments and (ii) planning the data collection process leads to a higher degree of both buy-in and investment and, ultimately, more actionable data. By engaging with program participants, instructors, and administrators during the development phase, we ensure that the tools reflect the realities and needs of those directly impacted by the ATE initiatives.
Lesson 2: Aim for Simplicity and Parsimony
In trying to collect thorough data, it is easy to inadvertently make the process (and the tools) overly complicated. However, simplicity in designing instruments—be they surveys, interview guides, or observational protocols—enhances response rates and the quality of the data collected. Keeping questions straightforward and relevant makes it easier for participants to provide meaningful insights.
Lesson 3: Validate Through Pilot Testing
Before full deployment, consider pilot testing instruments with a small, diverse subset of stakeholders. Piloting can uncover unexpected interpretations or points of confusion, allowing for necessary adjustments. This iterative process refines the tools and actively involves stakeholders in the development and refinement of instruments. Making stakeholders feel part of the process can improve the validity of the data collected and increase the chances that the evaluation findings will be used.
In a nutshell, instrument development and data collection in ATE evaluations is a collaborative, iterative process. By prioritizing the needs and perspectives of our stakeholders and focusing on the practical application of our findings, we can drive meaningful improvements in ATE.
I encourage programs to leverage utilization-focused evaluation (among other approaches) to guide their efforts, ensuring that every tool we develop and every piece of data we collect serves the ultimate goal of enhancing learning and teaching in the ATE community.