
From my experience evaluating different types of programs, including NSF-funded Advanced Technological Education (ATE) projects, I have learned firsthand the practical usefulness of paying attention to how we develop the tools we use to gather evaluation data.

It is important to ensure that stakeholders not only see themselves in the data collected but also participate in, understand, and agree with the methods used to gather the data. This practice, rooted in the principles of utilization-focused evaluation, has profoundly impacted the successful application of evaluation findings in all programs I have been involved with.

Below are some key lessons I have learned about creating data collection instruments to evaluate ATE programs. (These lessons apply broadly to many other evaluations as well.)

Lesson 1: Co-create with Stakeholders

Involving stakeholders from the outset in (i) the design of evaluation instruments and (ii) planning the data collection process leads to a higher degree of both buy-in and investment and, ultimately, more actionable data. By engaging with program participants, instructors, and administrators during the development phase, we ensure that the tools reflect the realities and needs of those directly impacted by the ATE initiatives.

Lesson 2: Aim for Simplicity and Parsimony

In trying to collect thorough data, it is easy to inadvertently make the process (and tools) overly complicated. However, simplicity in designing instruments—be they surveys, interviews, or observational protocols—enhances response rates and the quality of data collected. Keeping questions straightforward and relevant makes it easier for participants to provide meaningful insights.

Lesson 3: Validate Through Pilot Testing

Before full deployment, consider pilot testing instruments with a small, diverse subset of stakeholders. Pilot testing can uncover unexpected interpretations or confusion, allowing for necessary adjustments. This iterative process refines the tools and actively involves stakeholders in the development and refinement of instruments. Making stakeholders feel part of the process can improve the validity of the data collected and increase the chances that the evaluation findings will be used.

In a nutshell, instrument development and data collection in ATE evaluations is a collaborative, iterative process. By prioritizing the needs and perspectives of our stakeholders and focusing on the practical application of our findings, we can drive meaningful improvements in ATE.

I encourage programs to leverage utilization-focused evaluation (among other approaches) to guide their efforts, ensuring that every tool developed and every piece of data collected serves the ultimate goal of enhancing learning and teaching in the ATE community.

About the Author

George Chitiyo


Professor of Educational Research and Evaluation, Tennessee Tech University

George Chitiyo is a Professor of Educational Research and Evaluation at Tennessee Tech University. He teaches courses in research methods, applied statistics, and program evaluation. In addition to various educational and health initiatives, he evaluates several projects funded by the National Science Foundation (NSF), including those within the Advanced Technological Education (ATE) program.


EvaluATE is supported by the National Science Foundation under grant number 2332143. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.