
Complex constructs that represent thoughts (e.g., attitudes) or multifaceted skills (e.g., creativity) are difficult to measure in applied evaluation contexts, such as Advanced Technological Education (ATE) projects, for a variety of reasons:

  • Some constructs may require literature reviews to understand how to measure them or to identify existing measures.
  • Contexts such as ATE projects may require a measure to be tailored to fit the project, which can require collaboration between the evaluation and project teams.
  • ATE projects often occur in environments with little control over factors external to the project that might affect the outcomes of interest, making it difficult to isolate the project’s impact on an outcome.

These challenges also create exciting opportunities for creative measurement solutions. I offer a few ideas based on my experience.

Use existing research and theory as a guide, but don’t limit yourself. In approaching measurement, an evaluation lens offers more flexibility than a research lens.

Although I familiarize myself with available validated measures for the construct of interest, if they are not a good fit for the project or not accessible to me, I get creative. For example, an ATE project I evaluate aims to foster an entrepreneurial mindset among students who participate in a product design program. While we found a well-established, evidence-based framework for measuring entrepreneurial mindset, the associated instrument was not a good fit for the project. We used the framework to develop our own instrument, for which we found or created measures that captured each domain of the framework.

In addition to broader literature searches for validated measures, resources curated specifically for the ATE community can be useful starting points.

Build in appropriate time and funding for instrument development during the proposal phase. During proposal development, consider the time and funds you will need to review existing literature and instruments, collaborate with the project team to identify and tailor measures, and purchase any proprietary instruments.

ATE projects may require narrowly defined outcomes given the focus on technological education. This necessitates collaboration time between the evaluation and project teams to identify and tailor the evaluation instruments. In the ATE project described above, we worked together to identify measures that could be impacted by participation in the product design program. For example, rather than focusing on collaboration skills broadly, we found a measure that focused on communicating and collaborating in teams, which reflected the structure of the program.

Consider multiple methods of data collection and alternative methodologies to accurately measure complex constructs. In applied evaluation contexts, change is hard to detect, and it becomes even harder when the outcomes are difficult to measure.

Continuing the ATE project example above, we used our instrument to capture students’ entrepreneurial mindset before and after participating in the program and conducted focus groups with students. Students rated themselves very highly on communication and collaboration both before and after the program. During focus groups, however, they shared that their greatest growth was in teamwork. Using mixed methods, which integrate quantitative and qualitative data, allowed us to better understand how students’ entrepreneurial mindset changed during the program (Creswell & Plano Clark, 2017).

These findings also led us to consider the utility of alternative methods, such as retrospective pre-posttests, to examine changes in entrepreneurial mindset. This method of data collection involves participants providing their “pre” and “post” ratings at the same time, addressing the potential for response-shift bias among students participating in this program (Bhanji et al., 2012). As feasible, employing multiple, distinct methods to measure complex constructs can help you accurately capture the outcomes of your ATE project.
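To make the retrospective pre-posttest idea concrete, below is a minimal, hypothetical sketch (in Python) of how such ratings might be analyzed. The file name, column names, and the choice of a paired-samples t-test are illustrative assumptions for this post, not details of the project described above.

    # Hypothetical sketch: analyzing retrospective pre-post ratings.
    # The file and column names below are invented for illustration.
    import pandas as pd
    from scipy import stats

    # Each row holds one student's ratings, both collected at the end of
    # the program: a retrospective "before" rating and a "now" rating.
    df = pd.read_csv("ratings.csv")  # columns: student_id, retro_pre, post

    # Paired-samples t-test: did ratings shift from retrospective-pre to post?
    result = stats.ttest_rel(df["retro_pre"], df["post"])
    mean_change = (df["post"] - df["retro_pre"]).mean()
    print(f"Mean change: {mean_change:.2f}, "
          f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

Because both ratings come from the same survey administration, a simple paired comparison like this sidesteps the response-shift problem that can distort traditional pre-post designs.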

Measuring complex constructs is an exciting challenge for ATE project and evaluation teams. Being flexible and creative, building in appropriate collaboration time and funding, and considering multiple methods of data collection can support this endeavor.

References

Bhanji, F., Gottesman, R., de Grave, W., Steinert, Y., & Winer, L. R. (2012). The retrospective pre–post: A practical method to evaluate learning from an educational program. Academic Emergency Medicine, 19(2), 189–194.

Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). Sage.


About the Author

Dr. Jennifer Gruber

Senior Researcher and Evaluator, Magnolia Consulting, LLC

Dr. Jennifer Gruber is a Senior Researcher and Evaluator at Magnolia Consulting. Dr. Gruber has designed, coordinated, and supported various evaluations of K-16 educational initiatives funded by entities such as the National Science Foundation, NASA, the U.S. Department of Education, the U.S. Department of Labor, and the National Institutes of Health. Her projects include studies of the implementation and outcomes of STEM education and workforce programs, educator professional development programs, and curricular products and programs, among others. Her areas of expertise include evaluation and research study design, instrument development and validation, and quantitative methods. She currently evaluates two NSF-ATE projects.



EvaluATE is supported by the National Science Foundation under grant number 2332143. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.