In a retrospective pretest,1 trainees rate themselves before and after a training in a single data collection event. It is useful for assessing individual-level changes in knowledge and attitudes as one part of an overall evaluation of an intervention. This method fits well with the Kirkpatrick Model for training evaluation, which calls for gathering data about participants’ reaction to the training, their learning, changes in their behavior, and training outcomes. Retrospective pretest data are best suited for evaluating changes in learning and attitudes (Level 2 in the Kirkpatrick Model).

The main benefit of using this method is that it reduces response-shift bias, which occurs when respondents change their frame of reference for answering questions. It is also convenient, more accurate than self-reported data gathered using traditional pre-post self-assessment methods, adaptable to a wide range of contexts, and generally more acceptable to adult learners than traditional testing. Theodore Lamb provides a succinct overview of the strengths and weaknesses of this method in a Harvard Family Research Project newsletter article—see bit.ly/hfrp-retro.

The University of Wisconsin Extension’s Evaluation Tip Sheet 27: Using the Retrospective Post-then-Pre Design provides practical guidelines about how to use this method: bit.ly/uwe-tips.

Design

The focus of retrospective pretest questions should be on the knowledge, skills, attitudes, or behaviors that are the focus of the intervention being evaluated. General guidelines for formatting questions: 1) Use between 4 and 7 response categories in a Likert-type or partially anchored rating scale; 2) Use formatting to distinguish pre and post items; 3) Provide clear instructions to respondents. If you are using an online survey platform, check your question type options before committing to a particular format. To see examples and learn more about question formatting, see University of Wisconsin Extension’s Evaluation Tip Sheet 28: “Designing a Retrospective Post-then-Pre Question” at bit.ly/uwe-tips.

For several examples of Likert-type rating scales, see bit.ly/likert-scales—be careful to match question prompts to rating scales.

Analysis and Visualization

Retrospective pretest data are usually ordinal, meaning the ratings are hierarchical, but the distances between the points on the scale (e.g., between “somewhat skilled” and “very skilled”) are not necessarily equal. Begin your analysis by creating and examining the frequency distributions for both the pre and post ratings (i.e., the number and percentage of respondents who answer in each category). It is also helpful to calculate change scores—the difference between each respondent’s before and after ratings—and look at those frequency distributions (i.e., the number and percentage of respondents who reported no change, reported a change of 1 level, 2 levels, etc.).
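The analysis steps above can be sketched in a few lines of Python. The ratings below are hypothetical, assuming a 1–5 ordinal scale (e.g., 1 = "not at all skilled" through 5 = "very skilled"); substitute your own survey responses.

```python
from collections import Counter

# Hypothetical data: each tuple is one respondent's (before, after)
# self-rating on a 1-5 ordinal scale. Replace with your own survey data.
ratings = [(2, 4), (1, 3), (3, 3), (2, 5), (3, 4), (1, 2), (2, 4), (4, 5)]
n = len(ratings)

# Frequency distributions for the pre and post ratings.
pre_freq = Counter(before for before, _ in ratings)
post_freq = Counter(after for _, after in ratings)

# Change scores: after minus before, and their frequency distribution
# (how many respondents changed 0 levels, 1 level, 2 levels, etc.).
changes = [after - before for before, after in ratings]
change_freq = Counter(changes)

for level in sorted(set(pre_freq) | set(post_freq)):
    print(f"Rating {level}: pre {pre_freq.get(level, 0)} "
          f"({pre_freq.get(level, 0) / n:.0%}), "
          f"post {post_freq.get(level, 0)} ({post_freq.get(level, 0) / n:.0%})")

for delta in sorted(change_freq):
    print(f"Changed {delta:+d} level(s): {change_freq[delta]} "
          f"respondents ({change_freq[delta] / n:.0%})")
```

Counting respondents in each category, rather than averaging the ratings, respects the ordinal nature of the data: the distance between adjacent scale points is not assumed to be equal.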

For more on how to analyze retrospective pretest data and ordinal data in general, see the University of Wisconsin Extension’s Evaluation Tip Sheet 30: “Analysis of Retrospective Post-then-Pre Data” and Tip Sheet 15: “Don’t Average Words” at bit.ly/uwe-tips.

For practical guidance on creating attractive, effective bar, column, and dot plot charts, as well as other types of data visualizations, visit stephanieevergreen.com.

Using Results

To use retrospective pretest data to make improvements to an intervention, examine the data to determine whether some groups (based on characteristics such as job role, other demographics, or incoming skill level) gained more or less than others, and compare the results to the intervention’s relative strengths and weaknesses in achieving its objectives. Make adjustments to future offerings based on lessons learned, and monitor to see if the changes lead to improvements in outcomes.
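A subgroup comparison like the one described above can be sketched as follows. The job roles and ratings are hypothetical; the median is used as the summary statistic because change scores derived from ordinal ratings should not be averaged as if they were interval data.

```python
from collections import defaultdict
from statistics import median

# Hypothetical records: (job_role, before, after) on a 1-5 ordinal scale.
respondents = [
    ("faculty", 2, 4), ("faculty", 3, 4), ("faculty", 1, 3),
    ("staff",   3, 3), ("staff",   2, 3), ("staff",   4, 5),
]

# Group the change scores by respondent characteristic (here, job role).
changes_by_group = defaultdict(list)
for group, before, after in respondents:
    changes_by_group[group].append(after - before)

# Report the median change per group to see who gained more or less.
for group, changes in sorted(changes_by_group.items()):
    print(f"{group}: median change = {median(changes)}, n = {len(changes)}")
```

Groups with notably smaller gains may point to content or delivery that needs adjustment in future offerings.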

To learn more, see the slides and recording of EvaluATE’s December 2015 webinar on this topic: http://www.evalu-ate.org/webinars/2015-dec/

For a summary of research on this method, see Klatt and Powell’s (2005) white paper, “Synthesis of Literature Relative to the Retrospective Pretest Design”: bit.ly/retro-syn.

1 This method has other names, such as post-then-pre and retrospective pretest-posttest.

About the Authors

Lori Wingate


Executive Director, The Evaluation Center, Western Michigan University

Lori has a Ph.D. in evaluation and more than 20 years of experience in the field of program evaluation. She is co-principal investigator of EvaluATE and leads a variety of evaluation projects at WMU focused on STEM education, health, and higher education initiatives. Dr. Wingate has led numerous webinars and workshops on evaluation in a variety of contexts, including CDC University and the American Evaluation Association Summer Evaluation Institute. She is an associate member of the graduate faculty at WMU. She, along with Dr. Kelly Robertson, led the development of The Evaluation Center's online training program, Valeo (valeoeval.com).

Creative Commons

Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


EvaluATE is supported by the National Science Foundation under grant number 2332143. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.