
Guest blog: The Pretest Alternative for Program Impact Evaluation


Most funders require organizations to submit an evaluation plan detailing how impact and outcomes will be measured. Many make evaluation harder than it needs to be by using a traditional pretest-posttest design. In this method, an assessment of participants' knowledge or attitudes is administered at the start of the training and then again at the conclusion. This design has merit, but it is not always the most economical or effective method for assessing social-services program outcomes.

In your next evaluation plan, consider a retrospective pretest-posttest design. In this method, the assessment is administered once at the end of the program, rather than twice. To assess change, the tool asks participants to think back to a prior point in time, such as before the start of the program. Participants then answer the same questions thinking about their feelings today.

Some benefits of the retrospective pretest-posttest design compared to a traditional pretest-posttest design include:

• It is more economical. This method requires only one administration, saving printing costs and time spent on evaluation activities.

• It reduces bias that may occur if participants are not present at the very beginning and end of a program.

• It provides a more accurate assessment of program outcomes, especially when measuring change in knowledge and attitudes. On a traditional pretest, participants may give an inaccurate self-report of their knowledge because they do not yet have enough information to recognize that their knowledge of a subject is not well developed. This is a particular concern when a program addresses complex subjects that are clarified over the course of the intervention.

• It is more useful for documenting self-assessed changes that occur as a result of a particular intervention, in part because retrospective pretest-posttest evaluations are more sensitive to respondent change than traditional pretest-posttest evaluations.

Here is an example of retrospective pretest-posttest assessment questions for a program designed to increase the evaluation capacity of an organization.

Section 1. Please think about your knowledge and skill level prior to your participation in this program.

How familiar were you with the following skills or concepts prior to participating in the DMC institutes? Use the following rating scale: Unaware = 1, Somewhat Familiar = 2, Familiar = 3, Very Familiar = 4, Expert = 5

1. Develop a program logic model
2. Choose appropriate and relevant data-collection methods
3. Understand the importance of fidelity

Section 2. How familiar are you with the following skills or concepts now? Use the following rating scale: Unaware = 1, Somewhat Familiar = 2, Familiar = 3, Very Familiar = 4, Expert = 5

1. Develop a program logic model
2. Choose appropriate and relevant data-collection methods
3. Understand the importance of fidelity

Section 3. To what extent do you agree with the following statements in regard to your organization's participation in this training? Please use the following scale: 1 = strongly disagree, 2 = disagree, 3 = neither disagree nor agree, 4 = agree, 5 = strongly agree.

1. I have better outcomes
2. I have more confidence in my ability to determine program success
3. I am more knowledgeable about program evaluation
4. My organization has started to implement evaluation in other programs
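Scoring an instrument like the one above comes down to comparing each respondent's retrospective "before" rating (Section 1) with the matching "now" rating (Section 2) for the same item. The following sketch illustrates that calculation with made-up ratings for a single item; the data and variable names are illustrative, not from the original program:

```python
from statistics import mean

# Hypothetical responses for one item ("Develop a program logic model"),
# collected in a single sitting: each pair is (retrospective "before"
# rating, "now" rating) on the 1-5 familiarity scale from Sections 1-2.
responses = [(1, 3), (2, 4), (2, 3), (1, 2), (3, 5)]

before = [b for b, _ in responses]
after = [a for _, a in responses]

# Per-respondent change scores; a positive mean suggests
# self-reported growth on this item.
changes = [a - b for b, a in responses]

print(f"mean before: {mean(before):.2f}")   # 1.80
print(f"mean after:  {mean(after):.2f}")    # 3.40
print(f"mean change: {mean(changes):.2f}")  # 1.60
```

Because both ratings come from the same sitting, each respondent's "before" and "now" answers are naturally paired, which is what makes the simple per-person change score meaningful.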

Now it is your turn. Think about your program's specific goals and outcomes. Make a list of the skills, attitudes, and knowledge you expect to change. Then turn them into a retrospective pretest-posttest evaluation.

Looking for more information? These references describe the retrospective pretest-posttest design and can support this methodology in your grant applications.

• Davis, G. (2003). Using a retrospective pre-post questionnaire to determine program impact. Journal of Extension [On-line], 41(4), Article 4TOT4.

• Howard, G. S., & Dailey, P. R. (1979). Response-shift bias: A source of contamination of self-report measures. Journal of Applied Psychology, 64(2), 144-150.

• Howard, G. S., Schmeck, R. R., & Bray, J. H. (1979). Internal invalidity in studies employing self-report instruments: A suggested remedy. Journal of Educational Measurement, 16(2), 129-135.

• Pratt, C. C., McGuigan, W. M., & Katzev, A. R. (2000). Measuring program outcomes: Using retrospective pretest methodology. American Journal of Evaluation, 21(3), 341-349.

• Skeff, K. M., Bergen, M. R., & Stratos, G. A. (1992). Evaluation of a medical faculty development program: A comparison of traditional pre/post and retrospective pre/post self-assessment ratings. Evaluation & the Health Professions, 15(3), 350-366.


Source by Sheri Chaney Jones
