Introduction

This delineation of evaluation tasks serves as a foundation for EvaluATE’s evaluation-capacity strengthening activities within the ATE program. EvaluATE aims to provide resources to support ATE evaluators and project teams in understanding and carrying out every task listed on this page.

About the tasks:

    • Although the tasks are numbered, they are not strictly sequential. Many tasks may occur simultaneously or iteratively.
    • The tasks are written as actions that have a clear beginning and end, rather than as general considerations, competencies, or principles that should govern the entire evaluation process.¹
    • External evaluators, internal evaluators, project leaders and staff, and others may be responsible for different evaluation tasks or aspects of tasks. The task statements do not specify who is responsible for each one, as this varies by project.
    • The tasks convey what needs to be done in evaluation and, in most cases, why. They do not include details about how the work should be done. (EvaluATE provides guidance on how to do the tasks through its resources and activities.)
    • Several tasks are framed as decisions, which must then be put into action. Decisions may need to be revisited to respond to challenges, opportunities, and needs that arise while an evaluation is underway. To avoid redundancy, the tasks do not include statements about putting decisions into action or revisiting them over time.

¹ Documents that serve that larger purpose include the American Evaluation Association’s Guiding Principles for Evaluators (www.eval.org/About/Guiding-Principles) and Evaluator Competencies (www.eval.org/About/Competencies-Standards), as well as the Joint Committee on Standards for Educational Evaluation’s Program Evaluation Standards (https://evaluationstandards.org/).

Task Areas & Tasks

1. Management

Managing an evaluation involves making decisions about allocating and using the resources needed to conduct the evaluation, including people, time, and money.

1.1 Assemble an evaluation team (which may include project personnel, external evaluators, and consultants) and determine each member’s responsibilities to ensure competent² and timely implementation of the evaluation.

1.2 Develop a budget for the evaluation to show how much money will be allocated to various evaluation activities and cost categories within a specified timeframe. Use the budget to guide decision-making about resource use throughout the project.

1.3 Prepare document(s) to formalize the relationship between the evaluator and the project and/or the project’s institution. These materials should address the evaluator’s scope of work, conditions and procedures for compensation, data ownership, and contractual obligations.

1.4 Engage external experts as needed throughout the evaluation to ensure technical quality and contextual appropriateness (e.g., editor, methods experts, subject matter experts, cultural liaison or mediator).

² American Evaluation Association Evaluator Competencies: www.eval.org/About/Competencies-Standards

2. Engagement

Many types of people may be involved in or affected by the project being evaluated and the evaluation itself (e.g., project leaders and other project representatives, such as participants, staff, funders, project partners or collaborators, and business or industry representatives). These individuals may be engaged in various ways (e.g., providing advice or feedback, participating in decision-making) throughout an evaluation, including during the planning and implementation of the evaluation and while using results. (Note: While project participants are often called upon to provide data for an evaluation, serving as a source of data is not considered engagement per se.)

2.1 Identify specific people who should be involved in the evaluation,³ to what extent they should be involved, and the rationale for their involvement. Involve them as appropriate (e.g., during planning, analysis, interpretation, reporting). Involvement by a diverse group maximizes the evaluation’s cultural and contextual responsiveness, inclusiveness, feasibility, and usefulness.

2.2 Develop and follow a plan for communicating with decision makers and other key project representatives throughout the evaluation to ensure there is a timely and open exchange of information.

2.3 Take steps to ensure people involved in evaluation planning fully understand the general purpose of evaluation and the range of options for focusing and conducting an evaluation so they can contribute meaningfully to decision-making.

2.4 Determine if decision makers and other key project representatives regard certain types of evidence or evaluation approaches as more meaningful and relevant than others. Accommodate these preferences to the extent feasible and appropriate to ensure the evaluation is responsive to their needs and interests.

2.5 Determine appropriate format(s), focus(es), and frequency for reporting to different audiences to ensure each audience receives the right information, at the right times, and in ways that are accessible and understandable.

2.6 Share draft plans, instruments, and reports and gather feedback from decision makers and other key project representatives to enhance these materials’ clarity, relevance, and cultural appropriateness.

2.7 Identify and implement appropriate strategies to ensure that the people involved benefit from their participation in the evaluation (e.g., sharing evaluation results, providing monetary or in-kind compensation or incentives, offering capacity-strengthening activities, or giving back proportionately in other ways).

³ In the remainder of this document, these individuals are called “decision makers and other key project representatives.”

3. Planning & Design

Planning an evaluation requires clarifying what will be evaluated and understanding its context. Designing an evaluation involves making a series of decisions about the key questions the evaluation will address and how to structure the inquiry.

3.1 Create a program logic model, theory of change, or similar document to describe how the project’s activities will bring about intended outcomes. When planning and designing the evaluation, refer to this description to ensure the evaluation is appropriately focused.

3.2 Identify cultural and other contextual factors (e.g., economic, historical, political, technological, environmental, social, geographic) that should be considered when planning and designing the evaluation to ensure its feasibility and appropriateness.

3.3 Determine specific evaluation questions to focus the inquiry in ways that address the information needs of decision makers and key project representatives, as well as those of the National Science Foundation (e.g., to meet specific expectations for ATE evaluations and address NSF’s priorities regarding broadening participation).

3.4 Identify what will be measured to generate evidence to answer the evaluation questions.

3.5 Determine what methods and sources will be used to obtain data to answer the evaluation questions.

3.6 For outcome evaluation questions, identify strategies for determining (a) how project activities contributed to outcomes or (b) how outcomes can be attributed (i.e., causally linked) to activities.

3.7 If existing data are to be used in the evaluation, take steps to ensure timely access to needed information in a usable form (e.g., data-sharing agreements, fees, point of contact).

3.8 Determine what, if any, sampling techniques should be used to obtain data to optimize the evaluation’s feasibility and validity.
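
For example, if a proportionate stratified random sample is chosen, drawing the same fraction from each subgroup preserves the subgroups’ relative sizes in the sample. A minimal Python sketch of that technique, assuming a pandas DataFrame of participants with a hypothetical program_area column and a hypothetical participants.csv file:

    import pandas as pd

    def stratified_sample(df: pd.DataFrame, stratum_col: str, frac: float,
                          seed: int = 42) -> pd.DataFrame:
        """Draw a proportionate stratified random sample: sampling the same
        fraction within each stratum preserves the strata's relative sizes."""
        return (
            df.groupby(stratum_col, group_keys=False)
              .apply(lambda g: g.sample(frac=frac, random_state=seed))
        )

    # Hypothetical roster file and column name, for illustration only.
    participants = pd.read_csv("participants.csv")
    sample = stratified_sample(participants, stratum_col="program_area", frac=0.25)

Fixing the random seed makes the draw reproducible, which helps when the sampling procedure must be documented in the evaluation plan.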

3.9 Determine how the data collected in the evaluation will be analyzed to answer the evaluation questions.

3.10 Identify appropriate sources of information to guide the interpretation of findings to reach evaluative conclusions (e.g., success targets, benchmarks, historical data, decision makers’ and other key project representatives’ expectations).

3.11 Determine if it is feasible and appropriate for additional information to be collected as part of the evaluation to meet other data-related needs the project may have (e.g., research, institutional reporting, ATE Survey).⁴

3.12 Prepare a summary of the evaluation plan for inclusion in the project’s funding proposal to meet submission requirements and to show that evaluation is an integral part of the project.

3.13 Prior to data collection, obtain human subjects institutional review board (IRB) approval, or verification that the evaluation is not subject to review, to meet NSF requirements.

3.14 Prepare a detailed evaluation implementation plan that includes both (a) technical details, such as evaluation questions and a data collection and analysis plan, and (b) managerial details, such as timelines for activities and deliverables. (Evaluation plans prepared for inclusion in funding proposals are usually not sufficiently detailed to guide implementation.)

3.15 Document important changes in the project’s staffing, activities, timeline, or implementation to contextualize findings and, if needed, substantiate changes in the evaluation plan.

3.16 Identify the evaluation’s potential harmful consequences for the individuals involved, the project, or the greater good, and put appropriate safeguards in place to avoid or minimize harm. (Examples of harmful consequences include traumatizing individuals by asking sensitive questions or failing to gather or analyze data that would reveal existing or emergent inequities among project participants.)

⁴ Information about the ATE Survey is available from atesurvey.evalu-ate.org/

4. Data Collection & Analysis

Information for an evaluation is obtained through systematic procedures specified in the evaluation’s design. That information (e.g., institutional data, questionnaire data, focus group transcripts) is then subjected to analysis, which is the process of organizing and transforming raw data into findings.

4.1 Develop a data management plan⁵ that clarifies how data or other material produced through the evaluation will be securely stored and shared.

4.2 Develop or select data collection instruments and protocols (e.g., questionnaires, interview guides) that will be used to obtain data needed to answer the evaluation questions.

4.3 Review and revise data collection instruments before use to ensure they are technically sound, inclusive, and culturally appropriate.

4.4 Collect data according to the plan, instruments, and protocols to generate data of sufficient quality and quantity to answer the evaluation questions.

4.5 Clean data to ensure the information is of sufficient quality for analysis (e.g., identify and correct or remove untrustworthy, improperly formatted, or duplicate information from a data set).
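
As a concrete illustration, a basic cleaning pass over tabular questionnaire data might standardize formats, drop duplicate records, and null out values that fall outside a valid range. A hedged Python sketch; the column names and validity rules below are hypothetical:

    import pandas as pd

    def clean_responses(df: pd.DataFrame) -> pd.DataFrame:
        """Standardize formats, remove duplicates, and null out-of-range values."""
        df = df.copy()
        # Fix improperly formatted fields.
        df["email"] = df["email"].str.strip().str.lower()
        df["response_date"] = pd.to_datetime(df["response_date"], errors="coerce")
        # Remove exact duplicate records.
        df = df.drop_duplicates()
        # Treat values outside a hypothetical 1-5 satisfaction scale as untrustworthy.
        df.loc[~df["satisfaction"].between(1, 5), "satisfaction"] = pd.NA
        return df

Whatever the specific rules, documenting them (as this function does implicitly) makes the cleaning step transparent and repeatable.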

4.6 Identify serious data limitations that need to be disclosed to decision makers and documented in reports.

4.7 Analyze data to generate findings pertinent to the evaluation questions.

4.8 Analyze data to reveal patterns or trends related to diversity, equity, and inclusion in the project’s implementation and outcomes.
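
For instance, disaggregating an outcome measure by demographic group is one straightforward way to surface such patterns. A minimal Python sketch, assuming a hypothetical outcomes.csv file with hypothetical demographic_group and completed (0/1) columns:

    import pandas as pd

    # Hypothetical participant-level outcomes; file and column names are
    # illustrative only.
    outcomes = pd.read_csv("outcomes.csv")

    # Compare completion rates and group sizes to reveal potential gaps.
    by_group = (
        outcomes.groupby("demographic_group")["completed"]
                .agg(completion_rate="mean", n="count")
                .reset_index()
                .sort_values("completion_rate")
    )
    print(by_group)

Reporting group sizes alongside rates helps readers judge whether apparent gaps rest on enough cases to be meaningful.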

⁵ See NSF’s policy on data management plans at www.nsf.gov/bfa/dias/policy/dmp.jsp

5. Interpretation

Interpretation is the process of making sense of analyzed data to answer evaluation questions. This task typically involves translating findings into conclusions about the project’s quality, importance, or effectiveness.

5.1 Consult appropriate sources of information (e.g., success targets, benchmarks, historical data, decision makers and other key project representatives) to interpret results and address evaluation questions in a systematic and transparent manner.

5.2 Refer to relevant cultural and contextual factors (identified in Task 3.2) to inform understanding and interpretation of findings.

5.3 As appropriate, to encourage use of evaluation results, work with decision makers and other key project representatives to develop actionable recommendations for improving the project based on evidence.

6. Communication, Dissemination, & Use of Results

Formal communication about evaluation involves describing the evaluation process and results in a way that is understandable and usable by those who receive the information. (This communication may be done via technical reports, journal articles, or alternative formats such as infographics and videos.) Dissemination involves making the results available to others (beyond the immediate project team) who could use or otherwise benefit from the information. Using the results means making decisions and taking actions based on what was learned from the evaluation.

6.1 Prepare reports that describe the evaluation’s purpose, process, and findings to serve as a stable and credible source of information about the study.

6.2 Discuss evaluation results with decision makers, key project representatives, and other relevant audiences to help them engage with the content and to facilitate action planning based on findings, conclusions, and recommendations.

6.3 Implement action plans, and refer to results as needed to inform ongoing project improvement.

6.4 To support accountability and maximize use, disseminate information about the evaluation to audiences who have a right to know about or are interested in it.

7. Quality Review

Reviewing an evaluation for quality involves obtaining formal and informal feedback about the degree to which the evaluation adheres to the Program Evaluation Standards⁶ (i.e., Utility, Feasibility, Propriety, Accuracy, and Accountability) or other relevant evaluation quality guidelines.

⁶ Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users. Sage. See also evaluationstandards.org.

7.1 Obtain input on the evaluation’s quality and utility at regular intervals to inform decisions about how the evaluation is conducted (e.g., seek input from an advisory committee, feedback from the project team, and/or a formal expert metaevaluation).

Suggested Citation

Robertson, K. N., & Wingate, L. A. (2022). Essential ATE evaluation tasks. EvaluATE. https://evalu-ate.org/essential-ate-evaluation-tasks/

Development of Tasks

Interested in learning how the Essential ATE Evaluation Tasks were developed? Read about the ATE Evaluation Task Validation Study.

Resources

Resources to help you learn about and complete the tasks, as well as more detailed descriptions of each task area, are available on the EvaluATE website.

EvaluATE is supported by the National Science Foundation under grant number 2332143. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.