Managing an evaluation involves allocating and using resources—especially people, money, and time—effectively to carry out an evaluation. Plans for resource use are communicated in formal documents such as budgets, work plans, and contracts or memoranda of agreement. 

People: Professional evaluators have credentials and experience that prepare them for a variety of technical, analytic, and interpersonal activities. Evaluators often involve staff from the projects they are evaluating in planning or conducting the evaluation. Evaluations may also engage specialists such as editors, cultural liaisons, or subject matter experts to fill knowledge or skill gaps on the evaluation team.

Money: The cost of an evaluation depends mostly on its scope, because that determines how much personnel time is required. Travel, materials, and overhead costs also affect the overall evaluation budget.     

Time: Decisions about how much time is needed for an evaluation and how to use that time depend on the project’s duration and schedule. The evaluation’s scope and when information is needed for decision making must also be considered.  

Featured Resources

Evaluation Basics for Non-Evaluators: How Much Does It Cost?


Planning and Design

Designing an evaluation involves making a series of decisions about the key questions the evaluation will address and how to structure the inquiry.

This process often begins with developing a project logic model or theory of change to describe how the project will achieve desired outcomes. Key context factors to consider when planning the evaluation are also surfaced at this time.

Specific evaluation questions drive evaluation design. Evaluation questions reflect what the project is designed to do and what the evaluation will measure, as in these examples: 

“To what extent did the project influence the teaching practices of participating faculty?”  

“What is the program’s impact on students’ employability skills?”   

Planning how to answer evaluation questions involves deciding how to collect data and from what sources. Most evaluation questions are best addressed by using both qualitative and quantitative data.  

A critical step in planning is to prepare a summary of the evaluation plan to include in a funding proposal. A more detailed plan is needed to guide the evaluation’s implementation.

Evaluation planning and design decisions must take into account what’s feasible, ethical, and culturally appropriate.

Featured Resources

Designing a Purposeful Mixed Methods Evaluation


Data Collection & Analysis

Information for an evaluation is obtained through systematic procedures. That information is then analyzed. Analysis is the process of organizing and transforming raw data into evaluation findings. 

Evaluators use instruments—such as questionnaires, interview questions, and observation protocols—to collect information in a structured manner. Sometimes evaluators can find existing instruments that are appropriate. More often than not, evaluators must create new ones tailored to the specific project and evaluation questions. Regardless of their origin, all instruments must be reviewed and revised to ensure they are appropriate for the project’s context. 

The raw data collected for an evaluation (usually in the form of numbers or words) must then be cleaned and transformed into usable information through analysis. This process generates the findings that serve as the evidence evaluators use to answer the evaluation questions.

All NSF-funded projects must develop and follow data management plans that specify how data and materials generated by a project will be securely stored and shared.

Featured Resources

Effective Communication Strategies for Interviews and Focus Groups



Interpretation

Interpretation is the process of making sense of analyzed data to answer evaluation questions.

Interpretation involves comparing findings about project performance with targets, benchmarks, expected outcomes, or other points of reference to reach conclusions. 

Another aspect of interpretation is developing recommendations based on evaluation findings and conclusions. Sometimes evaluators develop recommendations to be considered by project staff. Other times, project staff and evaluators work together to recommend actions to take based on the evaluation.  

Featured Resources

Strategies and Sources for Interpreting Evaluation Findings to Reach Conclusions


Communication, Dissemination, & Use of Results

Formal communication about evaluation involves describing the evaluation process and results in a way that stakeholders can understand and use.  

Formal evaluation reports typically describe the project that was evaluated; the evaluation process; and the evaluation findings, conclusions, and recommendations. But communicating results can take many other forms, from real-time discussions to peer-reviewed articles. The format and content of these communications depend on the audience's interest level and how they will use the information. NSF values broad dissemination for the benefit of others outside of the evaluated project.

Reports—whatever form they take—are the vehicle for conveying evaluation information to the people who can use it. Evaluations get “used” when the information leads to a change in the project, its host organization, or the people involved. Using evaluative information to identify opportunities to improve projects is one of the most important purposes of evaluation. 


Quality Review

Reviewing an evaluation for quality involves taking steps to ensure that evaluation plans, activities, and products are sufficiently useful, practical, ethical, and accurate. Whether formal or informal, these reviews should inform decisions about how the evaluation is conducted.

Evaluators can get feedback on their evaluation work from colleagues, project personnel, and advisors. Alternatively, a project may engage another evaluator to conduct a formal evaluation of the evaluation (metaevaluation).

The Joint Committee on Standards for Educational Evaluation has developed standards for educational program evaluation. These 30 standards address five domains: 

  • Utility 
  • Feasibility 
  • Propriety 
  • Accuracy 
  • Accountability 

Featured Resources

Checklist of The Program Evaluation Standards Statements

EvaluATE is supported by the National Science Foundation under grant number 2332143. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.