
A logic model is a visual tool that shows how a program is designed to work, outlining the relationships among resources, activities, and intended outcomes.
As an external evaluator, I frequently use logic models and find that co-developing them with clients yields significant value, not just in the final product but in the reflective discussions that occur as we build the model together. Here’s an overview of how I lead teams through this process.
Getting Buy-In
I recommend creating a logic model when clients need to articulate their programming to stakeholders, align internal understanding, or design or revise programming. A quick pitch might sound like: “Let’s create a one-page logic model for your program. It visually clarifies what you’re doing, why, and what changes you expect. It can align your team, reduce confusion, and help you use time and resources effectively.”
Logistics
Once there’s agreement, I suggest a small working group (ideally four or five people) that includes both decision makers and frontline staff. The process usually involves three virtual sessions over a few weeks so there is enough time to reflect and revise. During our meetings, I share my screen and develop the model in PowerPoint in real time.
Meeting 1: Impact, Inputs, and Activities
Before the first meeting, I prepare a draft logic model based on a review of existing program materials.
In the meeting, I encourage participants to engage critically and not shy away from hard questions.
As a first step, we review a blank logic model template, and I ask everyone to write down individually what they believe the program’s ultimate impact is. Similarities and differences in responses can be illuminating!
By beginning with impact (rather than moving from left to right across the document), we set the stage to keep every conversation focused on whether activities and outcomes support that goal.
After the group reaches consensus about impact, I present the inputs and activities from my draft logic model. Our conversation at this stage focuses mainly on the activities: checking them for accuracy and alignment with the stated impact.
Often, teams are surprised by the volume of activities they offer, and we discuss whether it is better to have fewer activities with stronger dosage, or more activities with lower dosage. I prompt the team to consider whether the types and number of activities, along with their intensity, can truly drive the desired change.
Meeting 2: Outputs and Outcomes
Before the second meeting, I revise the model based on the first meeting’s discussions. We quickly review those changes, then shift to the outputs section for what is typically a brief discussion of each activity’s outputs. Next, we tackle the short- and medium-term outcomes. I pre-fill these based on program documents and the logic established so far, and we review them one activity at a time. We again discuss logical connections and timing, questioning whether the outcomes are realistic given the planned activities.
Meeting 3: Review and Refine
Before the final session, I update the logic model once more. In this meeting, the team reviews the complete model and confirms that it holds together logically as a whole.
Clients often note that, although the process is intensive, they value the clarity it brings and the alignment it fosters across their team.
Follow-Up
I deliver two versions of the logic model: (a) a detailed version to be used internally and (b) a high-level version for external audiences. Clients often ask me to present these to their broader teams or boards to help explain the program’s structure and expected outcomes.
The final product is meant to be a living document, so I encourage the team to revisit the model annually, and more frequently if there are major programmatic changes.