As our field explores the impact of informal (and formal) science programs on learning and skill development, it is imperative that we integrate research and evaluation methods into the fabric of the programs being studied. Embedded assessments (EAs) are “opportunities to assess participant progress and performance that are integrated into instructional materials and are virtually indistinguishable from day-to-day [program] activities” (Wilson & Sloane, 2000, p. 182). As such, EAs allow learners to demonstrate their science competencies through tasks that are integrated seamlessly into the learning experience itself.

Since they require that participants demonstrate their skills, rather than simply rate their confidence in using them, EAs offer an innovative way to understand and advance the evidence base for knowledge about the impacts of informal science programs. EAs can take on many forms and can be used in a variety of settings. The essential defining feature is that these assessments document and measure participant learning as a natural component of the program implementation and often as participants apply or demonstrate what they are learning.

Related concepts that you may have heard of:

  • Performance assessments: EA methods can include performance assessments, in which participants do something to demonstrate their knowledge and skills (e.g., scientific observation).
  • Authentic assessments: These assess skills through learning tasks that mirror real-life problem-solving situations (e.g., the specific data collection techniques used in a project) and can be embedded into project activities (Rural School and Community Trust, 2001; Wilson & Sloane, 2000).

You can use EAs to measure participants’ abilities alongside more traditional research and evaluation measures and also to measure skills across time. So, along with surveys of content knowledge and confidence in a skill area, you might consider adding experiential and hands-on ways of assessing participant skills. For instance, if you were interested in assessing participants’ skills in observation, you might already be asking them to make some observations as a part of your program activities. You could then develop and use a rubric to assess the depth of that observation.
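To make the rubric idea concrete, here is a minimal sketch of how observation depth might be scored. The rubric levels, criteria, and feature names below are purely hypothetical illustrations, not measures from any actual project:

```python
# Hypothetical 3-level rubric for the depth of a participant's written
# observation. Level descriptions are illustrative only.
RUBRIC = {
    1: "Names the object observed (e.g., 'a bird')",
    2: "Adds concrete detail (size, color, count, behavior)",
    3: "Connects details to an inference or question (e.g., 'probably nesting')",
}

def score_observation(detail_count: int, has_inference: bool) -> int:
    """Assign a rubric level from two simple, pre-coded features of the text."""
    if has_inference and detail_count >= 1:
        return 3
    if detail_count >= 1:
        return 2
    return 1

# An observation coded with two concrete details but no inference
# would land at level 2 under this sketch.
print(score_observation(detail_count=2, has_inference=False))  # prints 2
```

In practice, the coding of features like "details" and "inferences" would be done by trained raters, and the rubric itself would need reliability and validity testing of the kind discussed below.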

Although EA offers many benefits, the method also poses some significant challenges that have prevented widespread adoption to date. For the application of EA to be successful, there are two significant challenges to address: (1) the need for a standard EA development process that includes reliability and validity testing and (2) the need for professional development related to EA.

With these benefits and challenges in mind, we encourage project leaders, evaluators, and researchers to help us to push the envelope by:

  • Thinking critically about the inquiry skills fostered by their informal science projects and ensuring that those skills are measured as part of the evaluation and research plans.
  • Considering whether projects include practices that could be used as an EA of skill development and, if so, taking advantage of those systems for evaluation and research purposes.
  • Developing authentic methods that address the complexities of measuring skill development.
  • Sharing these experiences broadly with the community in an effort to highlight the valuable role that such projects can play in engaging the public with science.

We are currently working on a National Science Foundation grant (Embedded Assessment for Citizen Science – EA4CS) that is investigating the effectiveness of embedded assessment as a method to capture participant gains in science and other skills. We are conducting a needs assessment and creating embedded assessments at each of three case study sites. Look for updates on our progress and additional blog posts over the next year or so.

Rural School and Community Trust (2001). Assessing Student Work.

Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181-208.

About the Authors

Rachel Becker-Klein


Senior Research Associate, PEER Associates

Rachel Becker-Klein, Ph.D., is a Senior Research Associate at PEER Associates with over a decade of experience as an evaluator. Dr. Becker-Klein's interest in systems thinking, which derives from her Ph.D. in Community Psychology (New York University, 2003), has led her to bring a holistic approach to evaluation and assessment. Embedded assessment tools, as a way to measure participant skills, knowledge, and behavior, are an important part of her work as an evaluator. Dr. Becker-Klein has developed embedded assessment tools for several STEM education programs in both formal and informal educational settings.

Karen Peterman


President, Karen Peterman Consulting

Karen Peterman, Ph.D., is the founder of Karen Peterman Consulting, Co., a small research and evaluation firm in Durham, North Carolina. She has conducted evaluations of STEM education programs for almost 20 years. Her research projects focus on evaluation methods that can be used to gather meaningful data in informal STEM learning environments. Karen leads the EvalFest project with Todd Boyette from Morehead Planetarium and Science Center and Katherine Nielsen from the University of California, San Francisco’s Science and Health Education Partnership.

Cathlyn Stylinski


Senior Agent, University of Maryland Center for Environmental Science

Cathlyn Stylinski is a tenured research faculty member at the University of Maryland Center for Environmental Science. She holds a Ph.D. in ecology and has over a decade of experience in designing and evaluating science education projects in schools and informal education settings, with funding from NSF, NOAA, and other organizations. Her research interests focus on public/student engagement in science and collaborative learning around environmental topics. Her work includes development of a classroom observation tool to understand technology use in science classes and exploration of embedded assessments to measure skill gains in citizen science efforts.

Creative Commons

Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.