One of my evaluations examines implementation of what project managers call incremental credentials—non-degree credentials that don’t fit a traditional associate–bachelor’s–graduate degree structure.

We have encountered particular challenges evaluating the success of these badges, certificates, certifications, and so-called micro credentials. Reflecting on those challenges might be useful for those charged with evaluating programs that result in such credentials, including technician training programs.

My hope here is to give other evaluators a heads-up about a few of the difficulties we have confronted when studying student success through pursuit of these credentials.

Assessing Fidelity. It’s generally necessary to document whether a program is being implemented as designed (i.e., with fidelity), as a basis for judgments about how well it works. But a key feature of many incremental credentialing approaches is flexibility, with learners able to tailor coursework to their particular needs. When a learner can choose from a menu of courses, modules, or activities (such as with stacking credentials), it may be difficult to clearly define and monitor the “treatment.” Such credentials may also change more often than old-school degree programs. Flexibility can also result in more, smaller cohorts of learners getting differing treatments, complicating outcome analysis design. This is a fundamental predicament.

Terminology. Incremental credentialing, as a rapidly evolving area of higher education practice, comes with a messy vocabulary. What exactly is a “micro credential” or “badge” in the evaluand program’s higher ed system or institution? Does everyone know the generally accepted distinctions between a “certificate” and a “certification”? The answers are central to defining what is being evaluated and testing how it works for learners. It’s also crucial that all evaluation stakeholders (funders, program designers and managers, and consumers of findings) share a consensus understanding of the words describing how a credential solves a problem (its theoretical basis) and the outcomes it is supposed to further. Confusion about constructs and definitions will almost certainly confound implementation and compromise the utility of an evaluation.

Identifying Comparisons. Incremental credentials are often highly specialized, aligned with a particular industry or perhaps even unique to a single employer. They are typically specific to one college and may differ in important ways from legacy postsecondary offerings (e.g., requiring much less time to complete than the alternatives they replace). All of this makes finding the comparison cases required for outcome analysis very difficult. Distinctions may even be substantive enough that the outcome measures typically used to assess success in comparison programs become irrelevant to the cases being studied. How pertinent is “persistence” if a credential entails a few weeks and one registration, rather than 20 courses over two years?

Data Availability. Because state and institutional data systems were designed to track progress and completion for traditional degrees, outcome data for incremental credentials—even basic information like enrollment, completion, and demographics—may not be available from those sources. Data for incremental credential programs may reside in ad hoc systems, potentially even spreadsheets managed by single staff members on local computers. Results of externally administered certification exams are often not managed by (or even shared with) colleges, adding to data sharing and availability issues. And if the evaluator is tasked with measuring learning, the skills-focused orientation common among technician-training credentials makes effective independent assessment extremely difficult.

Advance planning can, however, head off these potential difficulties, albeit likely only through direct collaboration with credential designers and their partners, notably industry partners if they are involved. In any case, I hope that anticipating these issues from the outset of an evaluation proves useful.

About the Author

Kirk Knestis


Principal, Evaluand LLC

Dr. Knestis is founding Principal of Evaluand LLC, a research consultancy based in Washington, DC. Kirk has been a professional evaluator and researcher for more than 20 years, having previously been a business owner, K-12 STEM and arts educator, and university faculty member. A content expert in STEM and workforce education, he specializes in mixed-method evaluations; research and development to test and improve education and social services innovations; and the design of theory-based studies to understand implementation and outcomes of complex, multi-level, and multi-site change innovations.



EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.