Our quarterly newsletter, Conduit, is available for download by clicking the links below. Let us know if you would like to be added to our mailing list for print copies.
Summer '13 Lara Smith provides an overview of "what grant writers need to know about evaluation," Krystin Martens tackles a question about evaluation vs. subject matter, we share our revised Evaluation Planning Checklist for ATE Proposals, and Leslie Goodyear shares thoughts about evaluation and proposals.
*We regret that two of the links to EvaluATE resources (Evaluation Planning Checklist and Logic Model Template) included in the PRINT version of the newsletter do not work. They have been corrected in this electronic version. We apologize for any inconvenience or confusion.
Spring '13 Jane Davidson talks about "getting to the point" in evaluation. We provide an overview of the new Research.gov reporting system and discuss the use of rubrics for connecting the dots between data and conclusions. Leslie Goodyear debuts as our new guest columnist, providing tips for useful reporting.
Winter '13 Michael Quinn Patton gives advice on how to align survey questions for evaluative use. We give you tips on how to find an evaluator, cover the four levels of measurement and the best way to ask about demographics in surveys, and share with you our own webinar survey results.
Fall '12 Jim Kirkpatrick and Wendy Kayser Kirkpatrick of Kirkpatrick Partners provide an overview of the Kirkpatrick Model for evaluation. We define different types of evaluative comparisons, describe the key elements of an evaluation plan, introduce a new checklist for planning and launching evaluations, and discuss different places where evaluation results should be reported.
Summer '12 Michael Lesiecki gives PI-to-PI advice about working with an evaluator at the proposal stage. We share some tips about budgeting for evaluations, define claims and evidence, and discuss the NSF proposal review criteria of intellectual merit and broader impacts.
Spring '12 Donna Milgram talks about gathering data on women and girls in STEM from project partners. We share strategies other ATE PIs have used to gather data from partner institutions, define "underrepresented," share tips for collecting data from student veterans, and highlight our most popular recorded webinars.
Winter '12 Kirk Knestis discusses how to be an informed consumer of external evaluation services, advice is given for dealing with small sample sizes in your evaluations, and we discuss our Annual Survey Data Snapshots.
Fall '11 John Kmiec talks about the value of evaluation capacity building. Other topics include what to do with an evaluation report, how to make the most of an NVC, and involving stakeholders in evaluation.
Summer '11 Liz Teles outlines 10 helpful hints for writing the evaluation sections of ATE proposals, we describe the implications of NSF's new data management plan requirement, and Amy Gullickson identifies three streams of evaluation use in ATE centers.
Spring '11 Wayne Welch outlines steps toward instrument validation, we introduce methods for locating preexisting valid instruments, and Helen Sullivan and Amy Gullickson discuss use of a Project Mapping Template.
Winter '11 Sarah Butzen discusses statistical significance and the use of descriptive statistics in ATE evaluation. Gerhard Salinger addresses the question of what NSF does with grantees' evaluation reports.
Fall '10 Vera Zdravkovich and John Sener share their perspectives about what makes for a productive PI-evaluator relationship. We introduce our new ATE Evaluator Directory and Community of Practice advisors.
Summer '10 Terryll Bailey and Joyce LaTulippe discuss ways they've used EvaluATE resources, Mark Viquesney outlines the criteria for a good webinar, and we introduce our ATE Evaluation listserv, an evaluation primer, and our support funds to bring your evaluator to our preconference workshop.
Spring '10 Amy Germuth suggests tips for increasing survey response rates, and we discuss tracking students after they've graduated, using a logic model template, and the results of our investigation into the background of ATE evaluators.
Winter '10 Elaine Craft reviews the qualities of an exemplary evaluator, and we speak to the role of advisory boards, distinguish between formative and summative evaluation, discuss how NSF is using survey findings, and talk about various communication platforms for evaluators and their clients.
Joellen Killion provides a brief review of professional development impact measures, and we introduce Peggie Weeks, share some findings from our review of ATE evaluations, and get frank about evaluation terminology.
Gloria Rogers discusses best practices in student assessment, and we review findings from the 2009 annual survey, point out ATE evaluation case studies available on our web site, and talk evaluation design.
Lori Wingate outlines the purpose of a needs assessment and the multiple ways EvaluATE is working in this regard, and we discuss how to select an evaluator, review the practice of benchmarking in education, and introduce our upcoming webinar for proposers to the ATE program.
Evaluator Stephen Jurs describes his evaluation of EvaluATE, and we highlight a PI's question on internal and external evaluators, reflect on the PI Conference, and review the 2002 User-Friendly Handbook for Project Evaluation.