At the end of the 2023 ATE PI conference in Washington, DC, I found myself in a reflective mood, perhaps because this was my first experience at an ATE PI conference. Given my positionality and prior research, I found myself comparing ATE PI to other conferences I have attended, and reflecting deeply on my experience at the evaluation pre-conference workshop.
In what follows, I first explain how my identity and scholarly trajectory shaped my experience at the conference. Then, I explain what I learned from the pre-conference workshop and what I am still wrestling with.
As a scholar in the social sciences, I’ve attended numerous national academic conferences, from the American Sociological Association to the American Educational Research Association. I am currently chairing a conference myself with the Sociology of Education Association. I mention this to say that I am very familiar with conferences, but ATE PI was new and refreshing in three main ways.
- I appreciated the focus on real-life evaluation applications. Much of my own research is theoretically driven, so it was refreshing to hear presentations focused on tangible processes I could apply immediately in evaluation and data analysis.
- I noticed and appreciated the emphasis on collaboration throughout the presentations. Evaluations, as we know, are not done in isolation, so it follows that presentations about evaluation and analysis were also collaborative in nature. In some academic spaces, I have noticed much more of a competitive and individualist spirit, and the air of ATE PI was far from that.
- Many presentations and attendees were focused on results, and not just the results of evaluations, but also on a) how programs can better meet the needs of an increasingly diverse STEM workforce and b) how evaluators can support programs in sharpening their focus on inclusion and creating more comprehensive access to STEM pathways.
I also could not help but compare the informative workshop on interpretation and data analysis to how I usually go about research. I recall one of the presenters using an analogy to demonstrate the process of interpreting data for evaluation: Data are the stars, and your analysis and interpretation define the constellations. I’ve made similar analogies and connections in my courses for research in social sciences. I keep learning more about the similarities and differences between the empirical research I do for my research agenda and the evaluative work I do for micro/nanotechnology centers.
The presentation reminded me that evaluation must be done in the service of a program, while my research is in conversation with other literature doing similar work. I also feel more confident in my evaluation skills after being reminded of the similarities between evaluation and much of the qualitative research I do on my own. I'm thankful for the opportunity for focused reflection on evaluation and data analysis, and the ATE PI pre-conference created a space for just that.
Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.