Modern institutions operate in a data-driven world, and that increasingly includes colleges and universities. Faculty data can provide valuable metrics for improving teaching effectiveness and the learning experience of future students.
We spoke with Andy Goodman, Director of the Office of Academic Affairs at the University of Missouri System, where he works on the measurement, evaluation, and improvement of the faculty teaching experience. He shared his best practices for understanding the data, evaluating the facts, and taking action to improve teaching and learning.
“How you organize matters down the road.”
As the time approaches for an institution’s annual review of faculty, it’s crucial to look at what data is most meaningful to your institution and organize accordingly. Goodman explains how the University of Missouri thoughtfully approached its implementation of Interfolio’s Faculty Activity Reporting module in a way that would make reporting and analysis easier. While the module provides a list of recommended categories for organizing faculty data, the University of Missouri customized it further to meet the institution’s needs, specifically with a more in-depth focus on teaching data.
The University of Missouri uses the Faculty Activity Reporting module to segment teaching data by courses taught, student advising, and student mentorship. Other data collected includes faculty extension efforts, courses taught at other institutions, and other teaching activities relevant to the evaluation of their work.
Goodman posed questions to consider for your activity reporting system when beginning the review process: “Will your categories facilitate ease of evaluation?” and “Do your categories and annual review components help your university achieve its goals for annual review?”
Evaluating Data for the Faculty Review Process
After providing insight and recommendations on structuring the data, Goodman discussed how faculty are evaluated, especially as it pertains to teaching and learning efficacy.
On a biannual basis, Goodman works with individuals in their review cycles. Reporting enables these conversations to be data-driven and allows regular discussion of what a faculty member needs to do to “put their best foot forward” in the next year, while highlighting what will be important in future evaluation rubrics. Goodman emphasized the importance of this feedback loop to institutional success.
Working Toward Teaching and Learning Improvement
“The goal of the annual review process is making things better,” Goodman described. It’s important to have buy-in and understanding about why an institution does annual reviews. For him, the guiding principles of the process are to improve teaching, to be forward-looking, and “to approach it thoughtfully, [and] not just a perfunctory exercise.”
Goodman explained that evaluations should be inclusive and holistic, covering teaching preparation and delivery, teaching evaluations from students, and any other materials such as exams or syllabi. The goal of this approach is to make sure that the evaluation is not based on a single data point but instead draws on “the multiple means of which a faculty member can be evaluated.”
For evaluators involved in the reviews, he recommends comparing data across previous semesters, evaluating student feedback, and assessing “bottleneck” points for students in the coursework.
Using Annual Reflection Practices to Improve Institutional Success
Goodman recounted a best practice he has found for encouraging faculty to understand student and peer reviews. For a qualitative course evaluation, he has his faculty bring in a black permanent marker and multi-colored highlighters along with their printed student comments. He asks them to label each comment as “control” or “no control” and then “positive” or “negative” to give context to the reviews. For example, he cited a student saying “hate the haunted classroom” as a no control/negative comment and “really organized lectures – easy to follow” as a control/positive one. He encourages faculty to find central themes in these comments by breaking down which reviews are truly useful.
Finally, he explained how faculty can use these reflection practices to see teaching improvement right away. The process includes making sure faculty have easy access to information, support in identifying problems that may be lowering their SET (student evaluation of teaching) ratings and in adjusting their pedagogy accordingly, and a clear understanding of the metrics on which they are evaluated.
Goodman shares, “The way that teaching improvement happens is when you’re able to sit back and look at everything and say ‘OK, these were my strengths, these are some areas for improvement, and these are some insights that I’ve gained.'”
What Comes Next?
Goodman outlined three next steps that could benefit an institution’s annual review process. First, build a working relationship with the provost’s office to clarify key components of the review process. Second, consult with the teaching center to coordinate professional development opportunities around the use of SETs. And third, explore ways to explicitly align review, promotion, and tenure components with a designated data reporting structure such as Interfolio’s Faculty Activity Reporting module.