If you’ve ever taken a college- or graduate-level course, surely you’ve completed some kind of summative evaluation form at the end of the semester. At Hofstra University, where I worked for 5 years before this past academic year, we called them CTRs (Course and Teacher Ratings). They consisted of a bunch of Likert scale items (strongly disagree to strongly agree) and a few open-ended questions. For the most part, students hated doing them and faculty members hated having to use them. I didn’t love the wording of many of the items, but I always asked my students to please take them seriously as an opportunity to let me know how I was doing. I told them that I would receive an analysis of the data and their actual responses to the open-ended items.
As part of applying for tenure at VCU, I have to demonstrate growth as an instructor. So, I plugged the CTR data from my 5 years at Hofstra into Excel and discovered some very interesting things. The graph below represents the data from a scale (composed of 5 items) that purports to be an overall measure of the course and the instructor. The x-axis represents the time points from Fall 2002 to Spring 2007. The y-axis represents the range of scores (which can run from 1 to 5). For this particular scale, the lower the number the better. But I flipped the y-axis so that it looks like “better is higher,” a more standard look for such a line graph. The blue line represents my ratings; the red line represents the average score of the other faculty members (including adjuncts) within the program area.
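For readers curious how this kind of trend line gets built, here is a minimal Python sketch of the same steps: average the five Likert items into one scale score per semester, then flip the values so that better (lower) scores plot higher. The analysis described above was done in Excel, and every name and number below is invented for illustration; this is not the actual CTR data.

```python
# Hypothetical item scores (1 = best, 5 = worst) for a few semesters
# out of the Fall 2002 - Spring 2007 range. Made-up numbers.
item_scores = {
    "Fall 2002":   [2.4, 2.6, 2.5, 2.3, 2.7],
    "Spring 2003": [2.1, 2.2, 2.0, 2.3, 2.4],
    "Fall 2003":   [1.8, 1.9, 1.7, 2.0, 1.9],
}

# Collapse the five items into the overall scale score for each semester.
scale_means = {sem: sum(items) / len(items) for sem, items in item_scores.items()}

# One way to get the "better is higher" look without any special charting
# feature: reflect each score about the midpoint of the 1-5 range,
# so 1 (best) plots at 5 and 5 (worst) plots at 1.
plotted = {sem: 6 - score for sem, score in scale_means.items()}

for sem in item_scores:
    print(f"{sem}: scale mean {scale_means[sem]:.2f} -> plotted {plotted[sem]:.2f}")
```

In practice you wouldn’t need to transform the data at all: Excel lets you check “values in reverse order” on the axis, and plotting libraries such as matplotlib offer `ax.invert_yaxis()` for the same effect.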
I entered the professoriate with NO teaching experience. I guest lectured once while I was getting a master’s degree, but that was it. Hofstra took a bit of a chance on me in that respect and I am eternally grateful to them for that. But the graph clearly shows that my ratings were not as good early in my teaching career as they were last year.
I should also add that in my first couple of years as a professor, I was asked to teach a few sections of an undergraduate foundations of education course. I thought I would really enjoy working with undergraduates considering a future as an educator. But after teaching a few semesters, I began to really dislike it. I had a hard time dealing with the students’ limited understanding of and experiences with education. Seemingly simple concepts such as “charter schools” were completely foreign to them. My ratings were not terrible for those course sections, but my department chair, my colleagues, and I decided that my time and energy were better spent working with graduate students.
Overall though, I think the graph tells an accurate and interesting story. Quite simply, I’ve improved significantly as an instructor. The more comfortable I’ve become in my own skin and the more I’ve been able to find my own voice, the more I’ve been able to engage my students. That’s my interpretation of the data.
Academics bemoan the use of “quantitative” ratings of their work as instructors. But, I think it’s critically important that we ask our students to reflect on their experiences in our classes and to provide us with data about our work. I wonder how many of my P-12 colleagues/readers have similar systems in place to collect and analyze summative or formative data about their performance directly from their students. Do you?