Yet, it is widely believed that evaluations reflect little more than a popularity contest; that it’s easy to “game” the ratings; that good teachers get bad ratings; that bad teachers get good ratings; and that fear of bad ratings stifles pedagogical innovation and encourages faculty to water down course content.
The result is that professors must face anonymous accusers when it is too late to address whatever led to the negative opinion in the first place. And students are fearful of contacting their ...
BY LÉO CHARBONNEAU | AUG 21 2013. They’re often seen as the bane of a professor’s existence: student course evaluations.
One of the things that most frightens non-tenured faculty members is the prospect of getting too low an average on end-of-term student course evaluations. That is the central point in Stacey …
Among the many criticisms that faculty level at such evaluations is that they’re not taken seriously by students, aren’t applied consistently, may be biased and don’t provide meaningful feedback. Guilty on all counts, if they’re designed poorly, says Pamela Gravestock, the associate director of the Centre for Teaching Support and Innovation at the University of Toronto. But that doesn’t have to be the case. Done well, they can be both a useful and effective measure of teaching quality, she says.
Dr. Gravestock: It often gets boiled down to particular characteristics – communication skills, organization. But ultimately what we should be assessing for teacher effectiveness is learning, and course evaluations are limited in their ability to do that. They’re assessing the student’s perception of their learning, or their experience of learning in a course, but not whether they’ve actually learned anything. That’s why they should be only one factor when you’re assessing effectiveness.
Dr. Gravestock: We have eight core institutional questions that appear on all evaluation forms. And then faculties and departments can add their own that reflect their contexts, needs and interests.
Gravestock is also the project manager behind a total revamp of the course evaluation system at U of T, a process that is still ongoing. She recently spoke with University Affairs about the misperceptions and pitfalls of course evaluations and how to improve them.
Dr. Gravestock: There have been a fair number of studies on the perception that students will provide more favourable feedback when a course is easy, but other studies have countered that claim. Students will respond favourably to a course, even a tough one, provided they knew it was going to be tough. If the instructor’s expectations are made clear at the outset of a course, and students understand what is expected of them, they won’t necessarily evaluate the instructor harshly.
Dr. Gravestock: It’s required that an evaluation be administered for every course, but it’s not required that an individual student fill it out.
Dr. Gravestock: Yes and no. There are definitely certain things that students can provide feedback on, but there are also things that students are not necessarily in a position to provide feedback on. An example of the latter is a question that appears on most course evaluations, asking students to comment on the instructor’s knowledge of the subject matter.
Course evaluations might make sense at a level where the students were both dedicated and somewhat knowledgeable about the subject. Professors fortunate enough to teach such students would probably welcome their feedback since it could help them improve the course.
That’s why faculty members are under pressure to show “good” evaluation numbers, even though that means treating all of the students like little kids.
Our Janet Wilson concludes, “We all know we can’t afford to uphold grading standards because of the pressure put on us.”
Several years ago, Norfolk State University terminated an experienced biology professor, Stephen Aird, because his grades were “too low.” Not undeserved, mind you, but just too low to keep the students satisfied.
That is, because the more experienced professors don’t teach directly to the test, their students do worse in the short run but better in the long run.
Students were randomly assigned to professors. This eliminated potential data-analysis headaches like the possibility that the good students would all enroll with the best professors.
On the Air Force study, another possible explanation is that younger, less experienced instructors are more motivated to make their classes go well, substitute greater relational warmth for their lack of experience, and are better liked; for either or both of these reasons, they receive higher evaluations. These qualities can make the social process of a course more pleasant, even if a more practiced instructor has a better sense of where and how students struggle and can therefore more skillfully emphasize the material that contributes most to learning outcomes.
We evaluate students on how much they have learned all the time. It is called grading. I am doing it (or avoiding it by browsing the internet) right now. The classroom setting is all about evaluation; why do we freak out so much when--briefly!--the students are allowed to evaluate us? The information garnered by student evaluations can be useful; perhaps it is the uses to which administration puts those evaluations that is truly the nub of your concern?
Teaching to the test means teaching with the primary goal of helping students do well on the test. This can be done in a lot of different ways. Teaching to the test indicates a purpose not a method.
Teaching to the test is analogous to giving the student the test as a study guide at home and then having them take that same test in the classroom. Teaching the underlying material from which the textbook content is drawn produces critical thought. Psychology is difficult, and an "A" in the class ought to indicate that the student knows it all.
When you measure performance in the courses the professors taught (i.e., how intro students did in intro), the less experienced and less qualified professors produced the best performance. They also got the highest student evaluation scores. But more experienced and qualified professors' students did best in follow-on courses (i.e., their intro students did best in advanced classes).
A recent comprehensive study, for example, showed that professors get good evaluations by teaching to the test and being entertaining. Student learning hardly factors in, because (surprise) students are often poor judges of what will help them learn. (They are, instead, excellent judges of how to get an easy A.)
But those only work if your peer actually cares about teaching in the first place—or doesn’t want to sabotage you. Outside reviewers (from other departments) could solve this, but only if you underestimate the academic’s propensity toward petty vindictiveness: One bad review from English of a history professor, and we’ve got a permanent schism between two departments that should be clinging to each other for survival.
Or, OK, we could measure performance in subsequent classes—but many of us teach general ed, and our departments will never see those kids again. Measuring “good teaching” is a touchy, complicated subject, and all solutions involve both massive compromises in pedagogical autonomy and substantial amounts of “service work”—two of professors’ very favorite things.
Actual constructive criticism can be delivered as it ought to be: to our faces. Any legitimate, substantive complaints can go to the chair or dean. There is no reason for anonymity—after all, we have no way to retaliate against a student for a nasty evaluation, because we can’t even see our evals until students’ grades have been handed in to the registrar (and if you hated us that much, you won’t take our class again). And besides, I hate to tell you this, but professors know handwriting; we recognize patterns of speech; we can glean the sources of grudges. We know who it was anyway.
Nonetheless, Barnette said course evaluations are more important at Westminster than at most other schools because the college has no tenure system, and they therefore play a large role in decisions about retention and promotions.
The study found that male professors are more likely to be evaluated on their knowledge while female professors are more likely to receive comments about their nurturing characteristics.
Raleigh said the timing of course evaluations is also problematic and can skew the data.