Church culture generally eschews public criticism or correction, a principle that has become better defined over the last 150 years or so. In post-1890 Utah, Church leaders went through some growing pains over politics and the establishment of a two-party system. To even out the voting pattern, they actively campaigned on the Republican side. The result was hurt feelings all around and, worst of all (in President Woodruff’s opinion), flamboyant political rhetoric made its way into public disputes between Church leaders on the campaign trail. It took some time for this to dampen out, and political views are mostly kept incognito now – in terms of party, at least. But places remain in the institution where criticism is leveled at other Church members, mostly anonymously, and sometimes it’s pretty virulent.
One of those spots is the Church’s institutions of higher learning (i.e., BYU-various). I’m talking about the student satisfaction surveys run (at least) at the end of every semester. Students have the opportunity to provide numerical ratings of professorial performance and to make anonymous comments about the classroom experience. Over the years I’ve had the chance to review large batches of responses while sitting in on faculty evaluation processes at various institutions. The great majority of commenters in this context don’t indulge in this virulence, but some do, and faculty who receive and read this stuff are often hurt by it to some degree. But that is not my real point, exactly.
This kind of formal feedback system is used in other venues too, and since medieval times students have registered their discontent with professors (simply by not sitting their lectures and hence not paying for the privilege). In modern times, student comment on faculty has varied from word of mouth to the present anonymous computerized rating systems. Over the last couple of decades these ratings have become more and more the equivalent of fast-food employee checklists. They can be a make-or-break hurdle in promotion and tenure at many schools, and BYU is no exception. There is a natural kind of weeding out that now occurs. Faculty at universities naturally sit along a range of values when it comes to student ratings: land too far down and you will be asked to look elsewhere for employment. This can be true of tenured faculty too, although the bar for dismissal is a bit higher and varies from place to place.
The odd thing about faculty evaluation systems at middle-of-the-road or primarily teaching institutions is that they often punish people by asking them to teach more hours, and reward people who perform well by asking them to teach fewer hours, and with more advanced students. Also, contrary to some popular beliefs, many of the very best (read: most popular) professors register in the upper quartile of research productivity (publishing, grants, awards, etc.). Higher ratings translate to less teaching, lower ratings mean more teaching, all other things being equal. The paradox exists because faculty are expected not only to perform in the classroom – at least at the Provo incarnation of BYU (I’m not sure of the priorities at the others) and its academic peers – but also to perform in the arena of academic research and professional service, at the university and in their respective disciplines. Of course, there is some balancing going on here: when faculty are heavily engaged in research, graduate students usually tag along, and that means significant student mentoring time. [At BYU-Provo it’s more complex because of the large thrust into the undergraduate research arena.]
Really, these student satisfaction measures point in the wrong direction. If you are going to measure learning effects – and one assumes that this is the real concern – then overall downstream student performance in post-requisite courses is probably the thing to consider. But universities are not set up to do this kind of measurement with the data sets they collect and organize; at least historically that’s been true. Moreover, this effect is not immediately measurable (and it’s noisy), while decisions that purport to involve teaching effectiveness are relatively short-term in nature – you’ve got 5 or 6 years to qualify for tenure or get the boot and start over somewhere else. Supposing that a faculty member improves presentation and interaction with students over time – not in the sense of connecting on some reptile-brain level, but in terms of the retained learning that takes place – then you probably need longer probationary terms to see trends. As it is, new faculty at academically middle-level institutions like BYU-Provo want to “train for the test” like some ACT/SAT prep program. “How do I get higher student ratings?” is the natural question for a faculty member of moderate research achievement at a teaching-oriented second-tier school, and of course there are batteries of tips and tricks to get them, more or less couched in the universities’ own systems. Schools have begun offering faculty teacher training in an honest attempt to make faculty better in the classroom; education theory has contributed various ideas about methods for doing this, and business strategists have been tapped too. Are we really improving? I’m not sure. Suspect any analysis you hear about such things – it’s usually done by parties with a vested interest in finding positive effects for one reason or another.
Student perception of faculty is colored by lots of variables. How important is it? It has some value, but it’s clear that just taking a number labeled “overall teacher rating” off individual student surveys (is he or she a 10, or a 5, or . . . a 5.5 or a 5.6 – yes, it can be cut that thin) doesn’t really tell you what you want to know about a given instructor – but the ease of doing it, and the group-think it creates, makes it the drug of choice. And how do you interpret student comments like “Dr. Blahblah is the best professor I’ve ever had” or “Dr. Booboo is the worst teacher in the department – she doesn’t have any concept of fairness”? Then there’s the question of long-term value. Do we want to try to measure the effect of a given classroom on a graduate of 5 or 10 years? Can that even be meaningfully quantified beyond some fondly processed memory-moments? My guess is that it would be nearly impossible in general. In the end I think the current rating systems can measure gross incompetence or appealing public personalities, but I’m not sure of any other value.
 By this I mean institutions that are not top-level research players – steps down from the likes of Berkeley, Princeton, Harvard, etc. At middle-level institutions, especially those like BYU that have no aspirations to become “Harvard” and don’t want to compete in the hiring pool of scholars in the academic stratosphere, faculty teaching popularity measures form a major data set in hiring/promotion/tenure processes.
 Recent data suggest that teachers who provide a feel-good buzz in the classroom may have a short-term effect on student performance, but teachers less gifted in charismata can have, under a number of conditions, a much longer positive effect on student achievement. The mob effect doesn’t last. (You have to tread lightly here; there are always counterexamples.) I think the professor who had the greatest effect on me, in terms of course memory and use, was a guy who grew a culture of fear in our class. Another important point among many is the utility of a given faculty member. If a faculty member can only reliably teach a small spectrum of the undergraduate curriculum in a given discipline, or refuses to go outside a favorite narrow band of courses, that could be considerably more important than whatever number arises in student satisfaction instruments.
 This is reminiscent of that old Mormon bugbear: “I don’t remember what she said, but I remember how I felt.” Good grief.
 Naturally, it is possible that for one reason or another “worst class – worst teacher” comments can paint a picture of faculty who really are “doing badly” in the classroom. But blips occur in many careers for various reasons. If satisfaction surveys are the primary measure of teaching competence, then I wonder if we are missing the point. Maybe we should start doing formal anonymous class surveys of Gospel Doctrine teachers every quarter. And then let the teachers see them (seems better than the current bishopric “counsel” system – you know, “so-and-so complained about your lesson,” etc.). Or how about high councilors or bishops? Of course then we’d have to start paying them. Oh wait. Religious Education. Robes of a false priesthood indeed. <grin>