Ethics of Criticism in the Church Redux, Sort of

Church culture generally eschews public criticism and correction, a principle that has become better defined over the last 150 years or so. In post-1890 Utah, Church leaders went through some growing pains over politics and the establishment of a two-party system. To even out the voting pattern, they actively campaigned on the Republican side. The result was hurt feelings all around and, worst of all (in President Woodruff’s opinion), flamboyant political rhetoric made its way into public exchanges between Church leaders on the campaign trail. It took some time for this to dampen out, and political views are now mostly held incognito, at least in terms of party. But places remain in the institution where criticism is leveled at other Church members, mostly anonymously, and sometimes it’s pretty virulent.

One of those spots is the Church’s institutions of higher learning (i.e., the various BYUs). I’m talking about the student satisfaction surveys that are run (at least) at the end of every semester. Students have the opportunity to provide numerical ratings of professorial performance and to make anonymous comments about the classroom experience. Over the years I’ve had the chance to review large batches of responses while sitting in on faculty evaluation processes at various institutions. The great majority of commenters in this context don’t indulge in this virulence, but some do, and faculty who receive and read this stuff are often hurt by it to some degree. But that is not my real point, exactly.

This kind of formal feedback system is used in other venues too, and since medieval times students have registered their discontent with professors (simply by not attending lectures and hence not paying for the privilege). In modern times, student comment on faculty has ranged from word of mouth to the present anonymous computerized rating systems. Over the last couple of decades these rating systems have become more and more the equivalent of fast-food employee checklists. They can be a make-or-break hurdle in promotion and tenure at many schools, and BYU is no exception. A natural kind of weeding out now occurs. Faculty at universities naturally sit along a range of values when it comes to student ratings: fall too far down and you will be asked to look elsewhere for employment. This can be true of tenured faculty too, although the bar for dismissal is a bit higher and varies from place to place.

The odd thing about faculty evaluation systems at middle-of-the-road or primarily teaching institutions[1] is that they often punish people by asking them to teach more hours, and reward people who perform well by asking them to teach fewer hours, and with more advanced students. Also, contrary to some popular beliefs, many of the very best (read: most popular) professors register in the upper quartile of research productivity (publishing, grants, awards, etc.). Higher ratings translate to less teaching; lower ratings mean more teaching, all other things being equal. The paradox exists because faculty, at least at the Provo incarnation of BYU (I’m not sure of the priorities at the others) and its academic peers, are expected not only to perform in the classroom but to perform in the arena of academic research and professional service at the university and in their respective disciplines. Of course, there is some balancing going on here: for faculty engaged heavily in research, graduate students usually tag along, and that means significant student mentoring time. [At BYU-Provo it’s more complex because of the large thrust into the undergraduate research arena.]

Really, these student satisfaction measures point in the wrong direction. If you are going to measure learning effects, and one assumes that this is the real concern, then overall downstream student performance in post-requisite courses is probably the thing to consider.[2] But universities are not set up to do this kind of measurement with the data sets they collect and organize; at least historically that’s been true. Moreover, this effect is not immediately measurable (and it’s noisy), and decisions which purport to involve teaching effectiveness are relatively short-term in nature: you’ve got 5 or 6 years to qualify for tenure or get the boot and start over somewhere else. Supposing that a faculty member improves presentation and interaction with students over time, not in the sense of connecting on some reptile-brain level but in terms of the retained learning that takes place, then you probably need longer probationary periods to see trends. As it is, new faculty at academically middle-level institutions like BYU-Provo want to “train for the test” like some ACT/SAT prep program. “How do I get higher student ratings?” is the natural question for a faculty member of moderate research achievement at a teaching-oriented second-tier school, and of course there are batteries of tips and tricks for getting them, more or less couched in the systems at universities themselves. Schools have begun offering faculty teacher training in an honest attempt to make faculty better in the classroom, and education theory has contributed various ideas about methods for doing this; business strategists have been tapped too. Are we really improving at this? I’m not sure. Be suspicious of any analysis you hear about such things; it’s usually done by parties with a vested interest in finding positive effects for one reason or another.

Student perception of faculty is colored by lots of variables. How important is it? It has some value, but it’s clear that just taking a number labeled “overall teacher rating” (is he or she a 10, or a 5, or . . . a 5.5 or a 5.6? Yes, it can be cut that thin) off individual student surveys doesn’t really tell you what you want to know about a given instructor. But the ease of doing it, and the group-think it creates, makes it the drug of choice. And how do you interpret student comments like “Dr. Blahblah is the best professor I’ve ever had” or “Dr. Booboo is the worst teacher in the department; she doesn’t have any concept of fairness”? Then there’s the question of long-term value. Do we want to try to measure the effect of a given classroom on a graduate of 5 or 10 years? Can that even be meaningfully quantified beyond some fondly processed memory-moments?[3] My guess is that it would be nearly impossible in general. In the end I think the current rating systems can detect gross incompetence or appealing public personalities, but I’m not sure of any other value.[4]

[1] By this I mean institutions that are not top-level research players: steps down from the likes of Berkeley, Princeton, Harvard, etc. At middle-level institutions, especially those like BYU that have no aspirations to become “Harvard” and don’t want to compete in the hiring pool of scholars in the academic stratosphere, faculty teaching popularity measures form a major data set in hiring/promotion/tenure processes.

[2] Recent data suggest that teachers who provide a positive feel-good buzz in the classroom may have a short-term effect on student performance, but teachers less capable in charismata can have, under a number of conditions, a much longer positive effect on student achievement. The mob effect doesn’t last. (You have to tread lightly here; there are always counterexamples.) I think the professor who had the greatest effect on me, in terms of course memory and use, was a guy who grew a culture of fear in our class. Another important point among many is the utility of a given faculty member. If a faculty member can only reliably teach a small slice of the undergraduate curriculum in a given discipline, or refuses to go outside a favorite narrow band of courses, that could be considerably more important than whatever number arises on student satisfaction instruments.

[3] This is reminiscent of that old Mormon bugbear: “I don’t remember what she said, but I remember how I felt.” Good grief.

[4] Naturally, it is possible that for one reason or another “worst class, worst teacher” comments can register a picture of faculty who are “doing badly” in the classroom. But blips occur in many careers for various reasons. If satisfaction surveys are the primary measure of teacher competence, then I wonder if we are missing the point. Maybe we should start doing formal anonymous class surveys of Gospel Doctrine teachers every quarter, and then let the teachers see them (seems better than the current bishopric “counsel” system: you know, “so-and-so complained about your lesson,” etc.). Or how about high councilors or bishops? Of course then we’d have to start paying them. Oh wait. Religious Education. Robes of a false priesthood indeed. <grin>


  1. You see, WVS, I don’t know how to feel now, having been duped by your charismatic rendering of the predicament. I chuckled more than once at your pedagogical asides!

  2. In the Religious Ed. department, the mixed and varied student expectations throw yet another odd variable into the mix. I have a few summers of student reviews from that, and it’s amazing how student A gives glowing comments and student B thinks you’re terrible.

  3. Ben, the good ones don’t count. The game is minimizing the bad ones.

  4. It’s true that in general we try to measure how the teaching is perceived (in a popularity/standup-comic type of way), not necessarily what learning that teaching motivates.

    My favorite teachers and professors expressed their passion for their subject. I wonder if red tape at universities and these very progress reports and comments keep the teacher from doing what they love the most…studying their subject and figuring out how to teach it.

    My favorite professor had so much he was interested in he gave us long lists of periodicals to choose from…then we could write our own essay questions for the test. He was very demanding, but what a way to learn.

    I’ll always remember the American Heritage lecturer going on and on and on and on about gazebos. I still couldn’t care less about gazebos, but his passion for the subject was very inspiring.

    Religious teaching…I’m substituting for CES this week, and I love not being paid. It means I have no qualms about not using their materials (which they have finally made available online without a password).

  5. John Mansfield says:

    It’s surprising that faculty contempt for students as a bunch of lazy whiners doesn’t carry through into how much weight student evaluations are given.

  6. The matter is complicated when surveys are treated as subjective; students may naturally feel that they should not give perfect scores all the time and should offer ways to improve, when in fact the teacher is excellent and a lower score or comment is only aimed at helping an already excelling professor/teacher.

  7. Josh B., the statistics seem to show that what you suggest is more a practice among students in the sciences than in the fine arts, where ratings tend to be much higher and come with little corrective commentary.

  8. So the obvious correlation here is to apply these measurement/rating tactics at the ward level. Seems simple enough to utilize the back of the bulletin each week to rate the contribution/spirituality/general wellness, as defined by the reviewer, of any variety of people and programs, including the speaker that day, the administration of the sacrament, the bishop’s attentiveness to welfare matters, the friendliness of the RS, the cleanliness of the bathrooms, the nursery snacks, the musical ability of the choir, and so forth. The Ward Council can review these reviews monthly and seek to make any corrections deemed necessary. I think this is a wonderful idea.

  9. #8, I can see that as either a terrible or wonderful idea. I can see one person stuck in the same calling for decades…

  10. Let’s expand on number 8 by making it similar to the phone-in ratings at my local Jack in the Box restaurant: becoming a potential winner of $10,000 would certainly encourage participation!

  11. Let’s come up with some solutions.

    1. eliminate ratings (bad idea)
    2. educate the students on how important ratings are (also bad idea)
    3. banish student ratings to the black hole (ok idea)
    4. use midterm evaluations and educate students on grading the final evals based on “improvement” from the midterm evaluations (best idea I have right now)

  12. BWJohnson says:

    #4 Where has CES put their coursework and lesson plans online?

  13. Mark Brown says:

    BWJohnson (12),

    Try here:

  14. Great post WVS! I couldn’t agree more. My advisor was one who didn’t get tenure at BYU primarily because of student evaluations. It was extremely disheartening because I thought he had so much potential. I thought he was a great advisor.

  15. I neglected doing my homework.
