CSOM brings clarity to students’ course evaluations
A few years ago, Samuel Graves, chairman of the operations management department in the Carroll School of Management (CSOM), was working his way through the reports on student course evaluations that the University tallies each semester. The students deliver their assessments online, answering 14 questions on a scale of 1 to 5. In addition to ranking an instructor “overall as a teacher,” they rate the extent to which he or she “stimulated interest in the subject matter,” “treated students with respect,” and was “available for help outside the class,” among other points. Graves remembers looking at the 3.1 overall rating given to an instructor in his department and wondering, “What does that mean? Is it good or bad?” He wanted to know how this teacher compared with others offering sections of the same course, and whether the rating was better or worse than the professor had received in previous years. None of that information was evident in the numbers before him.
In 2010, Graves began discussing this data gap with CSOM’s dean, Andy Boynton, and research statistician, Steven Lacey, who assists faculty with data analysis. The result is that today each CSOM department chair receives a single page on which the answers to the questions Graves raised are graphically illustrated.
The new report plots responses under four headings: “Teacher” (the general student rating for the instructor), “Workload” (how demanding the class is), “Overall” (which relates specifically to the course and takes into account the comment frequently heard from students that “I think the professor did a great job with a really boring course,” as Graves relates), and “Grade” (the average final grade given by that instructor for each section). Included are ratings for the most recent semester and for the previous five years, along with the results for other professors teaching the same course. Individual faculty scores are also plotted against department and CSOM averages.
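The comparison at the heart of the report can be sketched in a few lines of Python. This is an illustrative sketch only, not CSOM's actual system; the function and field names, and all the sample ratings, are invented for the example.

```python
from statistics import mean

def contextualize(instructor_score, course_scores, dept_scores, school_scores):
    """Place one instructor's overall rating alongside the averages the
    Teaching Report compares it to: other sections of the same course,
    the department, and the school as a whole."""
    return {
        "instructor": instructor_score,
        "course_avg": round(mean(course_scores), 2),
        "dept_avg": round(mean(dept_scores), 2),
        "school_avg": round(mean(school_scores), 2),
    }

# The 3.1 rating that prompted Graves's question only becomes
# meaningful next to its comparison points (sample data, invented).
report = contextualize(
    3.1,
    course_scores=[3.1, 3.8, 4.2],
    dept_scores=[3.9, 4.0, 3.6],
    school_scores=[4.1, 3.7, 3.9, 4.0],
)
```

Plotting these four numbers together is, in essence, what turns a bare 3.1 into something a chair or dean can act on.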
The first crop of these “Teaching Reports,” as they’re called, became available in late August 2011 (containing data from spring term evaluations). The results are currently sent to the department chair, individual faculty members, and the dean.
The University has been collecting the raw information since the early 1980s. But the graphic rendering of the Teaching Reports “gives clarity . . . that you didn’t really have before unless you plotted the data comparisons yourself,” says Judith R. Gordon, who chairs the management and organization department. “It’s a way to motivate [a faculty member] to become a better teacher,” she says, noting that if you’re a professor at the Carroll School you want to exceed the average assessments. “As a chair, I want my [department’s] average to be better than the school’s average,” she says, “and the dean wants to raise all of the averages.”
Dean Boynton says he has always sent letters of thanks and encouragement to professors who perform well in course evaluations. He has also called up department chairs after gleaning a problem regarding one or more of their faculty. With the advent of the Teaching Report, Boynton says, “the department chairs call me and say, ‘You’re not going to like what you see.’” He adds, “Behaviors change because [the reports] are available.”
Formally, the Teaching Reports will be used in deliberations over tenure and promotion at CSOM, as well as in annual performance reviews. Boston College’s vice provost for faculties, Patricia DeLeeuw, says the University is considering furnishing all professors with “easy-to-follow” data points like those provided in the CSOM report; she cites in particular comparisons among instructors of the same course as helpful in giving professors “a clear picture of how they’re doing” in the eyes of their students.
How are faculty taking to the new report? “My impression is that there was some anxiety about it at the beginning, some discomfort,” says Graves. Faculty asked questions like: “Is this all you’re going to look at? Isn’t there some stuff that you won’t [be able to] quantify?” Boynton’s response has always been that Teaching Reports are intended to “begin a conversation.” A cover letter circulated with the inaugural reports in 2011 acknowledged that the measures do not factor in the qualitative comments included with the students’ evaluations.
Some CSOM faculty remain concerned. Jeffrey Pontiff, the James F. Cleary Chair in Finance, says the ratings are subject to the traditional weaknesses of student course evaluations generally, including that students “can’t observe how knowledgeable” a professor is. He notes that the Teaching Reports show the final grades given to students, but not the grades students expected when filling out the forms, which is precisely what could tilt an evaluation. A review of the literature, published in 2011 in the online academic journal Create Change, found that the expectation of a good grade typically has only a modest uplifting effect on an evaluation. Graves says his team has not yet been able to prove or disprove such a correlation at the Carroll School.
Gordon notes that course evaluations are “just one measure” of the instruction at CSOM; another, she said, is in-class observation by senior faculty, especially of junior, non-tenured teachers. But, she adds, “It’s important to know what students are thinking. They’re our customers.”
Since April 2011, CSOM professors have also received a statistical rendering of their research accomplishments. In an effort led by Pontiff, the school has produced reports that pivot mostly on how much a professor publishes and the “impact” and “influence” of that work. “Impact” is measured largely by how often scholars cite the research. “Influence” relates to the prominence of the journals in which the original article appears: For example, an article by a single author printed in Harvard Business Review receives an “article influence score” of 1.203, while the score for such an article in Information and Management is 0.933. The school plans to update its research evaluations annually.
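The journal-weighting idea behind the “influence” measure can be illustrated with a short Python sketch. The two journal scores are the examples given above; the function, the lookup table's coverage, and the sample publication list are invented for illustration and are not CSOM's actual methodology.

```python
# Influence scores for the two journals cited in the article;
# a real table would cover many more outlets.
JOURNAL_INFLUENCE = {
    "Harvard Business Review": 1.203,
    "Information and Management": 0.933,
}

def influence_total(publications):
    """Tally a professor's 'influence' by summing the influence score
    of the journal behind each publication. Journals missing from the
    table contribute nothing (a simplifying assumption)."""
    return round(sum(JOURNAL_INFLUENCE.get(journal, 0.0) for journal in publications), 3)

# One single-author article in each of the two journals named above.
total = influence_total(["Harvard Business Review", "Information and Management"])
```

Under this scheme, where a professor publishes matters as much as how much: one article in a high-scoring journal can outweigh several in lower-scoring ones.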
Some professors, Pontiff says, have argued that the best way to evaluate research is not to showcase numbers but to ask a specialist in the field to assess its quality and influence. He agrees up to a point but adds that hard numbers appear to give scholars “more of an incentive to stay productive.”