The Faculty Personnel Committee (FPC) has just completed its work
for the 1999-2000 academic year. Members of this year's committee were:
Patrick Bartlein (Geography), Stephen Durrant (East Asian Languages and
Literature), Patricia Gwartney (Sociology), David Herrick (Chemistry),
Heath Hutto (Student member--English), Edward Kame'enui (Education), Lisa
Kloppenberg (Law), Terry O'Keefe (Business), Leslie Steeves (Journalism
and Communication), Kent Stevens (Computer Science), Jenny Young (Architecture).
One of our two student members did not attend after the second meeting.
The FPC workload is heavy. This year we advised the Provost on fifty-two
cases involving tenure and/or promotion. These are broken down as follows:
Internal Cases (48)
    Promotion to Professor                        17
    Promotion to Associate Professor with Tenure  29
    Tenure Only                                    2
External Cases (4)
    Professor with Tenure                          4
Total                                             52
We held twenty-two meetings during the current academic year, each
for approximately two hours. In addition to this time in meetings, we estimate
that we spent an average of three to six hours each week during winter
and spring quarters reading files. Moreover, one member of the committee
is assigned to report each case, preparing a written report first for the
committee and then for the Provost. The member reporting a case (each of
us reported five cases during the year) typically spends a full workday
in preparation.
We believe that the present mission and structure of FPC serve the University well. Careful peer review at each level of the institution (department, college, and university) is an essential part of our University's tenure and promotion process. The current system provides for checks and balances and, we hope, assures fairness.
Service this year on FPC has reinforced our belief in the high quality of our faculty. The vast majority of the cases we have examined were much more than merely adequate--they were impressively strong. Moreover, the committees and department leaders who prepared the cases generally did so with commendable professionalism. We do, however, have one major concern and a number of other suggestions. Several of these concerns are addressed clearly in the Faculty Handbook or other material disseminated from the Provost's Office. We strongly recommend strict adherence to those materials.
Our major concern centers on the current student evaluations of teaching. FPC finds that the interpretability of UO's quantitative teaching evaluations is severely limited because departments (1) omit key comparative information in their candidates' teaching summaries, (2) misinterpret z-scores, and (3) do not explain their summaries of quantitative evaluations.
To compare a promotion and/or tenure candidate's teaching to other instructors' teaching, departments need to report the comparator group. Many departments do not report the group of instructors or courses that define departmental means, standard deviations, and z-scores. Are the comparators all instructors and all courses, or are evaluations stratified into sub-categories by course level or type? Are courses taught by graduate students and adjuncts separated into their own category or included with those of the tenure-related faculty? Without these facts, the magnitudes of means and the signs and sizes of z-scores, all of which are necessary for comparative analysis, are uninformative.
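To make the point concrete, the following sketch uses invented numbers (not actual UO evaluation data) to show how the same candidate mean can yield very different z-scores depending on which comparator group defines the departmental mean and standard deviation:

    # Hypothetical illustration only: all ratings and groupings below are invented.
    from statistics import mean, stdev

    def z_score(value, comparator):
        # z = (value - comparator mean) / comparator standard deviation
        return (value - mean(comparator)) / stdev(comparator)

    candidate_mean = 4.5  # hypothetical course-evaluation mean for the candidate

    # Comparator 1: all instructors in the department, including adjuncts and GTFs
    all_instructors = [3.2, 3.5, 3.8, 4.0, 4.3, 4.5, 4.6, 4.8]

    # Comparator 2: tenure-related faculty teaching comparable courses only
    tenure_related = [3.9, 4.0, 4.1, 4.2, 4.3]

    print(round(z_score(candidate_mean, all_instructors), 2))  # roughly 0.7: unremarkable
    print(round(z_score(candidate_mean, tenure_related), 2))   # roughly 2.5: appears exceptional

Without knowing which pool was used, a reviewer cannot tell whether a reported z-score of 2.5 reflects a genuinely exceptional record or simply a narrow comparator group.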
Many departments also misinterpret z-scores. Significant z-scores are only those greater than +2.0 or less than -2.0 (if departments define comparator groups and if certain statistical assumptions are met). Many departments assert significant deviations from means when no z-scores exceed the +/-2.0 criterion. Other departments overlook obvious z-score patterns (such as all-negative or all-positive values, or over-time trends) in favor of simple counts of z-scores with large values.
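A second sketch, again with invented z-scores, illustrates both problems: no individual value crosses the +/-2.0 criterion, yet a consistent pattern is present that a simple count of large values would miss:

    # Hypothetical z-scores, invented for illustration.
    z_scores = [-0.4, -0.7, -1.1, -0.3, -0.9, -1.5, -0.6]

    # Only |z| > 2.0 marks a notable deviation from the comparator mean
    # (assuming the comparator group is defined and the usual assumptions hold).
    notable = [z for z in z_scores if abs(z) > 2.0]
    print(notable)  # [] -- no single score is "significant" by the criterion

    # Yet every z-score is negative: the candidate's ratings sit consistently
    # below the comparator mean even though no value crosses the threshold.
    print(all(z < 0 for z in z_scores))  # True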
When creating summary tables of quantitative course evaluations, departments should explain their choices and copy the data carefully. It is often illustrative to report certain course types separately (e.g., graduate vs. undergraduate, specialty area vs. non-specialty area, mass classes vs. small classes). When fewer than half of enrolled students complete course evaluations, the results should be treated as unreliable. Courses that have very few students and involve substantial independent study (such as internship, dissertation, and reading and conference) are not appropriate to evaluate. Some departments do not make such distinctions, some exclude certain courses from summary reports for no obvious reason, and some make transcription errors when creating summary tables. Such problems potentially mislead peer reviewers and unnecessarily burden FPC.
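The following sketch shows the kind of screening a department might apply before building its summary table; the course names, types, and numbers are invented, and the 50-percent threshold and the exclusion of independent-study courses follow the suggestions above:

    # Hypothetical evaluation records; all names and numbers are invented.
    courses = [
        {"course": "XX 221", "type": "lecture",  "enrolled": 180, "responses": 95, "mean": 4.2},
        {"course": "XX 407", "type": "seminar",  "enrolled": 12,  "responses": 4,  "mean": 4.9},
        {"course": "XX 601", "type": "research", "enrolled": 3,   "responses": 3,  "mean": 5.0},
    ]

    INDEPENDENT_STUDY = {"research", "internship", "dissertation", "reading"}

    for c in courses:
        if c["type"] in INDEPENDENT_STUDY:
            continue  # not appropriate to evaluate; omit from the summary
        rate = c["responses"] / c["enrolled"]
        flag = "unreliable (<50% response)" if rate < 0.5 else ""
        print(f'{c["course"]}  mean={c["mean"]:.1f}  response rate={rate:.0%}  {flag}')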
If quantitative teaching evaluations are to be useful in promotion and tenure evaluations, departments must, at a minimum, identify and report comparator groups, accurately report and interpret z-scores, and explain their summary tables. Better still, UO should (1) consider adopting new, improved methods of quantitative teaching evaluation that enable departments to avoid problems like those described above more easily, and (2) consider creating a template summary table for quantitative evaluations to standardize the reporting process. Either improvement will reduce burdens in the peer review process.
Our further suggestions and comments follow:
cc. John Moseley, Lorraine Davis, Jack Rice, Carol White