February 2000

I. Introduction
II. Survey Methodology
III. Survey Results
IV. Conclusions



I. INTRODUCTION

The broad goals of this survey were to obtain valid and reliable information from instructional faculty about their opinions on specific principles and criteria that could be used to guide increases in faculty compensation. These principles and criteria were developed by the University Senate Budget Committee and Faculty Advisory Council, and translated into survey questions by OSRL.

This telephone survey of a scientific, random sample of UO instructional faculty was designed to complement a simultaneous mail-out/mail-back questionnaire that was sent to the entire population of instructional faculty. The mail questionnaire comprised 178 questions on a broad array of topics, including job satisfaction, workload, work environment, and compensation issues. By contrast, this representative telephone survey comprised just 25 questions focused upon compensation issues, with a few job satisfaction and demographic questions parallel to the mail questionnaire.

Specifically, this survey's questions addressed the following topics:

1. Overall instructional faculty job satisfaction at the University of Oregon;
2. Awareness of the University Senate Budget Committee's "White Paper," and whether respondents read it, skimmed it, or did not read it;
3. The importance of seven specific principles related to faculty compensation goals;
4. The importance of five criteria that may be used to determine raises for instructional faculty;
5. Ranking of the same five criteria;
6. Demographic characteristics, including school/college, race/ethnicity, sex, tenure status, academic status, years of employment, and whether or not respondents also held administrative positions.

In designing the survey instrument, OSRL consulted extensively with representatives of the Faculty Advisory Council, University Senate, and UO administration. Most of the survey questions are OSRL originals, but some are direct parallels to national surveys of college and university faculty.

The survey instrument was comprehensively pretested with recently retired faculty who would not fall into the sample but who would be familiar with compensation issues, in order to avoid potentially biasing the survey results. (Pretesting with a large group of current instructional faculty could cause them to think about the issues more than they would naturally and cause potential bias.) After several rounds of revision and further pretesting, a research assistant programmed the instrument into OSRL's computer-aided telephone interviewing system (CATI) and it was computer pretested.

A facsimile of the survey instrument is provided in Section 2 of the survey documentation; it includes numerical and percentage frequency results to each question. All interviews were completely confidential; the CATI system automatically strips names and telephone numbers from the sample as the survey is completed. No individual's identity can be linked to the results, which are presented only in aggregate form. Human subjects approval was obtained from the UO Institutional Review Board, Committee for the Protection of Human Subjects.


II. SURVEY METHODOLOGY

UO provided OSRL with a list of all instructional faculty members' names and telephone numbers, along with indicators of racial/ethnic minority status and small-department status. From this list, OSRL drew a random sample, over-sampling minority faculty and faculty in small departments. The target sample size for obtaining 95% confidence intervals was 254; 260 interviews were actually completed.
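The relationship between a target margin of error and a required sample size can be sketched with the standard formula for a proportion. A minimal illustration, assuming 95% confidence (z = 1.96) and a maximally variable 50/50 split; the faculty population size of 700 used below is a hypothetical figure for illustration only, not one stated in this report:

```python
import math

def sample_size(moe, p=0.5, z=1.96, population=None):
    """Sample size needed to estimate a proportion p within +/- moe
    at the confidence level implied by z (1.96 for 95%). If a finite
    population size is supplied, apply the standard finite
    population correction, which reduces the required n."""
    n0 = (z ** 2) * p * (1 - p) / moe ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# Without a population correction, +/-5 points at 95% confidence
# requires about 385 completed interviews:
print(sample_size(0.05))                  # 385
# With a hypothetical population of roughly 700 instructional
# faculty, the requirement falls into the neighborhood of the
# report's target of 254:
print(sample_size(0.05, population=700))  # 249
```

The corrected figure does not exactly match the report's target of 254, which presumably reflects the actual faculty population count and design decisions not detailed in this section.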

Interviewer training was conducted on January 27, 2000; see Section 3 for summary interviewer instructions. Interviewing was conducted January 28 to February 2, 2000, primarily from 9:00 AM until 5:00 PM, Monday through Friday, although calls on evenings and weekends were made to complete specially scheduled interviews. Altogether, OSRL interviewers made 1,410 telephone calls to complete 260 interviews. Up to 15 dial attempts were made to each valid office or home telephone number. The average interview length was 8 minutes.

The CASRO response rate was 68%, and the refusal rate was less than 1%. See Section 4 for the complete sample and response rate report.
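The CASRO convention counts completed interviews against all known-eligible cases plus an estimated eligible share of cases whose eligibility could not be determined. A minimal sketch of that arithmetic; the disposition counts below are hypothetical, chosen only to illustrate the calculation (the actual counts are in the Section 4 report):

```python
def casro_response_rate(completes, eligible_nonrespondents,
                        unknown_eligibility, eligibility_rate):
    """CASRO-style response rate: completed interviews divided by
    all known-eligible cases plus the estimated eligible fraction
    of cases of undetermined eligibility."""
    denominator = (completes + eligible_nonrespondents
                   + eligibility_rate * unknown_eligibility)
    return completes / denominator

# Hypothetical dispositions: 260 completes (the actual count),
# 100 eligible non-respondents, 40 numbers of unknown eligibility,
# 55% estimated eligibility among the unknowns.
rate = casro_response_rate(completes=260,
                           eligible_nonrespondents=100,
                           unknown_eligibility=40,
                           eligibility_rate=0.55)
print(f"{rate:.0%}")  # 68%
```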

Survey sampling errors are calculated to assist data users in assessing how much confidence to place in a particular survey result. Larger random samples reduce sampling error. Results for survey questions with low variability also have less sampling error; for example, a variable with a 50/50 proportional split has wider confidence intervals than a variable with a 5/95 proportional split. For this study, the sampling error is ±5.0 percentage points on a variable with a 50/50 proportional split (at the 95% confidence level). For a variable with a 90/10 proportional split, the sampling error is ±3.0 percentage points.
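These sampling errors follow from the standard half-width of a confidence interval around a proportion. A minimal sketch: the uncorrected formula gives somewhat wider intervals for n = 260 than the report states, which is consistent with a finite population correction having been applied; the population size of 700 used below is an assumption for illustration, not a figure from the report:

```python
import math

def margin_of_error(p, n, z=1.96, population=None):
    """Half-width of a confidence interval for a proportion p
    estimated from a simple random sample of size n (z = 1.96 for
    95% confidence). An optional finite population correction
    narrows the interval."""
    se = math.sqrt(p * (1 - p) / n)
    if population is not None:
        se *= math.sqrt((population - n) / (population - 1))
    return z * se

# Uncorrected margins for 260 interviews, in percentage points:
print(round(100 * margin_of_error(0.5, 260), 1))  # 6.1 (50/50 split)
print(round(100 * margin_of_error(0.9, 260), 1))  # 3.6 (90/10 split)
# With a hypothetical population of ~700 instructional faculty, the
# corrected margins land near the report's +/-5.0 and +/-3.0:
print(round(100 * margin_of_error(0.5, 260, population=700), 1))
print(round(100 * margin_of_error(0.9, 260, population=700), 1))
```

Note also that a 90/10 split yields the same margin as a 10/90 split, since p(1 − p) is symmetric in p.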

III. SURVEY RESULTS - UO Instructional Faculty Survey