
Colleagues as Raters of Classroom Instruction

Author(s):
Centra, John A.
Publication Year:
1974
Report Number:
RB-74-18
Source:
ETS Research Bulletin
Document Type:
Report
Page Count:
14
Subject/Key Words:
Educational Quality, Peer Evaluation, Teacher Evaluation

Abstract

How would colleague ratings based on actual classroom observations compare with student ratings? How reliable would colleague evaluations be when the influence of teaching reputation is minimized? These questions were investigated by analyzing colleague and student ratings of instructors at a small, new university. Just over three quarters of the faculty participated. Ratings were obtained for 78 teachers; for 54 of these, complete colleague data (two visits by each of three colleagues) were available. The instructors received a summary of the colleague ratings as well as the student ratings; the summaries included the mean score on each item and a frequency distribution of responses. To ease comparison of student and colleague ratings, 16 items were selected from the Student Instructional Report and used as the instrument for the colleague ratings. Each faculty member or department could add up to 12 items to be scored (few did). There was a marked positive bias in colleague ratings; student ratings of the same instructors were also biased, though not to the same extent. A reliability analysis was done on the ratings given to the 54 teachers, with reliability estimated by analysis of variance. There was little agreement "among" colleague raters, but there was a fairly high correlation between the first and second ratings given to a teacher by the "same" colleague. Colleague ratings were less reliable than student ratings--seriously enough to cast doubt on the value of colleague ratings of the aspects of instructor performance included. Comparing colleague and student ratings required determining the level of agreement between the two groups' responses to the 16 items; a correlation analysis was used. It cannot be concluded that colleague generosity in rating teachers stems entirely from a favorable disposition toward fellow teachers.
Colleague ratings of teaching effectiveness based primarily on classroom observation would probably not be reliable enough to use in making administrative decisions on tenure and promotion--at least, not without faculty members investing much more time in training or visitations. Colleagues from the same department might, however, be able to make reliable and useful judgments about such aspects as the teacher's course outline, examinations, and materials used for instruction. (SGK) (15pp.)
