Survey of Standards for Foreign Student Applicants

Author(s):
Boldt, Robert F.; Courtney, Rosalea G.
Publication Year:
1997
Report Number:
RR-97-07
TOEFL-RR-57
Subject/Key Words:
College admission; English as a second language; foreign students; standards; surveys; test use

Abstract

This project consisted of a survey of practices and standards in the use of the TOEFL® test (Test of English as a Foreign Language™) and attempted to learn the processes by which these standards were developed. By standards we mean the information and requirements established by institutions to evaluate applicants' English proficiency. Information, including other measures that were used with the TOEFL test, was obtained toward that end. Several findings emerged. The most common practice for using the TOEFL test in admissions was that students scoring above a specified score were judged adequate from the standpoint of English proficiency, while students who scored below it had to provide other evidence of English proficiency that could justify admission. Only a few institutions used a rigid cut score. The measures most commonly allowed as substitutes for the TOEFL test in admissions and placement decisions were the Scholastic Assessment Test (SAT; The College Board, 1995), the American College Testing Assessment (ACT; ACT, 1988), and the Michigan Test of English Language Proficiency (MTELP; Buros, 1965). It was found that most changes in standards consisted of increases in the minimum test scores, and that these changes were initiated in response to problems noted in admitted foreign students' performance. In addition to student performance, the practices of similar institutions were commonly used to set minimum scores. The participants most often mentioned in revising standards were administrators (policy), ESL staff (language trainers), and faculty (teachers of academic content). This survey provides information about the standards of different institutions and how those standards have been set and maintained. Users had the opportunity to tell us about their use of the TOEFL test. The following suggestions emerged: (a) When score user surveys are conducted, relevant institutional characteristics could be collected and used to categorize the results.
Such information would facilitate the process of adjusting qualifying scores; (b) Data that connect TOEFL scores with speaking and essay scores could be made available. Such data could be useful when essay or speaking measures are used to supplement the processing of applicants who achieve marginal scores on the TOEFL test; (c) A system of behaviorally anchored language proficiency scales, based on the incidents that have stimulated changes in cut scores, could be developed. The availability of such scales would facilitate cross-institutional communication about language proficiency levels and could add a new dimension to test validation.