A Comparative Investigation Into Understandings and Uses of the TOEFL iBT Test, the International English Language Testing System (Academic) Test, and the Pearson Test of English for Graduate Admissions in the United States and Australia: A Case Study of Two University Contexts
- Publication Year:
- Report Number:
- Document Type: ETS Research Report
- Page Count:
- Subject/Key Words: English Language Assessment (ELA), Pearson Test of English Academic (PTE), International English Language Testing System (IELTS)
In line with expanded conceptualizations of validity that encompass the interpretations and uses of test scores in particular policy contexts, this report presents results of a comparative analysis of institutional understandings and uses of 3 international English proficiency tests widely used for tertiary selection—the TOEFL iBT test, the International English Language Testing System (IELTS; Academic), and the Pearson Test of English (PTE)—at 2 major research universities, 1 in the United States and the other in Australia. Adopting an instrumental case study approach, the study investigated levels of knowledge about and uses of test scores in international graduate student admissions procedures by key stakeholders at Purdue University and the University of Melbourne. Data for the study were gathered via a questionnaire eliciting fixed-choice responses, supplemented with qualitative interview data probing the basis for participants' beliefs, understandings, and practices.
The study found that the primary use of language-proficiency test scores, whether TOEFL®, IELTS, or PTE, by those involved in the admissions process at both institutions was often limited to determining whether applicants had met the institutional cutoff for admission. Beyond this focused and arguably narrow use, language-proficiency test scores had little impact on admissions decisions, which largely depended on other required elements of applicants' admissions files. In addition, and despite applicants having submitted test scores that met the required cutoffs, survey respondents and interviewees often indicated dissatisfaction with enrolled students' levels of English-language proficiency, both for academic study and for other roles within the university and in subsequent employment. A slight majority at both institutions indicated that they believed the institutional cutoffs represented adequate proficiency, while the remainder indicated that they believed the cutoffs represented minimal proficiency.
The tension created by users' limited use of language-proficiency scores beyond the cutoff, uncertainty about what cut scores represent, the assumption on the part of many respondents that students should be entering with language skills that allow success in graduate studies, and subsequent dissatisfaction with enrolled students' actual language proficiency may contribute to a perception that English-language proficiency test scores are of questionable value; that is, perceived problems reside with the tests rather than with how test scores are used and interpreted by those involved in the admissions process. At the same time, respondents at both institutions readily acknowledged very limited familiarity with or understanding of the English-language tests that their institutions had approved for admissions.
Owing to this lack of familiarity, a substantial majority at both institutions indicated no preference for either the TOEFL or the IELTS, counter to our expectation that score users in a North American educational context would prefer the TOEFL and that those in an Australian educational context would prefer the IELTS. The study's findings enhance understandings of test attitudes and test use. They may also provide insight for ETS and other language test developers into the context-sensitive strategies that may be needed to encourage test score users to extend their understandings and uses of language-proficiency test scores.