Making informed decisions around English-Language Proficiency

November 5, 2011 should have been a big day for Australia's universities, but for most people it probably slipped by unnoticed. On that day, the Department of Immigration and Citizenship (DIAC) began accepting additional English-language test providers for visa purposes, giving universities access to a broader pool of international students.

This is a positive step for the Australian higher education system, and sees the end of a long monopoly which was unique in the global education market. Prospective students to Australia no longer have to rely on taking one test to prove their English-language proficiency; multiple options are now available to them.

However, changing a long-established system doesn't come without its challenges, and the main one for Australian institutions is to understand how the tests rank against one another, ensuring that English-language standards remain consistent regardless of which test a student takes.

Concurrently, the recent Knight Review has placed our universities under the spotlight with regard to quality assurance and visa probity. Over the last few weeks, working committees and groups have been formed to guide, advise and form opinions on how the new arrangements arising from the Knight Review will work and feature in our systems. This is a great start, and I for one am pleased to see some of the main university bodies taking a lead on this.

Naturally, there is going to be some hesitation and confusion from universities while they become more familiar with how the various English proficiency tests compare. We need the very best overseas talent making its way through our education system, and a key component of this is understanding how the English-language test providers stack up against one another. But how exactly can we do this?

Raising standards doesn't have to raise fears; the good news is that tools exist to help us. Standardised English tests vary in their frameworks, methodologies, formats and item types, so comparing scores across them is challenging. What difference does it make if a student does a computer-based test rather than a pencil-and-paper test? What kind of speaking or writing tasks has the test required of the test taker? Experts such as those at the Language Testing Research Centre at the University of Melbourne spend many years of their academic lives addressing these complex questions to provide evidence-based advice to institutions, business and government.

For Educational Testing Service (ETS), the TOEFL iBT® score comparison accepted by DIAC is based on a research study conducted to the highest standards, as defined by the International Language Testing Association's Guidelines for Practice and the Standards for Educational and Psychological Testing. The research drew on a sample of 1,153 students from 70 different countries who took both the TOEFL iBT test and the IELTS® academic test. An equipercentile linking method was used to align the scores from the two assessments. The research study, Linking TOEFL iBT Scores to IELTS® Scores – A Research Report (PDF), which is publicly available, outlines the research design, including the data collection, data analysis and study results from which the score comparison was created. This information and web-based score comparison tools are a great start to a process that is going to take some time for universities to get used to. But it's clear that we all want one thing: to ensure we have the very best talent from overseas making its way through our higher education system.
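The idea behind equipercentile linking is simple: a score on one test is matched to the score on the other test that sits at the same percentile rank in its distribution. The toy sketch below illustrates the principle only, with made-up score samples and a nearest-rank lookup; the actual ETS study used paired scores from the same students and smoothed distributions.

```python
# Illustrative sketch of equipercentile linking: map a score on one test
# to the score on another test with the same percentile rank.
# All data below are hypothetical, not from the ETS/IELTS study.

def percentile_ranks(scores):
    """Percentile rank of each distinct score (midpoint convention)."""
    n = len(scores)
    ranks, below = {}, 0
    for s in sorted(set(scores)):
        count = scores.count(s)
        ranks[s] = (below + count / 2) / n * 100
        below += count
    return ranks

def equipercentile_link(score, from_scores, to_scores):
    """Return the score in to_scores whose percentile rank is closest
    to the percentile rank of `score` within from_scores."""
    pr = percentile_ranks(from_scores)[score]
    to_pr = percentile_ranks(to_scores)
    return min(to_pr, key=lambda s: abs(to_pr[s] - pr))

# Hypothetical samples on two different score scales
test_a = [60, 70, 70, 80, 85, 90, 95, 100, 105, 110]
test_b = [5.0, 5.5, 5.5, 6.0, 6.5, 6.5, 7.0, 7.5, 8.0, 8.5]

print(equipercentile_link(90, test_a, test_b))  # -> 6.5
```

A score of 90 on the first scale sits at the 55th percentile of its sample, so it links to the second-scale score closest to that rank. Real linking studies refine this with presmoothing and interpolation between discrete score points.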

The research-based score comparison released by DIAC for its purposes was only one aspect the department considered when deciding on alternative English proficiency tests for its global office network. DIAC required accepted tests to meet 24 criteria, set after advice from the ELICOS sector: tests had to be large-scale and widely accessible, with robust administration, security and content management, and with easy access to score verification.

Institutions have always been able to set their own English entry requirements, using a range of standardised tests, internal assessments and language of prior study. No English-proficiency assessment can provide finely nuanced judgements of a student's command of specialised or localised colloquial parlance; rather, proficiency on entry should demonstrate a threshold capability to succeed. Some institutions have invested heavily to provide sophisticated multi-channel support through faculty or centralised specialists throughout the student's study life. Ongoing internal benchmarking at key checkpoints to monitor students' progress and assure quality outcomes is a most worthwhile investment. Several universities encourage exit testing of English proficiency to provide graduates with an additional currency that global employers will recognise.

These measures will ensure that our entry requirements are calibrated to give commencing student populations the best overall opportunity to succeed in their studies. Graduates will take home an Australian qualification that they can be proud of — one that can help them fulfil their career aspirations.


Copyright © 2012 by Educational Testing Service. All rights reserved. ETS, the ETS logo, LISTENING. LEARNING. LEADING., TOEFL and TOEFL IBT are registered trademarks of Educational Testing Service (ETS) in the United States and other countries. TOEFL JOURNEY is a trademark of ETS. IELTS is a registered trademark of the University of Cambridge ESOL Examinations Syndicate. 18785
