Why Intern at Educational Testing Service?
Interns in this eight-week program participate in research under the guidance of an ETS mentor. Each intern is required to give a brief presentation about the project at the conclusion of the internship. The internship is carried out in the ETS offices in Princeton, N.J. This year, projects may be conducted in the following research areas:
Research Area 1: English Language Learning and Assessment
This research area focuses on innovative and technology-enhanced approaches to measuring the reading, writing, listening, speaking, and interactive communication skills of learners for whom English is a second or foreign language. This covers academic English, “everyday English,” and pragmatics. Emphasis is placed not only on measuring a learner’s current standing on these English skills, but also on how these skills are acquired and developed, how learners’ progress may be monitored and assessed, and how teachers play an active role in the acquisition and development process. In this regard, the kinds of skills and knowledge that teachers need, especially teachers of students for whom English is a foreign language, are of interest. In addition, technology-based tools and methodologies for measurement and learning, such as natural language processing, speech recognition and interpretation, and machine learning–based models, are of interest.
Research Area 2: Career and Technical Education
This research area focuses on understanding and supporting the career-and-technical pathways that students (including adult learners) may follow from education to the workforce. Career and Technical Education (CTE) represents a significant and ever-growing pathway to employment and quality of life. Although research has been conducted on the benefits of CTE, critical research areas in need of attention include the factors that (a) influence decisions to enter a CTE pathway, (b) support the success of students once they enter a CTE program, and (c) support the successful transition of students into the workplace. In other words, research is needed to bolster what is known about getting into CTE (including the characteristics, experiences, and qualifications of those entering), getting through CTE (including the learning needs of students; the skills, competencies, and pedagogical strategies especially relevant for CTE educators; and the use of technology-based environments that include simulations and scenarios), and entering the workforce (including identifying the general competencies and noncognitive skills that employers value and expect of CTE graduates, and job-search strategies).
Research Area 3: Modeling and Analyzing Examinee Response Processes
The move from paper-based to digitally based assessments is expanding opportunities to understand more fully how examinees approach the questions in a test by capturing “process data.” Process data refer to the behaviors examinees exhibit while testing and reflect different elements of test-taking behavior, such as the number of keystrokes, the time spent on a test item or task, eye movements on a reading passage, interactions with stimulus materials, and the use of resources and tools. Such data hold the promise of a better understanding of students’ thought processes and strategies during testing. Research in this area focuses on modeling and analyzing examinee response process data to investigate item and task features, evaluate examinee behavior relative to assessment purposes, and validate score meaning.
Projects may involve modeling and analyzing response processes in game- and simulation-based assessments; examining accessibility features of assessment tasks; and comparing testing behaviors across different examinee populations.
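As a hypothetical illustration of the kind of process-data analysis described above, the sketch below aggregates time-on-task per item from a simple event log. The event format and field names are assumptions for the example, not an ETS data specification.

```python
from collections import defaultdict

def item_response_times(events):
    """Aggregate per-item time-on-task from a stream of process-data events.

    Each event is a tuple (timestamp_seconds, examinee_id, item_id, action),
    where action is "enter" when the examinee opens an item and "exit" when
    they leave it (possibly returning later). Returns a dict mapping
    (examinee_id, item_id) to total seconds spent on that item.
    """
    open_since = {}
    totals = defaultdict(float)
    for ts, examinee, item, action in sorted(events):
        key = (examinee, item)
        if action == "enter":
            open_since[key] = ts
        elif action == "exit" and key in open_since:
            totals[key] += ts - open_since.pop(key)
    return dict(totals)

# Hypothetical log: examinee e1 visits item q1, moves to q2, then revisits q1.
log = [
    (0, "e1", "q1", "enter"), (30, "e1", "q1", "exit"),
    (30, "e1", "q2", "enter"), (75, "e1", "q2", "exit"),
    (75, "e1", "q1", "enter"), (90, "e1", "q1", "exit"),
]
times = item_response_times(log)
```

Features like these totals (alongside keystroke counts or tool use) could then feed the behavioral models the projects above describe.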
Research Area 4: Statistical and Psychometric Foundations
ETS has long been at the forefront of developing psychometric methodology for testing and assessment. For example, ETS has been fundamental in developing item response theory (IRT), differential item functioning (DIF) methods, methods for test linking, and the statistical methods used in all large-scale group-level assessments. Current research in this area continues this foundational work and focuses on theoretically based development or enhancement of statistical and psychometric methods for analyzing item and test data, using mathematical theory to support these methods, and applying existing methods to novel problems.
Example projects include developing rigorous procedures for item response modeling and item analysis; developing methods for incorporating measurement error into analyses of educational data; identifying aberrant test scores; measuring student growth in achievement; and modeling test score data for policy and school evaluation in ways that properly account for test score measurement error.
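To make the IRT foundations mentioned above concrete, here is a minimal sketch of the two-parameter logistic (2PL) item response function and its item information, standard textbook formulas in this literature; the code is illustrative, not an ETS implementation.

```python
import math

def p_2pl(theta, a, b):
    """2PL IRT model: probability that an examinee with ability theta
    answers correctly an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P).
    Information peaks where theta equals the item difficulty b."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)
```

When ability equals difficulty (theta == b), the correct-response probability is exactly 0.5 and item information is at its maximum, which is why items are most informative for examinees near their difficulty level.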
Research Area 5: Group-Score Assessment
The use of group-score assessments has grown both domestically and internationally, and with it the research devoted to assessment design and psychometrics for these large-scale programs. That research aims to advance the field by improving analytical and statistical methods, extracting information from newly available types of data, creating efficiencies in processing results, improving the assessments themselves, and developing new measures. Three large-scale programs at ETS are group-score assessments:
- Programme for International Student Assessment (PISA)
- Programme for the International Assessment of Adult Competencies (PIAAC)
- National Assessment of Educational Progress (NAEP) assessments
Research for these assessments includes innovative assessment design, cognitive and noncognitive assessments, complex survey sampling and sampling weights, human rater effects modeling, machine scoring of technology-enhanced items, IRT, latent regression modeling and methodologies, linking and equating, design and analysis of scenario-based and simulation-based tasks and assessments, and modeling and analyzing timing and process data.
Projects may include studies on response-style bias; the use of national and international data to estimate achievement for smaller geographic areas (small area estimation); diagnostic models for simulation-based tasks; latent regression modeling and model selection; digitally-based assessment and linking across testing modes; research on survey questionnaires and noncognitive assessments; and multistage testing designs.
Research Area 6: Applied Psychometrics
Many of the foundational methods described under Research Area 4 are applied to solve ongoing measurement issues for ETS testing programs so that test scores resulting from assessments are psychometrically sound, legally defensible, and cost effective. Research in this area is focused on the application of innovative solutions that result in improvements to operational psychometric methods and procedures related to equating and scaling, item and test analyses, measurement error and reliability, quality assurance and scoring, and scaling of complex item types.
Examples of projects include equity and fairness analyses; understanding valid score use across a variety of use contexts; test security analyses; supporting innovations in item and task types; and the application of various test equating, scaling, and linking methods to unique assessments.
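As one small illustration of the equating methods this area applies, the sketch below implements linear equating under a randomly-equivalent-groups assumption: scores on Form X are mapped onto the Form Y scale by matching means and standard deviations. This is a textbook formula, not an ETS operational procedure.

```python
from statistics import mean, pstdev

def linear_equate(x_scores, y_scores):
    """Return a function mapping Form X scores onto the Form Y scale.

    Linear equating matches the first two moments of the two score
    distributions: e(x) = mu_Y + (sigma_Y / sigma_X) * (x - mu_X).
    Assumes the two forms were taken by randomly equivalent groups.
    """
    mu_x, mu_y = mean(x_scores), mean(y_scores)
    s_x, s_y = pstdev(x_scores), pstdev(y_scores)
    return lambda x: mu_y + (s_y / s_x) * (x - mu_x)

# Toy example: Form Y scores run twice as high and twice as spread out.
equate = linear_equate([10, 20, 30], [20, 40, 60])
```

Operational equating designs (e.g., common-item nonequivalent groups) relax the equivalent-groups assumption, which is part of what makes this research area challenging.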
Research Area 7: Human and Automated Scoring
The use of constructed-response and performance items and tasks is becoming increasingly common at ETS and elsewhere in the measurement field. These item and task types may take the form of written essays, speaking samples, or short responses and may appear in both high-stakes assessments and low-stakes products. Constructed responses may be scored by human raters or through the application of an automated scoring engine. Research in this area focuses on improvements to the methods used to score constructed-response and performance items, including the development of human rater models, methods for predicting and improving rater performance, and the development and evaluation of automated scoring engines.
Projects in this area may include studies that combine human ratings and computer-generated writing features to score constructed-response items; explore efficient rater calibration practices; examine issues in rater training, rater scoring drift, or scoring bias; create metrics for assessing the accuracy of machine scores; determine appropriate sample sizes needed for evaluating machine scores; and evaluate statistical properties of machine scoring modeling techniques.
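One widely used metric for assessing agreement between human and machine scores on an ordinal scale is quadratic weighted kappa, which penalizes disagreements by their squared distance. A minimal sketch (illustrative, not an ETS scoring engine) might look like:

```python
def quadratic_weighted_kappa(human, machine, min_score, max_score):
    """Quadratic weighted kappa between two raters' ordinal scores.

    Returns 1.0 for perfect agreement and 0.0 for chance-level agreement;
    larger score discrepancies are penalized quadratically.
    """
    n_cats = max_score - min_score + 1
    n = len(human)
    # Observed joint score counts.
    obs = [[0] * n_cats for _ in range(n_cats)]
    for h, m in zip(human, machine):
        obs[h - min_score][m - min_score] += 1
    # Marginal counts for each rater.
    h_marg = [sum(row) for row in obs]
    m_marg = [sum(obs[i][j] for i in range(n_cats)) for j in range(n_cats)]
    # Weighted observed vs. chance-expected disagreement.
    num = den = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            w = (i - j) ** 2 / (n_cats - 1) ** 2
            num += w * obs[i][j]
            den += w * h_marg[i] * m_marg[j] / n
    return 1.0 - num / den
```

Metrics like this, alongside exact-agreement rates and standardized mean differences, are among the tools such evaluation studies could draw on.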
- The application deadline is February 1, 2019.
- Applicants will be notified of selection decisions by March 30, 2019.
- Eight weeks: June 3, 2019–July 26, 2019
- $6,000 salary
- Transportation allowance for relocating to and from the Princeton area
- Housing will be provided for interns commuting more than 50 miles
- Current full-time enrollment in a relevant doctoral program
- Completion of at least two years of coursework toward the doctorate prior to the program start date
The main criteria for selection will be scholarship and the match of applicant interests and experience with the research projects.
ETS affirmative action goals will be considered. Late or incomplete applications will not be considered.
The application deadline was February 1, 2019, and the application process is now closed.
Complete the electronic application form. On the application form:
- Choose up to two research areas in which you are interested and provide written statements about your interest in the particular area(s) of research.
- Attach a copy of your curriculum vitae (preferably as a PDF).
- Attach a copy of your graduate transcripts (unofficial copies are acceptable).
- Download the recommendation form and share it with your recommenders. Recommendations should come from your academic advisor and/or major professors who are familiar with your work. ETS will accept only two recommendation forms. Recommendations should be sent electronically to firstname.lastname@example.org and must be received by February 1, 2019. If you would like to download the recommendation form and send it to your recommenders before submitting your application, you can save your application information and return to it later.
For more information, contact us via email.