
2021 Summer Research and Measurement Sciences (RMS) Internship Program for Graduate Students

Why Intern at Educational Testing Service?


Watch how other interns describe their experience at ETS.


Description

If you are a creative and innovative individual who wants to help shape the future of learning and assessment, we encourage you to apply for the 2021 Summer Research and Measurement Sciences (RMS) Internship program. Steeped in decades of broad expertise, RMS conducts rigorous foundational and applied research on the most critical issues facing education and the workforce. Central to ETS’s legacy of global leadership in learning and assessment, RMS is dedicated to advancing the science and practice of measurement, driving innovation in digital assessment, learning and teaching.

Applying for an Internship at RMS

As an intern in RMS, you’ll work with experts who are nationally and internationally known as thought leaders, trusted advisors and go-to collaborators for their high-impact work addressing significant educational and societal goals. ETS staff in RMS have expertise in psychology, education, psychometrics, measurement, statistics, cognitive or learning sciences and data science.

Interns accepted into the program will collaborate with scientists on projects in these areas and will participate in data analysis, writing and other research tasks. Doctoral students who have completed at least two years of study in one of these fields, or a related field, are encouraged to apply. Upon completion of the program, you’ll have the opportunity to present your findings to teams across R&D.

Apply Now

Note: Applicants may apply to the RMS or AI Labs Internship programs, but not both. However, all applicants may be considered for both programs, depending on qualifications and project needs.

Application Procedures

Complete the electronic application form. On the application form:

  • Choose up to two research areas in which you are interested and provide written statements about your interest in the area(s) of research and how your experience aligns with the project.
  • Attach a copy of your curriculum vitae (preferably as a PDF).
  • If you are (or have been) actively enrolled in a graduate program, attach a copy of your graduate transcripts (unofficial copies are acceptable).
  • Download the recommendation form and share it with your recommenders; the link to the form is on the application. Recommendations should come from an academic advisor, a professor who is familiar with your work as it relates to the project of interest, or an individual with whom you have worked on a closely aligned project. ETS will accept only two recommendation forms. Recommendations must be sent electronically to internfellowships@ets.org and received by February 1, 2021. If you would like to send the recommendation form to your recommenders before submitting your application, you can save your application information and complete it later.

Deadline

  • The application deadline is February 1, 2021.

Decisions

  • Applicants will be notified of selection decisions by March 31, 2021.

Duration

  • Ten weeks: June 1, 2021–August 6, 2021

Compensation

  • $7,500 salary

Eligibility

  • Current full-time enrollment in a relevant doctoral program
  • Completion of at least two years of coursework toward the doctorate prior to the program start date

Selection

The main criteria for selection are scholarship and the fit between an applicant's interests and experience and the research projects.

ETS affirmative action goals will be considered. We strongly encourage students from underrepresented groups and backgrounds to apply. Late or incomplete applications will not be considered.

Past Project Examples

Examples of projects interns have worked on in recent years include:

  • Joint Modeling of NAEP Process Data and Response Data. This project developed methods to study the relationships between a test taker's sequence of actions on interactive NAEP Science items and their performance on the assessment. The study investigated different modeling strategies, including Markov chain models and clustering methods, to label test takers as "efficient" or "inefficient" based on their sequences of actions, and examined how this efficiency related to performance (see the sketch below).
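
For illustration only, here is a minimal sketch of that kind of analysis, not ETS's actual pipeline: each test taker's sequence is summarized as a first-order Markov transition matrix, and the flattened matrices are clustered with scikit-learn's KMeans. The action codes and sequences below are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

N_ACTIONS = 4  # hypothetical size of the action vocabulary

def transition_matrix(seq, n_actions=N_ACTIONS):
    """Row-normalized first-order transition probabilities for one sequence."""
    counts = np.zeros((n_actions, n_actions))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical action sequences for a handful of test takers
sequences = [
    [0, 1, 2, 3, 3],           # moves steadily toward the final action
    [0, 1, 1, 0, 1, 2, 3],     # some backtracking
    [0, 0, 1, 0, 0, 1, 2, 3],  # heavy backtracking
    [0, 1, 2, 3],
]

features = np.array([transition_matrix(s).ravel() for s in sequences])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # cluster ids; interpreting clusters as "efficient" or
               # "inefficient" requires relating them back to performance
```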

  • Methods for Evaluating Fairness of Machine Scores. Artificial intelligence (AI) and machine learning algorithms have been found to produce biased predictions in some settings, so it is important to ensure that scoring procedures, such as automated or machine scoring algorithms, are fair and do not produce such biased predictions. This project examined methods based on confirmatory factor analysis and structural equation modeling to investigate whether the relationships among predictors and item scores were invariant across demographic subgroups, helping to ensure that the scores ETS produces are fair (a simplified sketch follows).
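
The project itself relied on confirmatory factor analysis and structural equation modeling; as a much simpler stand-in, the sketch below uses a moderated regression in statsmodels to test whether a feature-score relationship differs by subgroup, which is one elementary form of an invariance check. The variable names and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "feature": rng.normal(size=n),            # e.g., an essay-scoring feature
    "group": rng.choice(["A", "B"], size=n),  # demographic subgroup label
})
# Simulate scores whose relationship to the feature is the same in both
# groups (the invariant, "fair" case).
df["score"] = 2.0 * df["feature"] + rng.normal(scale=0.5, size=n)

model = smf.ols("score ~ feature * C(group)", data=df).fit()
print(model.summary().tables[1])
# A significant feature:C(group) interaction would indicate that the
# feature-score relationship is not invariant across subgroups.
```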

  • Career and Technical Education (CTE) and Work Latent Pathways in Young Adulthood: Individual and Family Precursors and Career Outcomes. In this project we examined CTE trajectories spanning the high school and college years, using the Education Longitudinal Study of 2002 (ELS:2002). Through latent class analysis we identified three latent pathways:

    1. intensive CTE participants
    2. BA pursuers
    3. intensive workers

    We also examined precursors of membership in these latent pathways: individuals' educational performance, educational expectations and parental socioeconomic backgrounds were associated with a higher likelihood of belonging to the BA pursuer class rather than the intensive CTE participant or intensive worker classes. Finally, we related the pathways to job outcomes and found that BA pursuers had higher earnings and job satisfaction than the other latent classes (see the sketch below).
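
For illustration, here is a minimal latent class analysis sketch, implemented as an EM algorithm for a Bernoulli (binary-indicator) mixture on simulated stand-in data; the actual project used ELS:2002 variables and established LCA software.

```python
import numpy as np

def lca_em(X, n_classes=3, n_iter=200, seed=0):
    """Fit a latent class model to binary data X (n_people x n_items)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)         # class proportions
    theta = rng.uniform(0.25, 0.75, (n_classes, d))  # item-response probs
    for _ in range(n_iter):
        # E-step: posterior class memberships (log scale for stability)
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = log_lik + np.log(pi)
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class proportions and item-response probabilities
        pi = resp.mean(axis=0)
        theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, resp

# Simulated binary indicators (e.g., took CTE courses, worked full time, ...)
rng = np.random.default_rng(1)
true_theta = np.array([[0.9, 0.8, 0.1], [0.1, 0.2, 0.9], [0.5, 0.9, 0.5]])
z = rng.integers(0, 3, size=600)
X = (rng.random((600, 3)) < true_theta[z]).astype(float)

pi, theta, resp = lca_em(X)
print("class proportions:", np.round(pi, 2))
# Precursors and outcomes would then be related to the modal class assignments.
```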

  • Evaluating Measurement Invariance of Translated Tests Across Language Groups. This intern project investigated whether two (or more) language versions of the same test affect item-level and test-level performance, whether score equating is necessary, and how equating should be conducted. We conducted multiple-group item response theory (MIRT) concurrent calibrations to select common items that are statistically invariant and can serve as an internal anchor between the different language forms (e.g., an English form and a Spanish form). We found that text-heavy subjects (e.g., Reading) are more affected by translation in terms of form equivalence; for Math, the two language forms were very similar. A simplified screening sketch follows.
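
The project used multiple-group IRT concurrent calibration, which is beyond a short sketch; as a much simpler classical screen for translation-related item drift, the code below implements an Angoff-style delta plot on hypothetical item proportions-correct, flagging items that sit far from the principal-axis line.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical proportion-correct per item on the English and Spanish forms
p_eng = np.array([0.80, 0.65, 0.70, 0.55, 0.90, 0.40])
p_spa = np.array([0.78, 0.62, 0.50, 0.53, 0.88, 0.38])

# Angoff deltas: 13 - 4 * z(p); a higher delta means a harder item
d_eng = 13 - 4 * norm.ppf(p_eng)
d_spa = 13 - 4 * norm.ppf(p_spa)

# Principal-axis line through the cloud of (English, Spanish) delta pairs
sx, sy = d_eng.std(ddof=1), d_spa.std(ddof=1)
r = np.corrcoef(d_eng, d_spa)[0, 1]
sxy = r * sx * sy
b = (sy**2 - sx**2 + np.sqrt((sy**2 - sx**2) ** 2 + 4 * sxy**2)) / (2 * sxy)
a = d_spa.mean() - b * d_eng.mean()

# Perpendicular distance of each item from the line; |D| > 1.5 is a
# conventional flag for items behaving differently across language forms
D = (b * d_eng - d_spa + a) / np.sqrt(b**2 + 1)
print("flagged items:", np.where(np.abs(D) > 1.5)[0])
```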

Contact

For more information, contact us by email at internfellowships@ets.org.
