How the GRE® Tests are Scored

GRE® revised General Test
(tests taken on or after August 1, 2011)

Computer-delivered Test

The Verbal Reasoning and Quantitative Reasoning measures are section-level adaptive: the computer selects the second operational section of a measure based on the test taker's performance on the first operational section. Within each section, all questions contribute equally to the final score. For each of the two measures, a raw score is computed; the raw score is the number of questions answered correctly.

The raw score is converted to a scaled score through a process known as equating. The equating process accounts for minor variations in difficulty among the different test editions as well as the differences in difficulty introduced by the section-level adaptation. Thus a given scaled score for a particular measure reflects the same level of performance regardless of which second section was selected and when the test was taken.

For the Analytical Writing section, each essay receives a score from at least one trained reader, using a six-point holistic scale. In holistic scoring, readers are trained to assign scores on the basis of the overall quality of an essay in response to the assigned task. The essay score is then reviewed by e-rater®, a computerized program developed by ETS, which is used to monitor the human reader. If the e-rater evaluation and the human score agree, the human score is used as the final score. If they disagree by a certain amount, a second human score is obtained, and the final score is the average of the two human scores.

The final scores on the two essays are then averaged and rounded to the nearest half-point interval on the 0–6 score scale. A single score is reported for the Analytical Writing measure. The primary emphasis in scoring the Analytical Writing section is on critical thinking and analytical writing skills rather than on grammar and mechanics. (Read the "Issue" and "Argument" scoring guides.)
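The averaging-and-rounding step can be sketched in Python. Note that the tie-breaking rule for averages that fall exactly between two half-points (e.g., 4.25) is an assumption here; the source does not specify it, so this sketch rounds half up:

```python
import math

def round_to_half(score: float) -> float:
    """Round a score to the nearest half-point on the 0-6 scale.

    Ties (e.g., 4.25) round up -- an assumption, since the exact
    tie-breaking rule is not published.
    """
    return math.floor(score * 2 + 0.5) / 2

def analytical_writing_score(issue: float, argument: float) -> float:
    """Average the two essay scores, then round to the nearest half-point."""
    return round_to_half((issue + argument) / 2)
```

For example, essay scores of 4.0 and 5.0 average to 4.5, which is already on a half-point boundary; scores of 3.5 and 4.0 average to 3.75, which this sketch rounds to 4.0.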

During the scoring process, essay responses on the Analytical Writing section are reviewed by ETS essay-similarity-detection software and by experienced essay readers.

Paper-delivered Test

For the Verbal Reasoning and Quantitative Reasoning sections, a raw score is computed. The raw score is the number of questions answered correctly.

The raw score is then converted to a scaled score through a process known as equating. The equating process accounts for differences in difficulty among the different test editions, so a given scaled score for a particular measure reflects the same level of ability, regardless of the edition of the test that was taken.

For the Analytical Writing section, each essay receives a score from two trained readers, using a six-point holistic scale. In holistic scoring, readers are trained to assign scores on the basis of the overall quality of an essay in response to the assigned task. If the two assigned scores differ by more than one point on the scale, the discrepancy is adjudicated by a third GRE reader. Otherwise, the two scores on each essay are averaged.
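The two-reader rule can be sketched as follows. The assumption that the third reader's score simply becomes the essay score is illustrative; the source says only that the discrepancy "is adjudicated" and does not specify how the third score is combined:

```python
from typing import Optional

def essay_score(reader1: float, reader2: float,
                adjudicator: Optional[float] = None) -> float:
    """Score a single essay from two trained readers on the 0-6 scale.

    If the two scores differ by more than one point, a third reader
    resolves the discrepancy (here, assumed to replace the pair).
    Otherwise the two scores are averaged.
    """
    if abs(reader1 - reader2) > 1:
        if adjudicator is None:
            raise ValueError("third reader required to adjudicate")
        return adjudicator
    return (reader1 + reader2) / 2
```

So scores of 4 and 5 (one point apart) average to 4.5, while scores of 3 and 5 trigger adjudication.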

The final scores on the two essays are then averaged and rounded to the nearest half-point interval on the 0–6 score scale. A single score is reported for the Analytical Writing measure. The primary emphasis in scoring the Analytical Writing section is on critical thinking and analytical writing skills rather than on grammar and mechanics. (Read the "Issue" and "Argument" scoring guides.)

During the scoring process, essay responses on the Analytical Writing section are reviewed by ETS essay-similarity-detection software and by experienced essay readers.

 GRE® General Test
(tests taken prior to August 1, 2011)

Computer-delivered Test

Scores on the Verbal Reasoning and Quantitative Reasoning sections administered prior to August 1, 2011, depended on performance on the questions given and on the number of questions answered in the time allotted.

These sections were computer-adaptive: each question presented was selected to reflect performance on the preceding questions and to meet the requirements of the test design. Test design factors that influenced which questions were presented included:

  • the statistical characteristics (including difficulty level) of the questions already answered
  • the required variety of question types
  • the appropriate coverage of content

For the Analytical Writing section, each essay received a score from at least one trained reader, using a six-point holistic scale. (In holistic scoring, readers are trained to assign scores on the basis of the overall quality of an essay in response to the assigned task.) The essay scores were then reviewed by e-rater, a computerized program developed by ETS, which was used to monitor the human reader. If the e-rater evaluation and the human score agreed, the human score was used as the final score. If they disagreed by a certain amount, a second human score was obtained, and the final score was the average of the two human scores.

The final scores on the two essays were then averaged and rounded up to the nearest half-point interval. A single score was reported for the Analytical Writing section.

The primary emphasis in scoring the Analytical Writing section was on critical thinking and analytical writing skills rather than on grammar and mechanics.

During the scoring process, essay responses on the Analytical Writing section were reviewed by ETS essay-similarity-detection software and by experienced essay readers.

Paper-delivered Test

For the Verbal and Quantitative sections administered prior to August 1, 2011, a raw score was computed. The raw score was the number of questions answered correctly.

The raw score was then converted to a scaled score through a process known as equating. The equating process accounts for differences in difficulty among the different test editions, so a given scaled score for a particular measure reflects the same level of ability, regardless of the edition of the test that was taken.

For the Analytical Writing section, each essay received a score from two trained readers, using a six-point holistic scale. (In holistic scoring, readers are trained to assign scores on the basis of the overall quality of an essay in response to the assigned task.) If the two assigned scores differed by more than one point on the scale, the discrepancy was adjudicated by a third GRE reader.

Otherwise, the scores from the two readings of an essay were averaged. The final scores on the two essays were then averaged and rounded up to the nearest half-point interval. A single score was reported for the Analytical Writing section.

The primary emphasis in scoring the Analytical Writing section was on critical thinking and analytical writing skills rather than on grammar and mechanics.

During the scoring process, essay responses on the Analytical Writing section were reviewed by ETS essay-similarity-detection software and by experienced essay readers.

GRE® Subject Tests 

In calculating reported scores for traditional paper-delivered tests, the number of questions answered correctly is adjusted according to the difficulty level of the questions on the test form. Thus, the same number of correct responses on different test forms will not necessarily result in the same reported score.

In paper-delivered tests, the differences in difficulty among test forms are relatively small and are adjusted through a process known as score equating. The number of questions answered is also figured into the calculation of the reported score because it limits the number that can be answered correctly.

Scoring of the Subject Tests is a two-step process:

  • First, a raw score is computed. The raw score is the number of correct answers minus one-fourth the number of incorrect answers.
  • The raw score is then converted to a scaled score through a process known as equating that accounts for differences in difficulty among the different test editions. Thus, a given scaled score reflects approximately the same level of ability regardless of the edition of the test that was taken.
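These two steps can be sketched in Python. The equating table below is invented for illustration; the real raw-to-scaled conversion tables are edition-specific and not published:

```python
def subject_raw_score(correct: int, incorrect: int) -> float:
    """Raw score = correct answers minus one-fourth of incorrect answers.

    Omitted questions count as neither correct nor incorrect.
    """
    return correct - 0.25 * incorrect

# Hypothetical equating table for one test edition, mapping raw scores
# to scaled scores (values are invented for illustration only).
EQUATING_TABLE = {60: 700, 61: 710, 62: 710, 63: 720}

def scaled_score(raw: float, table: dict) -> int:
    """Convert a raw score to a scaled score using an edition's table."""
    return table[round(raw)]
```

For example, 66 correct and 12 incorrect answers yield a raw score of 66 − 3 = 63, which this hypothetical table maps to a scaled score of 720.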
