Frequently Asked Questions

Using the Criterion® Service in Teaching

How can the Criterion service help students?

Students get a response to their writing while it is fresh in their minds. They find out immediately how their work compares to a standard and what they should do to improve it. The Criterion® service also provides an environment for writing and revision that students can use independently, 24 hours a day. This environment, coupled with the opportunity for instant feedback, provides the directed writing practice that is so beneficial for students.

How many topics are available?

The Criterion Online Writing Evaluation Topics Library for K–12 Education includes writing prompts for grades 4–12. Criterion essay topics are constructed to elicit writing in various modes, including persuasive, expository, descriptive and narrative. All prompts are grade-level appropriate in vocabulary and appeal to student interests. Each topic may be scored on either a 6-point or 4-point scale, and the associated rubrics are shown with each prompt.

Currently, there are 61 College Level I topics appropriate for first-year writing courses, practice and placement; 64 College Level II topics appropriate for second-year writing courses and practice; 10 College Preparatory topics; 14 GRE® test topics; and 35 TOEFL® test topics.

Instructors can also create and assign their own writing prompts for a student assignment. Because instructors can create their own topics, the topic library is effectively unlimited.

The Criterion service library of topics contains assignments representing the following writing genres: persuasive, informative, narrative, expository, issue and argumentative.

Where do Criterion service topics come from?

Criterion topics come from a number of sources, including ETS testing programs such as The Praxis Series™ assessments, the GRE and TOEFL tests, and client programs such as NAEP® and the English Placement Test designed for California State University. Criterion topics have been developed based on representative samples that are mode-specific and that utilize 6-point holistic scales based on widely accepted writing standards.

How does the Criterion service handle an unusual writing style?

The Criterion service looks for specific features of syntax, organization and vocabulary. If the essay under consideration is not sufficiently similar to those in its database of already-scored essays, the Criterion service posts a warning, called an Advisory, saying that it is unable to give an accurate score. Advisories usually result from essays that are too brief or those in which the vocabulary is unusual or the content is off-topic.
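
As a rough illustration of how a check like this might work, the sketch below flags an essay that is too brief or whose vocabulary overlaps too little with a topic word list. The thresholds, overlap measure and word list are invented for the example; the actual Advisory logic belongs to the Criterion service and is not reproduced here.

    # Illustrative only: flagging an essay for an advisory-style warning.
    # The thresholds and the overlap measure are invented, not the Criterion logic.
    def needs_advisory(essay, topic_vocabulary, min_words=50, min_overlap=0.10):
        words = essay.lower().split()
        if len(words) < min_words:
            return True  # too brief to score reliably
        overlap = len(set(words) & topic_vocabulary) / len(set(words))
        return overlap < min_overlap  # vocabulary looks off-topic

    topic_vocabulary = {"school", "uniforms", "students", "teachers", "learning"}
    print(needs_advisory("Uniforms help students focus on learning.", topic_vocabulary))  # True: too brief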

Will the use of the Criterion service stifle creative writing among students?

Not necessarily. The Criterion service is designed to be used for evaluating writing done under testing conditions — situations in which even the most creative writers concentrate on "playing it safe" with straightforward and competent writing.

Will the Criterion service catch cheating or plagiarism?

No. The Criterion service simply evaluates the essay. It is up to the institution to ensure that students are working independently and submitting their own work.

Instructors can opt to display a writer's sample for some topics on the "Create Assignment" screen. Students can then view the samples and refer to them while they write their own essays. The sample essays are in a read-only format and cannot be copied and pasted into another document.

What information does the Criterion service report to educators?

Educators have easy and secure access to each student's portfolio of essays, diagnostic reports and scores, as well as summary information on the performance of entire classes.

What information does the Criterion service report to students?

Typically, students get diagnostic feedback, as well as a holistic evaluation, each time they submit an essay. However, educators can block students from seeing their scores — and may choose to do so if they use the Criterion service for benchmarking.

Can instructors limit student feedback?

Yes. Instructors can elect to report all, some or none of the feedback analysis. When creating an assignment, instructors turn the score analysis feature on or off, as well as select which diagnostic feedback to report.

Can instructors limit access to assignments?

Yes, instructors can limit access when selecting assignment options. For example, the date and time an assignment is available are selected by instructors during setup. They can also limit how many times a student can write and revise an assignment.

Can instructors impose time limits on assignments?

Yes. Many assignments available from the Criterion service library of topics have time limits associated with them. When creating the assignment, instructors select whether to impose a time limit, or they can turn off the time-limit function to allow unlimited writing and revision time.

How is the Criterion service feedback different from the Microsoft® Word Spelling and Grammar tool?

The Microsoft® Word Spelling and Grammar tool can provide writers with a quick analysis of common errors. However, the Criterion service, as an instructional tool used to improve writing, provides more precise, targeted feedback. Research shows that the spelling error detection and correction module in the Criterion service has better precision than the corresponding module in Microsoft Word 2007. We continually strive, through research and user input, to improve the precision of all our feedback categories.

What is the Writer's Handbook?

The Writer's Handbook is an intuitive online tool that a student can access while reviewing diagnostic feedback. It explains every error or feature reported by defining it and providing examples of correct and incorrect use. There are five Writer's Handbook versions available: Elementary, Middle School, Descriptive, High School/College and ELL. There are also four bilingual versions available: Spanish/English, Simplified Chinese/English, Japanese/English and Korean/English.

Using the Criterion Service for Remediation, Placement and Assessment

How often does the computer's score agree with the score of an instructor reader?

In the vast majority of cases, ETS researchers found either exact or adjacent agreement (within one point) between the Criterion service scores and those of a trained essay reader, with both using the same scoring guidelines and scale.
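
As an illustration of how exact and adjacent agreement are typically tallied (not the actual ETS analysis), the short sketch below counts matches between two lists of scores; the score lists are invented examples.

    # Illustrative only: exact and adjacent agreement between two sets of essay scores.
    # The score lists below are invented examples, not ETS data.
    def agreement_rates(machine_scores, reader_scores):
        pairs = list(zip(machine_scores, reader_scores))
        exact = sum(1 for m, r in pairs if m == r)
        adjacent = sum(1 for m, r in pairs if abs(m - r) <= 1)  # includes exact matches
        return exact / len(pairs), adjacent / len(pairs)

    e_rater_scores = [4, 5, 3, 6, 4, 2]
    reader_scores = [4, 4, 3, 5, 5, 2]
    exact, adjacent = agreement_rates(e_rater_scores, reader_scores)
    print(f"exact: {exact:.0%}, exact or adjacent: {adjacent:.0%}")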

How can the Criterion service be used for writing remediation and in basic skills writing classes?

Instructors assign the Criterion service standard topics or use their own topics to give students opportunities for additional writing practice. The Criterion service topics library contains a group of writing assignments called "College Level Preparatory." These topics are graded against a lower level scoring rubric and can be assigned to gradually move incoming freshmen up to the first-year writing level. Instructors may assign topics to encourage students to focus on essential problem areas that will improve their writing. The immediate feedback features of the Criterion service provide additional motivation for students to write and revise their essays when writing on their own.

How are the Criterion scores used for placement?

Students may be assigned to classes on the basis of their scores on a Criterion service-scored essay — or the combination of a Criterion service score and other indicators. The electronic score should not be the sole basis for a placement decision. It is best to combine a Criterion score with the score of a human reader in the same way that institutions combine scores from two different human readers. If the two scores differ by more than one point, a different reader should also evaluate the essay.

How is the Criterion service used for assessment purposes?

Some institutions use the Criterion service scores for exit testing — combining a Criterion service score with the score from a reader in the same way they combine scores from two different readers. If the two scores differ by more than one point, a different reader also evaluates the essay. Some institutions use the Criterion service for benchmark testing, assigning the Criterion service-scored essays at specified points during an academic term.

How can the Criterion service be used in a writing lab?

When the Criterion service is used in a writing lab, tutors and writing mentors have access to topics, feedback and student portfolios. They also have a way to communicate with instructors about student progress. Use of the Criterion service in a writing lab facilitates writing across the curriculum when students use the lab to check in-progress writing for all of their classes. Providing access to an open-ended instructor's topic allows students to write an essay about any subject assigned by any instructor. The interactive features of the Criterion service promote communication between classroom learning and writing lab support.

How do students feel about being scored by a machine?

Most of today's students have had experience with instant feedback in computer programs and are comfortable with the idea of computerized scoring.

Can the Criterion service score essays on other topics?

Yes. Using the Scored Instructor Topic feature, teachers can create their own topics that are parallel to the Criterion service library prompts, and the students' essays will receive Criterion scores upon completion. A link in the Criterion service provides step-by-step instructions on how to create either a persuasive or expository topic that can be scored.

Understanding the Technology

What is a Criterion score?

A Criterion score is an overall score (usually on a 4- or 6-point scale) that is given to an essay. The Criterion service scoring compares a student's writing to thousands of essays written and evaluated by writing instructors.

The essays used to build the scoring models have been scored by trained readers and were written by students under timed-testing conditions. The writers had no opportunity to revise, use a spell-checker or reflect on what they had written. So when students write on the Criterion service topics in a regular class, working under more relaxed conditions, instructors and students should recognize that students' scores may not precisely compare to those of the samples.

The Criterion score is a holistic score based on the traits of word choice, convention and fluency/organization. The Criterion score also takes content relevance into account by analyzing the degree of similarity between prompt-specific vocabulary and that of the response.
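
One common way to measure that kind of vocabulary overlap is cosine similarity between word-count vectors. The sketch below shows the general idea only; it is not the e-rater content model, and the prompt and response are invented examples.

    # Illustrative only: cosine similarity between the vocabulary of a prompt
    # and the vocabulary of a response. Not the actual e-rater content analysis.
    import math
    import re
    from collections import Counter

    def cosine_similarity(text_a, text_b):
        a = Counter(re.findall(r"[a-z']+", text_a.lower()))
        b = Counter(re.findall(r"[a-z']+", text_b.lower()))
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    prompt = "Should schools require students to wear uniforms?"
    response = "I believe schools should require uniforms because students focus better."
    print(round(cosine_similarity(prompt, response), 2))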

Does the Criterion service report trait scores?

Yes. Trait scores are shown as Developing, Proficient and Advanced. These levels are based on a "normative range," the band in which the majority (60 percent) of student scores at a grade level falls. Responses scoring within this range are considered proficient at the grade level, responses scoring below it are considered to be developing these traits, and responses scoring above it are considered advanced.
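
In concrete terms, a normative range of this kind can be pictured as the band between fixed low and high cutoffs chosen so that the middle 60 percent of sampled responses fall inside it. The sketch below illustrates only the banding logic; the cutoff values are invented, and the actual Criterion ranges are established by ETS.

    # Illustrative only: banding a trait score against a normative range.
    # The cutoff values below are invented, not the Criterion values.
    def trait_band(score, norm_low, norm_high):
        """Classify a trait score relative to the range covering the middle 60 percent."""
        if score < norm_low:
            return "Developing"
        if score > norm_high:
            return "Advanced"
        return "Proficient"

    # Suppose the middle 60 percent of grade-level scores falls between 2.5 and 4.5.
    for s in (2.0, 3.7, 5.0):
        print(s, trait_band(s, norm_low=2.5, norm_high=4.5))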

How does the Criterion service come up with its scores?

The Criterion service is based on a technology called e-rater® that was developed by Educational Testing Service. The e-rater scoring engine compares the new essay to samples of essays previously scored by readers, looking for similarities in sentence structure, organization and vocabulary. Essays earning high scores are those with characteristics most similar to the high-scoring essays in the sample group; essays earning low scores share characteristics with low-scoring essays in the sample group.

What is the technology used in the e-rater scoring?

The e-rater scoring engine is an application of Natural Language Processing (NLP), a field of computer technology that uses computational methods to analyze characteristics of text. Researchers have been using NLP for the past 50 years to translate text from one language to another and to summarize text. Internet search engines currently use NLP to retrieve information.

The e-rater scoring engine uses NLP to identify the features of the faculty-scored essays in its sample collection and store them — with their associated weights — in a database. When e-rater evaluates a new essay, it compares its features to those in the database in order to assign a score.

Because the e-rater scoring engine is not doing any actual reading, the validity of its scoring depends on the scoring of the sample essays from which the e-rater database is created.
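
A minimal sketch of that kind of feature-and-weight scoring appears below. The features, weights and scale clamping are invented placeholders meant only to show the mechanics; the real e-rater engine derives its features and weights from large samples of human-scored essays.

    # Illustrative only: scoring an essay from weighted features.
    # The feature set and weights are hypothetical, not the e-rater model.
    def extract_features(essay):
        words = essay.split()
        sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
        return {
            "word_count": len(words),
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "distinct_word_ratio": len(set(w.lower() for w in words)) / max(len(words), 1),
        }

    WEIGHTS = {"word_count": 0.004, "avg_sentence_length": 0.05, "distinct_word_ratio": 2.0}
    INTERCEPT = 1.0

    def score(essay, low=1, high=6):
        raw = INTERCEPT + sum(WEIGHTS[name] * value for name, value in extract_features(essay).items())
        return max(low, min(high, round(raw)))  # clamp to the scoring scale

    print(score("Writing practice with immediate feedback helps students revise their work."))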

Can students trick the Criterion service?

Yes. Since the e-rater engine cannot really understand English, it can be fooled by an illogical, but well-written, argument. Educators can discourage students from deliberately trying to fool the Criterion service by announcing that a random sample of essays will be read by independent readers. The Criterion service will also display an Advisory along with the e-rater score when an essay displays certain characteristics that warrant attention compared to other essays scored against the same topic.

Must students be connected to the Internet to use the Criterion service?

Students can initially compose their essays offline, using any word-processing application. However, they will ultimately need an Internet connection to be able to cut and paste their essays into the Criterion essay submission box so their work can be scored and analyzed. For assignments that are timed, essays should be composed online only to ensure accountability by all students and to accurately reflect their writing skills in this environment.

Can I import student identifiers from my data management system?

Yes, the Criterion service has import capabilities for administrators at several levels. A Criterion Administrator can import student identifiers easily by using the templates provided in the system.

Details are provided in both the HELP text and the Criterion® User Manual and Administrator Supplement.

Can I save my data?

Yes. The Criterion service has export features that allow users to create export files, as well as an archive-portfolios feature that creates export files in comma-delimited (.csv) format, which can be opened by most text editors and spreadsheet programs. Detailed instructions for both features are provided in the Criterion® User Manual and Administrator Supplement.
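
As a rough illustration, a file exported this way can be processed with a few lines of code. The file name and column names below are hypothetical; use the header names that appear in your own export, as described in the Criterion® User Manual and Administrator Supplement.

    # Illustrative only: reading a Criterion .csv export with Python's csv module.
    # "criterion_export.csv", "Student ID" and "Score" are hypothetical names.
    import csv

    with open("criterion_export.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            print(row.get("Student ID"), row.get("Score"))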

Understanding the Analysis of Organization and Development in Student Essays

Why do educators value the Criterion automated analysis of essay-based organizational elements in student essays?

There is now broad acceptance of automated essay scoring technology for large-scale assessment and classroom instruction. Instructors and educational researchers encourage the development of improved essay evaluation applications that not only generate a numerical rating for an essay, but also analyze grammar, usage, mechanics and discourse structure. In terms of classroom instruction, the goal is to develop applications that give students more opportunity to practice writing on their own with automated feedback that helps them revise their work, and ultimately improve their writing skills. This technology is a helpful supplement to traditional teacher instruction. Specifically, it is more effective for students to receive feedback that refers explicitly to their own writing rather than just general feedback. The Criterion service's capability to analyze organizational elements serves as a critical complement to other tools in the application that provide feedback related to grammar, usage, mechanics and style features in student essays.

Which organizational elements are analyzed?

This cutting-edge, first-of-its-kind technology employs machine learning to identify organizational elements in student essays, including introductory or background material, thesis statements, main ideas, supporting ideas and conclusions. The system makes decisions that mirror how educators perform this task. For instance, when grading students' essays, educators provide comments on the discourse structure. Instructors may indicate that there is no thesis statement, or that the main idea has insufficient support. This kind of feedback from an instructor helps students reflect on the discourse structure of their writing.

How did the system learn how to do the analysis?

Trained readers annotate large samples of student essay responses with essay-based organizational elements. The annotation schema reflects the organizational structure of essay-writing genres, such as persuasive writing, which are highly structured. The increased use of automated essay-scoring technology allows for the collection of a large corpus of students' essay responses that we use for annotation purposes.

How can this analysis help students?

As students become more sophisticated writers, they start to think about the organizational structure in their writing. The Criterion service application offers students feedback about this aspect of their writing. Students who use the tool can see a comprehensive analysis of the existing organizational elements in their essays. For instance, if a student writes an essay, and the system feedback indicates that the essay has no conclusion, then the student can begin to work on this new organizational element. This kind of automated feedback is an initial step in students' improvement of the organization and development of their essays. This kind of feedback also resembles traditional feedback that a student might receive from a professor.

Understanding Organization and Development Feedback

How does the automated system make decisions about text segments in a student essay and the corresponding organizational labels?

The algorithm developed to automatically identify essay-based organizational elements is based on samples of teacher-annotated essay data. Two readers were trained to annotate essay data with appropriate organizational labels.

What is the agreement rate between two readers on the labeling task?

The two trained readers are in general agreement across all labeling tasks.

What is the agreement rate between the system and the reader?

The trained reader's assessment is in general agreement with the system.

Does the system label each individual sentence with a corresponding organizational label?

Yes. Sometimes multiple sentences are associated with a single organizational element, and the entire block of text is highlighted and appears to be assigned a single label. In fact, each sentence is labeled individually.

Does the system label according to sentence position only?

No. Many features, including word usage, rhetorical strategy information, the possible sequence of organizational elements and syntactic information, are used to determine the final organizational label.
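
A toy sketch of sentence-by-sentence labeling appears below. It relies on crude keyword and position cues only, to illustrate that each sentence receives its own label; the actual system combines many more features, learned from annotated essays, than these invented rules.

    # Illustrative only: labeling each sentence of an essay with an organizational
    # element. The cue words and rules are invented stand-ins for the real model.
    def label_sentences(sentences):
        labels = []
        for i, sentence in enumerate(sentences):
            text = sentence.lower()
            if i == 0 and ("i believe" in text or "in my view" in text):
                labels.append("Thesis Statement")
            elif text.startswith(("in conclusion", "to sum up")):
                labels.append("Conclusion")
            elif text.startswith(("for example", "for instance")):
                labels.append("Supporting Idea")
            elif i == 0:
                labels.append("Introductory Material")
            else:
                labels.append("Main Idea")
        return list(zip(sentences, labels))

    essay = [
        "I believe school uniforms help students focus.",
        "Uniforms reduce pressure to wear expensive clothes.",
        "For example, students spend less time comparing outfits.",
        "In conclusion, uniforms support a better learning environment.",
    ]
    for sentence, label in label_sentences(essay):
        print(f"{label}: {sentence}")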

Getting Technical Information

What technical requirements must a user have to access the Criterion service site?

The Criterion service is available 24 hours a day. It requires only an Internet connection and a web browser, and it is compatible with both PC and Mac®. It can also be used on the iPad®, although an external keyboard is recommended.

For a complete description of minimum and recommended standards and network configuration suggestions, please refer to the System Requirements Sheet.

Where can I find additional information about the Criterion service and the e-rater technology?

The research papers on the ETS website are sources of more information about the Criterion service and its underlying technology.

 

How to Order

If you are interested in ordering, have questions about pricing or would like to speak to a Criterion® Specialist, contact us today.