Scoring for the iSkills™ assessment is completely automated. Each simulation-based task provides many opportunities to assess a test-taker's ability to think critically in a digital environment, and several scored responses are produced for each task. A student's overall score aggregates all individual scored responses across all assessment tasks.
The test score range is 0–500.
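To make the aggregation concrete, here is a minimal sketch of how per-task scored responses might roll up into a single scaled score. The 0–1 response values, the simple sum, and the linear mapping onto the 0–500 scale are illustrative assumptions only; the actual iSkills scoring algorithm is not described in this document.

```python
# Hypothetical illustration: accumulate scored responses across tasks and
# map the result onto a 0-500 reporting scale. Not the actual iSkills model.

def overall_score(tasks: list[list[float]], scale_max: int = 500) -> int:
    """Sum every scored response (assumed 0-1) across all tasks, then
    scale the proportion of points earned onto 0..scale_max."""
    earned = sum(r for task in tasks for r in task)   # points earned
    possible = sum(len(task) for task in tasks)       # one point possible per response
    return round(scale_max * earned / possible) if possible else 0

# Example: three tasks, each producing several scored responses.
responses = [
    [1.0, 0.5, 1.0],   # task 1
    [0.0, 1.0],        # task 2
    [1.0, 1.0, 0.5],   # task 3
]
print(overall_score(responses))  # 6.0 of 8 possible points -> 375
```

The key point the sketch captures is that no single response determines the score: evidence is pooled across every scored response in every task before being reported on the 0–500 scale.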
The assessment uses an evidence model to evaluate the level of skill implied by performance on a task. The evidence model was developed through several steps, including:
- Identifying situations that offer opportunities for naturalistic observation
- Identifying sources of evidence in these situations and their value in understanding individual ability
- Listing characteristics of these observations and the circumstances under which they are observed
- Documenting which characteristics of these observations most clearly distinguish among levels of ability
The result of the evidence modeling process is a formal structure that represents valued evidence for each skill. This structure is used both to inform the development of tasks and to evaluate and score test-taker performance.
The following example of a task designed to target particular information literacy skills illustrates how the scoring model works.
In one task, students are asked to locate resources (e.g., articles, web pages) relevant to a research question. The student accesses information from a database using a search engine and identifies the degree to which that information meets the needs of the task. Students are evaluated on their ability to locate and identify relevant information in a searchable database with respect to a given information need.
For more information about evidence models, view the report A Brief Introduction to Evidence-centered Design.