Comparing the Effect of Contextualized Versus Generic Automated Feedback on Students’ Scientific Argumentation
- Author(s):
- Olivera-Aguilar, Margarita; Lee, Hee-Sun; Pallant, Amy; Belur, Vinetha; Mulholland, Matthew; Liu, Ou Lydia
- Publication Year:
- 2022
- Report Number:
- RR-22-03
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 16
- Subject/Key Words:
- Computerized Testing, Feedback, Scientific Argumentation, Uncertainty, Argumentation Skills, Climate Change, Writing Evaluation, Student Evaluation, Contextualized Assessment, Generic Scoring Model, Formative Assessment, Constructed Response, Scoring Rubric, Human Rater, Student Improvement, Achievement Gains, Automated Feedback, Immediate Feedback, Quadratic Weighted Kappa (QWK), Middle School Students, High School Students, Next Generation Science Standards (NGSS)
Abstract
This study uses a computerized formative assessment system that provides automated scoring and feedback to help students write scientific arguments in a climate change curriculum. We compared the effect of contextualized versus generic automated feedback on students’ explanations of scientific claims and their attributions of uncertainty to those claims. Classes were randomly assigned to the contextualized feedback condition (227 students from 11 classes) or to the generic feedback condition (138 students from 9 classes). The results indicate that the formative assessment helped students improve both their explanation and uncertainty scores, with larger gains in the uncertainty attribution scores. Although the contextualized feedback was associated with higher final scores, this effect was moderated by the number of revisions made, the initial score, and gender. We discuss how the results might be related to students’ familiarity with writing scientific explanations versus uncertainty attributions at school.
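The keywords list quadratic weighted kappa (QWK), the metric commonly used to quantify agreement between an automated scoring model and human raters on ordinal rubric scores. As an illustration only, not the authors' implementation, here is a minimal sketch of computing QWK; the function and variable names are hypothetical.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b):
    """Agreement between two sets of ordinal ratings, penalizing
    disagreements by the squared distance between score levels."""
    rater_a = np.asarray(rater_a, dtype=int)
    rater_b = np.asarray(rater_b, dtype=int)
    lo = min(rater_a.min(), rater_b.min())
    hi = max(rater_a.max(), rater_b.max())
    n = hi - lo + 1  # number of rubric levels

    # Observed confusion matrix of rating pairs
    observed = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        observed[a - lo, b - lo] += 1

    # Expected matrix under chance agreement (outer product of marginals)
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    expected /= observed.sum()

    # Quadratic disagreement weights: 0 on the diagonal, 1 at max distance
    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

The same value is returned by `sklearn.metrics.cohen_kappa_score(rater_a, rater_b, weights="quadratic")`.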
- DOI:
- https://doi.org/10.1002/ets2.12344