Educational Applications of Natural Language Processing (NLP)
Beyond scoring applications, ETS's natural language processing (NLP) expertise has yielded other advanced capabilities to support student learning and assessment.
ETS's TextEvaluator® (formerly known as SourceRater℠) tool represents a new approach to modeling text complexity, designed to help test developers evaluate source material for use in developing new reading comprehension passages and items. The TextEvaluator tool combines a large, cognitively based feature set with advanced psychometric techniques to provide text complexity classifications that are highly correlated with those provided by experienced educators. This feature set extends beyond the limited dimensions of text complexity assessed by other methods (such as sentence length and vocabulary) to encompass text-level cohesion and to account for differences across text genres.
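The general recipe described above, extracting text features and combining them statistically to predict human complexity judgments, can be sketched in a few lines. This is an illustrative toy, not TextEvaluator's actual feature set or model; the features and weights below are invented for demonstration:

```python
import re

def complexity_features(text):
    """Compute three toy complexity features: mean sentence length in words,
    mean word length in characters, and a crude cohesion proxy (how often
    words repeat across sentences)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    seen, overlap = set(), 0
    for s in sentences:
        tokens = {w.lower() for w in re.findall(r"[A-Za-z']+", s)}
        overlap += len(tokens & seen)  # words already used in earlier sentences
        seen |= tokens
    cohesion = overlap / max(len(words), 1)
    return [avg_sentence_len, avg_word_len, cohesion]

def complexity_score(text, weights=(0.5, 2.0, 1.0), bias=0.0):
    """Combine the features linearly into a single complexity estimate."""
    return bias + sum(w * f for w, f in zip(weights, complexity_features(text)))
```

A production system would use many more features and fit the weights against educator-assigned grade bands rather than hand-picking them.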
LanguageMuse℠ is a web-based, instructional authoring application intended to support K–12 teachers in the development of curricular materials for English-language learners (ELLs). The application offers linguistic feedback that highlights vocabulary, sentence structures and discourse relations found in classroom texts that may be unfamiliar to ELLs. This feedback supports teachers in creating linguistically informed lesson plans, texts, activities and assessments with appropriate scaffolding. The LanguageMuse application has been used in formal teacher professional development settings to help teachers cultivate linguistic awareness so that they are better able to create curricula that address students' English language learning needs. Because the application includes a self-guided professional development component, teachers can complete that portion on their own and then continue to use the application in their classrooms to design scaffolded materials appropriate to any K–12 grade level.
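As a rough illustration of the vocabulary-highlighting feedback described above, a system might flag words absent from a high-frequency word list. The tiny list here is a placeholder for a real corpus-derived frequency resource, and the function is a sketch, not LanguageMuse's actual analysis:

```python
import re

# Placeholder high-frequency list; a real application would use word
# frequencies derived from a large corpus, plus grade-level norms.
HIGH_FREQUENCY = {
    "the", "a", "an", "and", "to", "of", "in", "is", "are", "was", "were",
    "it", "that", "this", "for", "on", "with", "as", "at", "by", "be",
}

def flag_unfamiliar(text, known=HIGH_FREQUENCY):
    """Return the words in a text that are not on the high-frequency list,
    as candidates for vocabulary scaffolding."""
    words = {w.lower() for w in re.findall(r"[A-Za-z']+", text)}
    return sorted(words - known)
```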
Automated Test Item Generation
Another area in which ETS has applied its natural language processing technology is in the automated generation of test items. This includes research both on completely automated generation of items from item models (in order to reduce the cost of item development and control item difficulty) and semi-automated item creation tools to help assessment developers identify appropriate source material for items or create draft items that can be augmented and edited by experienced item writers.
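Generation from an item model can be sketched as instantiating a stem template over combinations of controlled variables, computing the answer key alongside. This is a toy example, not an ETS system; the item model and variable ranges are invented:

```python
def linear_equation_items(a_values, b_values, c_values):
    """Instantiate the item model 'If ax + b = c, what is x?' over all
    combinations of the task-model variables a, b and c, pairing each
    generated stem with its answer key."""
    items = []
    for a in a_values:
        for b in b_values:
            for c in c_values:
                stem = f"If {a}x + {b} = {c}, what is x?"
                key = (c - b) / a  # solve ax + b = c for x
                items.append((stem, key))
    return items
```

Constraining which variables vary, and over what ranges, is one way item models can be used to control item difficulty.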
Below are some recent or significant publications that our researchers have authored on the subject of educational applications of natural language processing technology.
Towards Automated Assessment of Public Speaking Skills Using Multimodal Cues.
L. Chen, G. Feng, J. Joe, C. W. Leong, C. Kitchen, & C. M. Lee
Paper in Proceedings of the 16th International Conference on Multimodal Interaction (ICMI '14), pp. 200–203
Traditional assessments of public speaking skills rely on human scoring. This paper examines an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies.
Using Multimodal Cues to Analyze MLA'14 Oral Presentation Quality Corpus: Presentation Delivery and Slides Quality
L. Chen, C. W. Leong, G. Feng, & C. M. Lee
Paper in Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge (MLA '14), pp. 45–52
Creating presentation slides and delivering them effectively are important communication skills. The authors envision multimodal sensing and machine learning techniques that can help evaluate, and potentially improve, the quality of both the content and the delivery of public presentations.
An Initial Analysis of Structured Video Interviews by Using Multimodal Emotion Detection
L. Chen, S. Yoon, C. W. Leong, M. Martin, & M. Ma
Paper in Proceedings of the 2014 workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems (ERM4HCI '14), pp. 1–6
The authors describe a study that employed advanced multimodal emotion detection approaches to measure performance on an interview task that elicits emotion. They performed evaluations using a speech-based emotion recognition (SER) system and an off-the-shelf facial expression analysis toolkit.
The TextEvaluator® Tool: Helping Teachers and Test Developers Select Texts for Use in Instruction and Assessment
K. M. Sheehan, I. Kostin, D. Napolitano, & M. Flor
The Elementary School Journal, Vol. 115, No. 2, pp. 184–209
This article describes TextEvaluator, a comprehensive text-analysis system designed to help teachers, textbook publishers, test developers and literacy researchers select reading materials that are consistent with the text complexity goals outlined in the Common Core State Standards.
A Two-Stage Approach for Generating Unbiased Estimates of Text Complexity
K. M. Sheehan, M. Flor, & D. Napolitano
Proceedings of the Second Workshop on Natural Language Processing for Improving Textual Accessibility (NLP4ITA), pp. 49–58. Atlanta, Ga.: Association for Computational Linguistics
This paper presents a two-stage estimation technique that successfully addresses the tendency of automated approaches to text complexity to overestimate the complexity of informational texts, while simultaneously underestimating the complexity of literary texts.
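A two-stage scheme of this general shape, first classifying a text's genre and then applying a genre-specific complexity model, can be sketched as follows. The classifier rule, feature names and weights are invented for illustration and are not the paper's actual models:

```python
def classify_genre(features):
    """Stage 1: decide whether a text is literary or informational.
    Real systems use a trained classifier; this threshold rule is a stand-in."""
    return "literary" if features["narrativity"] > 0.5 else "informational"

# Stage 2: genre-specific weights (invented values), so that literary and
# informational texts are scored by separate models rather than one shared one.
GENRE_WEIGHTS = {
    "literary":      {"bias": 1.0, "sentence_len": 0.30, "rare_words": 0.50},
    "informational": {"bias": 0.2, "sentence_len": 0.20, "rare_words": 0.35},
}

def two_stage_complexity(features):
    """Return (genre, complexity estimate) for a feature dictionary."""
    genre = classify_genre(features)
    w = GENRE_WEIGHTS[genre]
    score = (w["bias"]
             + w["sentence_len"] * features["sentence_len"]
             + w["rare_words"] * features["rare_words"])
    return genre, score
```

Fitting the two stage-2 models separately is what lets each genre's estimates be calibrated against human judgments for that genre, reducing the systematic bias a single shared model exhibits.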
A User Study: Technology to Increase Teachers' Linguistic Awareness to Improve Instructional Language Support for English Language Learners
J. Burstein, J. Sabatini, J. Shore, B. Moulder, & J. Lentini
In Proceedings of the Workshop for Improving Textual Accessibility, held in conjunction with the Annual Meeting of the North American Chapter of the Association for Computational Linguistics, Atlanta, Ga., June 14, 2013
This paper discusses user study outcomes with teachers who used LanguageMuse, a web-based teacher professional development (TPD) application designed to enhance teachers' linguistic awareness and support them in developing language-based instructional scaffolding for their English-language learners (ELLs).
Difficulty Modeling and Automatic Generation of Quantitative Items: Recent Advances and Possible Next Steps
E. A. Graf & J. H. Fife
Chapter in Automatic Item Generation: Theory and Practice, pp. 157–180
Editors: M. Gierl & T. Haladyna
This ETS-authored chapter is part of a book volume that aims to summarize current knowledge about the field of automatic item generation. The chapter appears in Part III of the volume, which covers psychological and substantive characteristics of generated items.
Item Generation: Implications for a Validity Argument
Chapter in Automatic Item Generation: Theory and Practice, pp. 40–56
Editors: M. Gierl & T. Haladyna
This ETS-authored chapter is part of a book volume that aims to summarize current knowledge about the field of automatic item generation. The chapter appears in Part I of the volume, which covers initial considerations for automatic item generation.
Automated Grammatical Error Detection for Language Learners
C. Leacock, M. Chodorow, M. Gamon, & J. Tetreault
Monograph in Synthesis Lectures on Human Language Technologies
Morgan & Claypool
This volume describes the types of constructions English-language learners find most difficult — constructions containing prepositions, articles and collocations — and it provides an overview of the automated approaches to identifying and correcting such learner errors.
Opportunities for Natural Language Processing in Education
Computational Linguistics and Intelligent Text Processing: 10th International Conference, CICLing 2009, Mexico City, Mexico, March 1–7, 2009, Proceedings
This paper discusses emerging opportunities for natural language processing researchers in the development of educational applications for writing, reading and content knowledge acquisition.
When Do Standard Approaches for Measuring Vocabulary Difficulty, Syntactic Complexity and Referential Cohesion Yield Biased Estimates of Text Difficulty?
K. M. Sheehan, I. Kostin, & Y. Futagi
Paper in Proceedings of the 30th Annual Meeting of the Cognitive Science Society
This paper demonstrates that many widely used approaches for assessing text difficulty tend to both overpredict the difficulty of informational texts and underpredict the difficulty of literary texts.
The Automated Text Adaptation Tool
J. Burstein, J. Shore, J. Sabatini, Y. Lee, & M. Ventura
Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pp. 3–4
Association for Computational Linguistics
This paper introduces the Automated Text Adaptation Tool v.1.0 (ATA v.1.0), an innovative educational tool that automatically generates text adaptations similar to those teachers might create.
Item Distiller: Text Retrieval for Computer-Assisted Test Item Creation
ETS Research Memorandum RM-07-05
This paper describes Item Distiller, a tool developed at ETS to aid in the creation of sentence-based items. Item Distiller enables users to search for sentences that contain specific words, phrases or grammatical constructions, so that these sentences can be edited to produce finished items. This approach to test authoring is both more efficient than writing items from scratch and more principled, due to its links to item modeling.
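Sentence retrieval of the general kind described above can be approximated, in miniature, with a regular-expression search over sentence-split text. This is a hypothetical sketch, not the actual Item Distiller tool:

```python
import re

def retrieve_sentences(corpus, pattern):
    """Return every sentence in the corpus that matches the given regular
    expression, e.g. a target word, phrase or simple construction."""
    matches = []
    for text in corpus:
        # Naive sentence splitter: break after ., ! or ? followed by whitespace.
        for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
            if re.search(pattern, sentence):
                matches.append(sentence)
    return matches
```

A production tool would add linguistic annotation (part-of-speech tags, parses) so that users can query grammatical constructions, not just surface strings.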
SourceFinder: A Construct-Driven Approach for Locating Appropriately Targeted Reading Comprehension Source Texts
K. M. Sheehan, I. W. Kostin, & Y. Futagi
Proceedings of the 2007 Workshop of the International Speech Communication Association, Special Interest Group on Speech and Language Technology in Education, pp. 80–83
This paper describes a fully automated approach for locating source material for use in developing reading comprehension/verbal reasoning passages.
Model Analysis and Model Creation: Capturing the Task-Model Structure of Quantitative Item Domains
P. Deane, E. A. Graf, D. Higgins, Y. Futagi, & R. Lawless
ETS Research Report RR-06-11
This study focuses on the relationship between item modeling and evidence-centered design (ECD); it considers how an appropriately generalized item modeling software tool can support systematic identification and exploitation of task-model variables, and then examines the feasibility of this goal, using linear-equation items as a test case. The first half of the study examines task-model structures for linear equations and their relevance to item difficulty within ECD. The second half of the study presents prototype software, a Model Creator system for pure math items, designed to partially automate the creation of variant item models reflecting different combinations of task-model variables. The prototype is applied to linear equations but is designed to generalize over a range of pure mathematical content types.