This paper describes the development and evaluation of Interaction Competence Elicitor (ICE), a spoken dialog system (SDS) for delivering a paired oral discussion task in the context of language assessment. The purpose of ICE is to sustain a topic-specific conversation with a test taker in order to elicit discourse that can later be judged to assess the test taker’s oral language ability, including interactional competence. The development of ICE is reported in detail to provide guidance for future developers of similar systems. The performance of ICE is evaluated in two respects: (a) by analyzing system errors that occur at different stages of the natural language processing (NLP) pipeline in terms of both their preventability and their impact on downstream stages of the pipeline, and (b) by analyzing questionnaire and semistructured interview data to establish the test takers’ experience with the system. Findings suggest that ICE was robust in 90% of the dialog turns it produced, and test takers noted both positive and negative aspects of communicating with the system as opposed to a human interlocutor. We conclude that this prototype system lays important groundwork for the development and use of specialized SDSs in the assessment of oral communication, including interactional competence.