The increasing availability and performance of computer-based testing have prompted more research on the automatic assessment of language and speaking proficiency. In this investigation, we evaluated the feasibility of using an off-the-shelf speech-recognition system to score responses to speaking prompts from the 2002 LanguEdge field test. We first established the level of agreement between two trained scorers. We then adapted the speech engine to the language backgrounds and proficiency ranges of the speakers and developed a classification and regression tree (CART) for each of the five prompts, based on features computed from the recognizer's output. In a validation on held-out data, we found that while our features are not sufficiently comprehensive to score these prompts adequately, collectively they appear to reliably capture some aspects of speaking proficiency.
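To make the per-prompt modeling step concrete, the sketch below shows one way the described pipeline could be set up: fit a CART on recognizer-derived features and check agreement against human scores on held-out data. This is a minimal illustration, not the study's implementation; the feature names, the 1-5 score scale, the split proportions, and the synthetic data are all assumptions, and scikit-learn's `DecisionTreeRegressor` stands in for whatever CART software was actually used.

```python
# A minimal sketch of a per-prompt CART scoring pipeline, assuming
# scikit-learn. Features, score scale, and data are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical recognizer-derived features for one prompt, e.g. words
# per second, silent-pause rate, mean ASR confidence, type-token ratio.
n_responses = 200
X = rng.random((n_responses, 4))
# Hypothetical human holistic scores on an assumed 1-5 scale.
y = rng.integers(1, 6, size=n_responses).astype(float)

# Reserve held-out data for validation, mirroring the study's design.
X_train, X_held_out, y_train, y_held_out = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# One CART per prompt; depth is capped to keep the tree interpretable.
cart = DecisionTreeRegressor(max_depth=4, random_state=0)
cart.fit(X_train, y_train)

# Round predictions onto the score scale and measure exact agreement
# with the human scores on the held-out set.
predictions = np.clip(np.round(cart.predict(X_held_out)), 1, 5)
exact_agreement = float(np.mean(predictions == y_held_out))
print(f"Exact agreement on held-out data: {exact_agreement:.2f}")
```

In practice the same agreement statistic computed between the two trained human scorers would serve as the ceiling against which the machine-human agreement is judged.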