Automatically Assessing Oral Reading Fluency in a Computer Tutor that Listens
Joseph E. Beck, Peng Jia and Jack Mostow
Much of the power of a computer tutor comes from its ability to assess students. In some domains, including oral reading, assessing a student's proficiency is a challenging task for a computer. Our approach to assessing student reading proficiency is to use data that a computer tutor collects through its interactions with a student to estimate the student's performance on a human-administered test of oral reading fluency. A model based on the tutor's speech recognizer output correlated, within grade, at 0.78 on average with student performance on the fluency test. For assessing students, data from the speech recognizer were more useful than student help-seeking behavior. However, adding help-seeking behavior increased the average within-grade correlation to 0.83. These results show that speech recognition is a powerful source of data about student performance, particularly for reading.
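The sketch below illustrates the general idea, not the authors' actual model: regress a human-administered fluency score on features derived from the tutor's speech recognizer output, then report the average within-grade Pearson correlation between predicted and actual scores. The feature names and data are hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the paper's implementation): predict a
# fluency test score from recognizer-derived features and evaluate with the
# average within-grade Pearson correlation.

import numpy as np
from numpy.linalg import lstsq

def fit_linear_model(X, y):
    """Ordinary least squares; returns weights for [features, intercept]."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append intercept column
    w, *_ = lstsq(X1, y, rcond=None)
    return w

def predict(X, w):
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    return X1 @ w

def within_grade_correlation(pred, actual, grades):
    """Average Pearson correlation computed separately within each grade."""
    rs = []
    for g in np.unique(grades):
        mask = grades == g
        rs.append(np.corrcoef(pred[mask], actual[mask])[0, 1])
    return float(np.mean(rs))

# Hypothetical data: per-student recognizer features (e.g., words accepted
# per minute, mean inter-word latency) and a fluency test score.
rng = np.random.default_rng(0)
n = 60
grades = rng.integers(1, 4, size=n)           # grades 1-3
X = rng.normal(size=(n, 2))                   # two recognizer-derived features
fluency = 40 + 15 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=5, size=n)

w = fit_linear_model(X, fluency)
r = within_grade_correlation(predict(X, w), fluency, grades)
print(f"average within-grade correlation: {r:.2f}")
```

In practice one would fit the model on one set of students and correlate on held-out students; this toy example evaluates on its own training data purely to show the metric.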