Multiple choice (MC) items are a natural choice for automated online assessment. Ideally, making a choice should be based on knowledge and reasoning. Nevertheless, studies demonstrate that other techniques (e.g. guessing) are common practice. In the last decade, technology has been recruited to support real-time feedback as formative assessment for teaching and learning. One of the affordances of the STEP platform is that students can use an interactive diagram to explore an example space and submit examples in response to a prompt given in a task. This study examines whether and how learner-generated examples, when required as support for the choice made in an MC task, can be automatically identified to give insight into learners' understanding. Results show discrepancies between chosen correct statements and their supporting examples. Other automatically assessed characteristics relate to learners' approaches and strategies.
Title of host publication: Proceedings of the Tenth Congress of the European Society for Research in Mathematics Education
Editors: T. Dooley, G. Gueudet
Place of publication: Dublin, Ireland
Number of pages: 9
State: Published - 1 Feb 2017