Using Comparative Machine Learning Methods to Validate Educational Content

Abstract
Validation as a field of study is important to the development of educational Interactive Learning Environments (ILEs), a type of software that enables dynamic, active engagement with educational material. However, as ILEs become more complex, borrowing techniques from fields such as agent-based modeling, simulation, and “serious games”, the educational domain has lagged behind these fields in adopting their rigorous validation standards. Traditional methods, such as face validation by subject-matter experts, are often criticized as subjective and insufficiently thorough for validating pedagogical content or underlying theory. To address this, we present a machine learning-based methodology for validating the content and educational theory of ILEs in the context of complex systems. By demonstrating automated labeling of time-series data from VERA, an ecology-focused agent-based modeling and simulation tool, we report a success rate of 92.79% on a manually collected sample. This promising result not only validates VERA but also suggests the broader applicability of our approach to other time-series-based ILEs.
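To make the kind of pipeline described above concrete, the sketch below shows automated labeling of simulation time series with a standard supervised classifier. It is a minimal illustration only: the feature set, the “stable” vs. “collapse” labels, and the synthetic data are assumptions standing in for VERA’s actual output format and the paper’s comparative methods, neither of which is specified here.

```python
# Hypothetical sketch: labeling simulation time series with a supervised
# classifier, in the spirit of the approach the abstract describes.
# Features, labels, and data below are illustrative assumptions, not
# VERA's actual format or the paper's exact method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(series: np.ndarray) -> np.ndarray:
    """Summarize one population time series with simple statistics."""
    return np.array([
        series.mean(),                 # average population level
        series.std(),                  # variability
        series.max() - series.min(),   # overall range
        np.diff(series).mean(),        # average trend per step
    ])

rng = np.random.default_rng(0)

# Synthetic stand-ins for simulation runs: "stable" series hover around
# a fixed level; "collapse" series decay toward zero.
stable = [50 + rng.normal(0, 2, 200) for _ in range(100)]
collapse = [50 * np.exp(-0.02 * np.arange(200)) + rng.normal(0, 2, 200)
            for _ in range(100)]

X = np.array([extract_features(s) for s in stable + collapse])
y = np.array([0] * len(stable) + [1] * len(collapse))  # 0=stable, 1=collapse

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2%}")
```

In a comparative study, several such classifiers would be trained and evaluated on the same labeled sample, with the reported accuracy drawn from held-out data rather than the training set.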