Using Inverse Reinforcement Learning to Diagnose Learners' Misconceptions
By carefully observing a student's actions, a teacher can often assess the student's knowledge and recognize misconceptions. Detailed observation of every student's activity is infeasible in a traditional classroom, but interactive educational technologies provide the means to collect such data from virtual environments, such as a simulated chemistry lab or an educational game. We propose an approach for automatically diagnosing student misconceptions from the actions observed in these environments. Our approach relies on modeling student action planning: we formalize the problem as a Markov decision process in which a student chooses actions to achieve a goal, and we frame student misconceptions as erroneous beliefs about how one's actions affect the environment. We then apply a variant of inverse reinforcement learning to compute a posterior distribution over possible misconceptions. In lab experiments, this approach accurately recovers learners' beliefs in a simple planning environment and can be used to guide feedback. Building on these results, we are currently applying the model to data from a cell-biology game and to diagnosing algebra misconceptions. My talk will discuss joint work with Tom Griffiths and Michelle LaMar.
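To make the idea concrete, the following is a minimal, purely illustrative sketch of the kind of inference described above, not the authors' actual model. It assumes a toy 5-state chain environment, two hypothetical candidate beliefs (correct action effects vs. a "swapped actions" misconception), a Boltzmann-rational student who plans with value iteration under their believed dynamics, and Bayes' rule to get a posterior over beliefs from observed (state, action) pairs:

```python
import numpy as np

# Hypothetical toy environment: states 0..4 on a chain, goal at state 4,
# actions 0 = "left", 1 = "right". All names and parameters are assumptions.
N, GOAL, ACTIONS = 5, 4, 2
GAMMA, BETA = 0.95, 50.0  # discount factor; Boltzmann rationality (near-greedy)

def transitions(swapped):
    """Deterministic next-state table under a belief about action effects."""
    T = np.zeros((N, ACTIONS), dtype=int)
    for s in range(N):
        left, right = max(s - 1, 0), min(s + 1, N - 1)
        T[s, 0], T[s, 1] = (right, left) if swapped else (left, right)
    return T

def q_values(T, iters=200):
    """Value iteration: reward 1 for reaching the goal, which is absorbing."""
    Q = np.zeros((N, ACTIONS))
    for _ in range(iters):
        V = Q.max(axis=1)
        for s in range(N):
            for a in range(ACTIONS):
                nxt = T[s, a]
                Q[s, a] = (1.0 if nxt == GOAL else 0.0) + \
                          GAMMA * (0.0 if nxt == GOAL else V[nxt])
    return Q

def policy(Q):
    """Boltzmann-rational action probabilities from the planned Q-values."""
    e = np.exp(BETA * (Q - Q.max(axis=1, keepdims=True)))
    return e / e.sum(axis=1, keepdims=True)

# Candidate beliefs: correct dynamics vs. a "swapped actions" misconception.
beliefs = {"correct": transitions(False), "swapped": transitions(True)}
policies = {b: policy(q_values(T)) for b, T in beliefs.items()}

# Observed (state, action) pairs: the student repeatedly presses "left" at
# state 0, which is rational only if they believe the actions are swapped.
observed = [(0, 0), (0, 0), (0, 0)]

# Posterior over beliefs via Bayes' rule with a uniform prior.
likelihood = {b: np.prod([policies[b][s, a] for s, a in observed])
              for b in beliefs}
Z = sum(likelihood.values())
posterior = {b: L / Z for b, L in likelihood.items()}
print(posterior)  # the posterior mass concentrates on "swapped"
```

The key design point mirrors the abstract: the student plans optimally, but under *believed* (possibly wrong) transition dynamics, so observed actions that look irrational under the true dynamics become evidence for a specific misconception.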