Thesis

Towards AI-human hybrid online judges to support decision making for CS1 instructors and students


Main author: Pereira, Filipe Dwan
Other authors: http://lattes.cnpq.br/1043535741108408, https://orcid.org/0000-0003-4914-3347
Degree: Thesis
Language: English
Published: Universidade Federal do Amazonas, 2022
Subjects:
Online access: https://tede.ufam.edu.br/handle/tede/8981
Abstract:
Introductory programming (also known as CS1, for Computer Science 1) can be challenging for many students, and failure rates in these courses are high. Computing education research broadly agrees that programming students need practice and quick feedback on the correctness of their code. Nonetheless, CS1 classes are usually large, with highly heterogeneous students, which makes individualised or group support almost impractical. As an alternative to improve and optimise the learning process, researchers point to systems that automatically evaluate students' code, known as online judges. These systems provide assignments created by instructors and an integrated development environment in which students can develop and submit solutions to problems and receive immediate feedback on code correctness. Additionally, online judges have opened up new research opportunities, since software components can be embedded in them to monitor and record fine-grained actions performed by students while they attempt the programming assignments. Research in Intelligent Tutoring Systems, Adaptive Educational Hypermedia, and AI in Education has shown that personalisation based on data-driven analysis is essential to improving teaching and learning, and can provide individualised or group support for stakeholders (instructors and students). In this work, we therefore collected students' interaction logs from an online judge, recording very fine-grained data such as keystrokes, the number of commands typed, and the number of submissions, making highly precise research into the exact triggers of students' progress possible. From these logs, we extract students' programming behaviours to compose what we call programming profiles. Furthermore, we extract useful information from the problem statements using Natural Language Processing (NLP).
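As an illustration of how such interaction logs might be aggregated into programming profiles, the sketch below computes a few per-student features from a small event list. The event schema, field names, and feature set are hypothetical stand-ins for illustration, not the thesis's actual log format or profile definition.

```python
from collections import Counter

# Hypothetical interaction-log events; the schema is illustrative only.
events = [
    {"student": "s1", "type": "keystroke"},
    {"student": "s1", "type": "keystroke"},
    {"student": "s1", "type": "submission", "verdict": "wrong"},
    {"student": "s1", "type": "submission", "verdict": "accepted"},
    {"student": "s2", "type": "keystroke"},
    {"student": "s2", "type": "submission", "verdict": "accepted"},
]

def programming_profile(student, log):
    """Aggregate fine-grained events into a per-student feature vector."""
    own = [e for e in log if e["student"] == student]
    kinds = Counter(e["type"] for e in own)
    subs = [e for e in own if e["type"] == "submission"]
    accepted = sum(e.get("verdict") == "accepted" for e in subs)
    return {
        "keystrokes": kinds["keystroke"],
        "submissions": len(subs),
        "acceptance_rate": accepted / len(subs) if subs else 0.0,
    }

print(programming_profile("s1", events))
# {'keystrokes': 2, 'submissions': 2, 'acceptance_rate': 0.5}
```

In practice, profiles like these would feed the descriptive, predictive, and prescriptive methods described below, with many more behavioural features per student.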
Using these programming profiles and the NLP-extracted information, we propose and validate descriptive, predictive, and prescriptive AI methods that combine the generality of large-scale approaches with the flexibility afforded by an in-house online judge, allowing unprecedented research depth and making it possible to provide personalised individual or group support for stakeholders. Indeed, our AI methods have the potential to improve CS1 students' learning by stimulating effective practice while reducing instructors' workload. Our results include: (i) a cutting-edge interpretable machine learning method that predicts learners' performance and explains, both individually and collectively, the factors that lead to failure or success; (ii) a method that, for the first time to the best of our knowledge, detects effective programming behaviours early and indicates how those positive behaviours can be used to guide students with ineffective behaviours; and (iii) a novel prescriptive model that automatically detects the topic of problems, achieving state-of-the-art results, and recommends problems based on those topics and on students' programming profiles. Finally, we also explored how our AI methods could be used in collaboration with instructors' intelligence, thus moving towards a novel human/AI online judge architecture to support the decision-making of CS1 instructors and students. To this end, the results of our methods are represented as hybrid human/AI concept designs, which are validated consistently and systematically by CS1 instructors, who are responsible for deciding which concept designs should be made available to students.
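To give a flavour of topic detection over problem statements, the sketch below tags a statement with a topic using a simple bag-of-words nearest-centroid scheme. This is a minimal stand-in under stated assumptions, not the thesis's actual NLP model; the topics, training statements, and similarity measure are invented for illustration.

```python
from collections import Counter
import math

# Toy labelled problem statements; topics and texts are illustrative only.
train = [
    ("loops", "print the numbers from 1 to n using a loop"),
    ("loops", "sum all even numbers below n with a while loop"),
    ("strings", "reverse the given string and print it"),
    ("strings", "count the vowels in a string"),
]

def tokens(text):
    return text.lower().split()

def centroid(texts):
    """Bag-of-words centroid: summed token counts over a topic's statements."""
    bag = Counter()
    for t in texts:
        bag.update(tokens(t))
    return bag

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One centroid per topic (a crude stand-in for a trained NLP classifier).
centroids = {
    topic: centroid([x for t, x in train if t == topic])
    for topic in {t for t, _ in train}
}

def detect_topic(statement):
    bag = Counter(tokens(statement))
    return max(centroids, key=lambda t: cosine(bag, centroids[t]))

print(detect_topic("write a loop that prints odd numbers"))  # loops
```

A recommender along the lines of result (iii) could then pair the detected topic with a student's programming profile, e.g. suggesting more problems from topics where the profile shows a low acceptance rate.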