TY - GEN
T1 - Evolving Towards a Trustworthy AIEd Model to Predict at Risk Students in Introductory Programming Courses
AU - Quille, Keith
AU - Vidal-Meliá, Lidia
AU - Nolan, Keith
AU - Mooney, Aidan
N1 - Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/12/14
Y1 - 2023/12/14
N2 - Artificial intelligence in education has the potential to transform the educational landscape and influence the role of the involved stakeholders. With recent European Commission publications on the use of artificial intelligence and data in education and training, in addition to the Ethics Guidelines for Trustworthy AI and the forthcoming AI Act, there is now a precedent that using AI, in and for education, needs to be more trustworthy (transparent and explainable) to be considered for adoption. This paper further develops an AIEd model (PreSS - Predicting Student Success) that has been developed, validated and re-validated over almost two decades, focusing on explainability and transparency. PreSS aims to identify students who are at risk of failing or dropping out of introductory programming courses. This paper presents our recent work to describe PreSS in the context of a trustworthy model that is both explainable and transparent. First, for explainability, we present Explainable AI (XAI) approaches to describe how predictions are made and what set of inputs leads to such decisions, which has not been presented before for PreSS. Second, to examine transparency, we present confusion metrics, focusing first on predicting the performance of all students and then on performance at sub-group level, such as by gender and age. Finally, we present a model card illustrating where the model performs well and where it does not. All of these steps provide an evolution towards trustworthy educational AI.
AB - Artificial intelligence in education has the potential to transform the educational landscape and influence the role of the involved stakeholders. With recent European Commission publications on the use of artificial intelligence and data in education and training, in addition to the Ethics Guidelines for Trustworthy AI and the forthcoming AI Act, there is now a precedent that using AI, in and for education, needs to be more trustworthy (transparent and explainable) to be considered for adoption. This paper further develops an AIEd model (PreSS - Predicting Student Success) that has been developed, validated and re-validated over almost two decades, focusing on explainability and transparency. PreSS aims to identify students who are at risk of failing or dropping out of introductory programming courses. This paper presents our recent work to describe PreSS in the context of a trustworthy model that is both explainable and transparent. First, for explainability, we present Explainable AI (XAI) approaches to describe how predictions are made and what set of inputs leads to such decisions, which has not been presented before for PreSS. Second, to examine transparency, we present confusion metrics, focusing first on predicting the performance of all students and then on performance at sub-group level, such as by gender and age. Finally, we present a model card illustrating where the model performs well and where it does not. All of these steps provide an evolution towards trustworthy educational AI.
KW - Computer Science Education
KW - CS1
KW - Machine Learning
KW - Predicting Success
KW - Programming
UR - http://www.scopus.com/inward/record.url?scp=85183325064&partnerID=8YFLogxK
U2 - 10.1145/3633083.3633190
DO - 10.1145/3633083.3633190
M3 - Conference contribution
AN - SCOPUS:85183325064
T3 - ACM International Conference Proceeding Series
SP - 22
EP - 28
BT - HCAIep 2023 - Proceedings of the 2023 Conference on Human Centered Artificial Intelligence - Education and Practice
PB - Association for Computing Machinery
T2 - 2023 Conference on Human Centered Artificial Intelligence - Education and Practice, HCAIep 2023
Y2 - 15 December 2023
ER -