Abstract
There is a significant gap in Computing Education Research (CER) concerning the impact of Large Language Models (LLMs) in the advanced stages of degree programmes. This study addresses that gap by investigating how effectively LLMs answer exam questions in a final-year undergraduate course on applied machine learning. The research examines LLM performance on a range of exam questions, including proctored closed-book and open-book questions spanning various levels of Bloom’s Taxonomy; question formats encompass open-ended, tabular-data-based, and figure-based questions. To achieve this aim, the study has two objectives:

- Comparative Analysis: compare LLM-generated exam answers with actual student submissions to assess LLM performance.
- Detector Evaluation: evaluate the efficacy of LLM detectors by feeding LLM-generated responses directly into them, and additionally assess detector performance on tampered LLM outputs designed to conceal their AI-generated origin.

The methodology incorporates a staff-student partnership model involving eight academic staff and six students. Students play an integral role in shaping the project’s direction, particularly in areas unfamiliar to academic staff, such as specific tools used to evade LLM detection. This study contributes to the understanding of LLMs' role in advanced education settings, with implications for future curriculum design and assessment methodologies.
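For concreteness, a minimal sketch of how the detector-evaluation objective could be scored is given below; it is not taken from the paper. The `detector` and `paraphrase` callables and the 0.5 flagging threshold are hypothetical placeholders standing in for whatever detection tool and tampering step were actually used.

```python
# Hypothetical sketch (not from the paper): measuring how often an AI-text
# detector flags LLM-generated exam answers, before and after a tampering
# step intended to conceal their AI-generated origin.
from typing import Callable, Dict, List


def detection_rate(texts: List[str],
                   detector: Callable[[str], float],
                   threshold: float = 0.5) -> float:
    """Fraction of texts the detector scores at or above the flagging threshold."""
    if not texts:
        return 0.0
    flagged = sum(1 for t in texts if detector(t) >= threshold)
    return flagged / len(texts)


def evaluate_detector(llm_answers: List[str],
                      detector: Callable[[str], float],
                      paraphrase: Callable[[str], str]) -> Dict[str, float]:
    """Compare flag rates on raw LLM answers and on tampered (paraphrased) versions.

    'detector' returns an AI-likelihood score in [0, 1]; 'paraphrase' applies
    the tampering step (e.g. an automatic paraphrasing tool). Both are
    placeholders, not tools named in the study.
    """
    tampered = [paraphrase(a) for a in llm_answers]
    return {
        "raw_detection_rate": detection_rate(llm_answers, detector),
        "tampered_detection_rate": detection_rate(tampered, detector),
    }
```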
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | 4 |
| Journal | Revista de Educación a Distancia |
| Volume | 24 |
| Issue number | 78 |
| DOIs | |
| Publication status | Published - 2024 |
Keywords
- AI
- Applied Machine Learning
- ChatGPT
- Detection
- LLMs
- Performance
- Transformers