TY - JOUR
T1 - A Multimodal Measurement of the Impact of Deepfakes on the Ethical Reasoning and Affective Reactions of Students
AU - Ramachandran, Vivek
AU - Hardebolle, Cécile
AU - Kotluk, Nihat
AU - Ebrahimi, Touradj
AU - Riedl, Reinhard
AU - Jermann, Patrick
AU - Tormey, Roland
N1 - Publisher Copyright:
© 2023 SEFI 2023 - 51st Annual Conference of the European Society for Engineering Education: Engineering Education for Sustainability, Proceedings. All Rights Reserved.
PY - 2023
Y1 - 2023
AB - Deepfakes - synthetic videos generated by machine learning models - are becoming increasingly sophisticated. While they have several positive use cases, their potential for harm is also high. Deepfake production involves input from multiple engineers, making it challenging to assign individual responsibility for their creation. The separation between engineers and consumers may also contribute to a lack of empathy on the part of the former towards the latter. At present, engineering ethics education appears inadequate to address these issues. Indeed, the ethics of artificial intelligence is often taught as a stand-alone course or a separate module at the end of a course. This approach does not afford time for students to critically engage with the technology and consider its possible harmful effects on users. Thus, this experimental study aims to investigate the effects of the use of deepfakes on engineering students’ moral sensitivity and reasoning. First, students are instructed in how to evaluate the technical proficiency of deepfakes and in the ethical issues associated with them. Then, they watch three videos: an authentic video and two deepfake videos featuring the same person. While they watch these videos, data on their attentional (eye tracking) and emotional (self-report, facial emotion recognition) engagement are collected. Finally, they are interviewed using a protocol modelled on Kohlberg’s ‘Moral Judgement Interview’. The findings can have significant implications for how technology-specific ethics can be taught to engineers, while providing them space to engage and empathise with potential stakeholders as part of their decision-making process.
KW - Deepfakes
KW - Machine learning models
KW - Engineering ethics education
KW - Moral sensitivity
KW - Ethical reasoning
KW - Attentional engagement
KW - Emotional engagement
KW - Technology-specific ethics
KW - Ethics
KW - Artificial intelligence
KW - Emotions
KW - Moral judgement
UR - https://www.scopus.com/pages/publications/85179853740
U2 - 10.21427/eajr-we79
DO - 10.21427/eajr-we79
M3 - Article
SP - 1122
EP - 1132
JO - European Society for Engineering Education (SEFI)
JF - European Society for Engineering Education (SEFI)
ER -