The Human in the Machine: Power, Bias, and Governance in AI Societies

Research output: Contribution to journal › Article › peer-review

Abstract

As artificial intelligence (AI) systems become embedded in everyday life, they increasingly participate in decisions, interactions, and institutional processes once governed solely by humans. This article examines the evolving role of AI not as a neutral tool but as a socio-technical agent shaped by, and shaping, human norms, values, and structures of power. Drawing on insights from computational sociology, behavioural experiments, and human-machine collaboration, we explore how gender bias, trust asymmetries, and algorithmic governance unfold across domains, from digital assistants and workplace management to collective intelligence and online platforms. Through case studies, including large-scale experiments and Wikipedia-based modelling, we illustrate the dynamics of cooperation, conflict, and consensus in hybrid human-machine systems. We argue that ethical design and regulation must move beyond principles to address structural inclusion, institutional accountability, and sociotechnical transparency. By situating AI within broader social and political contexts, we offer a framework for understanding and shaping its impact on human autonomy, fairness, and collaboration. The future of AI, we contend, is not determined by technical capacity alone, but by the values and institutions that govern its development and deployment.

Original language: English
Pages (from-to): 157-165
Number of pages: 9
Journal: Journal of the Statistical and Social Inquiry Society of Ireland
Volume: 54
Issue number: 178
Publication status: Published - 2025

Keywords

  • Algorithmic Bias
  • Artificial Intelligence
  • Collective Intelligence
  • Gender
  • Human-Machine Interaction

