Abstract
As artificial intelligence (AI) systems become embedded in everyday life, they increasingly participate in decisions, interactions, and institutional processes once governed solely by humans. This article examines the evolving role of AI not as a neutral tool, but as a socio-technical agent shaped by, and shaping, human norms, values, and structures of power. Drawing on insights from computational sociology, behavioural experiments, and human-machine collaboration, we explore how gender bias, trust asymmetries, and algorithmic governance unfold across domains: from digital assistants and workplace management to collective intelligence and online platforms. Through case studies, including large-scale experiments and Wikipedia-based modelling, we illustrate the dynamics of cooperation, conflict, and consensus in hybrid human-machine systems. We argue that ethical design and regulation must move beyond principles to address structural inclusion, institutional accountability, and sociotechnical transparency. By situating AI within broader social and political contexts, we offer a framework for understanding and shaping its impact on human autonomy, fairness, and collaboration. The future of AI, we contend, is not determined by technical capacity alone, but by the values and institutions that govern its development and deployment.
| Original language | English |
|---|---|
| Pages (from-to) | 157-165 |
| Number of pages | 9 |
| Journal | Journal of the Statistical and Social Inquiry Society of Ireland |
| Volume | 54 |
| Issue number | 178 |
| Publication status | Published - 2025 |
Keywords
- Algorithmic Bias
- Artificial Intelligence
- Collective Intelligence
- Gender
- Human-Machine Interaction