Review of S. Vallor (2024). The AI Mirror. How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press, New York, 263 p.
Dr. G.J. van Bussel
Shannon Vallor is a professor of the ethics of data and artificial intelligence (AI) at the University of Edinburgh. In this role, she explores how new technologies such as AI, robotics, and data science are reshaping human moral character, cognitive habits, and social practices. As one of today’s leading philosophers of technology, she is a key figure in contemporary debates about humanity’s technological future. Her 2016 book, Technology and the Virtues, rethinks virtue ethics in the context of technological upheaval. In it, she presents ‘technomoral virtue ethics’, a framework that critiques how technological breakthroughs erode traditional moral capacities, while also proposing strategies for cultivating new virtues adapted to an era of technological disruption. [i] Her latest book, The AI Mirror (2024), builds on this research and focuses on three interconnected themes: the moral agency of AI systems; the effects of algorithms on human self-understanding; and the conditions needed for responsible innovation. It offers a moral and philosophical analysis of AI and its role in shaping knowledge, values, and morality.
Nearly a year after publication, The AI Mirror remains relevant, and its title captures Vallor’s central thesis: what we call ‘AI’ is not intelligence but a reflection, one that mimics human cognition while fundamentally lacking understanding. The mirror analogy proves powerful, exposing how these systems merely process and refract historical data imbued with human conceptions and misconceptions, including conscious or unconscious biases and misrepresentations of reality. [ii] AI thus serves as a mirror that exposes and intensifies societal biases, aspirations, and moral contradictions. Vallor argues that AI not only mirrors human cognition but recursively influences it.
Employing the mirror metaphor, she warns that without deliberate human intervention, AI will perpetuate biases, distortions, and blind spots rather than foster meaningful progress. These systems ‘aren’t designed to be accurate — they are designed to sound accurate’ (p. 121). This distinction is vital: it highlights how large language models trade in plausibility rather than truth, and how their outputs, despite their fluency, remain cognitively hollow. Vallor asserts that the self-image projected by AI is deceptive and that an uncritical fascination with it poses significant dangers: ‘It is their power to induce in us a type of self-forgetting — a selective amnesia that loosens our grip on our own human agency and clouds our self-knowledge. It is an illusion that can ensnare even the most technologically adept among us’ (p. 2).