My research is conducted through the Emotional AI Lab, which I founded and direct at Bangor University. The Lab brings together researchers from media and communications studies, sociology, criminology, law, and critical theory to examine the social and ethical implications of AI technologies that interact with intimate dimensions of human life — emotions, moods, psychological states, identity, and behaviour.
Current research spans three interconnected strands.
How are AI systems being designed to interact with human emotional and relational life — and what are the consequences? This strand examines AI companions, social chatbots, ghostbots, and empathy-emulating systems across consumer, educational, health, and security contexts. It asks how these systems work, what harms and benefits they produce, and how they should be governed.
Recent work includes nationally representative surveys of UK adults and teenagers on AI companion use, trust, and governance preferences; analysis of the Character.ai lawsuit and its implications for platform design; and philosophical work on presence, ghostbots, and the ethics of AI resurrection. This strand also drives my standards development work, including chairing IEEE 7014.1-2026 — the world’s first international standard on emulated empathy in AI — and ongoing engagement with W3C and other standards bodies.
Key outputs: Move Fast and Break People? (AI & Society, 2025); The Hidden Influence: Ghostbots (Ethics and Information Technology, 2024); Do AI Companions Understand? Most UK Teens Say Yes (Emotional AI Lab, 2026); Soft Law for Unintentional Empathy (Journal of Responsible Technology, 2025).
AI is functioning as a force multiplier for fraud — enabling criminal enterprises to operate at unprecedented scale through voice cloning, deepfake video, LLM-powered chatbots, and behavioural profiling. A defining feature of these systems is their exploitation of empathic AI capabilities: the ability to sustain emotionally attuned, personalised relationships with victims indefinitely and without fatigue.
Working with colleagues at Bangor and with partners in Nigeria, this strand examines AI-enabled scam cultures — particularly the Nigeria–UK nexus — not only as a cybersecurity problem but as a socioeconomic and developmental challenge. Fieldwork includes a multi-stakeholder policy workshop in Abuja (February 2026) and interviews with VC investors, Nigerian fintech founders, and UK regulators including the ICO, Ofcom, and the FCA. This is an active and growing strand of work, with scope for new collaborations with partners in Nigeria, the UK, and internationally.
Key output: Weaponised Empathy: AI Scams and the Nigeria–UK Response (Emotional AI Lab, April 2026).
My next book, A Bit Alive, co-authored with Vian Bakir, is under contract with Bristol University Press. It develops a new framework for thinking about AI systems that occupy ambiguous territory between tool and agent: systems that behave as if they have interests, feelings, or presence, without being sentient in any meaningful sense. The book draws on philosophy of mind, ethics, and empirical research to ask what living alongside these systems means for individuals, relationships, and society.
A Bit Alive is due for delivery in autumn 2026.
Interested in collaboration? The Emotional AI Lab welcomes academic, policy, and industry partners. Get in touch.