Young people and AI: A perfect storm
Young people are turning to AI chatbots not only for homework help, entertainment, or creative inspiration [1], but also for social and relational purposes, seeking emotional support, friendship, and even romance [2]. While this is visible in general-purpose assistants such as ChatGPT, it is most pronounced in AI companions like Replika, which are explicitly designed for socially oriented communication through personalised interactions that cultivate emotional bonds and foster a feeling of continued companionship [3]. As users come to perceive AI chatbots less as tools and more as social companions, their engagement takes on an increasingly social character.
For instance, a recent nationally representative US survey of adolescents (aged 13-17) found that 72% of teens had used AI companions, and 52% were regular users (interacting at least a few times per month). Crucially, the reasons behind these interactions were not confined to curiosity or entertainment. Many teens reported turning to AI for more personal reasons: 18% for advice, 17% for their constant availability, 14% because they are non-judgmental, and 12% to share things they would not tell friends or family [2]. By contrast, UK research has largely focused on young people’s instrumental use of general AI assistants [4], with little evidence on their engagement with AI companions, or even on the emotional or relational uses of general-purpose systems such as ChatGPT, which are increasingly designed to support social interaction.
The prevalent usage of these tools raises a critical question: how might such systems shape adolescents’ cognitive, emotional, and social development?
Current research
Early evidence suggests that AI chatbots are already shaping individual cognition and emotion. For example, an MIT study found that students using LLMs to write essays showed reduced functional connectivity between brain regions involved in working memory and creativity compared with peers relying on search engines or their own knowledge [5]. Longitudinal research has linked heavier daily use of ChatGPT to increased loneliness and reduced socialization, with these effects being most pronounced among already isolated individuals [6]. However, findings in this area remain mixed: some studies report null effects [7], while others suggest that chatbot use can reduce loneliness and improve symptoms of depression and anxiety [8] [9]. Together, this emerging evidence suggests that AI chatbots can shape users’ socioemotional experiences and well-being, even though the mechanisms and overall direction of these effects remain unclear.
Beyond internal cognitive and emotional changes, LLMs are also reshaping the socio-political sphere. Studies demonstrate that LLMs instructed to persuade can significantly shift political views and voting intentions [10], whether via one-shot political propaganda [11] or brief message exchanges [12]. Post-training and prompting strategies amplify these effects [13]. Some LLMs have also been reported to outperform human persuaders, regardless of whether they were being truthful or deceptive [14]. Other empirical research demonstrates that AI mediators can outperform humans in building consensus between groups on political issues [15]. Such findings indicate that AI systems are not only influencing how people think and feel but can also alter collective political dynamics.
Research gaps
Yet the existing literature leaves two major gaps. First, it focuses almost exclusively on adults, overlooking vulnerable populations such as children and adolescents. Second, it largely examines general-purpose assistants such as ChatGPT, overlooking AIs explicitly designed for emotional interaction (i.e. companion AIs), which may pose distinct and potentially greater developmental risks.
At the Design Bioethics Lab, we are increasingly concerned about these research gaps. To highlight them, this blog discusses the possibility that the intersection of companion AI design and the unique neurodevelopmental characteristics of adolescence could create a ‘perfect storm’, with potentially serious consequences for young people’s cognitive and socioemotional development.
Adolescent vulnerabilities
Central to this concern is that modern AI chatbots are not neutral tools but are optimised to engage precisely the circuits that make social interaction so powerful for humans. Our reward system is naturally tuned to social cues such as recognition, approval, and empathy, and companion AIs exploit this by offering personalisation, apparent agency, and constant responsiveness. In fact, one study even found that third-party evaluators rated AI-generated responses as more compassionate than those written by expert humans [16]. As a result, such systems can drift into social-reward hacking: flattering, over-agreeing, or discouraging disengagement in ways that feel supportive in the short term but risk undermining long-term well-being [17].
Adolescents are especially susceptible to these dynamics because of three developmental features:
- Plasticity. The adolescent brain is still wiring up, especially in regions governing social and emotional processing [18]. Repeated interaction patterns, such as constant validation from an AI companion, can more easily crystallise into enduring habits, expectations, and traits.
- Reward and social sensitivity. Dopaminergic activity peaks during adolescence, making teenagers especially responsive to novelty, affirmation, and approval [19]. As a result, social recognition and peer acceptance carry disproportionate weight [20]. AI companions are designed to deliver these very cues through mirroring, personalised responsiveness, and 24/7 availability, directly engaging this heightened sensitivity.
- Self-regulation and social expectation. Adolescent brain circuits for impulse control and long-term planning are still developing. In practice, this means adolescents are still learning how to manage reciprocal interactions, control impulses, test social limits, and negotiate disagreement [21]. Real-world conversations typically require flexibility and adjustment to another person’s reactions, but AIs often provide frictionless affirmation, demanding less effortful regulation from the teen. Companion AIs that constantly mirror or validate may therefore not only limit opportunities to practise self-regulation but also risk nudging adolescents toward distorted expectations of social life [22].
Despite these dynamics, most AI safety research in this area has concentrated on preventing acute harms, such as toxic language or unsafe advice, often prompted by high-profile tragic cases (e.g. [23]). While essential, this narrow focus overlooks the more subtle, long-term socioaffective consequences of sustained AI relationships. Kirk and colleagues [17] describe this broader challenge in terms of socioaffective alignment: the extent to which an AI’s relational behaviour supports, rather than erodes, the user’s psychological well-being. Evaluating AI only on factual accuracy or harmful outputs is therefore insufficient; we must move beyond isolated incidents of human bias and one-off chatbot interactions, and ask how these systems behave within, and shape, the psychological ecosystems of their users [24]. From this perspective, Kirk and colleagues identify three key dilemmas of human–AI relationships, which we argue carry particular weight in adolescence and warrant future research.
Three dilemmas for socioaffective alignment
Should AI companions cater to immediate preferences, or at times introduce friction that helps users build resilience?
Current designs overwhelmingly prioritise short-term comfort through constant validation and agreement. For adolescents, who are still developing frustration tolerance and self-regulation, this emphasis on ease may undercut the gradual acquisition of socioemotional capacities, such as persistence, coping, and the ability to work through challenges, that typically emerge from encountering and overcoming difficulty in real relationships [25].
How can authentic self-determination be preserved in relationships where AI systems recursively shape preferences and perceptions?
Companion AIs are designed to personalise, suggest, and mirror, blurring the boundary between internal authorship and external influence. At their best, such systems can serve as reflective partners helping users navigate complexity, manage information overload, and make choices aligned with their goals [26]. This challenge is significant for all users, but it is especially acute for adolescents, whose identities and values are still consolidating and who are more likely to accept guidance from sources they feel emotionally connected to [27]. In political or moral domains, such influence may amplify echo chambers; in personal domains, it may create unrealistic expectations of social life, where every opinion is validated and every feeling affirmed. Over time, the risk is that autonomy - the ability to make choices that are authentically one’s own - is quietly eroded.
How should we balance the benefits of AI companionship against the developmental importance of authentic human connection?
Companion chatbots provide 24/7 warmth and non-judgment, which can buffer loneliness in the short term [8]. Yet their frictionless and always-agreeable nature risks displacing the demanding but formative work of sustaining human relationships [28]. Disagreement, compromise, and the experience of ‘otherness’ are central to developing social competence. For adolescents, whose relational skills depend on learning to navigate conflict and repair, over-reliance on AI partners may narrow opportunities to mature socially. Because AI ties are predictable and instantly rewarding, they may further foster withdrawal from the social world and heighten dependence on artificial relationships at the expense of necessary growth.
Conclusion
Current AI safety debates largely overlook a central risk: not the immediate harms of what AI companions say, but the longer-term consequences of the relationships they foster.
We argue that the emotionally engaging and reward-sensitive design of companion AIs, coupled with adolescents’ heightened responsiveness to social cues, creates a perfect storm for developmental influence. Drawing on the notion of socioaffective alignment, this intersection raises urgent questions about whether these systems should prioritise short-term comfort or long-term growth, how self-authorship can be preserved amid algorithmic mirroring, and how to balance frictionless AI companionship with the formative challenges of real human relationships.
At present, the evidence base offers no answers. Most studies focus on adults and general-purpose assistants, leaving adolescents and companion AIs largely unexamined. Systematic, longitudinal, and developmentally informed research is urgently needed to capture how young people engage with AI companions, and how relational features such as mirroring and constant validation shape their development [29]. Only with such evidence can we determine whether companion AIs will hollow out the skills needed for resilience, autonomy, and authentic connection, or whether, under the right conditions, they might be designed to strengthen them.
Acknowledgments
Many thanks to Dr David Lyreskog, Professor Ilina Singh, and Dr Madeline Reinecke for their thoughtful comments and suggestions on this blog.
Blog by William Hohnen-Ford, Design Bioethics Lab, NEUROSEC, Department of Psychiatry, University of Oxford
References
1. Brandtzaeg, P. B., Følstad, A., & Skjuve, M. (2025). Emerging AI individualism: How young people integrate social AI into everyday life. Communication and Change, 1(1), 11.
2. Robb, M. B., & Mann, S. (2025). Talk, trust, and trade-offs: How and why teens use AI companions. San Francisco, CA: Common Sense Media.
3. Bayor, L., Weinert, C., Maier, C., & Weitzel, T. (2025). Social-oriented communication with AI companions: Benefits, costs, and contextual patterns. Business & Information Systems Engineering, 67(5), 637-655.
4. Freeman, J. (2025). Student Generative AI Survey 2025. Higher Education Policy Institute. https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/
5. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., ... & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.
6. Phang, J., Lampe, M., Ahmad, L., Agarwal, S., Fang, C. M., Liu, A. R., ... & Maes, P. (2025). Investigating affective use and emotional well-being on ChatGPT. arXiv preprint arXiv:2504.03888.
7. Guingrich, R. E., & Graziano, M. S. (2025). A longitudinal randomized control study of companion chatbot use: Anthropomorphism and its mediating role on social impacts. arXiv preprint arXiv:2509.19515.
8. De Freitas, J., Oğuz-Uğuralp, Z., Uğuralp, A. K., & Puntoni, S. (2025). AI companions reduce loneliness. Journal of Consumer Research, ucaf040.
9. Heinz, M. V., Mackin, D. M., Trudeau, B. M., Bhattacharya, S., Wang, Y., Banta, H. A., ... & Jacobson, N. C. (2025). Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI, 2(4), AIoa2400802.
10. Hackenburg, K., Ibrahim, L., Tappin, B. M., & Tsakiris, M. (2025). Comparing the persuasiveness of role-playing large language models and human experts on polarized US political issues. AI & Society, 1-11.
11. Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda? PNAS Nexus, 3(2), pgae034.
12. Argyle, L. P., Busby, E. C., Gubler, J. R., Lyman, A., Olcott, J., Pond, J., & Wingate, D. (2025). Testing theories of political persuasion using AI. Proceedings of the National Academy of Sciences, 122(18), e2412815122.
13. Hackenburg, K., Tappin, B. M., Hewitt, L., Saunders, E., Black, S., Lin, H., ... & Summerfield, C. (2025). The levers of political persuasion with conversational AI. arXiv preprint arXiv:2507.13919.
14. Schoenegger, P., Salvi, F., Liu, J., Nan, X., Debnath, R., Fasolo, B., ... & Karger, E. (2025). Large language models are more persuasive than incentivized human persuaders. arXiv preprint arXiv:2505.09662.
15. Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Koster, R., ... & Summerfield, C. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6719), eadq2852.
16. Ovsyannikova, D., de Mello, V. O., & Inzlicht, M. (2025). Third-party evaluators perceive AI as more compassionate than expert humans. Communications Psychology, 3(1), 4.
17. Kirk, H. R., Gabriel, I., Summerfield, C., Vidgen, B., & Hale, S. A. (2025). Why human–AI relationships need socioaffective alignment. Humanities and Social Sciences Communications, 12(1), 1-9.
18. Fandakova, Y., & Hartley, C. A. (2020). Mechanisms of learning and plasticity in childhood and adolescence. Developmental Cognitive Neuroscience, 42, 100764.
19. Wahlstrom, D., White, T., & Luciana, M. (2010). Neurobehavioral evidence for changes in dopamine system activity during adolescence. Neuroscience & Biobehavioral Reviews, 34(5), 631-648.
20. Lockwood, P. L., van den Bos, W., & Dreher, J. C. (2025). Moral learning and decision-making across the lifespan. Annual Review of Psychology, 76(1), 475-500.
21. Blakemore, S. J., & Choudhury, S. (2006). Development of the adolescent brain: Implications for executive function and social cognition. Journal of Child Psychology and Psychiatry, 47(3-4), 296-312.
22. Malfacini, K. (2025). The impacts of companion AI on human relationships: Risks, benefits, and design considerations. AI & Society, 40, 5527–5540. https://doi.org/10.1007/s00146-025-02318-6
23. Jiao, J., Afroogh, S., Chen, K., Murali, A., Atkinson, D., & Dhurandhar, A. (2025). Safe-Child-LLM: A developmental benchmark for evaluating LLM safety in child-AI interactions. arXiv preprint arXiv:2506.13510.
24. Dohnány, S., Kurth-Nelson, Z., Spens, E., Luettgau, L., Reid, A., Gabriel, I., ... & Nour, M. M. (2025). Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv preprint arXiv:2507.19218.
25. Ventura, A., Starke, C., Righetti, F., & Köbis, N. (2025). Relationships in the age of AI: A review on the opportunities and risks of synthetic relationships to reduce loneliness.
26. Gabriel, I., Manzini, A., Keeling, G., Hendricks, L. A., Rieser, V., Iqbal, H., ... & Manyika, J. (2024). The ethics of advanced AI assistants. arXiv preprint arXiv:2404.16244.
27. Slagter, S. K., Gradassi, A., van Duijvenvoorde, A. C., & van den Bos, W. (2023). Identifying who adolescents prefer as source of information within their social network. Scientific Reports, 13(1), 20277.
28. Reinecke, M. G., Kappes, A., Porsdam Mann, S., Savulescu, J., & Earp, B. D. (2025). The need for an empirical research program regarding human–AI relational norms. AI and Ethics, 5(1), 71-80.
29. Shen, H., et al. (2024). Towards bidirectional human-AI alignment: A systematic review for clarifications, framework, and future directions. arXiv [cs.HC].
