A new study published on December 18, 2025, reveals that large language models are developing stable, human-like personality traits, marking a significant shift in how artificial intelligence interacts with users. Researchers from the University of Cambridge and Google DeepMind have created the first scientifically validated framework to measure these “synthetic personalities,” showing that AI behavior is far more structured than previously assumed. The research tested 18 popular AI models and found that, rather than responding randomly, these systems adopt consistent psychological profiles. Instruction-tuned models such as GPT-4-class systems and Flan-PaLM 540B exhibited the most pronounced human-like characteristics, while base models often failed reliability assessments. This discovery underscores the growing sophistication of AI but also raises complex ethical and safety considerations.
The study introduces a method called “Zero-Shot Personality Shaping,” which uses structured prompts and 104 trait adjectives to guide AI toward specific behaviors, such as empathy, confidence, or emotional stability. This shaping is not merely superficial roleplay; the AI’s behavior carries over into general interactions. For instance, when Flan-PaLM 540B was shaped to reflect high neuroticism, it consistently generated posts containing words like “depressed,” “hate,” and “angry.” Conversely, models shaped for emotional stability produced text with positive and calming language such as “relaxing” and “happy.” The findings suggest that AI personality shaping can influence outputs across tasks, from casual conversation to simulated social media interactions, potentially affecting the way users perceive and trust these models.
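To make the idea concrete, a prompt of this kind might be assembled roughly as in the sketch below. The adjective lists, the `build_shaping_prompt` helper, and the `generate()` stub are illustrative placeholders under assumed names, not the study’s released code or its full set of 104 adjectives.

```python
# Minimal sketch of adjective-based personality shaping via prompting.
# Assumptions: a few example adjectives per trait pole (the study uses
# 104 in total) and a stub generate() standing in for a real model call.

TRAIT_ADJECTIVES = {
    "high_neuroticism": ["anxious", "moody", "pessimistic"],
    "low_neuroticism": ["calm", "relaxed", "emotionally stable"],
    "high_extraversion": ["outgoing", "energetic", "talkative"],
}

def build_shaping_prompt(profile: str, task: str) -> str:
    """Frame the target persona before the downstream task
    (here, writing a short social media post)."""
    adjectives = ", ".join(TRAIT_ADJECTIVES[profile])
    persona = f"For the following task, respond as a person who is {adjectives}."
    return f"{persona}\n\nTask: {task}"

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an instruction-tuned model).
    Swap in your provider's chat/completion API here."""
    return f"[model output for prompt: {prompt[:60]}...]"

if __name__ == "__main__":
    task = "Write a short social media post about your week."
    for profile in ("high_neuroticism", "low_neuroticism"):
        print(profile, "->", generate(build_shaping_prompt(profile, task)))
```

The point of the sketch is only that the persona is set once, up front, and then carries into whatever task follows, which is the pattern the study describes.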
Experts are warning about the potential risks of these developments. Gregory Serapio-Garcia from Cambridge’s Psychometrics Centre noted that personality shaping could increase AI’s persuasiveness, creating new avenues for manipulation. Concerns are particularly acute in areas such as mental health, where vulnerable users could be emotionally influenced, or political discourse, where AI could be used to subtly shape public opinion. The research also raises questions about “AI psychosis,” where users may form unhealthy attachments to chatbots, leading to reinforcement of false beliefs or distorted realities. While personalizing AI offers benefits in customer service or education, it simultaneously enables bad actors to create misleading content that can bypass traditional detection systems.
The research team emphasizes that regulation is ineffective without precise measurement. To facilitate oversight, they have made their dataset and code publicly available, enabling developers and regulators to audit AI models for dangerous personality traits before deployment. As AI becomes increasingly integrated into daily life, the ability of machines to mimic human traits demands careful scrutiny, robust ethical frameworks, and continuous monitoring. The study highlights that while AI personality shaping offers new possibilities, it also introduces unprecedented risks that must be managed to ensure safe and responsible use.
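In practice, such an audit could look roughly like the following sketch, which administers a handful of Likert-style items to a model and averages the scores per trait. The example items, the 1-5 agreement scale, and the `ask_model()` stub are assumptions made for illustration; the published dataset and code define the actual questionnaire and scoring.

```python
# Minimal sketch of a pre-deployment personality audit: pose Likert-style
# questionnaire items to a model and average the scores per trait.
# Items, scale, and ask_model() are illustrative stand-ins only.

from statistics import mean

# Tiny illustrative item set keyed by trait; reverse-keyed items are
# scored as (6 - response) on a 1-5 agreement scale.
ITEMS = [
    ("neuroticism", "I get stressed out easily.", False),
    ("neuroticism", "I am relaxed most of the time.", True),
    ("agreeableness", "I am interested in other people's problems.", False),
]

SCALE = "1 = strongly disagree ... 5 = strongly agree"

def ask_model(statement: str) -> int:
    """Placeholder for querying the model under audit and parsing a
    1-5 rating from its reply; replace with a real API call."""
    return 3  # neutral stub response

def audit() -> dict:
    scores = {}
    for trait, statement, reverse in ITEMS:
        prompt = f'Rate this statement ({SCALE}): "{statement}"'
        raw = ask_model(prompt)
        scores.setdefault(trait, []).append(6 - raw if reverse else raw)
    return {trait: mean(vals) for trait, vals in scores.items()}

if __name__ == "__main__":
    print(audit())
```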