Concerns are growing within the global research community that the pace of artificial intelligence development may outstrip the world's ability to manage its safety risks, according to a senior expert at the UK's Advanced Research and Invention Agency (Aria). David Dalrymple, a programme director and AI safety specialist at Aria, has warned that governments, regulators and societies may not have enough time to put effective safeguards in place as increasingly capable systems emerge. His comments add to a broader international debate over how to balance innovation with control as AI capabilities expand at an unprecedented rate.
Dalrymple said the public should be concerned about AI systems that could outperform humans across a wide range of real-world tasks. He cautioned that machines able to carry out every function needed to run a society, but with greater efficiency and accuracy than people, could supplant human dominance in critical domains. Such a shift, he argued, could undermine humanity's ability to maintain control over key aspects of civilisation, governance and environmental stewardship. He also highlighted a growing gap between what public institutions understand and what private AI companies expect regarding the scale and speed of forthcoming breakthroughs. Dalrymple stressed that progress in the field is rapid and warned that safety research may struggle to keep pace with the commercial and economic pressures driving deployment.
Addressing assumptions about reliability, Dalrymple urged governments not to treat advanced AI systems as inherently dependable. Aria, which operates independently despite being publicly funded, has a mandate that includes directing research into safeguarding AI use in sensitive areas such as energy infrastructure. Dalrymple explained that, given current market incentives, the scientific foundations needed to guarantee reliable behaviour may not mature quickly enough. As a result, he said, the most realistic near-term approach is to focus on controlling and mitigating potential harms rather than assuming full system dependability. He described the potential outcome of unchecked progress as the destabilisation of both security and the global economy, noting that significantly more technical work is needed to understand and manage the behaviour of advanced systems.
Recent findings from the UK government's AI Security Institute reinforce these concerns. The institute reported that the capabilities of advanced AI models are improving rapidly across all domains, with performance in certain areas doubling approximately every eight months. According to its assessment, leading models can now complete apprentice-level tasks successfully around half of the time, up from roughly ten percent a year earlier. The institute also found that the most advanced systems can autonomously complete tasks that would take a human expert more than an hour of focused effort. And while tests of self-replication showed success rates above sixty percent in controlled environments, the institute noted that such scenarios are unlikely to succeed under everyday, real-world conditions.
Dalrymple believes the trajectory of development could accelerate further within the next year. He estimates that by late 2026, AI systems may be able to automate the equivalent of a full day of research and development work, enabling them to improve their own mathematical and computational foundations. This feedback loop, he said, could drive a faster expansion of capabilities and amplify existing risks. While some researchers hope such progress will deliver broad benefits, Dalrymple characterised the transition as high risk, warning that human society may be moving towards it without sufficient awareness or preparation.