UK Lawmakers Call for Tougher AI Controls


A growing group of UK politicians is ramping up pressure on the government to introduce binding controls on the most powerful artificial intelligence systems, warning that the country risks a future shaped more by Silicon Valley than by democratic oversight.

According to an investigation by The Guardian, more than 100 parliamentarians from across the political spectrum have joined a coordinated call for tougher AI regulation, amid concerns that ministers are moving too cautiously in the face of industry lobbying and pressure from the United States.

The campaign is being led by Control AI, a nonprofit backed by prominent figures in the technology industry, including Skype co-founder Jaan Tallinn. Its message to Prime Minister Keir Starmer is blunt: show independence from Washington and push for enforceable limits on frontier AI before the technology outruns political control.

While language about UK “independence” from the US has circulated for years in defence, trade, and technology policy, AI is fast becoming the clearest test of whether Britain is genuinely willing to diverge from American policy.

Warnings from inside politics and AI

Senior figures backing the push include former defence secretary Des Browne, Conservative peer Zac Goldsmith, and Jonathan Berry, who served as the UK’s first AI minister under Rishi Sunak. Their concern is not simply about job losses or productivity shifts but about what happens if future systems become powerful enough to evade human control.

Browne has compared the development of superintelligent AI to the invention of nuclear weapons, calling it potentially “the most perilous technological development” humanity has ever created. Without coordinated safeguards, he argues, countries and companies could enter a destabilising race for advantage that undermines national and global security alike.

Goldsmith, meanwhile, warns that even as respected figures within the tech sector “blow the whistle,” governments remain far behind the pace of corporate development. Yoshua Bengio, one of the pioneers of modern AI research, recently said that advanced AI is now less regulated than a sandwich, a comment campaigners say neatly sums up the problem facing policymakers.

From Bletchley Park to everyday life

The renewed calls land awkwardly for a government that once positioned Britain as a global leader on AI safety. In 2023, the UK hosted the AI Safety Summit at Bletchley Park and launched what is now known as the AI Security Institute, an organisation widely respected by international partners. Yet critics say the political energy that followed the summit has since faded. The summit acknowledged the potential for “catastrophic harm” from advanced AI systems, but calls for binding international action have largely given way to softer, voluntary approaches.

At the same time, AI is rapidly becoming part of everyday British life. Research from Lloyds Banking Group suggests that more than 28 million UK adults now use AI tools to help manage their money, whether through chatbots for budgeting advice, debt planning, or investment research. Many users report real savings, but surveys also show high levels of concern about data privacy, misinformation, and overreliance on automated advice, precisely the kinds of risks regulation is meant to manage.

The impact is also being seen in the labour market. Anxiety about AI-driven layoffs is pushing some young people toward skilled trades they believe machines are less likely to replace, while studies show companies most exposed to AI have already begun cutting headcount, particularly in junior office roles. These trends are unfolding well ahead of any comprehensive legal framework governing how powerful AI systems should be developed or deployed.

A narrowing window for action

Labour pledged before the election to introduce legislation imposing requirements on developers of advanced AI models, but no draft bill has yet appeared. Officials insist that existing UK rules already cover AI and that new legislation must avoid hindering innovation. Campaigners, however, argue that US resistance to strong AI rules is quietly shaping British policy, leaving the UK positioned as a follower rather than a shaper of global norms.

Andrea Miotti, chief executive of Control AI, has criticised what he describes as a “timid” approach, warning that mandatory safety standards could be needed within the next two years, given the speed of AI development. Jonathan Berry has echoed that view, arguing that systems posing existential risks should be explicitly regulated, with requirements for off-switches, retraining mechanisms, and independent oversight.

As cyberattacks disrupt public services, companies openly cite AI when cutting jobs, and consumers entrust chatbots with sensitive decisions, critics say the cost of inaction is becoming clearer. Yet responsibility for regulating the most powerful systems remains unresolved.

For many of the MPs and peers now urging action, the question is no longer whether AI regulation might slow innovation, but whether failing to act will leave the UK permanently reacting to decisions made elsewhere. If Britain is serious about technological leadership or meaningful independence in the age of AI, they argue, it may have little time left to prove it.

The Labour government’s recent budget demonstrates renewed intent to support AI in the UK, but there is still no fully integrated digital strategy.


