LONDON — The U.K. says it wants to do its “own thing” when it comes to regulating artificial intelligence, hinting at a possible divergence from approaches taken by its main Western peers.
“It’s really important that we as the U.K. do our own thing when it comes to regulation,” Feryal Clark, Britain’s minister for AI and digital government, told CNBC in an interview that aired Tuesday.
She added the government already has a “good relationship” with AI companies like OpenAI and Google DeepMind, which have voluntarily opened their models up to the government for safety testing purposes.
“It’s really important that we bake in that safety right at the beginning when models are being developed … and that’s why we’ll be working with the sector on any safety measures that come forward,” Clark added.
Her comments echoed remarks from Prime Minister Keir Starmer on Monday that Britain has “freedom now in relation to the regulation to do it in a way that we think is best for the U.K.” after Brexit.
“You’ve got different models around the world, you’ve got the EU approach and the U.S. approach – but we have the ability to choose the one that we think is in our best interest and we intend to do so,” Starmer said in response to a reporter’s question after announcing a 50-point plan to make the U.K. a global leader in AI.
Divergence from the U.S., EU
So far, Britain has refrained from introducing formal laws to regulate AI, instead deferring to individual regulatory bodies to enforce existing rules on businesses when it comes to the development and use of AI.
This differs from the EU, which has introduced comprehensive, pan-European legislation aimed at harmonizing rules for the technology across the bloc while taking a risk-based approach to regulation.
The U.S., meanwhile, lacks comprehensive AI legislation at the federal level, relying instead on a patchwork of regulatory frameworks at the state and local level.
During Starmer’s election campaign last year, the Labour Party committed in its manifesto to introducing regulation focusing on so-called “frontier” AI models — referring to large language models like OpenAI’s GPT.
However, the U.K. has yet to confirm details of proposed AI safety legislation, saying instead that it will consult with the industry before proposing formal rules.
“We will be working with the sector to develop that and bring that forward in line with what we said in our manifesto,” Clark told CNBC.
Chris Mooney, partner and head of commercial at London-based law firm Marriott Harrison, told CNBC that the U.K. is taking a “wait and see” approach to AI regulation even as the EU is forging ahead with its AI Act.
“While the U.K. government says it has taken a ‘pro-innovation’ approach to AI regulation, our experience of working with clients is that they find the current position uncertain and, therefore, unsatisfactory,” Mooney told CNBC via email.
One area where Starmer’s government has signaled plans to reform rules for AI is copyright.
Late last year, the U.K. opened a consultation reviewing the country’s copyright framework to assess possible exceptions to existing rules for AI developers who use artists’ and media publishers’ works to train their models.
Businesses left uncertain
Sachin Dev Duggal, CEO of London-headquartered AI startup Builder.ai, told CNBC that, although the government’s AI action plan “shows ambition,” proceeding without clear rules is “borderline reckless.”
“We’ve already missed crucial regulatory windows twice — first with cloud computing and then with social media,” Duggal said. “We cannot afford to make the same mistake with AI, where the stakes are exponentially higher.”
“The U.K.’s data is our crown jewel; it should be leveraged to build sovereign AI capabilities and create British success stories, not simply fuel overseas algorithms that we can’t effectively regulate or control,” he added.
Details of Labour’s plans for AI legislation were initially expected to appear in King Charles III’s speech opening U.K. Parliament last year.
However, the government committed only to establishing “appropriate legislation” on the most powerful AI models.
“The U.K. government needs to provide clarity here,” John Buyers, international head of AI at law firm Osborne Clarke, told CNBC, adding he’s learned from sources that a consultation for formal AI safety laws is “waiting to be released.”
“By issuing consultations and plans on a piecemeal basis, the U.K. has missed the opportunity to provide a holistic view of where its AI economy is heading,” he said, adding that failure to disclose details of new AI safety laws would lead to investor uncertainty.
Still, some figures in the U.K. tech scene think that a more relaxed, flexible approach to regulating AI may be the right one.
“From recent discussions with the government, it is clear that considerable efforts are underway on AI safeguards,” Russ Shaw, founder of advocacy group Tech London Advocates, told CNBC.
He added that the U.K. is well positioned to adopt a “third way” on AI safety and regulation: “sector-specific” regulations that apply tailored rules to different industries such as financial services and health care.