Bank of England raises alarm over threat from AI 'too dangerous to release'

The Bank of England is to warn City chiefs about the risks of a new artificial intelligence model that, it is feared, could break into the financial system and wreak havoc.

Officials will meet top bank and insurance bosses to discuss how they are preparing for the threat posed by Claude Mythos, a new AI system from Anthropic.

Anthropic, a Silicon Valley AI company, has deemed the tool too dangerous to release to the public after the AI discovered previously hidden flaws in computer systems more quickly than any human.

The revelations about the new AI’s capabilities prompted Scott Bessent, the US treasury secretary, and Jerome Powell, the chairman of the US Federal Reserve, to summon Wall Street bank executives to a crisis meeting this week.

Meanwhile, Duncan Mackinnon, the Bank of England’s risk chief, will chair a gathering of the Cross Market Operational Resilience Group in the next fortnight that will discuss the new AI threat, The Telegraph understands.

Officials from the Treasury, the Financial Conduct Authority and the National Cyber Security Centre will also attend the meeting.

It comes amid fears the AI model could breach the IT security of the financial system.

‘Everyone has a right to be concerned’

Anthropic, which revealed Mythos earlier this week, said it had already found thousands of security vulnerabilities in popular web browsers and operating systems that could leave users exposed to hacks.

Government cyber experts at the UK’s AI Security Institute (AISI) are testing Mythos to help develop defences against it.

Ciaran Martin, the former head of the UK’s National Cyber Security Centre, said: “There is a lot of excitable talk and hype about Mythos, but its security implications are real and need addressing.

“The timeline for finding and fixing vulnerabilities collapses to seconds, minutes and hours, rather than days, months or years.”

He added: “There’s plenty we can do to shore up defences – and there’s actually a real opportunity here to fix a lot of the internet’s hidden bugs.”

Anthropic confirmed that researchers at AISI, a taxpayer-backed lab launched by Rishi Sunak’s government, had begun stress-testing the AI bot.

It has given British security experts early access to its latest AI tools for “pre-deployment testing” before they are launched publicly to check them for flaws or unexpected capabilities.

AISI, which is led by Adam Beaumont, the former chief AI officer of GCHQ, has performed checks on AI apps from companies including Google and OpenAI.

Anthropic has signed up tech giants including Apple, Microsoft and Amazon to test Mythos, in a deal dubbed Project Glasswing, with the aim of plugging holes in their own software.

Danny Kruger, a Reform UK MP, has written to Darren Jones, the Chancellor of the Duchy of Lancaster, urging the Government to co-operate with Anthropic and asking if it would offer to collaborate with Project Glasswing.

Don Smith, a cybersecurity expert at Sophos, said: “I think everyone has a right to be concerned. I’m concerned. My colleagues are concerned.”

In a blog post this week, Anthropic said it had contained the release of Mythos for now, but warned: “It will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout – for economies, public safety, and national security – could be severe.”

Officials at the Bank of England have ramped up monitoring of AI and cybersecurity risks in recent years, citing them as top potential threats to financial stability.

In 2024 and 2025, the Bank ran a “stress test” that simulated an attack on the payments system to see how financial institutions would cope with a major outage.

Liz Oakes, a member of its Financial Policy Committee, warned of the type of risk apparently presented by the new Anthropic model in a speech last year.

“AI might increase malicious actors’ capabilities to launch cyberattacks against financial institutions,” she said.

David Raw, of trade body UK Finance, said it was aware of the press reports on the Anthropic AI development and was speaking to its members about it.

A government spokesman said: “We take the security implications of frontier AI seriously. We have world-leading expertise in this area and maintain continuous engagement with global technology leaders.

“To stay ahead of evolving threats, businesses should act now to strengthen their online defences, including by following established cyber security best practice, securing cyber essentials certification, and ensuring they can patch quickly in response to new vulnerabilities.”
