AI hacking tools like Mythos can be 'net positive' says top cyber official
AI tools could be a major boost for cyber-security defences if they are secured, according to the UK's top cyber official.
AI such as Claude Mythos has made headlines around the world after its maker, Anthropic, revealed the model to be extremely good at hacking.
The company is restricting access to the model to help governments, tech giants and banks secure their systems as the cyber-security world braces for its general release.
But head of the National Cyber Security Centre (NCSC) Richard Horne says advanced AI tools can be a "net positive" to public cyber-security if the technology is secured from misuse.
It comes as the UK's security minister is urging AI companies to "work with the government on national cyber-defence capabilities".
Anthropic, the maker of the popular chatbot Claude, has not said when it will release its newest model Mythos.
But the company sparked widespread concerns when it claimed the bot was an expert hacker as good as, if not better than, the best humans.
The fear is that if Mythos gets into the wrong hands or goes rogue it might lead to major data breaches or debilitating cyber-attacks.
In a speech to the NCSC's annual conference, CyberUK, on Wednesday, Horne will make a more positive case, arguing that AI tools can make systems safer and more secure.
He is urging companies and organisations not to fear new AI attacks but to make sure they are doing the basics of cyber-security right.
"As we have seen in the media in recent days, frontier AI is rapidly enabling discovery and exploitation of existing vulnerabilities at scale, illustrating how quickly it will expose where fundamentals of cyber-security are still to be addressed," he will say.
Horne's warnings echo messages from recent years - for example, the urgency of keeping software updated and upgrading legacy IT.
He is also urging AI companies to make sure their models are secure by following newly-created European safety guidelines.
At the same event, Security Minister Dan Jarvis will implore AI firms to work with the government on the "generational endeavour" to make sure AI is used to protect critical networks from attackers.
All the most powerful and advanced AI models - known as frontier AI - are developed outside of the UK, with the top-tier companies based in the US or China.
That means the UK relies on companies like Anthropic to give it access to Mythos and has no control over how it is built, trained or released.
OpenAI also has a cyber-security model, called GPT 5.4 Cyber, which it says is highly capable.
The speeches at CyberUK will also press home the ongoing threat of nation state and hacktivist attacks, particularly from Russia and China.
The NCSC warns that cyber is now "the home front" of defence in the UK with recent events such as the Iran attacks showing that cyber plays an increasingly important role in all modern conflicts.

