The UK government has announced £100m to support ‘more agile’ AI regulation.
As part of the announcement, £10 million will go towards preparing and upskilling regulators to address the risks and harness the opportunities of this defining technology. The fund will help regulators carry out cutting-edge research and develop practical tools to monitor and address risks and opportunities in their sectors, from telecoms and healthcare to finance and education.
Secretary of State for Science, Innovation and Technology Michelle Donelan said: “The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development.
“I am personally driven by AI’s potential to transform our public services and the economy for the better – leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.”
As part of the package of measures, nearly £90 million will go towards launching nine new research hubs across the UK and a partnership with the US on responsible AI. The hubs will support British AI expertise in harnessing the technology across areas including healthcare, chemistry, and mathematics.
A further £19 million will go towards 21 projects developing innovative, trusted, and responsible AI and machine learning solutions to accelerate the deployment of these technologies and drive productivity.
The government will also launch a steering committee in the spring to support and guide the activities of a formal regulator coordination structure within government.
These measures sit alongside the £100 million invested by the government in the world’s first AI Safety Institute to evaluate the risks of new AI models, and the global leadership shown by hosting the world’s first major summit on AI safety at Bletchley Park in November.
Cybersecurity expert Andy Ward, VP International for Absolute Software, commented: “The heightened risk of cyber-attacks, amplified by evolving AI-powered threats, makes vulnerable security systems a prime target for cyber attackers. By investing in secure, trusted, and responsible AI systems, the government initiative contributes to strengthening the national cybersecurity infrastructure and protects against AI-related threats.”
“Organisations must always look to adopt a comprehensive cybersecurity approach with proactive and responsive measures, especially around rapidly evolving innovations such as AI. This involves assessing current cyber defences, integrating resilient Zero Trust models for user authentication, and establishing complete visibility into the endpoint, giving organisations details on device usage, location, which apps are installed, and the ability to freeze and wipe data if a device is compromised or lost.”
Oseloka Obiora, CTO of RiverSafe, said: “This investment is a good first step, but in tandem part of the investment should be targeted towards defence and response research into some of the clearer threats understood around AI. These research activities should prioritise critical national infrastructure and threat scenarios posed through the use of AI now.”
“Boosting regulation is a key step forward, but we need to see much greater resources set aside for the inevitable fallout when hackers and cyber criminals gain access to AI systems to wreak havoc and steal data. We need a much more ambitious, broader international strategy to tackle the AI threat, bringing together governments around the world, regulators, and businesses to tackle this rapidly emerging threat.”