Cybersecurity industry cannot retrofit security into developing AI technology, says UK NCSC chief

News
14 Jun 2023 | 3 mins
Artificial Intelligence, Generative AI

UK NCSC CEO Lindy Cameron reflects on the key cybersecurity challenges the UK faces from rapidly developing AI technologies like generative AI and LLMs.

The cybersecurity industry cannot rely on its ability to retrofit security into developing machine learning (ML) and artificial intelligence (AI) technology to prevent the security risks introduced by innovations such as generative AI and large language models (LLMs), according to Lindy Cameron, CEO of the UK National Cyber Security Centre (NCSC). Cameron was speaking today in the opening keynote of the Chatham House Cyber 2023 conference, where she addressed the key cybersecurity challenges the UK faces from rapidly developing AI technologies such as OpenAI’s ChatGPT chatbot.

Security has often been a secondary consideration when the pace of technology development is high, but AI developers must predict possible attacks and identify ways to mitigate them, Cameron said. Failure to do so will risk designing vulnerabilities into future AI systems, she warned. “Amid the huge dystopian hype about the impact of AI, I think there is a danger that we miss the real, practical steps that we need to take to secure AI.”

UK NCSC focuses on three elements to help secure developing AI

Being secure is an essential prerequisite for ensuring that AI is safe, ethical, explainable, reliable, and as predictable as possible, Cameron said. “Users need reassurance that machine learning is being deployed securely, without putting personal safety or personal data at risk. In addition to the overarching need for security to be built into AI and ML systems, and for companies profiting from AI to be responsible vendors, the NCSC is focusing on three elements to help with the cybersecurity of AI.”

First, the NCSC believes it is essential that organisations using AI understand the risks they are running – and how to mitigate them, Cameron stated. “For example, machine learning introduces an entirely new category of attack: adversarial attacks. As machine learning is so heavily reliant on the data used for the training, if that data is manipulated, it creates potential for certain inputs to result in unintended behaviour, which adversaries can then exploit.”
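To make that concrete, the following is a minimal, hypothetical sketch of the kind of data-poisoning attack Cameron describes: an attacker flips a handful of training labels in a narrow “trigger” region, and a model trained on the manipulated data then misclassifies inputs from that region. The dataset, the model choice (scikit-learn’s DecisionTreeClassifier), and the trigger region are all illustrative assumptions, not details from the NCSC’s remarks.

```python
# Illustrative data-poisoning sketch (hypothetical data and model): flipping
# training labels in one region makes the trained model misbehave on inputs
# from that region, which an adversary can then exploit.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Clean two-class training set: class 0 clusters near (0, 0), class 1 near (4, 4).
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(4.0, 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

clean_model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Poisoning step: the attacker flips class-1 labels in a narrow "trigger"
# region (here, points whose first feature exceeds 4.5) before training.
y_poisoned = y.copy()
y_poisoned[(X[:, 0] > 4.5) & (y == 1)] = 0

poisoned_model = DecisionTreeClassifier(random_state=0).fit(X, y_poisoned)

# A probe input inside the trigger region: the clean model classifies it as
# class 1, while the poisoned model has learned to return class 0.
probe = np.array([[5.5, 5.5]])
print("clean model:   ", clean_model.predict(probe))     # -> [1]
print("poisoned model:", poisoned_model.predict(probe))  # -> [0] in this toy setup
```

In this toy setup the poisoned model returns the wrong class for the probe input while its behaviour on ordinary inputs is largely unchanged – exactly the kind of unintended, exploitable behaviour the quote warns about.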

LLMs pose entirely different security challenges, Cameron continued. “For example – an organisation’s intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts.”
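As an aside on that risk, one common (if partial) mitigation is to screen outgoing prompts before they reach an external LLM. The sketch below is a hypothetical illustration of such a check; the patterns, the screen_prompt function, and the policy it encodes are assumptions for demonstration purposes, not an NCSC recommendation or a real data-loss-prevention tool.

```python
# Hypothetical pre-submission check on text bound for an external LLM.
# The patterns and policy here are illustrative only, not a real DLP product.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),     # email addresses
    re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),                      # AWS-style access key IDs
    re.compile(r"(?i)\b(confidential|internal only|trade secret)\b"),  # document markings
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt appears safe to send to an external LLM."""
    return not any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    print(screen_prompt("Summarise this public press release for me."))  # True
    print(screen_prompt("Review this CONFIDENTIAL merger memo: ..."))    # False
```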

As the disruptive power of AI becomes increasingly apparent, CEOs at major companies will be making investment decisions about AI, and security considerations need to be central to those deliberations, Cameron argued.

Second, there is a need to maximise the benefits of AI to the cyber defence community, Cameron said. “AI has the potential to improve cybersecurity by dramatically increasing the timeliness and accuracy of threat detection and response. We [also] need to remember that in addition to helping make our country safer, the AI cybersecurity sector also has huge economic potential.”

Third, the cybersecurity sector must understand how adversaries – whether they are hostile states or cybercriminals – are using AI, and how to disrupt them, Cameron said. “We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft.” China is positioning itself to be a world leader in AI and, if successful, we must assume that it will use this to secure a dominant role in global affairs, Cameron added. “LLMs also present a significant opportunity for states and cybercriminals too. They lower barriers to entry for some attacks. For example, they make writing convincing spear-phishing emails much easier for foreign nationals without strong linguistic skills.”

Michael Hill is the UK editor of CSO Online. He has spent the past eight years covering various aspects of the cybersecurity industry, with a particular interest in the ever-evolving human elements of information security. A keen storyteller with a passion for the publishing process, he enjoys working creatively to produce media that has the biggest possible impact on the audience.
