UK NCSC CEO Lindy Cameron reflects on the key cybersecurity challenges the UK faces from rapidly developing AI technologies like generative AI and LLMs.

Credit: Jonny Lindner

The cybersecurity industry cannot rely on its ability to retrofit security into developing machine learning (ML) and artificial intelligence (AI) technology to prevent security risks introduced by innovations such as generative AI and large language models (LLMs), according to Lindy Cameron, CEO of the UK National Cyber Security Centre (NCSC). Cameron was speaking today in the opening keynote of the Chatham House Cyber 2023 conference, where she addressed the key cybersecurity challenges the UK faces from rapidly developing AI technologies such as OpenAI's ChatGPT chatbot.

Security has often been a secondary consideration when the pace of technology development is high, but AI developers must predict possible attacks and identify ways to mitigate them, Cameron said. Failure to do so risks designing vulnerabilities into future AI systems, she warned. "Amid the huge dystopian hype about the impact of AI, I think there is a danger that we miss the real, practical steps that we need to take to secure AI."

UK NCSC focuses on three elements to help secure developing AI

Being secure is an essential prerequisite for ensuring that AI is safe, ethical, explainable, reliable, and as predictable as possible, Cameron said. "Users need reassurance that machine learning is being deployed securely, without putting personal safety or personal data at risk. In addition to the overarching need for security to be built into AI and ML systems, and for companies profiting from AI to be responsible vendors, the NCSC is focusing on three elements to help with the cybersecurity of AI."

First, the NCSC believes it is essential that organisations using AI understand the risks they are running by using it – and how to mitigate them, Cameron stated.
"For example, machine learning introduces an entirely new category of attack: adversarial attacks. As machine learning is so heavily reliant on the data used for the training, if that data is manipulated, it creates potential for certain inputs to result in unintended behaviour, which adversaries can then exploit."

LLMs pose entirely different security challenges, Cameron continued. "For example – an organisation's intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts."

As the disruptive power of AI becomes increasingly apparent, CEOs at major companies will be making investment decisions about AI, and we need to ensure that security considerations are central to these deliberations, Cameron argued.

Second, there is a need to maximise the benefits of AI to the cyber defence community. "AI has the potential to improve cybersecurity by dramatically increasing the timeliness and accuracy of threat detection and response. We [also] need to remember that in addition to helping make our country safer, the AI cybersecurity sector also has huge economic potential."

Third, the cybersecurity sector must understand how adversaries – whether they are hostile states or cybercriminals – are using AI, and how to disrupt them, Cameron said. "We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft."

China is positioning itself to be a world leader in AI and, if successful, we must assume that it will use this to secure a dominant role in global affairs, Cameron added. "LLMs also present a significant opportunity for states and cybercriminals too. They lower barriers to entry for some attacks.
For example, they make writing convincing spear-phishing emails much easier for foreign nationals without strong linguistic skills."
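The data-poisoning risk Cameron describes – manipulated training data causing certain inputs to produce unintended behaviour – can be illustrated with a deliberately tiny sketch. This is not an NCSC example: it assumes a toy nearest-centroid classifier on 2-D feature points, where an attacker injects mislabeled copies of their own sample into the "benign" training class to drag that class's centroid toward the sample.

```python
import math

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(sample, training):
    """Nearest-centroid classifier: the label whose centroid is closest wins."""
    dists = {
        label: math.dist(sample, centroid(points))
        for label, points in training.items()
    }
    return min(dists, key=dists.get)

# Clean training data: two well-separated clusters.
training = {
    "benign":    [(0, 0), (1, 0), (0, 1), (1, 1)],
    "malicious": [(4, 4), (5, 4), (4, 5), (5, 5)],
}

attacker_sample = (3.5, 3.5)
print(classify(attacker_sample, training))   # malicious

# Poisoning: inject mislabeled copies of the attacker's own sample into the
# "benign" class, dragging its centroid toward the sample until the model
# misclassifies it.
training["benign"] += [attacker_sample] * 9
print(classify(attacker_sample, training))   # benign
```

The same input flips from "malicious" to "benign" without the model code changing at all – only the training data was tampered with, which is why the NCSC stresses securing the data pipeline as well as the model.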