The UK's Information Commissioner's Office reminds organizations that data protection laws still apply to unfiltered data used to train large language models.

The UK's data regulator has issued a warning to tech companies about protecting personal information when developing and deploying generative AI large language models.

Less than a week after Italy's data privacy regulator banned ChatGPT over alleged privacy violations, the Information Commissioner's Office (ICO) published a blog post reminding organizations that data protection laws still apply when the personal information being processed comes from publicly accessible sources.

"Organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach," said Stephen Almond, the ICO's director of technology and innovation, in the post.

Almond also said that organizations processing personal data to develop generative AI should ask themselves several questions, centering on: what their lawful basis for processing personal data is, how they can mitigate security risks, and how they will respond to individual rights requests.

"There really can be no excuse for getting the privacy implications of generative AI wrong," Almond said, adding that ChatGPT itself recently told him that "generative AI, like any other technology, has the potential to pose risks to data privacy if not used responsibly."

"We'll be working hard to make sure that organisations get it right," Almond said.
The ICO and the Italian data regulator are not the only ones to have recently raised concerns about the potential risks generative AI could pose to the public.

Last month, Apple co-founder Steve Wozniak, Twitter owner Elon Musk, and a group of 1,100 technology leaders and scientists called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4.

In an open letter, the signatories depicted a dystopian future and questioned whether advanced AI could lead to a "loss of control of our civilization," while also warning of the potential threat to democracy if chatbots pretending to be humans could flood social media platforms with propaganda and "fake news." The group also voiced concern that AI could "automate away all the jobs, including the fulfilling ones."

Why AI regulation is a challenge

When it comes to regulating AI, the biggest challenge is that innovation moves so fast that regulations have a hard time keeping up, said Frank Buytendijk, an analyst at Gartner, noting that if regulations are too specific, they lose effectiveness the moment technology moves on. "If they are too high level, then they have a hard time being effective as they are not clear," he said.

However, Buytendijk added that it is not regulation that could ultimately stifle AI innovation, but rather a loss of trust and social acceptance caused by too many costly mistakes.

"AI regulation, demanding models to be checked for bias, and demanding algorithms to be more transparent, triggers a lot of innovation too, in making sure bias can be detected and transparency and explainability can be achieved," Buytendijk said.