Deb Radcliff, Contributing Writer

How the new deepfake reality will impact cyber insurance

Feature
29 Jun 2023 | 8 mins
Cybercrime | Data and Information Security | Insurance Industry

Cyber liability insurers are beginning to take notice of the threats posed by deepfakes. That may mean changes in insurance policies and what it takes to qualify for one.


With the explosion of generative AI programs such as ChatGPT, DALL-E, and Bing, it's becoming easier to create convincing deepfakes that sound, look, move, and emote realistically enough to fool business users and customers into falling for new forms of trickery. And the deepfakes we're seeing today, such as the faked broadcast of Russian President Vladimir Putin declaring martial law, aired over trusted television and radio stations, are only the beginning.

Deepfakes can ruin a company's reputation, bypass biometric controls, phish unsuspecting users into clicking malicious links, and convince financial agents to transfer money to offshore accounts. Attacks leveraging deepfakes can happen over many channels, from social media to fake person-to-person video calls over Zoom. Voicemail, Slack channels, email, mobile messaging, and metaverses are all fair game for distributing deepfake scams to businesses and personal users.

Cyber liability insurers are beginning to take notice, and as they do, their security requirements are beginning to adjust to the new 'fake' reality. This includes, but is not limited to, better hygiene across the enterprise, renewed focus on home worker systems, enforced multifactor authentication, out-of-band confirmation to avoid falling for deepfake phishing attempts, user and partner education, and third-party context-based verification services or tools.

Even the diligent can be deepfake-fooled

In early June 2023, two instances of voicemail impersonation were reported to Rob Ferrini, cyber insurance program manager at McGowanPRO, a Framingham, Massachusetts-based firm with 5,000 cyber-insured clients covered by its insurance partners.

One led to an open claim now under investigation. The insured was an accounting firm, where an accountant received a voicemail, apparently from one of his business customers, asking him to change the payment instructions for a vendor and pay a $77,000 invoice. "The accountant then called their client to verify, and his client reported that he got the same voicemail from their vendor account, so it's probably OK. It ended up that the accountant's client paid a $77,000 invoice to a fraudulent bank account," Ferrini says.

While the accountant did his due diligence and called his client, the client did not do the same and call the vendor to confirm that the voicemail was genuine. If the insurance investigators cannot claw the money back, the accountant's client may not get reimbursed. Conversely, in that same week, a wealth manager contacted Ferrini to tell him how out-of-band authentication (OOBA) protected his client from falling for an impersonator trying to get him to open a fake mortgage. Before giving the scammer any information, the client simply called the wealth manager to ask whether the request was real, and the wealth manager told him it was fake.

Other layers need to adapt to the deepfake threat

"Many cyber insurance carriers require out-of-band authentication controls to underwrite policies. Out-of-band authentication would mean you call them directly, making sure with two different methods that this person is who they say they are before wiring money to a new account," says Ryan Bell, threat intelligence manager at Corvus, a cyber insurance company based in Boston.

On its security tips page, Corvus provides education on OOBA along with employee awareness, multifactor authentication, email security and logging -- all of which can be applied to deepfake prevention and education. For example, the same prevention recommendations for wire-transfer fraud should apply to social engineering through deepfakes, but requirements to prevent deepfake-initiated scams may need to be explicitly stated in the insurance policies.

The other issue is how to protect the voices, likenesses, interests, and expressions of CEOs and other executives, whose publicly available audio and video can be scraped and fed into generative AI programs to create deepfakes, Bell says. Since it's impossible to keep these people off the web, organizations will need to tune their dark web and external threat intelligence to look for precursors to deepfake creation.
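What that tuning might look like in practice is suggested by the hypothetical sketch below, which filters incoming threat-intelligence items for chatter that pairs an executive's name with voice-cloning or impersonation terms. The feed format, field names, and keyword list are all assumptions made for illustration.

```python
# Hypothetical filter over an external threat-intelligence feed to flag
# possible deepfake precursors. Field names and terms are illustrative.
EXECUTIVES = {"jane doe", "john smith"}  # your leadership roster, lowercased
PRECURSOR_TERMS = {"voice clone", "voice sample", "deepfake",
                   "face swap", "impersonation"}

def flag_deepfake_precursors(feed_items: list[dict]) -> list[dict]:
    """Return feed items mentioning an executive alongside a precursor term."""
    hits = []
    for item in feed_items:
        text = item.get("text", "").lower()  # assumed per-item 'text' field
        if any(name in text for name in EXECUTIVES) and \
           any(term in text for term in PRECURSOR_TERMS):
            hits.append(item)
    return hits
```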

Will insurers require new deepfake detection tools or services?

Employee awareness training will only go so far as deepfakes get more realistic and interactive, with AI-generated images, trained facial expressions, and manipulated voices. Take, for example, work-from-home deepfake interviewees, which the FBI's Internet Crime Complaint Center (IC3) started warning about in 2022. At some point it becomes impossible for an HR interviewer to identify the fake, so deepfake detection and verification tools will become an important part of the employer's arsenal.

"There are programs that can detect if a person is live or fake by way of blood flow, movements, background, and more criteria, taking a recorded or live image down to the pixel level. These services look for signs of fakes that are largely undetectable at the human level," says Geoff Kohl, a senior director at the Security Industry Association (SIA) who authored a paper on the impact of deepfakes on cybersecurity programs that includes interviews with SIA members. "Today's emerging deepfake detection solutions are largely delivered as standalone software, but for these offerings to scale and become available to businesses everywhere, it is inevitable that these software solutions will be offered as cloud-based services."

Work is underway on open-source standards to assist viewers and security tools in verifying the authenticity of an image. Microsoft's Video Authenticator was built and tested on data sets prior to its release in 2020, and Microsoft concedes that the model will not hold up to advances in generative AI. And since 2021, policy groups have been talking about contextualization engines to determine the realness of online media, including images and voice, and companies like Google are announcing new tools to detect fakes in image search results. A handful of API- and browser-based plugins are also emerging to detect the 'liveness' of videos on social media.
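At their core, most of these authenticity schemes rest on the same primitive: a cryptographic signature binding the media bytes to a known publisher, which a viewer or security tool can then check. The Python sketch below illustrates that core idea with an Ed25519 signature via the third-party cryptography package; it is a simplified illustration, not an implementation of any particular standard.

```python
# Simplified illustration of signature-based media provenance, the idea
# underlying emerging authenticity standards. Not any specific standard.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def media_is_authentic(media_bytes: bytes, signature: bytes,
                       publisher_key_bytes: bytes) -> bool:
    """Verify the publisher's detached signature over the raw media bytes."""
    key = Ed25519PublicKey.from_public_bytes(publisher_key_bytes)
    try:
        key.verify(signature, media_bytes)
        return True   # bytes match what the publisher signed
    except InvalidSignature:
        return False  # media was altered or did not come from this publisher
```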

And while these tools may serve media companies and social media users, they seem to neglect the key channels where deepfakes can impact business, government, and critical infrastructure agencies: voicemail, email, Zoom, Slack, and other common business communication channels. Judging by traditional security vendors' lack of readiness for generative AI at the April 2023 RSA Security Conference, don't expect them to fill this space any time soon. So a new breed of cloud-based detection and inspection services will likely emerge to serve enterprises.

What's in your policy about deepfakes?

For the most part, organizations will need to focus on the requirements in their cyber insurance policies. Since most policies call for multifactor authentication as a prerequisite for coverage, Ferrini of McGowanPRO suggests that businesses strengthen these capabilities across their organizations. That way, if deepfakes are used to thwart biometric access controls, a second form of authentication still protects against unauthorized access.
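To make the layering concrete, here is a minimal stdlib-only Python sketch of a time-based one-time password (TOTP, RFC 6238) standing behind a biometric check, so a cloned face or voice alone cannot get in. It illustrates the principle only; production systems should use a vetted MFA product or library rather than hand-rolled code.

```python
# Minimal TOTP (RFC 6238) as a second factor behind a biometric check.
# Illustrative only; use a vetted MFA library in production.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(biometric_ok: bool, submitted_code: str, secret_b32: str) -> bool:
    """A deepfaked biometric alone fails; the second factor must also match."""
    return biometric_ok and hmac.compare_digest(submitted_code, totp(secret_b32))
```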

Wendy Esposito, the lead of the GenAI Commission at Benesch Law, an AmLaw 200 firm with offices in the US and China, advises CISOs to review their cyber liability policies regularly, even quarterly or semi-annually if possible. "Organizations should be looking at their policies now to understand what is and is not covered, because technology and cyber threats are changing so quickly. CISOs should pick up that document regularly and hold discussions with risk, legal, and finance teams to measure current and future risks against potential gaps in coverage. At least annually, boards of directors need to be briefed on the organization's cyber liability coverage and any identified gaps."

Take reputational damage coverage, for example. If financial losses are caused by deepfakes posted on social media, podcasts, YouTube, or network television or radio (as in the faked broadcast of Russia's president), current policies likely won't cover them because the losses weren't caused by a breach of the company's network or systems, she adds.

Esposito has reviewed many policies on behalf of her clients, and they cover such losses only if they were caused by network penetration or a cyberattack. Conversely, if attackers use deepfakes to phish an employee and then install ransomware on company systems, certain policies would probably cover that because it involves an intrusion -- so long as the proper email security controls and user training programs are actively in place (and those programs will need to be deepfake aware). It's through this lens that CISOs and insurers will need to reexamine existing policy requirements and technical controls. Esposito predicts that deepfake-related losses will be added to cyber insurance policies, but at additional cost to the insured, adding that "cyber liability insurance is very expensive and deepfake coverage will likely add to that expense."