4 alternatives to encryption backdoors, but no silver bullet

News Analysis
08 Feb 2022 | 7 mins
Encryption, Privacy

Alternatives to backdoors in end-to-end encryption exist, but not all address privacy and security concerns, say experts at last week’s Enigma conference.


End-to-end encrypted communication has been a boon to security and privacy over the past 12 years since Apple, Signal, email providers, and other early adopters first started deploying the technology. At the same time, law enforcement authorities around the globe have pushed for technological means to pry open end-to-end encrypted content, arguing that the lack of visibility provides a haven for criminals, terrorists, and child abusers to hatch their plans with impunity.

In 2016, Apple prevailed in a now-famous legal standoff with the FBI, then led by Director James Comey, over demands that it unlock an encrypted phone used by a mass shooter in San Bernardino, California. In 2019, Attorney General William Barr revived the so-called backdoor debate to advocate some means of breaking encryption to thwart those who distribute child sexual abuse material. Last month, the UK government kicked off a PR campaign to lay the groundwork for killing off end-to-end encryption, ostensibly to crack down on child sex abusers.

Cybersecurity experts and privacy advocates have consistently condemned these efforts as misguided attempts to break what they view as an essential means to keep the internet safer and users more protected from malice. Yet it’s hard to argue that abuse, crime, and malware aren’t rapidly rising across the internet.

The question then arises: Without introducing harmful encryption backdoors, how can organizations identify miscreants communicating over their networks if their communications are hidden from view? Experts speaking at last week’s Enigma conference offered some solutions.

4 proposals for spotting threats and questionable content

The first step in solving this dilemma is defining end-to-end encryption, Mallory Knodel, CTO at the Center for Democracy and Technology (CDT), said. “What is end-to-end encryption is surprisingly not agreed upon. There is some agreement or some convergence around the use of end-to-end encrypted messaging, for example, the use of the Signal protocol. There’s an effort in the IETF to standardize something called the Messaging Layer Security protocol, which can be used in messaging, and in video and a variety of contexts.”

The concept remains largely undefined. “Does E2EE have to include perfect forward secrecy or deniability or other features? That is not necessarily agreed upon,” Knodel said. “Now is a really critical time to define what are the required features of end-to-end encryption, what breaks that, and what doesn’t.”

Citing a study that CDT released last August, Knodel said several proposals have been floated on how to spot threats and questionable content in end-to-end encrypted environments. The first is user reporting, which gives users built-in tools to block and report suspicious content. “We didn’t think this is terrible,” she said, particularly if done in a privacy-enhancing way.

The downside to user reporting as a means of spotting undesirable content is that it sacrifices plausible deniability, a feature of end-to-end encryption that allows users to deny having sent a particular message, Knodel said.

Another technique is metadata analysis, which examines the metadata that typically accompanies content transmitted over the internet, such as file size, file type, the date and time it was sent, who sent it, and who received it. “We would not necessarily suggest creating more metadata to do analysis, but in general, what metadata is there might be a way of doing some degree of content moderation, especially in terms of behavior,” she said. “Platforms should always be reducing the amount of metadata they keep for sure.”
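As a rough illustration of behavior-focused metadata analysis, the sketch below flags senders who blast identically sized attachments to many recipients in a short window. The record fields, thresholds, and the flag_bulk_senders helper are hypothetical, not drawn from any particular platform.

```python
# Hypothetical sketch: flag accounts that send identically sized attachments to
# many recipients in a short window, using only metadata -- never message content.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class MessageMeta:
    sender: str
    recipient: str
    size_bytes: int   # size of the (still encrypted) payload
    sent_at: float    # Unix timestamp

def flag_bulk_senders(records: list[MessageMeta],
                      min_recipients: int = 50,
                      window_secs: int = 3600) -> set[str]:
    """Return senders who hit many distinct recipients with same-size payloads
    inside the time window -- a crude behavioral spam/abuse signal."""
    buckets: dict[tuple[str, int], list[MessageMeta]] = defaultdict(list)
    for rec in records:
        buckets[(rec.sender, rec.size_bytes)].append(rec)

    flagged: set[str] = set()
    for (sender, _size), msgs in buckets.items():
        msgs.sort(key=lambda m: m.sent_at)
        for i, first in enumerate(msgs):
            in_window = [m for m in msgs[i:] if m.sent_at - first.sent_at <= window_secs]
            if len({m.recipient for m in in_window}) >= min_recipients:
                flagged.add(sender)
                break
    return flagged
```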

A third technique for spotting unwanted content is traceability, used primarily in India and Brazil over WhatsApp to look for disinformation. The scheme doesn’t look at content at all; instead, it asks where the data came from, who was the first person to send the message, how many people have seen it, and more.

“I think of traceability as enhanced metadata. It’s exactly what we don’t want platforms to do,” Knodel said. “Platforms are probably not tracking every single origin of a message, who it passes through, who sees it. That’s an extra ask that is taking an end-to-end encrypted messaging system and building additional features on top of it to aid law enforcement. We would reject traceability.”
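To make the idea concrete, the sketch below shows the kind of “enhanced metadata” a traceability mandate implies: a hypothetical origin tag and forward counter attached to each message, which the platform could inspect without reading the content. The TraceTag structure and its fields are illustrative assumptions, not any platform’s actual design.

```python
# Hypothetical sketch of the "enhanced metadata" traceability would require:
# an origin tag and forward counter carried alongside each message, inspectable
# by the platform without decrypting the content. Illustrative only.
from dataclasses import dataclass, replace
import hashlib

@dataclass(frozen=True)
class TraceTag:
    origin_id: str      # hash of the first sender's account identifier
    forward_count: int  # how many times the message has been forwarded

def tag_new_message(sender_account: str) -> TraceTag:
    origin = hashlib.sha256(sender_account.encode()).hexdigest()[:16]
    return TraceTag(origin_id=origin, forward_count=0)

def tag_forward(tag: TraceTag) -> TraceTag:
    """Each forward preserves the origin and increments the hop count."""
    return replace(tag, forward_count=tag.forward_count + 1)
```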

A fourth model is perceptual hashing, which compares content flowing over the network against a database of known, disallowed content using a “fingerprint” derived from that content. “It’s not something we would recommend. There are questions about whether it’s effective or good enough,” Knodel said.
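As a minimal sketch of how such a fingerprint can work, the average-hash example below reduces an image to a 64-bit value and compares it to stored hashes by Hamming distance. It assumes the Pillow imaging library is installed, the known_hashes entry is a made-up placeholder, and production systems such as PhotoDNA use far more robust algorithms.

```python
# Minimal average-hash sketch (assumes the Pillow imaging library is installed).
# Production perceptual hashes are far more robust; this only shows the idea.
from PIL import Image

def average_hash(path: str) -> int:
    """Reduce an image to a 64-bit fingerprint that survives minor edits."""
    img = Image.open(path).convert("L").resize((8, 8))  # grayscale, 8x8 pixels
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i  # set bit i if the pixel is brighter than average
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Hypothetical placeholder for fingerprints derived from known disallowed images.
known_hashes = {0x3C7E_FFFF_7E3C_1800}

def matches_known_content(path: str, threshold: int = 10) -> bool:
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```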

Finally, predictive matching trains models on examples of known undesirable content and then hunts for matches between that “bad stuff” and novel content. “Predictive modeling is essentially worse than perceptual hashing, so we reject it,” she said.
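A minimal sketch of what predictive matching could look like in practice, assuming scikit-learn is available and using obviously toy phrases as placeholders for a real labeled corpus:

```python
# Hypothetical sketch: a classifier trained on labeled examples of known "bad"
# vs. benign text, then applied to novel content (assumes scikit-learn).
# Real systems train on large curated datasets and tune thresholds carefully.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "buy illegal goods here",       # placeholder "bad" examples
    "click to claim your prize",
    "see you at lunch tomorrow",    # placeholder benign examples
    "meeting moved to 3pm",
]
train_labels = [1, 1, 0, 0]         # 1 = undesirable, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

novel = ["claim your free prize now"]
score = model.predict_proba(novel)[0][1]  # estimated probability the content is "bad"
print(f"flag for review: {score > 0.5} (score={score:.2f})")
```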

No silver bullet for online abuse

Riana Pfefferkorn, research scholar at the Stanford Internet Observatory, considers these methods content-oblivious because they don’t necessarily require access to content on the provider’s part. The Observatory surveyed 13 online providers, including WhatsApp, Facebook Messenger, and Instagram Messaging, which collectively serve most of the world’s internet users.

Based on the survey results, it’s clear that while end-to-end encryption prevents automated scanning as a tool for detecting abuse, because scanning is content-dependent, it does not affect user reporting or metadata analysis, because those are content-oblivious. It’s also clear, Pfefferkorn said, that encryption does not affect providers’ abuse detection efforts uniformly. Specifically, content-oblivious tools are considered much less helpful than automated scanning for detecting child sexual abuse imagery (CSAI) in an end-to-end encrypted environment.

That alone should not be a pretext for breaking encryption. “For policymakers, the big takeaway that I want you to get from this talk is that there is just not a silver bullet for online abuse. Automated content scanning is too often treated as though it is a silver bullet and cure-all for online abuse,” Pfefferkorn said. Moreover, “CSAI content is unique. It can’t be used as the basis for developing a trust and safety program, much less passing a law. There’s no guarantee that automated content scanning is going to continue to be effective against CSAI as it is right now.”

The rise of hate and harassment calls for new ideas

Kurt Thomas, research scientist at Google, said that the recent rapid rise of hate and harassment online should foster a rethinking of how providers deal with this kind of content. “The problem is that a lot of the existing protections we have in this context really focus on for-profit cybercrime,” he said. “We’ve made immense strides in warning people about spam, phishing, and malware and getting them not to go to dangerous websites. We’ve warned them about data breaches and password reuse, and behaviors that put them at risk of being taken over. None of these map onto the security and privacy needs in the hate and harassment context. We need to expand our security threat models to address these attacks that lack the same scale or profit incentives.”

Hate and harassment threat actors aren’t motivated by money, Thomas said. “The goal is to silence their [victim’s] voice, damage their reputation, reduce their sexual or physical safety or even reduce their financial security or their ability to operate independently.”

Addressing hate and harassment “is going to require some unique combination of warnings, nudges, moderation, automated detection, or even just conscious design,” Thomas said. “How we go about solving toxic content is going to be fundamentally different than how we go about surveillance, or impersonation, or intimate content leaking online.”