By dswinhoe, Editor
Why fake data is a serious IoT security concern

Feature
07 Nov 2018 | 10 mins
Internet of Things | Security | Security Infrastructure

Fake internet of things data could break your business and kill your customers.

Credit: Getty Images

Data powers the internet of things (IoT): billions of devices feed terabytes of data into systems that predict failures, control buildings and cities, and even drive our cars.

For people to be safe in a world of internet-connected things, we must be able to trust the data coming in from those devices. What if, amongst the hundreds of terabytes of data your IoT devices are generating, malicious actors are injecting fake information to throw off your business or do physical harm?

IoT security hasn’t kept up, opening the door for fakery

Depending on your preferred analyst, the number of connected devices in use over the next few years could range from 20 billion to an eye-popping 100 billion. However, IoT security has been notoriously poor, and IoT malware is becoming increasingly sophisticated.

The Mirai botnet – powered by tens of thousands of insecure printers, IP cameras, residential gateways and baby monitors – took down large swaths of the internet with a distributed denial of service (DDoS) attack. Whether focused on mining cryptocurrency, distributing malware, or pure DDoS attacks, new variants of IoT-focused malware are being discovered with alarming regularity.

What if, instead of simply taking over devices, attackers try something more insidious and manipulate the data collected from IoT devices? Gartner has predicted that by 2020, there will be a black market exceeding $5 billion dedicated to selling fake sensor and video data for enabling criminal activity. “A black market for fake or corrupted sensor and video data will mean that data can be compromised or substituted with inaccurate or deliberately manipulated data,” Ted Friedman, Gartner vice president and distinguished analyst said at the time.

Friedman tells CSO that a market dedicated to such data has yet to appear. Several other security firms CSO contacted agree, though they believe Gartner was deliberately “aggressive” in its prediction as a cautionary warning for readers.

However, the lack of criminal interest so far doesn’t mean there isn’t a real threat. Security researchers are regularly finding new vulnerabilities, flaws and weaknesses in IoT devices – including attacks involving fake or manipulated data – and at least one instance of a fake data attack has occurred in the wild.

The damage fake IoT data can do

The effects of man-in-the-middle (MitM) or fake data injection (FDI) attacks vary in size and scope depending on the device and the industry in which it operates. In the healthcare industry, for example, data from fitness trackers could be altered to change insurance rates. At the more extreme end, modifying data coming from connected devices could see patients receive incorrect treatments or doses. Last year, over 400,000 pacemakers had to be recalled for a firmware update over fears that attackers could cause “inappropriate pacing” of patients’ hearts.

In the oil and gas industry, where predictive maintenance is becoming a major use case around IoT, tampering with sensors could cause costly equipment failures. In manufacturing, manipulating any sensors involved in process automation could, for example, interfere with just-in-time assembly lines and cause bottlenecks, or ruin anything that involves a chemical reaction and potentially lead to defective goods.

A recent study by IBM into smart city infrastructure found a number of zero-day vulnerabilities that could enable dangerous compromises. “Attackers could manipulate water-level sensor responses to report flooding in an area where there is none—creating panic, evacuations and destabilization,” Daniel Crowley, head of research at IBM’s X-Force Red, said in the report.

“Controlling additional systems could enable an attacker to set off a string of building alarms or trigger gunshot sounds on audio sensors across town, further fuelling panic.” In an agriculture scenario, Crowley added, “manipulation of this sensor data could result in irreversible crop damage, targeting a specific farm or an entire region.”

One probable real-world example of fake sensor data involved GPS spoofing in the shipping industry. Last year, over 20 ships in the Black Sea suffered attacks to their GPS systems – normally accurate to within a few meters – showing them 25 nautical miles off their actual location.

In less catastrophic examples, machine learning models require huge amounts of data to train on. Given the difficulties companies are having in removing racism and sexism from their machine learning systems when using legitimate data sets, tampered data sets could cause even more harm. Researchers are already looking at ways attackers can compromise machine learning models and neural networks, and polluting the data before it even reaches the machine learning systems could be very effective.
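As a toy illustration of that poisoning risk (the numbers and field meanings here are hypothetical, not from any study cited above), even a couple of injected fake readings can noticeably skew a statistic that a downstream model would learn from the stream:

```python
# Toy illustration: a handful of injected fake readings shifts the
# baseline statistic a downstream model would learn from the stream.

def mean(readings):
    return sum(readings) / len(readings)

legit = [20.0, 20.5, 19.8, 20.2, 20.1, 19.9, 20.3, 20.0]  # e.g. temperature (C)
injected = [45.0, 46.0]                                    # attacker-supplied values

clean_baseline = mean(legit)             # 20.1
poisoned_baseline = mean(legit + injected)  # 25.18 - pulled up by two fake points

print(round(clean_baseline, 2))
print(round(poisoned_baseline, 2))
```

Two fake values out of ten shift the learned baseline by about 25 percent; at IoT scale an attacker can drip-feed such values slowly enough that no single one looks anomalous.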

In the future, as companies grow networks and start feeding IoT data into partner networks or creating public data streams, ensuring that data is validated and trusted becomes even more important, as polluted data could undermine trust in your company. “As there is more ‘sharing,’ this opens the opportunity to push such fake data into the operations of another party,” says Gartner’s Friedman. “For this reason, quality controls around sensor data become more important in the future.”

Many IoT attack vectors, difficult to spot

The range of ways to modify or pollute IoT data streams means the attack surface is large. Directly attacking sensors is the least scalable approach but hardest to spot and defend against, while attacking a centralized data store is the most scalable and valuable target but hardest to hit and easiest to defend. Between those extremes, attackers could hack a device and change the data it sends, add spoofed devices to a network to send their own fake data streams, or target any edge processing devices.

Yasser Shoukry, assistant professor at the University of Maryland, has given various presentations looking at different ways to attack IoT networks. “Sensors are a very weak point inside the whole system, and you can either corrupt the information or you can add extra sensors that do not exist on the actual network,” he says. “It is a very hard problem. The problem is we don’t have enough redundancy. If you don’t have enough redundancy, once you attack those few sensors you have corrupted the whole information coming from that sector of your network.”

Attack methods Shoukry has researched range from compromising a car’s automatic braking system using an electromagnetic actuator and interfering with the gyroscopes of drones, to attacking vehicle-to-infrastructure (V2I) systems by spoofing the number of vehicles on the road and creating virtual congestion with non-existent cars. “You can create a complete traffic jam inside the smart city infrastructure by manipulating the sensory information,” he says.

He has also looked at ways to interfere with the electrical grid. “Sensor information is used to stabilize the grid, and corrupting that information, you can destabilize the whole grid and take it down,” says Shoukry.

Industrial Control Systems (ICS) are often found lacking when it comes to security. A recent Trend Micro study found thousands of energy and water systems exposed online. The TRITON malware specifically targets ICS and is designed to modify safety controls.

Detecting direct attacks on sensors is almost impossible, and they often leave little digital record of tampering. For example, last year researchers at the University of Michigan found a way to attack accelerometers using only sound waves. On a larger scale, a Kaspersky survey found nearly half of industrial companies say they would have no way to detect attacks on their industrial control system (ICS) devices.

“At the moment there aren’t any standards in place for how sensors capture data. They just capture and send it,” says Bharat Mistry, principal security strategist at Trend Micro. “Some vendors might do encrypted tunnels between the sensor and the data acquisition server, but the data itself isn’t digitally signed in any way so you could quite easily gather data from one set of sensors and say it’s from another set of sensors.”
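The gap Mistry describes – data that is tunneled but never signed – can be closed with per-reading authentication. A minimal sketch, assuming a pre-shared key provisioned to each device (the key, device ID and field names here are hypothetical), uses an HMAC so the acquisition server can tell one sensor's data from another's and detect tampering:

```python
import hashlib
import hmac
import json

SECRET = b"per-device-provisioned-key"  # hypothetical pre-shared key

def sign_reading(device_id, value, timestamp):
    """Attach an HMAC so the acquisition server can verify origin and integrity."""
    payload = json.dumps({"device": device_id, "value": value, "ts": timestamp},
                         sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": tag}

def verify_reading(record):
    """Recompute the HMAC over the payload and compare in constant time."""
    expected = hmac.new(SECRET, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

rec = sign_reading("sensor-42", 20.1, 1541548800)
print(verify_reading(rec))  # True

# Tampering with the value in transit invalidates the signature
rec["payload"] = rec["payload"].replace("20.1", "45.0")
print(verify_reading(rec))  # False
```

In practice each device would hold its own key (ideally in secure hardware), so a reading from one set of sensors could no longer be passed off as coming from another.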

Given the mass volumes of data, a human is unlikely to spot any manipulation of data unless it creates a large and unexpected spike in the metrics, while a machine will simply ingest whatever is fed into it. The unstructured nature of much of the data can also make tampering difficult to spot. Attacking multiple sensors within a network so they all report similar information can make them look credible together, even if they are out of sync with other sensors.

“It is a very difficult problem, especially when the attacked data is all collated together,” says Shoukry. “Unless you can cross-check different combinations of sensors together, you may not be able to find such an attack.”
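The cross-checking Shoukry describes depends on the redundancy he says networks lack. Where redundant sensors do exist, one simple sketch (sensor IDs, values and the tolerance threshold below are illustrative) is to compare each sensor against the group median and flag outliers:

```python
from statistics import median

def flag_outliers(readings, tolerance):
    """Cross-check redundant sensors covering the same zone: flag any sensor
    whose reading deviates from the group median by more than `tolerance`."""
    m = median(readings.values())
    return {sensor_id for sensor_id, value in readings.items()
            if abs(value - m) > tolerance}

# Five water-level sensors in one zone; two report a spoofed flood level.
zone = {"s1": 1.2, "s2": 1.3, "s3": 1.25, "s4": 6.0, "s5": 6.1}
print(sorted(flag_outliers(zone, tolerance=1.0)))  # ['s4', 's5']
```

The limitation is exactly the one Shoukry raises: if an attacker compromises a majority of the redundant sensors and makes them agree, the median shifts with them and the honest sensors get flagged instead.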

While nation-state-style disruption attacks – as seen during the 2015 Ukraine power grid attack or the Stuxnet attack on an Iranian nuclear facility – would be the most likely, Mistry says there is scope for extortion attacks. “Criminals can easily bring the FUD [fear, uncertainty and doubt] into the equation and say, ‘your data’s been corrupted but we’re not going to tell what’s been changed, but if you pay us we’ll tell what’s been changed and how.’”

“If criminals start tampering with data but don’t tell an organization what’s been changed, then you’re in a situation of what do you do,” says Mistry. “You don’t know how long that data has been corrupted, what data has been tampered, how much product do you have to recall. I can see a lot of organizations paying [to find out what data from what sensors over what timeline] especially if that’s data they can’t recover.”

Securing the internet of things

While few attacks involving fake IoT data are known, there has been no shortage of normal attacks involving insecure things. Both device manufacturers and the companies deploying such devices should be looking to improve their IoT security.        

In the UK, the Government Communications Headquarters (GCHQ) intelligence agency released a Code of Practice for Consumer IoT Security, a non-mandatory document offering guidance on measures – not using default passwords, having a vulnerability disclosure policy, patching regularly, and implementing secure storage and data communication – that would improve device security. The state of California is attempting to go one step further and legislate what good IoT security should look like.

While security in the consumer IoT space continues to be lacking, it seems enterprises are starting to take the issues more seriously. A 451 Research survey found that over half of enterprises deploying IoT projects ranked security as their number one priority, and the 2018 Ponemon Global Encryption Trends Report found that 49 percent of enterprises are either partially or extensively deploying encryption of IoT data on IoT devices.

“A lot of companies are worried about the devices but forget about the data acquisition services around it and the repositories where the data has been churned,” says Trend Micro’s Mistry. “You need checks and balances in place to make sure the data you’re using has that authenticity around it so you’ve got an audit trail from the sensor to the point of storage.”

Preventative steps to reduce the chance of polluted IoT data entering your systems include encrypting data, both at rest and in transit, timestamping and hashing data, and creating a digital signature around each record that’s been collected. Mistry says we should treat our IoT data in a similar fashion to how we handle highly regulated payment or health data.
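One way to get the sensor-to-storage audit trail Mistry describes is to chain the hashed records together, so retroactive tampering anywhere in the stored stream is detectable. A minimal sketch (field names and values are hypothetical; a production system would also sign each record, as the paragraph above notes):

```python
import hashlib
import json

def append_record(log, device_id, value, timestamp):
    """Timestamp and hash each record, chaining it to the previous record's
    hash so later tampering anywhere in the log breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"device": device_id, "value": value, "ts": timestamp, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log):
    """Walk the log, recomputing every hash and link; False means tampering."""
    prev = "0" * 64
    for rec in log:
        body = {k: rec[k] for k in ("device", "value", "ts", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, "sensor-7", 20.1, 1541548800)
append_record(log, "sensor-7", 20.3, 1541548860)
print(verify_chain(log))  # True
log[0]["value"] = 45.0    # retroactive tampering in the repository
print(verify_chain(log))  # False
```

This addresses the repository side of the problem – the attacker who quietly edits stored data, as in Mistry's extortion scenario – rather than fake data injected at the sensor itself.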

When it comes to spotting attacks, Mistry suggests establishing solid baselines so that anything unusual is more easily noticeable, and, if possible, deploying predictive modelling that is constantly compared to actual results and can alert you to discrepancies. “Because of the mass ingestion of data, you would have to have some oversight. You need another set of algorithms watching the machine learning algorithms.”
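The baseline approach Mistry suggests can be sketched very simply: learn the normal range from historical readings, then alert on anything that deviates too far (the readings and the three-sigma threshold below are illustrative assumptions, not Mistry's figures):

```python
from statistics import mean, stdev

def check_against_baseline(history, new_value, k=3.0):
    """Flag a reading that deviates from the established baseline by more
    than k standard deviations - a simple discrepancy alert."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(new_value - mu) > k * sigma

baseline = [20.0, 20.5, 19.8, 20.2, 20.1, 19.9, 20.3, 20.0]
print(check_against_baseline(baseline, 20.4))  # False: within normal range
print(check_against_baseline(baseline, 26.0))  # True: flag for review
```

A real deployment would replace the static mean with a predictive model, as Mistry describes, and note the limitation flagged earlier in the article: an attacker who drifts values slowly, staying inside the tolerance band, can evade exactly this kind of check.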