
Will IoT Devices Make Your Home Less Secure?

Published on 08/24/2016 | Technology


Thu Anh Le

Thu Anh Le wears multiple hats - from social media management and content marketing to SEO analysis, PPC campaign management, and marketing automation.


Overview

Purchasing a new home security device might actually make your home less secure. According to a 2015 report released by HP researchers, 10 out of 10 connected home security systems tested had security failures: researchers were able to brute-force unauthorized access into every system and "watch the home videos remotely". That is scary enough, but the same scenario has already played out outside of a testing environment. The security firm Imperva detected a DDoS attack in which a flood of malicious requests overloaded a cloud service's servers. DDoS attacks from botnets aren't especially unusual, but this time the requests were coming from 900 surveillance cameras in use by businesses. The cameras had been hijacked through simple brute-force attacks.

Why should we be concerned?

Sites like the Shodan search engine show how quickly attackers can find IoT security systems to target. Armed with such a search engine, a potential attacker can easily locate vulnerable IoT devices, apply known exploit methods against them, or simply watch live video streams from any unsecured device. In fact, if you own a home IoT product, there's a chance your home is streaming live on somebody else's network right now.

The deployment and proliferation of IoT devices is accelerating, and they will soon play a huge role in our daily lives. Put simply, the Internet of Things (IoT) is the concept of linking physical devices together into a single ecosystem. Once connected, these physical devices can be monitored and controlled by users, or even by other devices, over the Internet. At the current adoption rate, there are predicted to be 50 billion IoT devices by 2020. That averages out to 7 connected devices for every living person on Earth and equates to a $7.1 trillion market valuation.

The IoT landscape is evolving rapidly, and the implications are enormous. One of the main growth drivers is the mass adoption and deployment of 'smart' home devices. These devices let users monitor and control them remotely, and some can even 'learn' and accomplish tasks automatically. Remotely adjusting the air conditioner before returning home, locking the front door or shutting the garage after you've left the house, or having a smart fridge keep track of your groceries and alert you to what to pick up at the store - none of these ideas is far-fetched anymore.

However, joining the IoT ecosystem forces companies to add several new layers of complexity to their products, and when it comes to security, added complexity is always a problem. As the old saying goes, the devil is in the details, and that is very much the case when it comes to ensuring the security of IoT.

So how do we make this a solvable problem?

Human capabilities alone are not sufficient to stay in the race. Traditional security solutions need to operate at machine speed and at a much bigger scale, and that's where Artificial Intelligence and machine learning, or cognitive security, come into play. A common solution that many companies employ is to look for abnormal patterns and notify the user when an unexpected event is detected. To determine a 'threat', the platform needs to be able to pick out unusual behavior on the user's end. Essentially, if the system detects you doing something you normally wouldn't, it records the event as 'unusual' and fires off an alert to let the user know that an anomaly was detected.
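As a rough illustration of this pattern-based approach, the sketch below trains a simple model on 'normal' events and flags anything that deviates from them. It is a minimal example, assuming hypothetical features and using scikit-learn's IsolationForest; it is not drawn from any particular vendor's product.

    # Minimal anomaly-detection sketch: learn "normal" activity, flag deviations.
    # All feature names and numbers below are hypothetical examples.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-event features: [bytes transferred, connections opened, hour of day]
    normal_events = np.array([
        [1200, 3, 8],
        [1500, 4, 9],
        [1100, 2, 20],
        [1400, 3, 21],
        [1300, 3, 19],
    ])

    # Learn a baseline from the observed "normal" behavior
    model = IsolationForest(contamination=0.05, random_state=0).fit(normal_events)

    # predict() returns -1 when a new event falls outside the learned baseline
    new_event = np.array([[250000, 80, 3]])  # huge transfer, many connections, at 3 a.m.
    if model.predict(new_event)[0] == -1:
        print("Unusual event detected - notifying the user")

In practice the features and thresholds would be learned per household, but the flow is the same: build a baseline from observed behavior, then alert on deviations.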

SparkCognition’s technology takes security to another level

First, our algorithms train locally on your home IoT network. They start by observing your behavior, take those inputs, and gradually 'learn' what is 'normal' from the data (your actual behaviors) presented to them. These 'normal' behaviors then form a baseline against which future behaviors are compared.

Once the machine has learned what is considered normal for your IoT network, the next stage is to watch for abnormal behavior by comparing patterns of communication between your devices, including traffic patterns, data transfer patterns, and traffic characteristics. If a device is used in a way that deviates significantly from the trained baseline (normal behavior patterns), the event is recorded as 'abnormal'.

When looking at traffic patterns, the machine looks for connections from unusual hosts or networks. For example, if you have always logged into your device from Austin and suddenly, one fine morning, you're logging in all the way from Russia, that would be considered an 'unusual' event.

Data transfer patterns - the retrieval or sending of data, or the making of requests - are also taken into account. In the CCTV camera attack described above, for example, malicious requests were flooding in and overloading the server. With this technology, we would be able to identify the type of threat, when it started, and where it was coming from.

The algorithms also look at traffic characteristics - the profile of where the traffic came from. A 'profile' is built from variables such as protocols, ports, the number of connections per unit of time, and many other measures. For instance, if your connections were arriving on a known trojan port such as 31337 or 666, that would be alarming (a simple sketch of these checks appears below).

But our technology doesn't simply stop at looking for statistical differences between observed behaviors. What sets us apart is our DeepNLP technology.
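Before turning to DeepNLP, here is a minimal Python sketch of the host, port, and connection-rate checks just described. The class names, trusted-host list, port list, and thresholds are illustrative assumptions, not SparkCognition's actual implementation.

    # Illustrative per-device traffic checks: unfamiliar hosts, suspicious ports,
    # and connection rates far above the learned baseline.
    from dataclasses import dataclass

    KNOWN_TROJAN_PORTS = {31337, 666}  # ports historically associated with malware

    @dataclass
    class TrafficEvent:
        src_host: str
        dst_port: int
        connections_per_min: float

    class DeviceBaseline:
        def __init__(self, trusted_hosts, max_conn_rate):
            self.trusted_hosts = set(trusted_hosts)  # learned during the training phase
            self.max_conn_rate = max_conn_rate       # learned upper bound on connection rate

        def flags(self, event: TrafficEvent):
            """Return the reasons an event deviates from the learned baseline."""
            reasons = []
            if event.src_host not in self.trusted_hosts:
                reasons.append(f"connection from unfamiliar host {event.src_host}")
            if event.dst_port in KNOWN_TROJAN_PORTS:
                reasons.append(f"traffic on suspicious port {event.dst_port}")
            if event.connections_per_min > self.max_conn_rate:
                reasons.append("connection rate far above the learned baseline")
            return reasons

    # Example: a camera that normally talks only to the home router
    baseline = DeviceBaseline(trusted_hosts={"192.168.1.1"}, max_conn_rate=10)
    event = TrafficEvent(src_host="203.0.113.77", dst_port=31337, connections_per_min=450)
    for reason in baseline.flags(event):
        print("ABNORMAL:", reason)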

What makes DeepNLP technology special?

DeepNLP bridges the gap between behaviors that are merely 'anomalous' and behaviors that are actually malicious. After recording 'unusual' behaviors in the previous step, instead of immediately firing off an alert to end users like most other technologies, we add an extra step to our data interpretation process to make the machine 'smarter'. We take the recorded alerts and pass them through our own proprietary, patented dynamic natural language processing (NLP) engine. The NLP engine predicts the likelihood that the anomalous event is 'malicious' in nature. If the recorded alert does not meet the threshold to be considered statistically significant, the machine does not send the user an alert: the event was 'anomalous', but it wasn't 'malicious'. This reduces false positives and provides a much better user experience. In essence, DeepNLP does much of what a human security analyst can do; a simplified sketch of this two-stage idea appears at the end of this article.

After DeepNLP runs, the algorithm generates a report and sends it to the user in a condensed, readable format that includes the name of the threat, supporting evidence with a description, and remediation information. But the learning process does not stop there. When the user receives the alert, they can vote on whether the event was normal or malicious (positive or negative). Our Artificial Intelligence and machine learning based platform then takes that input and continues to 'learn' from it, becoming more 'intelligent' over time.

More and more IoT apps and devices are being developed every day, and manufacturers have to keep up at a much faster pace than ever before. There is a real need to accelerate at every step to match the exponential growth of IoT devices. At SparkCognition, we're creating powerful and meaningful advances using Artificial Intelligence technology to help organizations evolve with the ever-growing complexity of safety and security.

This article originally appeared on SparkCognition. You can read the original post here.
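To make the two-stage idea above concrete, here is a simplified Python sketch: an anomaly is only surfaced to the user if a second scoring step rates it as likely malicious, and user feedback is recorded for further learning. The scoring function is a hypothetical stand-in, not the proprietary DeepNLP engine.

    # Two-stage alerting sketch: anomaly -> maliciousness score -> threshold -> user feedback.
    # All names, scores, and thresholds here are assumptions for illustration.

    ALERT_THRESHOLD = 0.5  # assumed cut-off between "anomalous" and "likely malicious"

    def maliciousness_score(anomaly: dict) -> float:
        """Stand-in for the second-stage scorer; returns a value in [0, 1]."""
        score = 0.0
        if anomaly.get("port") in {31337, 666}:
            score += 0.6
        if anomaly.get("foreign_login"):
            score += 0.3
        return min(score, 1.0)

    def handle_anomaly(anomaly: dict):
        """Suppress alerts for events that look anomalous but not malicious."""
        score = maliciousness_score(anomaly)
        if score < ALERT_THRESHOLD:
            return None  # anomalous but not judged malicious: no alert, fewer false positives
        return {
            "threat": anomaly.get("name", "unknown"),
            "evidence": anomaly,
            "remediation": "Block the offending host and rotate device credentials.",
            "score": score,
        }

    def record_feedback(report: dict, user_says_malicious: bool):
        """Placeholder for folding the user's vote back into future scoring."""
        print("feedback stored:", report["threat"], "malicious" if user_says_malicious else "benign")

    report = handle_anomaly({"name": "suspicious login", "port": 31337, "foreign_login": True})
    if report:
        print("ALERT:", report["threat"], "score:", round(report["score"], 2))
        record_feedback(report, user_says_malicious=True)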
