Industry 4.0 is here, and smart factories are at the heart of this transformation. We’re now using artificial intelligence to create the factories of the future: optimising production, reducing bottlenecks, and increasing productivity and speed. The machines we’re creating are able to communicate and make decisions on their own. While this improves the manufacturing experience, it also introduces challenges in privacy and security.

Last week at the AI in Industrial Automation Summit in San Francisco, experts in the field joined us for the panel ‘Industry 4.0 & Cybersecurity: How do we safeguard our data and manage risk?’ On the panel were our moderator Abhishek Gupta from McGill University and District 3, Andy Grotto from the Hoover Institution, and Jeremy Marvel from NIST, who shared their expertise in the discussion.

What did we learn in the discussion?

Abhishek: What are your opinions on the current landscape of cybersecurity, and how is the industrial landscape affected?

Andy: There’s a longstanding divide between the IT world and the industrial world, and we’re only starting to see the two come together. We have companies that are only just realising that they’re part of the IT world. I saw this in my time on the National Security Council at the White House, and this division needs to come down. It reflects a major gap in how we collectively think about security risks: if people don’t even realise they’re at risk of cybersecurity breaches, that risk is very hard to manage. In my view, security, safety, and privacy act as the three legs of a stool: inseparable. If one isn’t present, the whole thing collapses.

Jeremy: From a manufacturing perspective, one area where issues lie is a battle over key terms. The advent of the industrial internet of things means more data, more sensors, and more elements in general. There’s a conflict of interest here: some people want everything that can be measured to be measured and pushed onto the cloud, but people want security too, and you can’t fully have both. Trying to push data out to the cloud from one side while keeping intrusions out on the other gets messy. There’s a huge amount of privacy tied up in this, both personal privacy - in companies with lots of data, you don’t want workers negatively impacted - and trade ‘secrets’ that other companies could use to gain an advantage. The issue is that no one is looking at how the huge amounts of data being pushed out could be damaging.

Abhishek: Absolutely, there’s definitely a conflict of interest. I read an article recently that said ‘you can’t spell idiot without IoT’, which gives you an idea of where things stand. Moving on: embedded devices come with limited computing power. How can we manage risk on those sorts of devices? If your approach to security is to make everything robust, that requires resources.

Andy: There’s no short answer to this. Risk management is about weighing costs against benefits. A baby monitor has one set of security and privacy expectations; an industrial control system in a power plant has quite another. It’s all about managing risk. We’re seeing a debate on whether product liability should apply to IoT, and I think we’ll see significant changes in the liability landscape: first in Europe, then in the US and China.

Jeremy: There’s another question: where in the chain of communication do you put the cybersecurity? Some systems don’t have the computational power to handle the security measures, so where does the line of security go? The PLC level? The fleet management level? Everyone wants someone to point to and say ‘you’re responsible for that security breach’.

Andy: And that gets to the notion that it’s as much a government issue as it is a technology one. We have members of Congress talking about Silicon Valley, which is a sign that politics have changed. I teach a class at Stanford on cybersecurity law, and I put up Apple’s terms of service for the iPhone, with its disclaimers of responsibility and liability. These Stanford students were floored by what Apple can do. When you combine that attitude with the changes in Washington, it makes me think we’ll see a federal privacy law come in soon. One of the artifacts of GDPR is that the big, sophisticated global companies will figure it out: a few will get hit hard, but most will find a way through and use it as a source of competitive advantage, because competitors can’t invest as much in lawyers as they can. That changes the underlying privacy issues considerably, and we’re going to see a lively debate start in Congress over what an American federal privacy statute looks like.

Jeremy: I certainly hope it’s a debate. 2015 was a bad year for hacks: if you ever registered to vote in the US, your information was stolen; if you applied to work in the government, your information was stolen. This came down to lax cybersecurity - in some places security was in place, it was just old. The government responded, took action, and cybersecurity standards were created with extensive requirements: passwords of 15 characters mixing uppercase, lowercase, and special characters. That made passwords hard for people to guess, but easy for algorithms to figure out, because the requirements themselves were public information. If you have to change your password every 90 days, there will be patterns. People used keyboard walks, which are very easy to crack.
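
Jeremy’s point is easy to demonstrate. As a minimal sketch (not code shown at the panel; the policy parameters and mangling rules are illustrative assumptions), the snippet below enumerates keyboard-walk passwords that technically satisfy a 15-character complexity rule, then compares that tiny candidate pool with the nominal keyspace the policy appears to provide:

```python
# Minimal sketch: policy-compliant keyboard walks are trivially enumerable.
# The suffixes and policy parameters below are illustrative assumptions.
import math

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def keyboard_walks(length):
    """Yield straight left-to-right and right-to-left walks along QWERTY rows."""
    for row in ROWS:
        for seq in (row, row[::-1]):
            for start in range(len(seq) - length + 1):
                yield seq[start:start + length]

# Common manglings that tick the uppercase/digit/special-character boxes.
SUFFIXES = ["2015!", "1234!", "123!!", "!@#$%"]

# 10-character walk + capitalised first letter + 5-character suffix = 15 chars.
candidates = {walk.capitalize() + s for walk in keyboard_walks(10) for s in SUFFIXES}

nominal_keyspace = 94 ** 15  # 15 chars drawn from ~94 printable ASCII symbols
print(f"pattern-based candidates: {len(candidates)}")               # a handful
print(f"nominal keyspace: ~10^{math.log10(nominal_keyspace):.0f}")  # ~10^30
```

A handful of guesses versus a keyspace of roughly 10^30: the policy’s nominal strength says nothing about how people actually choose passwords, and a cracker who knows the (public) policy searches the human-shaped corner of the keyspace first.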

Abhishek: If we start to think about things like passwords, we’re moving into an era of biometrics, like fingerprints. Once that information leaks, it’s leaked forever. You can’t change your fingerprints.

Andy: That’s why I think biometrics is an awful idea.

Abhishek: The centralisation of credentials becomes a single point of failure.

Andy: Who’s more effective at keeping a password safe? Is it a password manager, a post-it note, a keyboard walk? For most people, any of these is a rational, appropriate way to keep things ‘safe’.

Abhishek: Going back to competitive advantages, what about small companies that lose out?

Andy: It becomes a business risk they have to manage. Big companies can afford things that smaller companies can’t. The flipside is that there are small companies that collect huge amounts of data and want to protect it, so it’s not just small vs. large. And there’s no baseline federal privacy law.

Jeremy: The more there is to adhere to, the more likely small companies are not to adhere, because it becomes a burden. There’s no such thing as actions without consequences.

Abhishek: When rolling out a new product comes with cybersecurity risk, where do we draw the line?

Andy: A lot of companies get breached. If you analyse stock price variation, some companies show a sustained dip after a breach, but most don’t, which is strange; we can’t draw any general conclusions. Apparently the average cost of a data breach is $7.53 million, which is more than the ‘average’ company’s revenue, yet for other companies it’s more like a $20,000 loss. There’s a huge spread, so the data isn’t easy to analyse.

Jeremy: Large companies are more aware of how much their data is worth. There’s also the issue of protecting trade secrets: do you trademark, do you patent? That’s an issue too.

Abhishek: What are some of the potential consequences of using AI from a cyber offensive perspective?

Andy: We don’t fully appreciate the consequences yet, but I hesitate to speculate too much.

Jeremy: For us, we do a lot in cryptography, making sure everything is as secure as possible by encrypting information. As technology advances, the prospect of AI being able to decrypt and extract that information is a real challenge. How do we protect against it? I have no clue! AI is getting very good at finding patterns, so the whole security-through-obscurity mentality is starting to fall apart. You can try to be as obscure as you want, but patterns occur.
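
To make the security-through-obscurity point concrete, here is a minimal sketch (purely illustrative, not something discussed on the panel) in which a ‘secret’ letter-substitution scheme is broken by nothing more sophisticated than letter-frequency statistics - exactly the kind of pattern-finding Jeremy describes:

```python
# Minimal sketch: obscurity (a Caesar shift) falls to simple pattern-finding.
# The plaintext and frequency table are illustrative assumptions.
from collections import Counter

# Approximate relative frequencies of the most common English letters.
ENGLISH_FREQ = {
    "e": 12.7, "t": 9.1, "a": 8.2, "o": 7.5, "i": 7.0, "n": 6.7,
    "s": 6.3, "h": 6.1, "r": 6.0, "d": 4.3, "l": 4.0, "u": 2.8,
}

def caesar(text, shift):
    """Shift each letter by `shift` positions; leave other characters alone."""
    return "".join(
        chr((ord(c) - 97 + shift) % 26 + 97) if c.isalpha() else c
        for c in text.lower()
    )

def englishness(text):
    """Score how closely the letter distribution resembles English."""
    counts = Counter(c for c in text if c.isalpha())
    return sum(ENGLISH_FREQ.get(c, 0.0) * n for c, n in counts.items())

ciphertext = caesar("the plc firmware update is scheduled for monday", 11)

# No key needed: try all 26 shifts and keep the most English-looking result.
shift = max(range(26), key=lambda s: englishness(caesar(ciphertext, -s)))
print(shift, caesar(ciphertext, -shift))  # recovers 11 and the plaintext
```

The lesson is the one Jeremy draws: hiding the scheme buys nothing once an attacker can search for statistical structure, and modern machine learning automates exactly that search at far greater scale.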

Interested in learning more about privacy & security and how they’re impacted by AI? Watch the whole panel discussion here. Are you, or do you know, someone working in privacy & security in AI? Recommend them as a contributor to our upcoming white paper.