What financial services can learn from the Uber breach when preparing for GDPR


Date published: March 08, 2018

Dov Goldman of Opus outlined exactly how the concealed Uber breach came about, what GDPR would have meant for Uber, and how big businesses can prevent data breaches in the future.


Tell us about yourself.

My title is vice president of innovation but practically that makes me an entrepreneur in residence. I’m responsible for meeting the market and hunting down industry pain points that need addressing. I then look at how current technology is letting them down and devise new solutions with our engineering teams.

My focus at Opus has primarily been information security and the role of third parties in it. Information security and software have been at the forefront of my twenty-year career, and I'm bringing that experience to the same method of listening to the market and devising new solutions.

And what is the market telling you about pain points around GDPR?

This has been one of the biggest focuses, especially over the last six months. Information security, privacy and business continuity are all interlinked within the enterprise, and yet this legislation affects them all across many different departments. You might have attorneys looking after privacy and a Chief Information Security Officer looking after security, and they all speak different languages, so coordinating compliance is very difficult.

What can we learn from Uber’s data breach?

The Uber breach came about when a pair of attackers gained access to GitHub, a code-hosting service – this is the chief third party in this picture. Uber's software engineers were building a software application on GitHub, which is inherently a cloud service.

Here’s where it gets interesting. A lot of people use GitHub as a network for publishing and sharing what will then become open source code, so the GitHub community is much like an academic community where members build upon each other’s “repos” (repositories – GitHub’s software packages) or ideas. A coder can effectively update a former work, bring it into 2018 and re-publish it to the community. The Uber code was in a private area of GitHub – set apart from the public area but still very much a part of GitHub. The code written by Uber’s developers contained credentials that enabled those programs to access confidential Uber data.

There are a few problems here: you have very sensitive information protected by credentials that were loaded into source code housed on GitHub, which is designed for sharing, not security. That is a recipe for disaster, and it is exactly what happened. The attackers hacked GitHub, found the login details within Uber’s source code, and accessed data stored on a different cloud account, Amazon Web Services. They found personal data of Uber riders as well as 600,000 US drivers’ licence numbers. They then contacted Uber and asked for money, which Uber duly paid in an effort to conceal the breach.
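The hardcoded-credentials mistake at the heart of the breach is easy to illustrate. Below is a minimal Python sketch (the variable names and example key are hypothetical, not Uber's actual code): the anti-pattern embeds keys directly in source, where every clone of the repository exposes them, while the safer function reads them from the environment at runtime so they never land in the repository at all.

```python
import os

# Anti-pattern: credentials embedded directly in source code.
# Anyone who can read the repository can read the keys.
AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"   # hardcoded -- visible in every clone
AWS_SECRET_ACCESS_KEY = "wJalrXUtnFfEMI"     # hardcoded -- visible in every clone


def get_credentials_from_env():
    """Safer pattern: pull credentials from the environment at runtime,
    so they never appear in the repository at all."""
    key_id = os.environ.get("AWS_ACCESS_KEY_ID")
    secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
    if not key_id or not secret:
        raise RuntimeError("AWS credentials not configured in the environment")
    return key_id, secret
```

In production the environment variables would in turn be populated by a secrets manager or the deployment platform, not typed in by hand; the point is simply that the repository itself carries no secrets.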

Take this analogy: the builders of a bank left the blueprints of the vault (weak points and all) behind on the worksite, under a rusty padlock.

So who is responsible for the breach?

Given that the authorities haven’t allowed full access to the relevant information, this answer involves a degree of educated conjecture. Multiple parties were ultimately responsible for the breach. The first was the party that failed to sufficiently secure GitHub. The manager of the team of developers is responsible for including credentials within the source code.

Those are both pretty obvious, but the less obvious responsibility lies again with the developers who contravened information security protocol by having access to the production environment; developers shouldn’t be allowed access to production data but rather test data when building the software.

I’m going to prescribe more blame: rider and driver data should not have been siloed together. In the Target breach, point-of-sale data sat on the same network as other production systems; you need to separate them as a precaution. In Uber’s case, rider and driver data should not have been connected to other production areas but segmented out. The last criticism of Uber is: should that data even have been kept?

If this had happened post-GDPR, what would happen to Uber?

Under GDPR, Uber could have been fined 2% of total annual turnover (or €10m, whichever is higher). There are caveats with this particular article of the regulation, but the fact that they failed to report the breach (or demonstrate any attempt to report it) and subsequently covered it up means they would have been liable, as the data controller, for failing to report the breach.

GDPR isn’t in force yet, but other authorities have gone after Uber, including the Information Commissioner’s Office, which could fine them £500,000 for concealing the breach.

Uber is a young company with a yet to mature corporate structure. Is it fair to say the SLT made naive decisions?

If there was someone at Uber who had taken responsibility for privacy and cared about it, I would have to agree there is a degree of corporate immaturity. Uber’s tremendous business growth has not been matched by its corporate responsibility, and that’s true of many corporations that experience significant business growth.

Let’s not forget that we also hear about breaches from other well-respected companies, but the crux of the problem is how to focus on preventing the breaches.

How do you prevent data breaches on a corporate level?

There are many, many potential vulnerabilities within any organisation, especially when you involve third parties – and you need those third parties to operate at the level Uber does. Having said that, there are well-known best practices and standards that address third-party issues.

I’ll highlight two: ISO 27001, the most widely accepted global standard for information security management, and a US standard, the National Institute of Standards and Technology Cybersecurity Framework (NIST CSF), which resembles and draws on the former standard. These are the cookbooks of recipes to follow in order to protect data.

No one is expected to predict everything that might happen, but these standards have been built by the smartest experts in the field into a to-do list for information security. If those practices are implemented, you’ll see a significant defensive effect.

These standards should also be applied to any third parties. This means someone should keep an inventory of all third parties and evaluate their privacy and security practices. I went back and applied the NIST standard to Uber and, according to that standard, the mistakes were:

a) Failing to evaluate GitHub’s security. This violates “access permissions are managed, incorporating the principles of least privilege” – meaning that access to data of any level of sensitivity is granted with only the least privilege necessary.

b) The credentials to access other Uber assets were included in the code. This violates “identities and credentials are managed for authorised devices and users”. The idea here is that you must manage those credentials within the business.

c) The developers had access to the production environment. This violates “access permissions are managed, incorporating separation of duties” – it should have been a separate team.

d) They archived the driver and rider data in the same network area. This violates “the network is protected, ensuring network segregation where appropriate”.

e) Retaining the data. NIST prescribes that “data is destroyed according to policy”.

The point here is that if the NIST standard had been applied, the breach would most likely have been avoided.
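Part of control (b) can even be automated: code can be checked for embedded credentials before it ever reaches a shared repository. As an illustration only (a real deployment would use a dedicated scanner such as git-secrets or truffleHog, and this regex covers just one credential format), here is a minimal Python sketch that flags AWS-style access key IDs in source text:

```python
import re

# AWS access key IDs follow a well-known pattern: "AKIA" followed by
# 16 upper-case letters or digits. Real scanners check many more
# patterns; this minimal sketch covers only this one format.
AWS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")


def find_hardcoded_keys(source: str) -> list[str]:
    """Return any strings in `source` that look like AWS access key IDs."""
    return AWS_KEY_PATTERN.findall(source)
```

Wired into a pre-commit hook or a continuous-integration step, a check like this would have refused the commit that embedded the credentials in the first place.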

This is essentially a kill chain: if you follow all of those rules, you kill the access to the data. If any two of these had been implemented, you probably would have stopped the hack in its tracks.

Third-party breaches aren’t rare either, unfortunately. A report that we were involved in found that 56% of data executives had experienced a third-party breach. This simply reinforces the fact that you need to focus on applying these standards to your third-party relationships.

In the cyber arms race, is there any way that organisations can proactively defend against the cyber attacks of 2018?

Firstly, I fully believe that this is a war in which we’ll never be 100% defended, but with every attack we find mistakes that mean the standards weren’t fully implemented. Far smarter people than me have come to the conclusion that, for all the sophistication and innovation of the hackers, they still operate in patterns, so the kill chain prescribed by the standards is still very effective.

The first step is to know your data flows and transactions and build your defences around them, but that implicitly includes third parties too.

I don’t think you’ll see a single dominant type of attack. We’re starting to see a new type of machine-learning-driven attack that learns what normal looks like within a particular company and imitates it to hide the penetration. But big businesses, and particularly financial institutions, have state-of-the-art defences and more or less implement these standards, so it’s not hard to predict that hackers will go for the soft underbelly of the beast: the third parties.


This article originally appeared on GTNews’ sister publication, bobsguide, which is hosting a webinar titled 100 day countdown to GDPR: Are you ready? on March 7. Register for this free guide to GDPR compliance today.
