BUPA suffers an incident

News stories reinforce the reality of privacy incidents. Today I plan to discuss a story from The Register, one of my favorite tech sites. The article is short and well worth reading. Don’t miss the comments, which raise interesting points.

Bupa: Rogue staffer stole health insurance holders’ personal deets

BUPA, a UK health insurance company, suffered a leak of more than 100,000 customer records, including phone numbers and email addresses, due to the actions of an employee.

In its breach notification statement, Bupa global managing director Sheldon Kenton said the breach was “not a result of cyber attack or external data breach, but a deliberate act by an employee”.

Note first that a breach notification was filed. In the past, some companies may have been able to hide such embarrassing incidents; under GDPR, breach notification is required. Note also the favorable mention that BUPA receives for its handling of the notification:

Data privacy watchdogs at the Information Commissioner’s Office confirmed they were looking into the incident. It is all rather unfortunate but Bupa ought to be credited with handling the breach notification and (by the looks of it) incident response process promptly and professionally.

A data incident may be the result of bad luck in spite of best efforts, but there is no excuse for a substandard response.

Note to self: Outline an incident-response workflow and checklist for responding to possible event types; records of known incidents are a good place to start. While you’re at it, throw in incidents due to accidents or carelessness, such as donating computers to charity stores without clearing their data.

Cyber attacks get a lot of press coverage, and are dramatized in movies and television shows, enabling us to imagine them vividly and thus to consider them a likely occurrence. Insider misconduct is, in my experience, less vivid and even less pleasant to think about. For one thing, we need to trust our co-workers in order to be productive. We also want to like those we know, and it’s uncomfortable to imagine someone we work with as being untrustworthy.

My point is not that we need to stop trusting people, but that our systems should be designed, as much as possible, to minimize the damage that any single person can cause. (I am reminded here of some old movies where the characters are in a nuclear-war situation, and firing the missiles requires two or three people acting together. Such a system is designed precisely to prevent any one person from going rogue and firing the missiles.)
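The two-person rule is easy to sketch in code. The example below is a minimal, hypothetical illustration (the class and the `export_records` action are invented, not from any real system): a sensitive operation refuses to run until two distinct people have approved it.

```python
# A minimal sketch of a "two-person rule": a sensitive action runs only
# after a required number of DISTINCT approvers have signed off.
# All names here are illustrative, not from any real system.

class TwoPersonGate:
    def __init__(self, required: int = 2):
        self.required = required
        self.approvers: set[str] = set()   # a set, so duplicates don't count

    def approve(self, user: str) -> None:
        self.approvers.add(user)

    def execute(self, action, *args):
        if len(self.approvers) < self.required:
            raise PermissionError(
                f"need {self.required} distinct approvals, "
                f"have {len(self.approvers)}"
            )
        return action(*args)

def export_records(n: int) -> str:
    # Stand-in for a dangerous bulk operation.
    return f"exported {n} records"

gate = TwoPersonGate()
gate.approve("alice")
gate.approve("alice")          # approving twice is ignored: same person
try:
    gate.execute(export_records, 100)
except PermissionError as e:
    print(e)                   # still blocked with one distinct approver
gate.approve("bob")
print(gate.execute(export_records, 100))
```

The key design point is that approvals are keyed by identity, so one rogue individual cannot satisfy the requirement alone.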

If you look around your own organization you will likely find many roles (administrators, programmers, analysts) who have access to a lot of data, sometimes even entire databases. You yourself may have such access. This may be because users have more privileges than they need, or because data is left lying around (e.g., in staging areas, backups, or shared filesystems). Though I have no global answer to this problem, I hope in future posts to provide examples of how to reduce its scope.
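One small way to reduce that scope is to project data down to the columns a given role actually needs, rather than handing out whole tables. The sketch below is purely illustrative (the table, roles, and column names are invented) of that least-privilege idea:

```python
# Sketch of least-privilege access: expose to each role only the columns
# it needs, instead of the whole customer record. Data and role names
# are invented for illustration.

CUSTOMERS = [
    {"id": 1, "name": "A. Jones", "email": "aj@example.com",
     "diagnosis": "redacted-condition"},
]

ROLE_COLUMNS = {
    "marketing_analyst": {"id", "email"},            # no names, no health data
    "claims_processor": {"id", "name", "diagnosis"}, # no contact details
}

def fetch(role: str):
    allowed = ROLE_COLUMNS[role]
    # Project each row down to the columns this role is entitled to see.
    return [{k: v for k, v in row.items() if k in allowed}
            for row in CUSTOMERS]

print(fetch("marketing_analyst"))
```

In a real database the same effect is usually achieved with views and grants; the principle is identical: a compromised or rogue account can only leak what its role could see.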

Security measures such as data loss protection are designed to prevent data from being leaked or stolen. If they were in play, and properly configured, they ought to have stopped a rogue staffer from uploading sensitive information to the net or emailing it out. USB ports arguably ought to be disabled at a health insurance provider but that still leaves the possibility of more stealthy data extraction methods. DLP tech is thus far limited to prevent accidental data leaks and unimaginative data thieves only. (emphasis added)

The author doesn’t make it clear whether there was anything BUPA could have done, given current technology, to prevent this leak. The USB-port route is mentioned, as are email and uploads to the internet (cloud), but is that how the leak was carried out? On this detail the article is silent.

Even if the leak is promptly detected, detection is not prevention; the data may already be in the wild. But detection at least enables us to notify the authorities and data subjects, and to close whatever door was used to extract the data.
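To make the detection idea concrete, here is a toy sketch of the scanning side of DLP: flag outbound text that appears to contain personal data. The patterns are deliberately simplistic (real DLP products use far richer fingerprinting), and the function name is my own invention.

```python
import re

# Toy illustration of DLP-style detection: scan outbound text for
# patterns that look like personal data (email addresses and UK-style
# phone numbers) and report what was found. Real products go far
# beyond simple regexes; this only shows the shape of the idea.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b")

def flag_outbound(text: str) -> list[str]:
    findings = []
    if EMAIL.search(text):
        findings.append("email address")
    if PHONE.search(text):
        findings.append("phone number")
    return findings

print(flag_outbound("Contact j.doe@example.com or 01632 960123"))
# -> ['email address', 'phone number']
```

Note that this is detection after the fact: by the time the scanner fires, the message may already be on its way, which is exactly the detection-versus-prevention gap discussed above.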

As for “properly configured”: that seems inconsistent with the author’s later suggestion that the state of the art may not be keeping up with the threats. If there was a breach, something imperfect can likely be found, but does that help us? One problem here is that errors look obvious in hindsight (the way code bugs look obvious after they’re found), so it will be easy for others to say “well, clearly you should have done X”.

The mention of “unimaginative data thieves” is a reminder that there are clever data thieves who are able to carry out their attacks and go undetected.

Finally, I’d like to draw your attention to this section of the article:

Mark James, security specialist at infosec firm ESET, warned that the breach exposes Bupa customers to the risk of more convincing phishing scams that might be crafted using the leaked data.

Marco Cova, senior security researcher at cybersecurity firm Lastline, added: “Unfortunately, the data revealed from this breach is the type that criminals can use to launch additional attacks. They merge data from multiple sources, building dossiers on potential victims, including spear phishing targets. The information that they gather does not have to be highly confidential in order to create successful attacks. Data breaches provide a distribution hub for malware for years to come.”

You got that? Even PII with little or no value by itself is still an asset: it can serve as the launchpad for scams, or can be saved and combined with other fragments of data to build profiles on potential victims.
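The merging Cova describes is trivially easy. A sketch, using invented data: two separate leaks, each fairly harmless on its own, joined on a shared key (here, the email address) into a fuller profile.

```python
# Sketch of how fragments from separate leaks combine into a profile.
# Two invented leak dumps share one key: the victim's email address.

leak_a = [{"email": "jo@example.com", "phone": "01632 960123"}]
leak_b = [{"email": "jo@example.com", "employer": "Acme Ltd"}]

def merge_by_email(*leaks):
    profiles: dict[str, dict] = {}
    for leak in leaks:
        for record in leak:
            # setdefault creates an empty profile on first sight of a key,
            # then each leak's fields are layered onto it.
            profiles.setdefault(record["email"], {}).update(record)
    return profiles

print(merge_by_email(leak_a, leak_b))
```

After the merge, one record holds the email, phone number, and employer together: plenty of raw material for a convincing spear-phishing message, even though no single leak contained anything “highly confidential”.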
