Coping with inherent uncertainty, part 1

This week’s post is aimed at dealing with the unavoidable uncertainty of processing private data. No matter how careful you are, you can always have a problem. No matter how much time and money you spend on precautions, you never have certainty that everything is secure. If you can’t guarantee perfection, then what will be considered good enough?

It would seem straightforward to consider all threats, the likelihood and impact of each one, and then to choose the most cost-effective measures to prevent them. Unfortunately, it is not straightforward.

One problem is that the likelihood of most recognized threats cannot be usefully estimated. Add to this the possible emergence of new threats during your target time window. We don’t know the probabilities; we don’t even know all the threats. What follows is a suggested framework for making decisions and trade-offs in the face of uncertain (and even missing) knowledge.

A second difficulty is that cost and effectiveness are not readily apparent. I will argue in this (and future) posts that privacy assurance is not a state that you reach, but rather a system, a set of ongoing processes, that are rationally calculated to reduce the probability of adverse events, given such constraints as resources, budget, and development time. This may be the best that you can do.

The risks are multi-dimensional

The potential adverse impacts of personal-data processing on data subjects include:

  • embarrassment, shame, humiliation
  • loss of employment or future employment prospects
  • becoming a crime victim (e.g., of a stalker or identity thief)
  • suffering exclusion, discrimination
  • and many others

Potential impacts on you, as data controller or processor, include (starting with the easiest to estimate):

  • fines
  • damage payments to data subjects
  • business interruption (e.g., if a major problem causes a loss of data, or forces you to suspend processing)
  • lost customers
  • lost reputation

Some risks can be estimated, some cannot

GDPR fines are proportional and have an upper limit; court-awarded damages are also likely to be in proportion to your ability to pay. Neither fines nor legal damages are likely to bankrupt your concern, as that would quickly make privacy enforcement a political problem (link).

There is no upper limit, on the other hand, for intangible risks, such as business interruption, loss of customers, and damage to reputation. These losses could indeed put you out of business. Not only that, these losses are not necessarily visible; you don’t see the customers who went elsewhere because of your poor reputation.

Much information that would help us estimate privacy risk is simply absent, a problem that risk-philosopher Nassim Taleb refers to as “silent evidence” (link). Lost customers are but one example. Another uncertainty is how useful reports of attacks and breaches are for estimating risk; they tell us what has been reported, but they say little about the breaches that went unreported or even undetected.

High-impact, low-probability: a cognitive blind spot

The foregoing has listed risks, many of which have an unknown probability of occurrence and an unknown worst-case impact if they do occur. People have difficulty making rational decisions in such circumstances, as asserted by Prospect Theory (link) and by Daniel Kahneman’s work generally (link). This helps to explain why people purchase lottery tickets or single-trip flight insurance, purchases whose price is vastly greater than their probabilistic (expected) value. In such cases we unconsciously resort to a heuristic, such as the Availability Heuristic (link), which says that we rate as more likely those risks we can readily imagine (that is, risks that are mentally available).
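To make “price vastly greater than the probabilistic value” concrete, here is a back-of-the-envelope sketch; the ticket price, jackpot, and odds are invented for illustration and do not come from any real lottery:

```python
# Hypothetical lottery figures, purely illustrative.
ticket_price = 2.00                 # what you pay
jackpot = 1_000_000.00              # what you might win
p_win = 1 / 10_000_000              # assumed odds of winning

expected_value = p_win * jackpot    # probabilistic value of the ticket (here, $0.10)
print(f"price ${ticket_price:.2f} vs expected value ${expected_value:.2f}")
```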

We can readily imagine, to continue the example above, winning the lottery or being in an airplane crash, so the risks are more vivid and (unconsciously) credible. In my own experience I have noticed, for example, a markedly more diligent attitude toward data backup among colleagues, whether business or technical, who have at any time in their careers experienced a data loss.

Decisions as a series of options

My point here is that, faced with unknown risks, we resort to non-rational bases for decisions. Better to adopt a mental framework that enables us to think in a more rational and structured way about the decisions we must make.

One way to do this is to imagine each decision as a market option (link). If you purchase an option to buy, say, 1 share of IBM at $100 per share, you have the right (but not the obligation) to buy one share for $100 until the option expires, after which it has no value. If you buy an option, you pay for someone else to take the opposing risk, in this case the risk that the price of IBM will rise above $100. If you sell an option, it’s the reverse: someone pays you to take a risk.

Each option has a cost for the buyer, which becomes income (called the premium) for the seller. Depending on the price movement of the underlying asset (in our example above, 1 share of IBM), the parties to the option, the buyer and the seller, can receive negative, positive, or zero payoffs. We can use the concept of options as an analogy to help us structure and model our decisions about privacy risk.
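Here is a minimal sketch of that payoff structure, assuming a hypothetical premium of $5 on the $100 IBM option above; the premium and the closing prices are made-up figures for illustration:

```python
def call_buyer_payoff(price_at_expiry: float, strike: float, premium: float) -> float:
    """Payoff to the buyer: the right, but not the obligation, to buy at the strike."""
    # Exercise only if the market price is above the strike; otherwise the
    # option expires worthless and only the premium is lost.
    return max(price_at_expiry - strike, 0.0) - premium

def call_seller_payoff(price_at_expiry: float, strike: float, premium: float) -> float:
    """Payoff to the seller, who keeps the premium but takes the opposing risk."""
    return premium - max(price_at_expiry - strike, 0.0)

# Hypothetical $5 premium on a 1-share IBM option struck at $100.
for final_price in (90.0, 105.0, 120.0):
    print(final_price,
          call_buyer_payoff(final_price, strike=100.0, premium=5.0),
          call_seller_payoff(final_price, strike=100.0, premium=5.0))
```

With these numbers the buyer’s payoff is negative at $90, zero at $105, and positive at $120, and the seller’s payoff is always the mirror image.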

For example, suppose you purchase equipment to protect your systems from being damaged by lightning. Further suppose that this protective equipment has a lifetime of 10 years. What you paid for the equipment is your cost. This is similar to insurance, with the exception that this strategy aims at prevention, while insurance offers compensation.

If lightning strikes and the equipment in fact prevents your systems from being damaged, you have a payoff equal to the value of your systems, plus any related costs, such as loss of data or loss of customers. Your option can be said to have paid off. But it’s not as if you made a cash profit; rather, you avoided losses.

If 10 years pass and lightning never strikes, or the equipment fails to protect your systems, then your option did not pay off and its cost is lost. Note that I don’t say the cost was wasted; it’s a matter of how you think about it. For example, insurance that is purchased but never needed represents the same kind of option; in fact, portfolio managers speak of ‘buying insurance’ for their portfolios when they buy options that will pay off in the event that their chosen investment strategy loses money (a practice called risk hedging).

What if, on the other hand, you decide that the risk of lightning damage is too remote, or that the protective equipment is too expensive, and so you do nothing? In this case you have, in effect, sold an option: the amount you save by not buying the equipment is your premium for selling it (that is, saving the money is equivalent to being paid). If lightning never strikes, your strategy has a positive payoff. If lightning does strike and you suffer losses, you can think of this as a negative payoff.
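To carry the analogy through, here is an equally rough sketch of the two choices, buying protection versus doing nothing; the equipment cost and potential loss are hypothetical figures, not estimates from any real case:

```python
# Hypothetical figures, for illustration only.
EQUIPMENT_COST = 5_000      # the "premium": what the protective equipment costs
LOSS_IF_STRUCK = 250_000    # value of your systems plus related losses (data, customers)

def buy_protection_payoff(struck: bool) -> float:
    # Buying the option: you always pay the premium; if lightning strikes,
    # the payoff is the loss you avoided.
    avoided_loss = LOSS_IF_STRUCK if struck else 0.0
    return avoided_loss - EQUIPMENT_COST

def do_nothing_payoff(struck: bool) -> float:
    # Selling the option: the money saved is your premium, but you bear
    # the full loss if lightning strikes.
    loss = LOSS_IF_STRUCK if struck else 0.0
    return EQUIPMENT_COST - loss

for struck in (False, True):
    print(f"strike={struck!s:5}  buy protection: {buy_protection_payoff(struck):>9,.0f}"
          f"  do nothing: {do_nothing_payoff(struck):>9,.0f}")
```

Whether the premium is worth paying depends on the probability of a strike over those 10 years, which, as argued above, is often precisely what we cannot estimate.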

In fact, we can think of our entire privacy-protection effort (as well as life’s big decisions) as a series of options. This approach has several advantages:

  • it provides a common framework for considering many types of risk from a variety of domains
  • it serves to remind us that, once a risk is identified, by doing nothing we are still, in effect, making a choice
  • it gives us a way to structure an approach to what is necessarily guesswork; we can avoid both taking large risks for small gains (a kind of reverse lottery) and neglecting effective measures that carry little or no cost
  • it provides a structured method which can be documented as evidence not only of risk mitigation, but also of privacy by design (after all, you are attempting to allocate your privacy efforts in the most effective way)
  • it provides a way to justify why we chose to hedge one risk and not another

For heaven’s sake, cover the simple stuff

In the category of defenses that cost little but can help a lot are simple, common-sense security measures, such as one might see in the first chapter of a text on system security. Because the cost is low, we should see these opportunities as cheap options, as bargains.  Some examples of simple measures include restrictions on:

  • use of portable media (such as mobile phones and USB sticks or drives) for storing protected personal data (link, link)
  • use of bluetooth communication between office and personal devices (link)
  • storing of personal data on laptops (which are frequently lost or stolen)
  • sending of personal data (typically as attachments) via email; send links to shared repositories instead
  • sending data to a third party (such as a data processor) without both encryption and a secure channel (link)
  • insecure disposal of data-carrying waste (PCs, disks, printouts)
  • use of publicly-accessible email services

The main cost of simple measures like these is inconvenience; in other words, these are free (or very cheap) options. They are like finding a lottery ticket on the ground; even if you think that winning is highly improbable, the ticket is still worth picking up, given that it’s free.

Not only are these measures low cost; if neglected, their very simplicity will make any resulting failure appear especially negligent. After all, any of us may fall victim to a resourceful internet hacker, but anyone can understand that USB keys get lost, laptops get stolen, and so forth. In option-speak, you will have an enhanced negative payoff (reputation damage) due to Hindsight Bias (link).

 
