A basic principle most people don’t understand about risk
Almost everybody makes a fundamental error when it comes to assessing a risk (what might happen).
It doesn’t matter whether they are using a heat map, a risk register, or a risk profile.
They show the level of risk as a point: the likelihood of a potential impact or consequence.
But 99% of the time this is wrong.
99% of the time, there is a range of potential consequences, each with its own likelihood.
Even if you ignore the fact that an event, situation, or decision more often than not has multiple consequences, anybody trying to understand risk and its effect on objectives needs to stop presenting the level of risk as a point.
This was brilliantly illustrated in the Ponemon Institute’s latest report on cyber. Their 13th Cost of a Data Breach Study (sponsored by IBM) is an excellent read. It has a number of interesting findings that I will discuss in a separate blog.
The content that is relevant to this discussion is a graphic that shows the range of potential consequences from a cyber breach. Their graphic shows the likelihoods of having anywhere from 10,000 to 100,000 records stolen. (They separately discuss the cost of what they call a ‘mega breach’, when more than a million records are stolen.)
Using their number for the average cost to the business (across all sectors and geographies) of the loss of a single data record, I created the graphic below. (The probabilities are for the next 24 month period.)
As you can see, in their estimation a cyber breach can result in a loss that is anywhere from $1.5 million to $14.8 million. (The losses suffered by organizations in the medical sector are about triple that amount). They can extend to $350 million for the very few who have 50 million records stolen.
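For readers who want to see how such a chart is built, here is a minimal sketch. The $148 average cost per record is implied by the figures above ($14.8 million ÷ 100,000 records); the likelihoods in the table are illustrative placeholders, not Ponemon's actual data.

```python
# Sketch of the loss-range chart described above. COST_PER_RECORD is implied
# by the article's figures ($14.8M / 100,000 records); the probabilities are
# illustrative placeholders, not the Ponemon study's numbers.
COST_PER_RECORD = 148  # USD, average across sectors and geographies

# (records stolen, illustrative probability of a breach of at least
#  this size over the next 24 months)
scenarios = [
    (10_000, 0.279),
    (25_000, 0.150),
    (50_000, 0.080),
    (100_000, 0.040),
]

for records, prob in scenarios:
    loss = records * COST_PER_RECORD
    print(f"{records:>7,} records: ~${loss / 1e6:.1f}M loss, {prob:.1%} likelihood")
```

Plotting loss against likelihood for each row gives the range, rather than a single point, that the graphic shows.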
If this is reality, which point do you select to put on a heat map or risk profile?
If you want people to make intelligent and informed decisions relating to this risk, they have to understand the full picture. That picture starts with a chart that shows the range of potential consequences. Ideally, it shows how they might affect enterprise objectives.
What is an acceptable level of risk? For certain it’s not an ‘amount’, as preached by COSO. I talk about an acceptable likelihood of achieving your objectives.
But let’s just focus on this graphic for now.
Is the range of potential consequences and their likelihood acceptable?
Are there any individual points in the range that are unacceptable?
Does it make sense to use techniques like Monte Carlo to replace a chart with a single number?
How do you provide actionable information that enables intelligent and informed business decisions?
I welcome your comments.
For more of my views on risk management, consider my best-selling books:
- World-Class Risk Management for practitioners, and
- Risk Management in Plain English: Enabling Success through Intelligent and Informed Risk-Taking for board members and executives
I work with a firm that counts 19 of the 20 largest banks as subscribers, helping them quantify cybersecurity risk exposures by business process. It's the only way to have intelligent business discussions with the board, C-suite, and stakeholders.
Hi Norman,
Thanks for another great piece on IA/RM.
If one goes by your suggestion, wouldn't you think that the data will get overly complicated, which may hinder the understanding of the risk and consequently the ability of managers, executives, and those charged with governance to take informed and appropriate actions? Maybe just showing a weighted average of all scenarios could be more useful – after all, the risk assessment data is a compilation of thoughtful guesses.
Good point. I think it will all depend on the nature of the decision that has to be made. A weighted average is less meaningful than a calculation of the area below the curve, such as Monte Carlo can produce. But unless there are a great many risks, I would discuss all of them using a range and a chart.
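To illustrate (with hypothetical parameters, not the Ponemon figures) why a single weighted average hides the shape of the loss curve, here is a quick Monte Carlo sketch. The lognormal distribution and its parameters are my assumptions, chosen only to span roughly the $1.5M–$15M range discussed in the post.

```python
# Minimal Monte Carlo sketch: sample a hypothetical loss distribution and
# compare the single weighted average (mean) with points on the curve.
import random

random.seed(42)

# Assume breach losses roughly follow a lognormal distribution; mu and sigma
# here are illustrative, not fitted to any real data.
samples = sorted(random.lognormvariate(15.0, 0.7) for _ in range(10_000))

mean_loss = sum(samples) / len(samples)
p50 = samples[len(samples) // 2]
p95 = samples[int(len(samples) * 0.95)]

print(f"Weighted average (mean): ${mean_loss / 1e6:.1f}M")
print(f"Median (p50):            ${p50 / 1e6:.1f}M")
print(f"Tail (p95):              ${p95 / 1e6:.1f}M")
```

The mean alone would report one number, while the p95 tail, which may be the point that is unacceptable, can be several times larger.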
James Lam also commented in a WSJ article this week on the need to quantify cybersecurity risk exposure. If you want to discuss further, send me a good time to explore. Mike
Thanks, Mike. I saw the piece and recommend it to all. But even James doesn’t talk about a range.
Hi Michel, do you really believe that complicating the risk assessment to that level brings forth better results than a heat map/ risk register?
The said technique could improve the cosmetics of a consulting firm's deliverables (after all, they sell these beautifully crafted charts), but the value it creates is something I have not been able to grasp. Maybe there is a lot I need to learn from the experienced professionals 🙂
If you are serious, I am partnered with a firm that quantifies cybersecurity risk exposures by business process. It is a private firm backed by one of the top-10 wealthiest people (not named, and I do not care). Not sure of your interest in this space, but happy to chat.
This curve demonstrates that the outcomes of risks are ranges rather than fixed numbers with some fixed likelihood. That is the key reason heat maps are wrong, to the extent of being dangerous.
We do need tools, and Monte Carlo simulation is probably one of the most useful to do this.
Based on this (e.g. the curve presented) the company can assess to which extent they can survive (risk capacity) or will accept (risk tolerance) the risk – or will take actions to reduce this in some way.
If the risk is within risk tolerance, actions should be based on prudent cost/benefit analysis only (i.e. a business decision rather than a risk management decision). Furthermore, if the company wishes to be highly competitive, the difference between risk capacity and risk tolerance must be small, as no-one wins giving 80% of full throttle.
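A rough sketch of that capacity/tolerance test against the curve. The thresholds and the loss-exceedance points below are hypothetical, chosen only to echo the loss range in the post:

```python
# Classify points on a hypothetical loss-exceedance curve against
# risk capacity and risk tolerance (both thresholds are illustrative).
RISK_CAPACITY = 20_000_000   # hypothetical: losses above this threaten survival
RISK_TOLERANCE = 10_000_000  # hypothetical: losses above this need treatment

# (loss threshold in USD, illustrative probability of exceeding it in 24 months)
exceedance_curve = [
    (1_500_000, 0.25),
    (5_000_000, 0.10),
    (10_000_001, 0.04),
    (14_800_000, 0.02),
]

for loss, prob in exceedance_curve:
    if loss > RISK_CAPACITY:
        verdict = "exceeds capacity - must reduce"
    elif loss > RISK_TOLERANCE:
        verdict = "outside tolerance - treat or accept explicitly"
    else:
        verdict = "within tolerance - cost/benefit business decision"
    print(f"${loss / 1e6:>5.1f}M at {prob:.0%}: {verdict}")
```

The point is that the test is applied to each point on the curve, not to a single averaged number.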
Yes, but I am not persuaded that you should aggregate dissimilar risks into a single number. Even for one source of risk, the overall number may be OK while one point on the curve is not.
I suggest it all depends on what you are assessing and the nature of the decision to be made.
Really, what you are talking about is a change in a snapshot or portion of the risk register. Of course an event (a change in the intensity or type of stressor on the business assets) will have different degrees of likelihood across the enterprise if the assets have different mitigations. To me, saying "risk to data" is like saying "risk to the company": there is not enough granularity. Your system characterization for each location's or function's intellectual property could and should be very different; HIPAA, PII, financial, client vs. internal data, and the list goes on. I do strategic risk at the national and regional level, and it still surprises me that some folks think they can look at such a large and diverse group of assets and say "the risk is…" It's an amalgam of the assets of which the enterprise is comprised, not a single piece of analysis.
Great point, Norman, and for sure food for thought. I am keen to learn what the COSO supporters think…
Best
Antonio Caldas
http://riskmanagementguru.com
Nice post! Thanks.
Norman, agreed and very insightful. I think this approach can be very useful in bringing complex risks (like cyber risks) forward to the board and/or executive management for deeper discussion. Another role of the risk assessment process is to assist in deciding which risks, relative to each other, are more important to apply resources to in order to mitigate them. In this instance the relative rankings are more important than a value or point; how you achieve this with a range of outcomes might be a challenge.
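One possible approach to that ranking challenge (my own sketch with made-up numbers, not from the study): compare risks on both the mean and the worst case, since two risks with similar means can have very different tails.

```python
# Rank risks that are ranges, not points: compare the probability-weighted
# mean against the worst case. Risk names and scenarios are illustrative.
risks = {
    # risk name: list of (loss in $M, probability) - made-up scenarios
    "cyber breach": [(1.5, 0.6), (8.0, 0.3), (14.8, 0.1)],
    "supply chain": [(4.0, 0.5), (6.0, 0.4), (9.0, 0.1)],
}

def mean_loss(scenarios):
    """Probability-weighted average loss in $M."""
    return sum(loss * p for loss, p in scenarios)

def worst_case(scenarios):
    """Largest loss in the range, in $M."""
    return max(loss for loss, _ in scenarios)

# Rank by worst case first, then by mean - one defensible choice among many.
ranked = sorted(risks, key=lambda r: (worst_case(risks[r]), mean_loss(risks[r])),
                reverse=True)

for name in ranked:
    s = risks[name]
    print(f"{name}: mean ${mean_loss(s):.1f}M, worst case ${worst_case(s):.1f}M")
```

Note that in this made-up example the "supply chain" risk has the higher mean, yet "cyber breach" ranks first on the tail, which is exactly the kind of information a single point would hide.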
Fully agree on the single point vs. range “point” you are making.
However, in a specific risk scenario analysis, it would be important from the onset to specify the asset that is impacted i.e. the range of records. For example, in a particular risk scenario, a threat agent might only get individual records of specific persons one by one, which will slow down the “dump” and increase the probability of detection.
In another scenario, an insider might be involved who has access to a particular range of records. In another case a privileged insider who can access even more records.
In another scenario, one might assume the existence of some zero-day vulnerability which might lead to a dump of all records. The point is that in each scenario the asset (a probable range of records) is changing (and the threat agent) and so does the likelihood of occurrence of a loss event.
To summarize: I would look at the probable range of records as an input into the analysis and not as the resulting risk itself.