Anthony Fitzsimmons recently sent me a review copy of his new book, Rethinking Reputation Risk. He says that it “Provides a new perspective on the true nature of reputational risk and damage to organizations and traces its root causes in individual and collective human behavior”.
I am not sure that there is much that is new in the book, but if you want to understand how human behavior can be the root cause (in fact, it is very often the root cause) of problems for any organization, you may find it of interest.
The authors (Fitzsimmons and Professor Derek Atkins) describe several case studies where human failures led to serious issues.
Humans as a root cause is also a topic I cover in World-Class Risk Management.
As I was reading the book, I realized that I have a problem with organizations giving separate attention to reputation risk and its management. Reputation risk is simply one element, which should not be overlooked, in how any organization manages risk – or, I should say, in how it considers what might happen in its decision-making activities.
The same thing applies to cyber risk and even compliance risk.
They are all dominoes.
A case study:
- There is a possibility that the manager in HR who recruits IT specialists leaves.
- The position is open for three months before an individual is hired.
- An open position for an IT specialist who is responsible for patching a number of systems is not filled for three months.
- A system vulnerability remains open because there is nobody to apply a vendor’s patch.
- A hacker obtains entry. CYBER RISK
- The hacker steals personal information on thousands of customers.
- The information is posted on the Internet.
- Customers are alarmed. REPUTATION RISK
- Sales drop.
- The company fails to meet analyst expectations for earnings.
- The price of the company’s shares drops 20%.
- The CEO decides to slash budgets and headcounts by 10% across the board.
- Individuals in Quality are laid off.
- Materials are not thoroughly inspected.
- Defective materials are used in production.
- Scrap rates rise, but not all defective products are detected and some are shipped to customers.
- Customers complain, return products and demand compensation. REPUTATION RISK
- Sales drop, earnings targets are missed again, and…
- At the same time as the Quality staff is downsized, the capital expenditure budget is cut.
- The Information Security Officer’s request for analytics to detect hackers who breach the company’s defenses is turned down.
- Multiple breaches are not detected. CYBER RISK
- Hackers steal the company’s trade secrets.
- Competitors acquire the trade secrets and are able to erode any edge the company may have.
- The company’s REPUTATION for a technology edge disappears. REPUTATION RISK
- Sales drop. Earnings targets are not achieved, and…
It is true that every domino, and each source of risk to its stability (what might happen), needs to be addressed.
But, focusing on one or two dominoes in the chain is unlikely to prevent serious issues.
One decision at a low level in the company can have a domino effect.
I welcome your comments.
My apologies in advance to all those who talk about third-party risk, IT risk, cyber risk, and so on.
We don’t, or shouldn’t, address risk for its own sake. That’s what we are doing when we talk about these risk silos.
We should address risk because of its potential effect on the achievement of enterprise objectives.
Think about a tree.
In root cause analysis, we are taught that in order to understand the true cause of a problem, we need to do more than look at the symptoms (such as discoloration of the leaves or flaking of the bark on the trunk of the tree). We need to ask the question “why” multiple times to get to the true root cause.
Unless the root cause is addressed, the malaise will continue.
In a similar fashion, most risk practitioners and auditors (both internal and external) talk about risk at the individual root level.
Talking about cyber, or third party risk, is talking about a problem at an individual root level.
What we need to do is sit back and think about the potential effect of a root level issue on the overall health of the tree.
If we find issues at the root level, such as the potential for a breach that results in a prolonged systems outage or a failure by a third party service provider, what does that mean for the health of the tree?
Now let’s extend the metaphor one more step.
This is a fruit tree in an orchard owned and operated by a fruit farmer.
If a problem is found with one tree, is there a problem with multiple trees?
How will this problem, even if limited to a single tree or branch of a single tree, affect the overall health of the business?
Will the owner of the orchard be able to achieve his or her business objectives?
Multiple issues at the root level (i.e., sources of risk) need to be considered when the orchard owner is making strategic decisions such as when to feed the trees and when to harvest the fruit.
Considering, reporting, and “managing” risk at the root level is disconnected from running the business and achieving enterprise objectives.
I remind you of the concepts in A revolution in risk management.
Use the information about root level risk to help management understand how likely and to what extent it is that each enterprise business objective will be achieved.
Is the anticipated level of achievement acceptable?
I welcome your thoughts.
It’s not that long since we were dismissing the Internet of Things as something very much ‘next generation’. But, as you will see from Deloitte’s collection of articles (Deloitte Review Issue 17), many organizations are already starting to deploy related technologies. I also like Wired magazine’s older piece.
Have a look at this article in the New York Times that provided some consumer-related examples. Texas Instruments has a web page with a broader view, mentioning building and home automation; smart cities; smart manufacturing; wearables; healthcare; and automotive. Talking of the latter, AT&T is connecting a host of new cars to the Internet through in-auto WiFi.
At the same time, technology referred to as Machine Learning (see this from the founder of Sun Microsystems) will be putting many jobs at risk, including analysis and decision-making (also see this article in The Atlantic). If that is not enough, the IMF has weighed in on the topic with a piece called Toil and Technology.
Is your organization open to the possibilities – the new universe of potential products and services, efficiencies in operations, and insights into the market? Or do you wait and follow the market leader, running the risk of being left in their dust?
Do you have the capabilities to understand and assess the risks as well as the opportunities?
Do your strategic planning and risk management processes allow you to identify, assess and evaluate all the effects of what might be around the corner? Or do you have one group of people assessing potential opportunity and another, totally separate, assessing downside risk?
How can isolated opportunity and downside risk processes get you where you need to go, making intelligent decisions and optimizing outcomes?
When you are looking forward, whether at the horizon or just a few feet in front of you, several situations and events are possible and each has a combination of positive and negative effects.
Intelligent decision-making means understanding all these possibilities and considering them together before making an informed decision. It is not sufficient to simply net off the positive and negative, as (a) they may occur at different times, and (b) their effects may be felt in different ways, such as a potentially positive effect on profits, but a negative potential effect on cash flow and liquidity; the negative effect may be outside acceptable ranges.
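The point about netting can be made concrete with a small sketch. All figures, the scenario probabilities, and the cash-flow floor below are invented for illustration; the only claim is the structural one from the paragraph above, that an attractive net or expected result can sit alongside an individually unacceptable effect.

```python
# Hypothetical illustration (all figures invented): netting positive and
# negative effects can hide an unacceptable one.

scenarios = [
    # (probability, effect on profit, effect on cash flow), in $m
    (0.6,  8,   3),
    (0.3,  2,  -4),
    (0.1, -5, -12),
]

expected_profit = sum(p * profit for p, profit, _ in scenarios)   # +4.9m: looks good
worst_cash = min(cash for _, _, cash in scenarios)                # -12m

CASH_FLOOR = -10  # invented limit: the worst cash outcome the board will accept, $m

print(f"Expected profit: {expected_profit:+.1f}m; worst cash outcome: {worst_cash}m")
if worst_cash < CASH_FLOOR:
    print(f"A {worst_cash}m cash outcome breaches the {CASH_FLOOR}m floor, "
          "whatever the expected profit says")
```

Netting the columns would report a healthy expected result; only by inspecting each effect separately does the liquidity breach surface.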
With these new technologies disrupting our world, every organization needs to question whether it has the capability to evaluate them and determine how and when to start deploying them.
COSO ERM and ISO 31000 are under review and updates are expected in the next year or so. I hope that they both move towards providing guidance on risk-intelligent and informed decision-making where all the potential effects of uncertainty are considered, rather than guiding us on the silo of risk management.
Are you ready?
I welcome your comments.
For more on this and related topics, please consider World-Class Risk Management.
Recently, a compliance thought leader and practitioner asked my opinion about the relevance of risk management and specifically risk appetite to compliance and ethics programs.
The gentleman also asked for my thoughts on GRC and compliance; I think I have made that clear in other posts – the only useful way of thinking about GRC is the OCEG view, which focuses on the capability to achieve success while acting ethically and in compliance with applicable laws and regulations. Compliance issues must be considered within the context of driving to organizational success.
In this post, I want to focus on compliance and risk management/appetite.
Let me start by saying that I am a firm believer in taking a risk management approach to the business objective of operating in compliance with both (a) laws and regulations and (b) society’s expectations, even when they are not reflected in laws and regulations. This is reinforced by regulatory guidance, such as in the US Federal Sentencing Guidelines, which explain that when a reasonable process is followed to identify, assess, evaluate, and treat compliance-related risks, the organization has a defense against (at least criminal) prosecution. The UK’s Bribery Act (2010) similarly requires that the organization assess and then treat bribery-related risks.
I think the question comes down to whether you can – or should – establish a risk appetite for (a) the risk of failing to comply with rules or regulations, or (b) the risk that you will experience fraud.
I have a general problem with the practical application of the concept of risk appetite. While it sounds good, and establishes what the board and top management consider acceptable levels of risk, I believe it has significant issues when it comes to influencing the day-to-day taking of risk.
Here is an edited excerpt from my new book, World-Class Risk Management, in which I dedicate quite a few pages to the discussion of risk appetite and criteria.
Evaluating a risk to determine whether it is acceptable or not requires what ISO refers to as ‘risk criteria’ and COSO refers to as a combination of ‘risk appetite’ and ‘risk tolerance’.
I am not a big fan of ‘risk appetite’, not because it is necessarily wrong in theory, but because the practice seems massively flawed.
This is how the COSO Enterprise Risk Management – Integrated Framework defines risk appetite.
Risk appetite is the amount of risk, on a broad level, an organization is willing to accept in pursuit of value. Each organization pursues various objectives to add value and should broadly understand the risk it is willing to undertake in doing so.
One of the immediate problems is that it talks about an “amount of risk”. As we have seen, there are more often than not multiple potential impacts from a possible situation, event, or decision and each of those potential impacts has a different likelihood. When people look at the COSO definition, they see risk appetite as a single number or value. They may say that their risk appetite is $100 million. Others prefer to use descriptive language, such as “The organization has a higher risk appetite related to strategic objectives and is willing to accept higher losses in the pursuit of higher returns.”
Whether in life or business, people make decisions to take a risk because of the likelihood of potential impacts – not the size of the impact alone. Rather than the risk appetite being $100 million, it is the 5% (say) likelihood of a $100 million impact.
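A minimal sketch of that objection, with invented figures: two risks with the same potential impact are very different propositions once likelihood is attached.

```python
# A sketch with invented figures: a risk is a likelihood of an impact,
# not a bare amount.

risk_a = {"impact": 100_000_000, "likelihood": 0.05}  # 5% chance of a $100m loss
risk_b = {"impact": 100_000_000, "likelihood": 0.50}  # 50% chance of the same loss

expected_a = risk_a["impact"] * risk_a["likelihood"]
expected_b = risk_b["impact"] * risk_b["likelihood"]

# By impact alone both are "a $100m risk", yet no decision-maker treats them alike.
print(f"Risk A expected loss: ${expected_a:,.0f}")  # $5,000,000
print(f"Risk B expected loss: ${expected_b:,.0f}")  # $50,000,000
```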
Setting that critical objection aside for the moment, it is downright silly (and I make no apology for saying this) to put a single value on the level of risk that an organization is willing to accept in the pursuit of value. COSO may talk about “the amount of risk, on a broad level”, implying that there is a single number, but I don’t believe that the authors of the COSO Framework meant that you can aggregate all your different risks into a single number.
Every organization has multiple types of risk, from compliance (the risk of not complying with laws and regulations) to employee safety, financial loss, reputation damage, loss of customers, inability to protect intellectual property, and so on. How can you add each of these up and arrive at a total that is meaningful – even if you could put a number on each of the risks individually?
If a company sets its risk appetite at $10 million, then that might be the total of these different forms of risk:
- Non-compliance with applicable laws and regulations: $1,000,000
- Loss in value of foreign currency due to exchange rate changes: $1,500,000
- Quality issues in manufacturing leading to customer issues: $2,000,000
- Employee safety: $1,500,000
- Loss of intellectual property: $1,000,000
- Competitor-driven price pressure affecting revenue: $2,000,000
- Other: $1,000,000
I have two problems with a single risk appetite when the organization has multiple sources of risk:
- “I want to manage each of these in isolation. For example, I want to make sure that I am not taking an unacceptable level of risk of non-compliance with applicable laws and regulations irrespective of what is happening to other risks.”
- “When you start aggregating risks into a single number and base decisions on acceptable levels of risk on that total, it implies (using the example above) that if the level of quality risk drops from $2m to $1.5m but my risk appetite remains at $10m, I can accept an increase in the risk of non-compliance from $1m to $1.5m. That is absurd.”
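The absurdity of the aggregate test is easy to demonstrate in code. The category amounts come from the example above; the "shift" between quality and compliance is the one described in the second objection.

```python
# Hypothetical: why one aggregate appetite misleads.
# Category amounts are taken from the $10m example above.

appetite = {  # per-category acceptable levels, $
    "non-compliance":  1_000_000,
    "fx":              1_500_000,
    "quality":         2_000_000,
    "safety":          1_500_000,
    "ip":              1_000_000,
    "price pressure":  2_000_000,
    "other":           1_000_000,
}

current = dict(appetite)
current["quality"] = 1_500_000         # quality risk drops by $0.5m...
current["non-compliance"] = 1_500_000  # ...and compliance risk rises by $0.5m

total_ok = sum(current.values()) <= sum(appetite.values())      # aggregate test: passes
per_category_ok = all(current[k] <= appetite[k] for k in appetite)  # fails

print(total_ok, per_category_ok)  # the total hides the compliance breach
```

The single-number test is satisfied while compliance risk sits 50% above its own acceptable level, which is exactly the problem with aggregation.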
The first line is “non-compliance with applicable laws and regulations”. I have a problem setting a “risk appetite” for non-compliance. It may be perceived as indicating that the organization is willing to fail to comply with laws and regulations in order to make a profit; if this becomes public, there is likely to be a strong reaction from regulators and the organization’s reputation would (and deserves to) take a huge hit.
Setting a risk appetite for employee safety is also a problem. As I say:
…. no company should, for many reasons including legal ones, consider putting a number on the level of acceptable employee safety issues; the closest I might consider is the number of lost days, but that is not a good measure of the impact of an employee safety event and might also be considered as indicating a lack of appropriate concern for the safety of employees (and others). Putting zero as the level of risk is also absurd, because the only way to eliminate the potential for a safety incident is to shut down.
That last sentence is a key one.
While risk appetites such as $1m for non-compliance or $1.5m for employee safety are problematic, it is unrealistic to set the level of either at zero. The only way to ensure that there are no compliance or safety issues is to close the business.
COSO advocates would say that risk appetite can be expressed in qualitative instead of quantitative terms. This is what I said about that.
The other form of expression of risk appetite is the descriptive form. The example I gave earlier was “The organization has a higher risk appetite related to strategic objectives and is willing to accept higher losses in the pursuit of higher returns.” Does this mean anything? Will it guide a decision-maker when he is considering how much risk is acceptable? No.
Saying that “The organization has a higher risk appetite related to strategic objectives and is willing to accept higher losses in the pursuit of higher returns”, or “The organization has a low risk appetite related to risky ventures and, therefore, is willing to invest in new business but with a low appetite for potential losses” may make the executive team feel good and let them believe they have ‘ticked the risk appetite box’, but it accomplishes absolutely nothing at all.
Why do I say that it accomplishes absolutely nothing? Because (a) how can you measure whether the level of risk is acceptable based on these descriptions, and (b) how do managers know they are taking the right level of the right risk as they make decisions and run the business?
If risk appetite doesn’t work for compliance, then what does?
I believe that the concept of risk criteria (found in ISO 31000:2009) is better suited.
Management and the board have to determine how much to invest in compliance and at what point they are satisfied that they have reasonable processes of acceptable quality.
The regulators recognize that an organization can only establish and maintain reasonable processes, systems, and organizational structures when it comes to compliance. Failures will happen, because organizations have human employees and partners. What is crucial is whether the organization is taking what a reasonable person would believe are appropriate measures to ensure compliance.
I believe that the organization should be able to establish measures, risk criteria, to ensure that its processes are at that reasonable level and operating as desired. But the concept of risk appetite for compliance is flawed.
A risk appetite statement tends to focus on the level of incidents and losses, which is after the fact. Management needs guidance to help them make investments and other decisions as they run the business. I don’t see risk appetite helping them do that.
By the way, there is another problem with compliance and risk appetite when organizations set a single level for all compliance requirements.
I want to make sure I am not taking an unacceptable level of risk of non-compliance with each law and regulation that is applicable. Does it make sense to aggregate the risk of non-compliance with environmental regulations, safety standards, financial reporting rules, corruption and bribery provisions, and so on? No. Each of these should be managed individually.
Ethics and fraud are different.
Again, we have to be realistic and recognize that it is impossible to reduce the risk of ethical violations and fraud to zero.
However, there is not (in my experience) the same reputation risk when it comes to establishing acceptable levels – the levels below which the cost of fighting fraud starts to exceed the reduction in fraud risk.
When I was CAE at Tosco, we owned thousands of Circle K stores. Just like every store operator, we experienced what is called “shrink” – the theft of inventory by employees, customers, and vendors. Industry experience was that, though undesirable, shrink of 1.25% was acceptable because spending more on increased store audits, supervision, cameras, etc. would cost more than any reduction in shrink.
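The shrink decision is a cost-benefit calculation: keep spending on controls only while the next dollar of control cost prevents more than a dollar of shrink. The sketch below uses invented spend levels and sales figures around the 1.25% industry figure mentioned above.

```python
# Illustrative only: accept the shrink level that minimizes total cost
# (shrink losses + control spend). Spend levels and sales are invented.

sales = 10_000_000  # annual store sales, $

# (extra annual control spend, shrink rate achieved) - diminishing returns
options = [
    (0,       0.0200),  # baseline controls only
    (50_000,  0.0125),  # industry-typical audits, supervision, cameras
    (150_000, 0.0110),  # aggressive additional measures
]

best = min(options, key=lambda o: o[0] + o[1] * sales)
spend, rate = best
print(f"Optimal: spend ${spend:,} extra, accept shrink of {rate:.2%}")
```

With these numbers the aggressive option costs more than the shrink it prevents, so the 1.25% level is the economically rational one, which is the point of the Circle K example.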
Managing the risks of compliance or ethical failures is important. But, for the most part I find risk appetite leaves me hungry.
What do you think?
BTW, both my World-Class Risk Management and World-Class Internal Auditing books are available on Amazon.
Here is another excerpt from the World-Class Risk Management book. Your comments are welcome.
As you can see, I spend a fair amount of time in the book challenging ‘traditional’ precepts, such as (in this case) the value of heat maps in providing useful information about risks across the enterprise.
Some prefer a heat map to illustrate the comparative levels (typically using a combination of potential impact and likelihood) of each risk.
A heat map is very effective in communicating which risks rate highest when you consider their potential impact and the likelihood of that impact. The reader is naturally drawn to the top right quadrant (high significance and high likelihood), while items in other quadrants receive less attention.
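For readers unfamiliar with how such a ranking is typically produced, here is a minimal sketch. The risks, the 1-5 scales, and the quadrant cutoff are all invented for illustration.

```python
# A minimal sketch (invented risks, 1-5 scales) of the ranking a heat map encodes.

risks = [
    ("Cyber breach",         4, 4),  # (name, impact 1-5, likelihood 1-5)
    ("Key supplier failure", 5, 2),
    ("FX movement",          2, 4),
    ("Regulatory change",    3, 3),
]

# Readers are drawn to the top-right quadrant: high impact AND high likelihood.
top_right = [name for name, impact, likelihood in risks
             if impact >= 4 and likelihood >= 4]
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

print("Top-right quadrant:", top_right)
for name, impact, likelihood in ranked:
    print(f"{name}: score {impact * likelihood}")
```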
But there are a number of problems with a report like this, whether it is in the form of a heat map or a table.
- It is a point-in-time report.
When management and the board rely on the review of a report that purports to show the top risks to the organization and their condition, unless they are reviewing a dynamically changing report (such as a dashboard on a tablet) they are reviewing information that is out-of-date. Its value will depend on the extent to which risks have emerged or changed.
In some cases, that information is still useful. It provides management with a sense of the top risks and their condition, but they need to recognize that it may be out of date by the time they receive it.
- It is not a complete picture.
This is a list of a select number of risks. It cannot ever be a list of all the risks, because as discussed earlier risks are created or modified with every decision. At best, it is a list of those risks that are determined to be of a continuing nature and merit continuing attention. At worst, it is a list of the few risks that management has decided to review on a periodic basis without any systematic process behind it to ensure new risks are added promptly and those that no longer merit attention are removed. In other words, the worst case is enterprise list management.
There is a serious risk (pun intended) that management and the board will be lulled into believing that, because they are paying regular attention to a list of top risks, they are managing risk and uncertainty across the organization – while nothing could be further from the truth.
- It doesn’t always identify the risks that need attention.
Whether you prefer the COSO or ISO guidance, risks require special attention when they are outside acceptable levels (risk appetite for COSO and risk criteria for ISO). Just because a risk rates ‘high’ because the likelihood of a significant impact is assessed as high doesn’t mean that action is required by senior management or that significant attention should be paid by the board. They may just be risks that are ‘inherent’ in the organization and its business model, or risks that the organization has chosen to take to satisfy its objectives and to create value for its stakeholders and shareholders.
This report does not distinguish risks that the organization has previously decided to accept from those that exceed acceptable levels. Chapter 13 on risk evaluation discusses how I would assess whether a risk is within acceptable levels or not.
- The assessment of impact and likelihood may not be reliable.
I discuss this further in chapter 12 on risk analysis.
- It only shows impact and likelihood.
As I will explain in chapter 13 on risk evaluation, sometimes there are other attributes of a risk that need to be considered when determining whether a risk is at acceptable levels. Some have upgraded the simple heat map I show above to include trends (whether the level of risk is increasing or decreasing) and other information. But it is next to impossible to include every relevant attribute in a heat map.
- It doesn’t show whether objectives are in jeopardy.
As I mentioned above, management and the board need to know not only which specific risks merit attention, but whether they are on track to achieve their objectives.
On the other hand, some risk sources (such as the penetration of our computer network, referred to as cyber risk) can have multiple effects (such as business disruption, legal liability, and the loss of intellectual property) and affect multiple objectives (such as those concerned with compliance with privacy regulations, maintaining or enhancing reputation with customers, and revenue growth). It is very important to produce and review a report that highlights when the total effect of a risk source, considering all affected objectives, is beyond acceptable levels. While it may not significantly affect a single objective, the aggregated effect on the organization may merit the attention of the executive leadership and the board.
As noted in the Language of Risk section, many refer to these as “risks” when, from an ISO perspective, they should be called “risk sources” (defined as an “element which alone or in combination has the intrinsic potential to give rise to risk”). For example, the World Economic Forum publishes annual reports on top global risks, which it defines as “an uncertain event or condition that, if it occurs, can cause significant negative impact for several countries or industries within the next 10 years.”
The National Association of Corporate Directors (NACD) has published a discussion between the leader of PwC’s Center for Board Governance, Mary Ann Cloyd, and an expert on cyber who formerly served as a leader of the US Air Force’s cyber operations, Suzanne Vautrinot.
It’s an interesting read on a number of levels; I recommend it for board members, executives, information security professionals and auditors.
Here are some of the points in the discussion worth emphasizing:
“An R&D organization, a manufacturer, a retail company, a financial institution, and a critical utility would likely have different considerations regarding cyber risk. Certainly, some of the solutions and security technology can be the same, but it’s not a cookie-cutter approach. An informed risk assessment and management strategy must be part of the dialogue.”
“When we as board members are dealing with something that requires true core competency expertise—whether it’s mergers and acquisitions or banking and investments or cybersecurity—there are advisors and experts to turn to because it is their core competency. They can facilitate the discussion and provide background information, and enable the board to have a very robust, fulsome conversation about risks and actions.”
“The board needs to be comfortable having the conversation with management and the internal experts. They need to understand how cybersecurity risk affects business decisions and strategy. The board can then have a conversation with management saying, ‘OK, given this kind of risk, what are we willing to accept or do to try to mitigate it? Let’s have a conversation about how we do this currently in our corporation and why.’”
“Cloyd: What you just described doesn’t sound unique to cybersecurity. It’s like other business risks that you’re assessing, evaluating, and dealing with. It’s another part of the risk appetite discussion. Vautrinot: Correct. The only thing that’s different is the expertise you bring in, and the conversation you have may involve slightly different technology.”
“Cloyd: Cybersecurity is like other risks, so don’t be intimidated by it. Just put on your director hat and oversee this as you do other major risks. Vautrinot: And demand that the answers be provided in a way that you understand. Continue to ask questions until you understand, because sometimes the words or the jargon get in the way.”
“Cybersecurity is a business issue, it’s not just a technology issue.”
This was a fairly long conversation as these things go, but time and other limitations probably affected the discussion – and limited the ability to probe the topic in greater depth.
For example, there are some more points that I would emphasize to boards:
- It is impossible to eliminate cyber-related risk. The goal should be to understand what the risk is at any point and obtain assurance that management (a) knows what the risk is, (b) considers it as part of decision-making, including its potential effect on new initiatives, (c) has established at what point the risk becomes acceptable, because investing more has diminishing returns, (d) has reason to believe its ability to prevent/detect cyber breaches is at the right level, considering the risk and the cost of additional measures (and is taking corrective actions when it is not at the desired level), (e) has a process to respond promptly and appropriately in the event of a breach, (f) has tested that capability, and (g) has a process in place to communicate to the board the information the board needs, when it needs it, to provide effective oversight.
- Cyber risk should not be managed separately from enterprise or business risk. Cyber may be only one of several sources of risk to a new initiative, and the total risk to that initiative needs to be understood.
- Cyber-related risk should be assessed and evaluated based on its effect on the business, not based on some calculated value for the information asset.
- The board can never have, or maintain, the level of sophisticated knowledge required to assess cyber risk itself. It needs to ask questions and probe management’s responses until it has confidence that management has the ability to address cyber risk.
I welcome your comments and observations on the article and my points, above.
COSO’s ERM Framework defines risk appetite in a way that many have adopted:
“Risk appetite is the amount of risk, on a broad level, an organization is willing to accept in pursuit of value. Each organization pursues various objectives to add value and should broadly understand the risk it is willing to undertake in doing so.”
The problem I want to discuss is whether there is such a thing as an “amount of risk”.
The traditional way of assessing a risk is to establish values for its potential impact (or consequences) and their likelihood. The assessment might also include qualitative attributes of the risk, such as the speed of impact and so on.
But, for many risks there is more than one possible impact, with varying levels of likelihood.
Take the example of an organization that wants to expand and sell its products in a new country. It has set a sales target of 10,000 units in the first year, but recognizes not only that the target may not be reached but that, if things work well, it might be exceeded.
If the sales target is not reached, the initiative will result in a loss of as much as 500 units of currency. The likelihood of that loss is estimated at 5% and is considered unacceptable. There is also a 10% likelihood of a 250 loss, also unacceptable.
Management decides to treat the risk through a number of actions, including advertising and the use of in-country agents, which should reduce the likelihood and extent of losses. However, the cost of these actions will reduce the profits achieved when sales reach or exceed target.
The chart below shows the distribution of possible P&L results, both before and after treating the risk.
So there is no single “amount of risk”. There are many possible outcomes.
It is not sufficient to place a value on the distribution of all possible outcomes and compare that to some other value established as the acceptable level – because some of the points may individually be unacceptable and require treatment.
In this example, management has decided that the likelihood of the greatest levels of loss is unacceptable. If they had reduced the array of possibilities to a calculated number (perhaps based on the area under the curve), they probably would not have considered whether each possibility was acceptable and would not have taken the appropriate action.
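The contrast can be sketched directly, using the figures from the new-country example above (a 5% likelihood of a 500 loss and a 10% likelihood of a 250 loss, both unacceptable) plus invented upside outcomes and a hypothetical acceptance rule.

```python
# Sketch: evaluate each possible outcome, not just a single aggregate.
# Loss points come from the example above; upside outcomes and the
# acceptance rule are invented for illustration.

outcomes = [  # (likelihood, P&L effect in currency units) before treatment
    (0.05, -500),
    (0.10, -250),
    (0.45,  150),
    (0.40,  400),
]

def acceptable(likelihood, pnl):
    # Hypothetical criterion: any loss of 250 or more with likelihood of 5%
    # or higher is unacceptable on its own, whatever the expected value says.
    return not (pnl <= -250 and likelihood >= 0.05)

expected_value = sum(p * v for p, v in outcomes)  # the single "amount of risk"
unacceptable = [(p, v) for p, v in outcomes if not acceptable(p, v)]

print(f"Expected value: {expected_value:.1f}")  # positive: looks fine as one number
print("Points that still require treatment:", unacceptable)
```

The aggregate is comfortably positive, yet two individual points fail the criterion and require treatment; reducing the distribution to one number would have hidden them.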
Knowing whether the possibilities are acceptable or not, and making appropriate actions to treat them, is critical. A single “amount of risk” fails that test.
We could take this discussion a lot further, but I will stop here. What do you think?