Anthony Fitzsimmons recently sent me a review copy of his new book, Rethinking Reputation Risk. He says that it “Provides a new perspective on the true nature of reputational risk and damage to organizations and traces its root causes in individual and collective human behavior”.
I am not sure that there is much that is new in the book, but if you want to understand how human behavior can be the root cause (in fact, it is very often the root cause) of problems for any organization, you may find it of interest.
The authors (Fitzsimmons and Professor Derek Atkins) describe several case studies where human failures led to serious issues.
Humans as a root cause is also a topic I cover in World-Class Risk Management.
As I was reading the book, I realized that I have a problem with organizations paying separate attention to reputation risk and its management. It is simply one element, which should not be overlooked, of how any organization manages risk – or, I should say, of how it considers what might happen in its decision-making activities.
The same thing applies to cyber risk and even compliance risk.
They are all dominoes.
A case study:
- There is a possibility that the manager in HR who recruits IT specialists leaves.
- That position is open for three months before a replacement is hired.
- An open position for an IT specialist who is responsible for patching a number of systems is not filled for three months.
- A system vulnerability remains open because there is nobody to apply a vendor’s patch.
- A hacker obtains entry. CYBER RISK
- The hacker steals personal information on thousands of customers.
- The information is posted on the Internet.
- Customers are alarmed. REPUTATION RISK
- Sales drop.
- The company fails to meet analyst expectations for earnings.
- The price of the company’s shares drops 20%.
- The CEO decides to slash budgets and headcounts by 10% across the board.
- Individuals in Quality are laid off.
- Materials are not thoroughly inspected.
- Defective materials are used in production.
- Scrap rates rise, but not all defective products are detected and some are shipped to customers.
- Customers complain, return products and demand compensation. REPUTATION RISK
- Sales drop, earnings targets are missed again, and …
- At the same time as the Quality staff is downsized, the capital expenditure budget is cut.
- The Information Security Officer’s request for analytics to detect hackers who breach the company’s defenses is turned down.
- Multiple breaches are not detected. CYBER RISK
- Hackers steal the company’s trade secrets.
- Competitors acquire the trade secrets and are able to erode any edge the company may have.
- The company’s REPUTATION for a technology edge disappears. REPUTATION RISK
- Sales drop. Earnings targets are not achieved, and …
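The chain above can be sketched as a simple probability model: if each domino only falls when the one before it has fallen, the likelihood of the final impact is the product of the conditional likelihoods along the chain – and addressing any single link changes the risk of the whole chain, not just one "cyber" or "reputation" box. A minimal sketch, with entirely hypothetical probabilities:

```python
# Hypothetical conditional probabilities for each domino in the chain,
# i.e. the chance each event occurs given that the previous one did.
chain = [
    ("HR recruiter leaves and position stays open", 0.30),
    ("IT patching role unfilled, vulnerability unpatched", 0.50),
    ("Hacker exploits the open vulnerability", 0.40),
    ("Customer data stolen and posted", 0.60),
    ("Customers alarmed, sales and share price drop", 0.70),
]

def chain_likelihood(chain):
    """Probability that every domino in the chain falls."""
    p = 1.0
    for _, prob in chain:
        p *= prob
    return p

print(f"Likelihood of the full chain: {chain_likelihood(chain):.1%}")

# Treating one low-level domino (filling the patching role promptly,
# cutting that link's likelihood to 10%) reduces the end-to-end risk:
mitigated = [(e, 0.10 if "unfilled" in e else p) for e, p in chain]
print(f"After mitigating one link:    {chain_likelihood(mitigated):.1%}")
```

The point of the sketch is not the numbers (which are invented) but the structure: the end-to-end likelihood is governed by every link, which is why focusing on one or two dominoes in isolation is unlikely to prevent serious issues.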
It is true that every domino and the source of risk to its stability (what might happen) needs to be addressed.
But, focusing on one or two dominoes in the chain is unlikely to prevent serious issues.
One decision at a low level in the company can have a domino effect.
I welcome your comments.
With all the press and concern about cyber at all levels of the organization, with the regulators, and among the public, it is a worthwhile exercise to consider what this should mean for the Chief Information Security Officer (CISO) or equivalent.
Some point to the need to elevate the position of CISO to report directly to a senior executive, even to the CEO.
Elevating the position, in my opinion, will not necessarily do more than elevate the voice of cyber in the executive suite. It won’t necessarily secure the resources needed for an effective cyber program, nor will it necessarily change the minds and attitudes of people from the executives on down.
In fact, elevating the position carries the risk that the CISO will get caught up in organizational politics instead of focusing on cyber risk itself.
Deloitte tackles this and other opportunities in a new piece, The new CISO: Leading the strategic security organization.
Of course, they are using words intended to induce people to read: ‘new’ and ‘strategic’. I think we can easily disregard them and focus on the problem at hand.
First, let’s acknowledge that the role of the CISO (or other individual responsible for information security) should never be considered as simply a compliance function.
Deloitte talks about “the imperative to move beyond the role of compliance monitors and enforcers to integrate better with the business, manage information risks more strategically, and work toward a culture of shared cyber risk ownership across the enterprise”.
But even when I had information security reporting to me 30 years ago, it was about protecting the organization and not just about compliance.
It is foolish to believe that executives or the board will invest if the only return is compliance. Yes, compliance is necessary, but a compliance function will never receive the attention given to a function that contributes to the success of the organization. Executives will commit resources to the level they think prudent, but not necessarily to what it will take to enable success – because they don’t understand how cyber relates to their personal and corporate success.
If they don’t know that it matters to success, it won’t matter to them.
The successful CISO helps everybody appreciate how cyber contributes to and enables success.
Buried in the Deloitte material are two sections of great importance:
- While the CISO may think in terms of reducing risks, business leaders take risks every day, whether introducing an existing product to a new market, taking on an external partner to pursue a new line of business, or engaging in a merger or acquisition. In fact, the ability to accept more risk can increase business opportunities, while ruling it out may lead to their loss. From this perspective, the role of the CISO becomes one of helping leadership and employees be aware of and understand cyber risks, and equipping them to make decisions based on that understanding. In some cases, the organization’s innovation agenda may necessitate a more lenient view of security controls.
- … CISOs [need] to pivot the conversation—both in terms of their mind-set as well as language—from security and compliance to focus more on risk strategy and management. Going beyond the negative aspect of how much damage or loss can result from risk, CISOs need to understand risk in terms of its potential to positively affect competitive advantage, business growth, and revenue expansion.
These are, in my opinion, the keys to an effective cyber program.
If the CISO is going to influence not only the resources he or she is given but the attitude and actions of the organization, it is necessary not only to understand how the business is run, but to talk to executives in the language of the business.
Talk about how the achievement of objectives may be affected by a cyber breach. Talking about specific objectives is the best way to influence hearts and minds.
Help executives make intelligent decisions when it is appropriate to accept a cyber risk to reap a business reward.
Talk business risk, not technobabble.
Do you agree?
Are there other points of value in the Deloitte paper?
Overall, I am pleased to see the progress the internal audit practice has made over the last few years. While there are still serious problems regarding independence and resources in some parts of the world (where internal audit is established only to “check the box”, not with any intent to be a serious activity), more and more organizations are moving to what I call “enterprise risk-based” auditing; perhaps half are providing assurance through formal audits and assessments of the management of risk; and, for many, identifying problems before rather than after they occur has become a recurring mantra.
Yet, the picture is not entirely rosy.
This year, I have been privileged to work with the National Association of Corporate Directors. I was a panelist at three separate events where they discussed cyber risk.
In one group session, a director said that the board could not ask internal audit to assess and help with cyber risk because they lacked that capability. The others voiced their agreement, one and all.
This is a huge problem!
Internal audit may not always have the talent on staff to address every risk or concern, but if the board would only give it the resources, internal audit can either hire that staff or outsource the task.
As a chief audit executive, I have hired specialists to address specific risks in IT (including highly technical personnel), environmental compliance, engineering, fraud investigations, and more. Where possible, I have provided staff (including myself) training in specialized areas, such as derivatives trading, Six Sigma, and Lean Manufacturing.
I also used outside resources from consulting and personnel agencies:
- A derivatives trading and management specialist
- A “white hat” penetration testing team
- A former global procurement executive
- An expert in sales contracting and management
- A corporate tax specialist
- and more
Some talk about internal audit being the “consultant of choice”. I wouldn’t go that far. Where I would go is that internal audit should have the capability, whether through its own personnel, co-sourcing, or other contract staffing, to address and provide assurance on the key risks facing the enterprise.
Internal audit should:
- Inform the audit committee when it has insufficient resources to address a specialized area of risk, and endeavor to persuade them to provide such additional resources (headcount or dollars) to address the need
- Inform the audit committee that it has the capability to obtain the necessary resources to address specialized areas such as cyber security, ethics compliance, corporate culture, corporate governance and more. This means that the CAE needs to build a network that he/she can tap to locate and hire the necessary expertise
- Challenge management and even the audit committee when either goes outside to obtain assurance on an area of risk
I welcome your comments.
Recently, a compliance thought leader and practitioner asked my opinion about the relevance of risk management and specifically risk appetite to compliance and ethics programs.
The gentleman also asked for my thoughts on GRC and compliance; I think I have made that clear in other posts – the only useful way of thinking about GRC is the OCEG view, which focuses on the capability to achieve success while acting ethically and in compliance with applicable laws and regulations. Compliance issues must be considered within the context of driving to organizational success.
In this post, I want to focus on compliance and risk management/appetite.
Let me start by saying that I am a firm believer in taking a risk management approach to the business objective of operating in compliance with both (a) laws and regulations and (b) society’s expectations, even when they are not reflected in laws and regulations. This is reinforced by regulatory guidance, such as in the US Federal Sentencing Guidelines, which explain that when a reasonable process is followed to identify, assess, evaluate, and treat compliance-related risks, the organization has a defense against (at least criminal) prosecution. The UK’s Bribery Act (2010) similarly requires that the organization assess and then treat bribery-related risks.
I think the question comes down to whether you can – or should – establish a risk appetite for (a) the risk of failing to comply with rules or regulations, or (b) the risk that you will experience fraud.
I have a general problem with the practical application of the concept of risk appetite. While it sounds good, and establishes what the board and top management consider acceptable levels of risk, I believe it has significant issues when it comes to influencing the day-to-day taking of risk.
Here is an edited excerpt from my new book, World-Class Risk Management, in which I dedicate quite a few pages to the discussion of risk appetite and criteria.
Evaluating a risk to determine whether it is acceptable or not requires what ISO refers to as ‘risk criteria’ and COSO refers to as a combination of ‘risk appetite’ and ‘risk tolerance’.
I am not a big fan of ‘risk appetite’, not because it is necessarily wrong in theory, but because the practice seems massively flawed.
This is how the COSO Enterprise Risk Management – Integrated Framework defines risk appetite.
Risk appetite is the amount of risk, on a broad level, an organization is willing to accept in pursuit of value. Each organization pursues various objectives to add value and should broadly understand the risk it is willing to undertake in doing so.
One of the immediate problems is that it talks about an “amount of risk”. As we have seen, there are more often than not multiple potential impacts from a possible situation, event, or decision and each of those potential impacts has a different likelihood. When people look at the COSO definition, they see risk appetite as a single number or value. They may say that their risk appetite is $100 million. Others prefer to use descriptive language, such as “The organization has a higher risk appetite related to strategic objectives and is willing to accept higher losses in the pursuit of higher returns.”
Whether in life or business, people make decisions to take a risk because of the likelihood of potential impacts – not the size of the impact alone. Rather than the risk appetite being $100 million, it is the 5% (say) likelihood of a $100 million impact.
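To make that distinction concrete, here is a minimal sketch (all figures hypothetical) contrasting the single-number view of a risk with a likelihood-weighted view of the same risk:

```python
# A risk is rarely a single number: it is a set of potential impacts,
# each with its own likelihood. Hypothetical figures for illustration.
scenarios = [
    (0.05, 100_000_000),  # 5% chance of a $100m impact
    (0.20,  10_000_000),  # 20% chance of a $10m impact
    (0.50,   1_000_000),  # 50% chance of a $1m impact
]

# The "our risk appetite is $100m" view looks only at the largest impact.
worst_case = max(impact for _, impact in scenarios)

# A likelihood-weighted view (expected loss) tells a different story.
expected_loss = sum(p * impact for p, impact in scenarios)

print(f"Worst-case impact: ${worst_case:,.0f}")
print(f"Expected loss:     ${expected_loss:,.0f}")
```

A decision-maker weighing the 5% likelihood of a $100 million impact is making a very different judgment than one told simply "the appetite is $100 million" – which is the objection made above.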
Setting that critical objection aside for the moment, it is downright silly (and I make no apology for saying this) to put a single value on the level of risk that an organization is willing to accept in the pursuit of value. COSO may talk about “the amount of risk, on a broad level”, implying that there is a single number, but I don’t believe that the authors of the COSO Framework meant that you can aggregate all your different risks into a single number.
Every organization has multiple types of risk, from compliance (the risk of not complying with laws and regulations) to employee safety, financial loss, reputation damage, loss of customers, inability to protect intellectual property, and so on. How can you add each of these up and arrive at a total that is meaningful – even if you could put a number on each of the risks individually?
If a company sets its risk appetite at $10 million, then that might be the total of these different forms of risk:
- Non-compliance with applicable laws and regulations: $1,000,000
- Loss in value of foreign currency due to exchange rate changes: $1,500,000
- Quality in manufacturing leading to customer issues: $2,000,000
- Employee safety: $1,500,000
- Loss of intellectual property: $1,000,000
- Competitor-driven price pressure affecting revenue: $2,000,000
- Other: $1,000,000
I have two problems with a single risk appetite when the organization has multiple sources of risk:
- “I want to manage each of these in isolation. For example, I want to make sure that I am not taking an unacceptable level of risk of non-compliance with applicable laws and regulations irrespective of what is happening to other risks.”
- “When you start aggregating risks into a single number and base decisions on acceptable levels of risk on that total, it implies (using the example above) that if the level of quality risk drops from $2m to $1.5m but my risk appetite remains at $10m, I can accept an increase in the risk of non-compliance from $1m to $1.5m. That is absurd.”
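The absurdity in that second point is easy to demonstrate with hypothetical figures: under one aggregate appetite, a drop in one risk apparently "buys room" for an unrelated risk to grow, and the total never flags it.

```python
# Hypothetical risk levels (in $) measured against a single aggregate appetite.
appetite = 10_000_000

before = {"quality": 2_000_000, "non_compliance": 1_000_000, "other": 6_500_000}
after  = {"quality": 1_500_000, "non_compliance": 1_500_000, "other": 6_500_000}

# Both totals sit comfortably within the aggregate appetite...
assert sum(before.values()) <= appetite
assert sum(after.values()) <= appetite

# ...even though the risk of non-compliance has grown by 50% – a trade
# no board would knowingly accept in exchange for lower quality risk.
growth = after["non_compliance"] / before["non_compliance"] - 1
print(f"Non-compliance risk change: {growth:+.0%}")
```

Any monitoring built on the aggregate number alone would report "within appetite" in both states, which is exactly why each source of risk needs to be managed against its own criteria.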
The first line is “non-compliance with applicable laws and regulations”. I have a problem setting a “risk appetite” for non-compliance. It may be perceived as indicating that the organization is willing to fail to comply with laws and regulations in order to make a profit; if this becomes public, there is likely to be a strong reaction from regulators and the organization’s reputation would (and deserves to) take a huge hit.
Setting a risk appetite for employee safety is also a problem. As I say:
… no company should, for many reasons including legal ones, consider putting a number on the level of acceptable employee safety issues; the closest I might consider is the number of lost days, but that is not a good measure of the impact of an employee safety event and might also be considered as indicating a lack of appropriate concern for the safety of employees (and others). Putting zero as the level of risk is also absurd, because the only way to eliminate the potential for a safety incident is to shut down.
That last sentence is a key one.
While risk appetites such as $1m for non-compliance or $1.5m for employee safety are problematic, it is unrealistic to set the level of either at zero. The only way to ensure that there are no compliance or safety issues is to close the business.
COSO advocates would say that risk appetite can be expressed in qualitative instead of quantitative terms. This is what I said about that.
The other form of expression of risk appetite is the descriptive form. The example I gave earlier was “The organization has a higher risk appetite related to strategic objectives and is willing to accept higher losses in the pursuit of higher returns.” Does this mean anything? Will it guide a decision-maker when he is considering how much risk is acceptable? No.
Saying that “The organization has a higher risk appetite related to strategic objectives and is willing to accept higher losses in the pursuit of higher returns”, or “The organization has a low risk appetite related to risky ventures and, therefore, is willing to invest in new business but with a low appetite for potential losses” may make the executive team feel good and believe they have ‘ticked the risk appetite box’, but it accomplishes absolutely nothing at all.
Why do I say that it accomplishes absolutely nothing? Because (a) how can you measure whether the level of risk is acceptable based on these descriptions, and (b) how do managers know they are taking the right level of the right risk as they make decisions and run the business?
If risk appetite doesn’t work for compliance, then what does?
I believe that the concept of risk criteria (found in ISO 31000:2009) is better suited.
Management and the board have to determine how much to invest in compliance and at what point they are satisfied that they have reasonable processes of acceptable quality.
The regulators recognize that an organization can only establish and maintain reasonable processes, systems, and organizational structures when it comes to compliance. Failures will happen, because organizations have human employees and partners. What is crucial is whether the organization is taking what a reasonable person would believe are appropriate measures to ensure compliance.
I believe that the organization should be able to establish measures, risk criteria, to ensure that its processes are at that reasonable level and operating as desired. But the concept of risk appetite for compliance is flawed.
A risk appetite statement tends to focus on the level of incidents and losses, which is after the fact. Management needs guidance to help them make investments and other decisions as they run the business. I don’t see risk appetite helping them do that.
By the way, there is another problem with compliance and risk appetite when organizations set a single level for all compliance requirements.
I want to make sure I am not taking an unacceptable level of risk of non-compliance with each law and regulation that is applicable. Does it make sense to aggregate the risk of non-compliance with environmental regulations, safety standards, financial reporting rules, corruption and bribery provisions, and so on? No. Each of these should be managed individually.
Ethics and fraud are different.
Again, we have to be realistic and recognize that it is impossible to reduce the risk of ethical violations and fraud to zero.
However, there is not (in my experience) the same reputation risk when it comes to establishing acceptable levels – the levels below which the cost of fighting fraud starts to exceed the reduction in fraud risk.
When I was CAE at Tosco, we owned thousands of Circle K stores. Just like every store operator, we experienced what is called “shrink” – the theft of inventory by employees, customers, and vendors. Industry experience was that, though undesirable, shrink of 1.25% was acceptable because spending more on increased store audits, supervision, cameras, etc. would cost more than any reduction in shrink.
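The economics behind that 1.25% figure can be sketched as a simple break-even calculation (all numbers hypothetical, not Tosco's actual figures): keep spending on controls only while each additional dollar of spend removes more than a dollar of shrink.

```python
# Hypothetical break-even sketch: spend on anti-shrink controls (audits,
# supervision, cameras) only while the marginal reduction in shrink
# exceeds the marginal cost of the controls.
annual_sales = 1_000_000_000        # hypothetical chain-wide store sales
shrink_rate = 0.0150                # starting shrink: 1.50% of sales

# Each successive $1m of control spend recovers less shrink
# (diminishing returns): (cost, shrink reduction as a fraction of sales).
control_steps = [
    (1_000_000, 0.0015),
    (1_000_000, 0.0012),
    (1_000_000, 0.0003),
]

spend = 0
for cost, reduction in control_steps:
    recovered = reduction * annual_sales
    if recovered <= cost:           # marginal benefit no longer covers cost
        break
    spend += cost
    shrink_rate -= reduction

print(f"Total control spend: ${spend:,}")
print(f"Acceptable residual shrink: {shrink_rate:.2%}")
```

With these invented numbers the spending stops with residual shrink of about 1.2%, close to the industry's 1.25% rule of thumb: the remaining shrink is "acceptable" not because theft is tolerable, but because fighting it further would cost more than it saves.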
Managing the risks of compliance or ethical failures is important. But, for the most part I find risk appetite leaves me hungry.
What do you think?
BTW, both my World-Class Risk Management and World-Class Internal Auditing books are available on Amazon.
The National Association of Corporate Directors (NACD) has published a discussion between the leader of PwC’s Center for Board Governance, Mary Ann Cloyd, and an expert on cyber who formerly served as a leader of the US Air Force’s cyber operations, Suzanne Vautrinot.
It’s an interesting read on a number of levels; I recommend it for board members, executives, information security professionals and auditors.
Here are some of the points in the discussion worth emphasizing:
“An R&D organization, a manufacturer, a retail company, a financial institution, and a critical utility would likely have different considerations regarding cyber risk. Certainly, some of the solutions and security technology can be the same, but it’s not a cookie-cutter approach. An informed risk assessment and management strategy must be part of the dialogue.”
“When we as board members are dealing with something that requires true core competency expertise—whether it’s mergers and acquisitions or banking and investments or cybersecurity—there are advisors and experts to turn to because it is their core competency. They can facilitate the discussion and provide background information, and enable the board to have a very robust, fulsome conversation about risks and actions.”
“The board needs to be comfortable having the conversation with management and the internal experts. They need to understand how cybersecurity risk affects business decisions and strategy. The board can then have a conversation with management saying, ‘OK, given this kind of risk, what are we willing to accept or do to try to mitigate it? Let’s have a conversation about how we do this currently in our corporation and why.’”
“Cloyd: What you just described doesn’t sound unique to cybersecurity. It’s like other business risks that you’re assessing, evaluating, and dealing with. It’s another part of the risk appetite discussion. Vautrinot: Correct. The only thing that’s different is the expertise you bring in, and the conversation you have may involve slightly different technology.”
“Cloyd: Cybersecurity is like other risks, so don’t be intimidated by it. Just put on your director hat and oversee this as you do other major risks. Vautrinot: And demand that the answers be provided in a way that you understand. Continue to ask questions until you understand, because sometimes the words or the jargon get in the way.”
“Cybersecurity is a business issue, it’s not just a technology issue.”
This was a fairly long conversation as these things go, but time and other limitations probably affected the discussion – and limited the ability to probe the topic in greater depth.
For example, there are some more points that I would emphasize to boards:
- It is impossible to eliminate cyber-related risk. The goal should be to understand what the risk is at any point and obtain assurance that management (a) knows what the risk is, (b) considers it as part of decision-making, including its potential effect on new initiatives, (c) has established at what point the risk becomes acceptable, because investing more has diminishing returns, (d) has reason to believe its ability to prevent/detect cyber breaches is at the right level, considering the risk and the cost of additional measures (and is taking corrective actions when it is not at the desired level), (e) has a process to respond promptly and appropriately in the event of a breach, (f) has tested that capability, and (g) has a process in place to communicate to the board the information the board needs, when it needs it, to provide effective oversight.
- Cyber risk should not be managed separately from enterprise or business risk. Cyber may be only one of several sources of risk to a new initiative, and the total risk to that initiative needs to be understood.
- Cyber-related risk should be assessed and evaluated based on its effect on the business, not based on some calculated value for the information asset.
- The board can never have, or maintain, the level of sophisticated knowledge required to assess cyber risk itself. It needs to ask questions and probe management’s responses until it has confidence that management has the ability to address cyber risk.
I welcome your comments and observations on the article and my points, above.
I have been spending a fair amount of time over the last few months, talking and listening to board members and advisors, including industry experts, about cyber risk.
A number of things are clear:
- Boards, not just those members who are on the audit and/or risk committee, are concerned about cyber and the risk it represents to their organization. They are concerned because they don’t understand it – or the actions they should take as directors. The level of concern is sufficient for them to attend conferences dedicated to the topic rather than relying on their organization.
- They are not comfortable with the information they are receiving on cyber risk from management – management’s assessment of the risk that it represents to their organization; the measures management has taken to (a) prevent intrusions, (b) detect intrusions that got past defenses, and (c) respond to such intrusions; how cyber risk is or may be affected by changes in the business, including new business initiatives; and, the current level and trend of intrusion attacks (some form of metrics).
- The risk should be assessed, evaluated, and addressed, not in isolation as a separate IT or cyber risk, but in terms of its potential effect on the business. Cyber risk should be integrated into enterprise risk management. Not only does it need to be assessed in terms of its potential effect on organizational business objectives, but it is only one of several risks that may affect each business objective.
- It is impossible to eliminate cyber risk. In fact, it is broadly recognized that it is impossible to have impenetrable defenses (although every reasonable effort should be made to harden them). That mandates increased attention to the timely detection of those who have breached the defenses, as well as the capability to respond at speed.
- Because it is impossible to eliminate risk, a decision has to be made (by the board and management, with advice and counsel from IT, information security, the risk officer, and internal audit) as to the level of risk that is acceptable. How much will the organization invest in cyber compared to the level of risk and the need for those same resources to be invested in other initiatives? The board members did not like to hear talk of accepting a level of risk, but that is an uncomfortable fact of life – they need to get over it and deal with it!
The National Association of Corporate Directors has published a handbook on cyber for directors (free after registration).
Here is a list of questions I believe directors should consider. They should be asked of executive management (not just the CIO or CISO) in a session dedicated to cyber.
- How do you identify and assess cyber-related risks?
- Is your assessment of cyber-related risks integrated with your enterprise-wide risk management program so you can include all the potential effects on the business (including business disruption, reputation risk, inability to bill customers, loss of IP, compliance risk, and so on) and not just “IT-risk”?
- How do you evaluate the risk to know whether it is too high?
- How do you decide what actions to take and how much resource to allocate?
- How often do you update your cyber risk assessment? Do you have sufficient insight into changes in cyber-related risks?
- How do you assess the potential new risks introduced by new technology? How do you determine when to take the risk because of the business value?
- Are you satisfied that you have an appropriate level of protection in place to minimize the risk of a successful attack?
- How will you know when your defenses have been breached? Will you know fast enough to minimize any loss or damage?
- Can you respond appropriately at speed?
- What procedures are in place to notify you, and then the board, in the event of a breach?
- Who has responsibility for cybersecurity and do they have the access they need to senior management?
- Is there an appropriate risk-aware culture within the organization, especially given the potential for any manager to introduce new risks by signing up for new cloud services?
I welcome your thoughts, perspectives, and comments.
At least, that is what one expert has to say in a provocative piece in SC magazine.
Here are some excerpts, but I recommend you read the short article.
The author, the CEO of a software vendor of cybersecurity products, starts with these points:
… user-driven technology has progressed so rapidly that it has significantly outpaced technology’s own ability to keep data protected from misuse and guarded from cyber vulnerabilities …
A lack of reliable security is the price we’ve paid for this eruption of amazing new cloud-based services and keeping vital data out of the wrong hands is an uphill battle.
He then spells out a truth that we should all acknowledge:
Anyone who tells you that your data is secure today is lying to you. The state-of-the-art that is cybersecurity today is broken. There must be a better way. But don’t lose hope, there is.
The article then takes a new direction (at least for me):
CIOs today need to adopt an entirely new security philosophy – one that hinges on the fact that your files and information will be everywhere …
If we can build a new security approach from the ground up based on the premise that data will escape, and are then able to secure everything no matter where it is, we end up debunking the concept of the “leak” entirely.
I do agree that the traditional, exclusive, focus on preventing an intrusion cannot continue. He says:
That’s why my biggest frustration coming out of the recent Sony and Anthem hacks is companies opting for reactive solutions to fortify firewalls and secure siloed tunnels of information. For example, there was a major uptick in company-wide email-deletion policies in the wake of the Sony attack. Now that’s just dumb. Those are band-aid strategies that fail to address the heart of the problem.
He continues to press his point:
Maintaining a level of security in a boundaryless world means security and policy follow exactly what you’re trying to protect in the first place — the data …
Usable security, where users can choose how they want to access, store and share data, can only be made possible by providing a seamless user experience, so security is integrated into the daily work of everyone. A great user experience is one major obstacle security vendors (and arguably, all enterprise services) have yet to conquer. If we can do it, we will move away from panic-inducing scare tactics used to encourage adoption, and instead empower users with a solution they actually like to secure data …
In order to be a security company, enterprises need to rethink a few things. First, users have to be in control of their data at any given point in time and should be able to revoke access when they want by utilizing familiar technology. They should have complete peace of mind that their data truly stays theirs. Second, in a cloud and mobile world there are no real controlled end-points anymore, unless we want to take a step back into the stone ages. And third, the firewall model is broken and trying to extend the perimeter out simply doesn’t work anymore. It’s about protecting the information, wherever it is, and not about locking everything down where it’s hard to access, use and share for your employees and partners.
So he is presenting a new cybersecurity world where the security follows the data, using encryption and other methods.
I think that is something that every organization should consider – especially encryption.
But is it enough?
For a start, how secure is encryption in the face of the sophisticated attacker? Maybe it is reasonably secure now, but we cannot be sure it will remain secure. Consider how encryption was broken by researchers, with the story told in this 2013 article.
I think you need at least three levels of protection: prevention, encryption, and detection, followed by response.
We can no longer assume that the bad guys cannot get in, and I am reluctant to assume that my encryption will not be broken if they have time.
So, we need the ability to detect any intruders promptly – so we can shut them down and limit any damage.
Too few have sufficient detection in place. Just look at how long hackers were inside JP Morgan, and then how long it took the company to expel them!
I welcome your views.