Anthony Fitzsimmons recently sent me a review copy of his new book, Rethinking Reputation Risk. He says it “provides a new perspective on the true nature of reputational risk and damage to organizations and traces its root causes in individual and collective human behavior.”
I am not sure that there is much that is new in the book, but if you want to understand how human behavior can be the root cause (in fact, it is very often the root cause) of problems for any organization, you may find it of interest.
The authors (Fitzsimmons and Professor Derek Atkins) describe several case studies where human failures led to serious issues.
Humans as a root cause is also a topic I cover in World-Class Risk Management.
As I was reading the book, I realized that I have a problem with organizations giving separate attention to reputation risk and its management. It is simply one element, which should not be overlooked, of how any organization manages risk – or, I should say, of how it considers what might happen in its decision-making activities.
The same thing applies to cyber risk and even compliance risk.
They are all dominoes.
A case study:
- There is a possibility that the manager in HR who recruits IT specialists leaves.
- That HR position is open for three months before a replacement is hired.
- An open position for an IT specialist who is responsible for patching a number of systems is not filled for three months.
- A system vulnerability remains open because there is nobody to apply a vendor’s patch.
- A hacker obtains entry. CYBER RISK
- The hacker steals personal information on thousands of customers.
- The information is posted on the Internet.
- Customers are alarmed. REPUTATION RISK
- Sales drop.
- The company fails to meet analyst expectations for earnings.
- The price of the company’s shares drops 20%.
- The CEO decides to slash budgets and headcounts by 10% across the board.
- Individuals in Quality are laid off.
- Materials are not thoroughly inspected.
- Defective materials are used in production.
- Scrap rates rise, but not all defective products are detected and some are shipped to customers.
- Customers complain, return products and demand compensation. REPUTATION RISK
- Sales drop, earnings targets are missed again, and…
- At the same time as the Quality staff is downsized, the capital expenditure budget is cut.
- The Information Security Officer’s request for analytics to detect hackers who breach the company’s defenses is turned down.
- Multiple breaches are not detected. CYBER RISK
- Hackers steal the company’s trade secrets.
- Competitors acquire the trade secrets and are able to erode any edge the company may have.
- The company’s REPUTATION for a technology edge disappears. REPUTATION RISK
- Sales drop. Earnings targets are not achieved, and…
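The compounding nature of a chain like this can be sketched with a short, purely illustrative calculation: even when each individual domino is unlikely to fall, an organization runs many such chains at once, so the chance that at least one completes is surprisingly high. Every probability below is invented for illustration.

```python
# Hypothetical illustration of a risk "domino chain": the probability
# that a chain of dependent events completes is the product of each
# link's conditional probability. All numbers are invented.

chain = {
    "HR recruiter leaves": 0.10,
    "IT patching role unfilled for three months": 0.50,
    "critical vendor patch not applied": 0.80,
    "hacker exploits the open vulnerability": 0.30,
    "customer data stolen and posted": 0.60,
}

p_chain = 1.0
for event, p in chain.items():
    p_chain *= p

print(f"Probability this particular chain completes: {p_chain:.4f}")

# One chain looks unlikely, but an organization runs many in parallel.
# With 100 independent chains of similar likelihood:
n_chains = 100
p_at_least_one = 1 - (1 - p_chain) ** n_chains
print(f"Probability at least one of {n_chains} chains completes: {p_at_least_one:.2f}")
```

The point of the sketch is not the numbers, which are made up, but the shape of the result: a 0.7% chain, repeated across a hundred low-level decisions, becomes roughly a coin flip.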
It is true that every domino, and each source of risk to its stability (what might happen), needs to be addressed.
But, focusing on one or two dominoes in the chain is unlikely to prevent serious issues.
One decision at a low level in the company can have a domino effect.
I welcome your comments.
The National Association of Corporate Directors (NACD) has published a discussion between Mary Ann Cloyd, leader of PwC’s Center for Board Governance, and Suzanne Vautrinot, a cyber expert who formerly led the US Air Force’s cyber operations.
It’s an interesting read on a number of levels; I recommend it for board members, executives, information security professionals and auditors.
Here are some of the points in the discussion worth emphasizing:
“An R&D organization, a manufacturer, a retail company, a financial institution, and a critical utility would likely have different considerations regarding cyber risk. Certainly, some of the solutions and security technology can be the same, but it’s not a cookie-cutter approach. An informed risk assessment and management strategy must be part of the dialogue.”
“When we as board members are dealing with something that requires true core competency expertise—whether it’s mergers and acquisitions or banking and investments or cybersecurity—there are advisors and experts to turn to because it is their core competency. They can facilitate the discussion and provide background information, and enable the board to have a very robust, fulsome conversation about risks and actions.”
“The board needs to be comfortable having the conversation with management and the internal experts. They need to understand how cybersecurity risk affects business decisions and strategy. The board can then have a conversation with management saying, ‘OK, given this kind of risk, what are we willing to accept or do to try to mitigate it? Let’s have a conversation about how we do this currently in our corporation and why.’”
“Cloyd: What you just described doesn’t sound unique to cybersecurity. It’s like other business risks that you’re assessing, evaluating, and dealing with. It’s another part of the risk appetite discussion. Vautrinot: Correct. The only thing that’s different is the expertise you bring in, and the conversation you have may involve slightly different technology.”
“Cloyd: Cybersecurity is like other risks, so don’t be intimidated by it. Just put on your director hat and oversee this as you do other major risks. Vautrinot: And demand that the answers be provided in a way that you understand. Continue to ask questions until you understand, because sometimes the words or the jargon get in the way.”
“Cybersecurity is a business issue, it’s not just a technology issue.”
This was a fairly long conversation as these things go, but time and other constraints probably limited the ability to probe the topic in greater depth.
For example, there are some more points that I would emphasize to boards:
- It is impossible to eliminate cyber-related risk. The goal should be to understand what the risk is at any point and obtain assurance that management: (a) knows what the risk is; (b) considers it as part of decision-making, including its potential effect on new initiatives; (c) has established at what point the risk becomes acceptable, because investing more has diminishing returns; (d) has reason to believe its ability to prevent/detect cyber breaches is at the right level, considering the risk and the cost of additional measures (and is taking corrective actions when it is not at the desired level); (e) has a process to respond promptly and appropriately in the event of a breach; (f) has tested that capability; and (g) has a process in place to communicate to the board the information the board needs, when it needs it, to provide effective oversight.
- Cyber risk should not be managed separately from enterprise or business risk. Cyber may be only one of several sources of risk to a new initiative, and the total risk to that initiative needs to be understood.
- Cyber-related risk should be assessed and evaluated based on its effect on the business, not based on some calculated value for the information asset.
- The board can never have, or maintain, the level of sophisticated knowledge required to assess cyber risk itself. It needs to ask questions and probe management’s responses until it has confidence that management has the ability to address cyber risk.
I welcome your comments and observations on the article and my points, above.
At least, that is what one expert has to say in a provocative piece in SC magazine.
Here are some excerpts, but I recommend you read the short article.
The author, the CEO of a software vendor of cybersecurity products, starts with these points:
…user-driven technology has progressed so rapidly that it has significantly outpaced technology’s own ability to keep data protected from misuse and guarded from cyber vulnerabilities…
A lack of reliable security is the price we’ve paid for this eruption of amazing new cloud-based services and keeping vital data out of the wrong hands is an uphill battle.
He then spells out a truth that we should all acknowledge:
Anyone who tells you that your data is secure today is lying to you. The state-of-the-art that is cybersecurity today is broken. There must be a better way. But don’t lose hope, there is.
The article then takes a new direction (at least for me):
CIOs today need to adopt an entirely new security philosophy – one that hinges on the fact that your files and information will be everywhere…
If we can build a new security approach from the ground up based on the premise that data will escape, and are then able to secure everything no matter where it is, we end up debunking the concept of the “leak” entirely.
I do agree that the traditional, exclusive, focus on preventing an intrusion cannot continue. He says:
That’s why my biggest frustration coming out of the recent Sony and Anthem hacks is companies opting for reactive solutions to fortify firewalls and secure siloed tunnels of information. For example, there was a major uptick in company-wide email-deletion policies in the wake of the Sony attack. Now that’s just dumb. Those are band-aid strategies that fail to address the heart of the problem.
He continues to press his point:
Maintaining a level of security in a boundaryless world means security and policy follow exactly what you’re trying to protect in the first place — the data…
Usable security, where users can choose how they want to access, store and share data, can only be made possible by providing a seamless user experience, so security is integrated into the daily work of everyone. A great user experience is one major obstacle security vendors (and arguably, all enterprise services) have yet to conquer. If we can do it, we will move away from panic-inducing scare tactics used to encourage adoption, and instead empower users with a solution they actually like to secure data…
In order to be a security company, enterprises need to rethink a few things. First, users have to be in control of their data at any given point in time and should be able to revoke access when they want by utilizing familiar technology. They should have complete peace of mind that their data truly stays theirs. Second, in a cloud and mobile world there are no real controlled end-points anymore, unless we want to take a step back into the stone ages. And third, the firewall model is broken and trying to extend the perimeter out simply doesn’t work anymore. It’s about protecting the information, wherever it is, and not about locking everything down where it’s hard to access, use and share for your employees and partners.
So he is presenting a new cybersecurity world where the security follows the data, using encryption and other methods.
I think that is something that every organization should consider – especially encryption.
But is it enough?
For a start, how secure is encryption in the face of a sophisticated attacker? Maybe it is reasonably secure now, but we cannot be sure it will remain so. Consider how encryption was broken by researchers, a story told in this 2013 article.
I think you need at least three levels of protection: prevention, encryption, and detection, followed by response.
We can no longer assume that the bad guys cannot get in, and I am reluctant to assume that my encryption will not be broken if they have time.
So, we need the ability to detect any intruders promptly – so we can shut them down and limit any damage.
Too few have sufficient detection in place. Just look at how long hackers were inside JP Morgan, and then how long it took the company to expel them!
I welcome your views.
Last week, I participated in an NACD Master Class. I was a panelist in discussions of technology and cyber risk with 40-50 board members very actively involved – because this is a hot topic for boards.
I developed and shared a list of 12 questions that directors can use when they ask management about their organization’s understanding and management of cyber-related business risk.
The set of questions can also be used by executive management, risk professionals, or internal auditors, or even by information security professionals interested in assessing whether they have all the necessary bases covered.
This is my list.
- How do you identify and assess cyber-related risks?
- Is your assessment of cyber-related risks integrated with your enterprise-wide risk management program so you can include all the potential effects on the business (including business disruption, reputation risk, inability to bill customers, loss of IP, compliance risk, and so on) and not just “IT-risk”?
- How do you evaluate the risk to know whether it is too high?
- How do you decide what actions to take and how much resource to allocate?
- How often do you update your cyber risk assessment? Do you have sufficient insight into changes in cyber-related risks?
- How do you assess the potential new risks introduced by new technology? How do you determine when to take the risk because of the business value?
- Are you satisfied that you have an appropriate level of protection in place to minimize the risk of a successful attack?
- How will you know when your defenses have been breached? Will you know fast enough to minimize any loss or damage?
- Can you respond appropriately at speed?
- What procedures are in place to notify you, and then the board, in the event of a breach?
- Who has responsibility for cybersecurity and do they have the access they need to senior management?
- Is there an appropriate risk-aware culture within the organization, especially given the potential for any manager to introduce new risks by signing up for new cloud services?
I am interested in your comments on the list, how it can be improved, and how useful it is – and to whom.
The world continues to buzz about cyber security (or, perhaps we should say, insecurity). Now we have the Chinese government apparently admitting that they have a cyberwarfare capability: not just one unit, but three. Other nations, including the United States, Japan, and some European nations, are talking about their ineffective defenses and the need to develop an offensive capability.
What can the targets, not only any public or private company, but each of us as an individual target (yes, our personal devices are constantly under attack), do about this?
The first step is to get our collective heads out of the sand and understand that we are all, collectively and individually, at risk. The level of successful attacks is enormous (a billion records with personal information were hacked in 2014 according to IBM, as reported here). According to a survey discussed in Fortune, 71% of companies admit they were hacked last year and the majority expect to be hacked this year. However, nearly a quarter, according to Fortune, have not only kept their heads in the sand but do so with unbelievable confidence: they think a successful cyber attack is “not likely” in the next 12 months. The trouble is that very often successful attacks are not detected! It took a long time before JPMorgan Chase found out they had been hacked, and even longer before they knew the extent of the damage.
Organizations need to be ready to respond effectively and fast!
The JPMorgan Chase article reports that “The people with knowledge of the investigation said it would take months for the bank to swap out its programs and applications and renegotiate licensing deals with its technology suppliers, possibly giving the hackers time to mine the bank’s systems for unpatched, or undiscovered, vulnerabilities that would allow them re-entry into JPMorgan’s systems.”
All is for naught if successful intrusions are not detected and responses initiated on a timely basis. In the Target case, reports say that the security monitoring service detected suspicious activity but the company did not respond. According to ComputerWeekly.com, many companies make the mistake of “Over-focusing on prevention and not paying enough attention to detection and response. Organisations need to accept that breaches are inevitable and develop and test response plans, differentiating between different types of attacks to highlight the important ones.”
Another insightful article discusses the critical need for pre-planned response capabilities. IT cannot do it all themselves; business executives need to not only be involved but actively work to ensure their operations can survive a successful intrusion.
What else should we do?
We have to stop using passwords like ‘password’, the name of our pet, or our birthday. Password managers are excellent tools (see this article on the top-rated products) and merit serious consideration. I use one. (BTW, I don’t plan to replace it with Yahoo’s latest idea of one-time text messages, though I do like the fingerprint authentication on my iPhone.)
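The advice above amounts to a screening rule, and a minimal sketch makes it concrete. The blocklist, length cutoff, and date pattern below are invented examples, not a complete or authoritative policy.

```python
import re

# Hypothetical screening in the spirit of the advice above: reject
# passwords that are common words, too short, or birthday-style digits.
COMMON = {"password", "123456", "qwerty", "letmein", "fluffy"}

def is_weak(password: str) -> bool:
    p = password.lower()
    if p in COMMON:                   # dictionary/common-password check
        return True
    if len(p) < 12:                   # short passwords fall to brute force
        return True
    if re.fullmatch(r"\d{6,8}", p):   # birthday-style all-digit strings
        return True
    return False

print(is_weak("password"))                       # True
print(is_weak("19820314"))                       # True
print(is_weak("correct horse battery staple"))   # False
```

A long passphrase passes where a pet’s name or a date does not, which is exactly the behavior a password manager gives you for free by generating random credentials.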
A risk-based approach to cyber security is the right path, in my view. But that means organizations have to continuously monitor new and emerging risks, as well as new observations about existing risks. An example is a new article on insecure mobile apps – both from in-house developers and from external sources.
Organizations need to allocate resources to cyber and information security commensurate with the risks, and individuals have to take the time to update the software on their personal devices. Internal audit departments should make sure they have the talent to make a difference, providing objective evaluations and business-practical suggestions for improvement.
Companies and individuals, both, need to make sure they apply all the security patches released by software vendors. They address the vulnerabilities most often targeted and when there is a breach, very often it’s because the patches have not been applied.
As individuals, we should use a credit monitoring service (I do), set up alerts for suspicious activity on our bank accounts, and apply all the anti-virus and spam protection that is reasonable.
Finally, as individuals and as organizations, we need to make sure we and our people are alert to the hackers’ attempts through malware, social engineering, and so on. It is distressing that so many successful intrusions start with somebody clicking where they should not be clicking.
Here are a couple of articles worth reading and a publication by COSO (written by Deloitte) on how their Internal Control Framework can be used to address cyber risks.
As always, I welcome your comments.
According to McKinsey, “executives’ current perceptions of IT performance are decidedly negative”. An interesting piece, Why CIOs should be business-strategy partners, informs us that the majority of organizations are not benefitting from an effective CIO, one who not only maintains the infrastructure necessary to run the business but also works with senior management to drive new business strategies.
For example, the survey behind the report found that:
- “…few executives say their IT leaders are closely involved in helping shape the strategic agenda, and confidence in IT’s ability to support growth and other business goals is waning”.
- “IT and business executives still differ in their understanding of the function’s priorities and budgets. Nearly half of technology respondents see cost cutting as a top priority—in stark contrast to the business side, where respondents say that supporting managerial decision making is one of IT’s top priorities.”
- “In the 2012 survey on business and technology, 57 percent of executives said IT facilitated their companies’ ability to enter new markets. Now only 35 percent say IT facilitates market entry, and 41 percent report no effect.”
With respect to the effectiveness of traditional IT functional processes, few rated performance as either completely or very effective:
- Managing IT infrastructure – 43%
- Governing IT performance – 26%
- Driving technology enablement or innovation in business processes and operations – 24%
- Actively managing IT organization’s health and culture (not only its performance) – 22%
- Introducing new technologies faster and/or more effectively than competitors – 18%
There was a marked difference when the CIO is active. “Where respondents say their CIOs are very or extremely involved in shaping enterprise-wide strategy, they report much higher IT effectiveness than their peers whose CIOs are less involved.” McKinsey goes on to say:
“We know from experience that CIOs with a seat at the strategy table have a better understanding of their businesses’ near- and longer-term technology needs. They are also more effective at driving partnerships and shared accountability with the business side. Unfortunately, CIOs don’t play this role of influential business executive at many organizations. The results show that just over half of all respondents say their CIOs are on their organizations’ most senior teams, and only one-third say their CIOs are very or extremely involved in shaping the overall business strategy and agenda.”
The report closes with some suggestions. I like the first one:
“The survey results suggest that companies would do well to empower and require their CIOs and other technology leaders to play a more meaningful role in shaping business strategy. This means shifting away from a CIO with a supplier mind-set who provides a cost-effective utility and toward IT leadership that is integrated into discussions of overall business strategy and contributes positively to innovating and building the business. Some ways to encourage such changes include modifying reporting lines (so the CIO reports to the CEO, for example, rather than to leaders of other support functions), establishing clear partnerships between the IT and corporate-strategy functions, and holding both business and IT leaders accountable for big business bets.”
Is your CIO effective, both in supplying the infrastructure to run the business and in working in partnership with business leaders to enable strategic progress?
Is this a risk that is understood and being addressed?
I welcome your comments.
I congratulate Larry Carter on his new e-book, published by Compliance Week, on the topic “Segregation of Duties and Sensitive Access: Leveraging System-Enforced Controls”.
This is a timely discussion and explanation of a difficult topic and it includes useful information on the differences between manual and automated controls, preventive and detective controls, and more.
I believe it will be a useful read for internal auditors and application developers who are relatively new to the area, and a reminder to more experienced individuals of some of the key points to consider when designing automated controls to prevent individuals from having more access than they need – which can lead not only to fraud, but disruption, errors, and accidents.
For example, when I was leading the internal audit and SOX programs at Maxtor Corporation, the external auditor asked for access so he could examine some of the SAP configurations as part of his control testing. IT inadvertently provided him not only with the access he requested, read-access to the tables involved, but the ability to change the accounting period. Without realizing what he was doing, the auditor closed the accounting period while our financial team was still posting quarter-end journal entries!
Larry makes the excellent point that we need to consider not only inappropriate combinations of access privileges (i.e., Segregation of Duties, or “SOD”) but also inappropriate access to a single capability. He calls the latter Sensitive Access, although the more common term is Restricted Access (“RA”).
As he points out, it is good business practice to limit everybody to the access they need to perform their job. Although it may be easier to establish the same access ‘profile’ (a set of access privileges) for several people, care has to be taken to ensure that nobody has more access than they need. If they do, that creates a risk that they may deliberately or inadvertently use that access and create a problem.
Some years ago, my internal auditors found that an individual in Procurement had the ability to create a vendor in the system and approve payment, as well as approve a purchase order. This creates a risk of fraud. The IT manager said there was a control: “We don’t tell people what access they have”. As you might imagine, we didn’t accept that argument.
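The Procurement finding above is the classic automated SOD check: compare each user’s combined privileges against a list of conflicting pairs. A minimal sketch follows; the privilege names, user IDs, and conflict pairs are hypothetical examples, not a complete rule set.

```python
# Minimal SOD check: flag users whose combined access includes a
# conflicting pair of privileges. All names here are invented examples.

CONFLICTS = [
    ("create_vendor", "approve_payment"),
    ("create_vendor", "approve_purchase_order"),
    ("post_journal_entry", "approve_journal_entry"),
]

users = {
    "asmith": {"create_vendor", "approve_payment", "approve_purchase_order"},
    "bjones": {"post_journal_entry"},
}

def sod_violations(users, conflicts):
    """Return (user, pair) for every conflicting pair a user holds."""
    found = []
    for user, privs in users.items():
        for a, b in conflicts:
            if a in privs and b in privs:
                found.append((user, (a, b)))
    return found

for user, pair in sod_violations(users, CONFLICTS):
    print(f"SOD violation: {user} holds both {pair[0]} and {pair[1]}")
```

In practice the conflict matrix should come out of the fraud risk analysis Larry recommends, not out of a generic template, so that each pair on the list maps to a risk someone has actually assessed.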
This brings me to the critical topic of risk.
Larry makes the excellent and key point that you need to design your controls to address risk. You don’t design and operate controls for any other reason. With SOD, the primary reason for limiting inappropriate combinations of access is to prevent fraud. As he says, it is important to perform a fraud risk analysis and use that to identify the SOD controls you need.
When it comes to controls relating to sensitive or restricted access, the controls you need should also be determined by risk. For example, you will probably want to ensure that only a limited number of people have the ability to approve a journal entry, not only because of the risk of fraud but because you want an appropriate review and approval process to occur before they are posted. Similarly, you will want expenditures over a certain value to be approved by a more senior manager, and that is enforced through a restricted access control.
While Larry makes it clear that risk should drive the determination of what controls you need, I wish that had been how he designed his process for identifying necessary SOD and RA controls. Instead he identifies the total population of potential controls and only then considers (although it is less clear than it should be) whether the risk justifies having a control.
In fact, sometimes there are other controls (other than automated SOD or RA controls) that mitigate or even eliminate the risk. When the design of internal controls is based on a risk assessment that considers all the available controls, you are more likely to be able to design a more efficient combination of controls to address important risks. For example, let’s say you have a risk that individuals with inappropriate access to the spare parts inventory might use that to steal materials critical to manufacturing. At first blush, a control to ensure only authorized people have access might seem mandatory – and it would certainly be good practice. But, if the manager of the warehouse had an inventory taken of that area of the warehouse twice each day, the personnel working there could be relied upon to challenge anybody entering the space, and cameras detected any access, the value of an automated RA control is significantly diminished.
A related issue that Larry unfortunately doesn’t mention is the need to limit the access capabilities of the IT staff – not only to functions within applications, but to functions within IT business processes. For example, you need to limit who can change application code or bypass all your controls using “superuser” capabilities.
Another area that is often overlooked is the need to limit ‘read-only’ access to confidential information. Limiting access privileges that would allow unauthorized individuals to view customers’ or employees’ personal information, or confidential corporate information, may be required to comply with laws and regulations, as well as to address the risk of theft or misuse of that information.
Overall, this is an e-book with a lot of useful information and it is an easy read.
Norman Marks is a semi-retired internal audit executive, author of World-Class Internal Audit and How Good is your GRC? (both are available on Amazon), and a frequent blogger on the topics of governance, risk management, internal audit, and the effective use of technology in running the business. He can be reached at firstname.lastname@example.org.