
Archive for the ‘security’ Category

Cyber and reputation risk are dominoes

February 18, 2017 12 comments

Anthony Fitzsimmons recently sent me a review copy of his new book, Rethinking Reputation Risk. He says that it “Provides a new perspective on the true nature of reputational risk and damage to organizations and traces its root causes in individual and collective human behavior”.

I am not sure that there is much that is new in the book, but if you want to understand how human behavior can be the root cause (in fact, it is very often the root cause) of problems for any organization, you may find it of interest.

The authors (Fitzsimmons and Professor Derek Atkins) describe several case studies where human failures led to serious issues.

Humans as a root cause is also a topic I cover in World-Class Risk Management.

As I was reading the book, I realized that I have a problem with organizations giving separate attention to reputation risk and its management. It is simply one element, which should not be overlooked, in how any organization manages risk – or, I should say, in how it considers what might happen in its decision-making activities.

The same thing applies to cyber risk and even compliance risk.

They are all dominoes.


A case study:

  • The manager in HR who recruits IT specialists leaves.
  • That HR position remains open for three months before a replacement is hired.
  • As a result, an open position for an IT specialist responsible for patching a number of systems goes unfilled for three months.
  • A system vulnerability remains open because there is nobody to apply a vendor’s patch.
  • A hacker obtains entry. CYBER RISK
  • The hacker steals personal information on thousands of customers.
  • The information is posted on the Internet.
  • Customers are alarmed. REPUTATION RISK
  • Sales drop.
  • The company fails to meet analyst expectations for earnings.
  • The price of the company’s shares drops 20%.
  • The CEO decides to slash budgets and headcounts by 10% across the board.
  • Individuals in Quality are laid off.
  • Materials are not thoroughly inspected.
  • Defective materials are used in production.
  • Scrap rates rise, but not all defective products are detected and some are shipped to customers.
  • Customers complain, return products and demand compensation. REPUTATION RISK
  • Sales drop, earnings targets are missed again, and …….
  • At the same time as the Quality staff is downsized, the capital expenditure budget is cut.
  • The Information Security Officer’s request for analytics to detect hackers who breach the company’s defenses is turned down.
  • Multiple breaches are not detected. CYBER RISK
  • Hackers steal the company’s trade secrets.
  • Competitors acquire the trade secrets and are able to erode any edge the company may have.
  • The company’s REPUTATION for a technology edge disappears. REPUTATION RISK
  • Sales drop. Earnings targets are not achieved, and……..

It is true that every domino and the source of risk to its stability (what might happen) needs to be addressed.

But, focusing on one or two dominoes in the chain is unlikely to prevent serious issues.

One decision at a low level in the company can have a domino effect.

Consider this slide deck by ERM Strategies, Inc. about the Deepwater Horizon disaster.

I welcome your comments.


Leading an effective information security capability

September 4, 2016 5 comments

With all the press and concern about cyber at all levels of the organization, among regulators, and among the public, it is a worthwhile exercise to consider what this should mean for the Chief Information Security Officer (CISO) or equivalent.

Some point to the need to elevate the position of CISO to report directly to a senior executive, even to the CEO.

Elevating the position, in my opinion, will not necessarily do more than elevate the voice of cyber in the executive suite. It won’t necessarily drive the resources necessary for an effective cyber program, nor will it necessarily change the minds and attitudes of people from the executives on down.

In fact, elevating the position carries the risk that the CISO will get caught up in organizational politics instead of focusing on cyber risk itself.

Deloitte tackles this and other opportunities in a new piece, The new CISO: Leading the strategic security organization.

Of course, they are using words intended to induce people to read: ‘new’ and ‘strategic’. I think we can easily disregard them and focus on the problem at hand.

First, let’s acknowledge that the role of the CISO (or other individual responsible for information security) should never be considered as simply a compliance function.

Deloitte talks about “the imperative to move beyond the role of compliance monitors and enforcers to integrate better with the business, manage information risks more strategically, and work toward a culture of shared cyber risk ownership across the enterprise”.

But even when I had information security reporting to me 30 years ago, it was about protecting the organization and not just about compliance.

It is foolish to believe that executives or the board will invest if the only return is compliance. Yes, it is necessary but a compliance function will never receive the attention of a function that contributes to the success of the organization. Executives will commit resources to the level they think prudent, but not necessarily what it will take to enable success – because they don’t understand how cyber relates to their personal and corporate success.

If they don’t know that it matters to success, it won’t matter to them.

The successful CISO helps everybody appreciate how cyber contributes to and enables success.

Buried in the Deloitte material are two sections of great importance:

  • While the CISO may think in terms of reducing risks, business leaders take risks every day, whether introducing an existing product to a new market, taking on an external partner to pursue a new line of business, or engaging in a merger or acquisition. In fact, the ability to accept more risk can increase business opportunities, while ruling it out may lead to their loss. From this perspective, the role of the CISO becomes one of helping leadership and employees be aware of and understand cyber risks, and equipping them to make decisions based on that understanding. In some cases, the organization’s innovation agenda may necessitate a more lenient view of security controls.
  • …… CISOs [need] to pivot the conversation—both in terms of their mind-set as well as language—from security and compliance to focus more on risk strategy and management. Going beyond the negative aspect of how much damage or loss can result from risk, CISOs need to understand risk in terms of its potential to positively affect competitive advantage, business growth, and revenue expansion.

These are, in my opinion, the keys to an effective cyber program.

If the CISO is going to influence not only the resources he or she is given but the attitude and actions of the organization, it is necessary not only to understand how the business is run, but to talk to executives in the language of the business.

Talk about how the achievement of objectives may be affected by a cyber breach. Talking about specific objectives is the best way to influence hearts and minds.

Help executives make intelligent decisions when it is appropriate to accept a cyber risk to reap a business reward.

Talk business risk, not technobabble.

Do you agree?

Are there other points of value in the Deloitte paper?

Cyber risk and the boardroom

June 5, 2015 7 comments

The National Association of Corporate Directors (NACD) has published a discussion between the leader of PwC’s Center for Board Governance, Mary Ann Cloyd, and an expert on cyber who formerly served as a leader of the US Air Force’s cyber operations, Suzanne Vautrinot.

It’s an interesting read on a number of levels; I recommend it for board members, executives, information security professionals and auditors.

Here are some of the points in the discussion worth emphasizing:

“An R&D organization, a manufacturer, a retail company, a financial institution, and a critical utility would likely have different considerations regarding cyber risk. Certainly, some of the solutions and security technology can be the same, but it’s not a cookie-cutter approach. An informed risk assessment and management strategy must be part of the dialogue.”

“When we as board members are dealing with something that requires true core competency expertise—whether it’s mergers and acquisitions or banking and investments or cybersecurity—there are advisors and experts to turn to because it is their core competency. They can facilitate the discussion and provide background information, and enable the board to have a very robust, fulsome conversation about risks and actions.”

“The board needs to be comfortable having the conversation with management and the internal experts. They need to understand how cybersecurity risk affects business decisions and strategy. The board can then have a conversation with management saying, ‘OK, given this kind of risk, what are we willing to accept or do to try to mitigate it? Let’s have a conversation about how we do this currently in our corporation and why.’”

“Cloyd: What you just described doesn’t sound unique to cybersecurity. It’s like other business risks that you’re assessing, evaluating, and dealing with. It’s another part of the risk appetite discussion. Vautrinot: Correct. The only thing that’s different is the expertise you bring in, and the conversation you have may involve slightly different technology.”

“Cloyd: Cybersecurity is like other risks, so don’t be intimidated by it. Just put on your director hat and oversee this as you do other major risks. Vautrinot: And demand that the answers be provided in a way that you understand. Continue to ask questions until you understand, because sometimes the words or the jargon get in the way.”

“Cybersecurity is a business issue, it’s not just a technology issue.”

This was a fairly long conversation as these things go, but time and other limitations probably affected the discussion – and limited the ability to probe the topic in greater depth.

For example, there are some more points that I would emphasize to boards:

  • It is impossible to eliminate cyber-related risk. The goal should be to understand what the risk is at any point and obtain assurance that management (a) knows what the risk is, (b) considers it as part of decision-making, including its potential effect on new initiatives, (c) has established at what point the risk becomes acceptable, because investing more has diminishing returns, (d) has reason to believe its ability to prevent/detect cyber breaches is at the right level, considering the risk and the cost of additional measures (and is taking corrective actions when it is not at the desired level), (e) has a process to respond promptly and appropriately in the event of a breach, (f) has tested that capability, and (g) has a process in place to communicate to the board the information the board needs, when it needs it, to provide effective oversight.
  • Cyber risk should not be managed separately from enterprise or business risk. Cyber may be only one of several sources of risk to a new initiative, and the total risk to that initiative needs to be understood.
  • Cyber-related risk should be assessed and evaluated based on its effect on the business, not based on some calculated value for the information asset.
  • The board can never have, or maintain, the level of sophisticated knowledge required to assess cyber risk itself. It needs to ask questions and probe management’s responses until it has confidence that management has the ability to address cyber risk.

I welcome your comments and observations on the article and my points, above.

How much cyber risk should you take?

May 24, 2015 6 comments

I have been spending a fair amount of time over the last few months talking and listening to board members and advisors, including industry experts, about cyber risk.

A number of things are clear:

  • Boards, not just those members who serve on the audit and/or risk committee, are concerned about cyber and the risk it represents to their organization. They are concerned because they don’t understand it – nor the actions they should take as directors. The level of concern is sufficient for them to attend conferences dedicated to the topic rather than relying on their organization.
  • They are not comfortable with the information they are receiving on cyber risk from management – management’s assessment of the risk that it represents to their organization; the measures management has taken to (a) prevent intrusions, (b) detect intrusions that got past defenses, and (c) respond to such intrusions; how cyber risk is or may be affected by changes in the business, including new business initiatives; and, the current level and trend of intrusion attacks (some form of metrics).
  • The risk should be assessed, evaluated, and addressed, not in isolation as a separate IT or cyber risk, but in terms of its potential effect on the business. Cyber risk should be integrated into enterprise risk management. Not only does it need to be assessed in terms of its potential effect on organizational business objectives, but it is only one of several risks that may affect each business objective.
  • It is impossible to eliminate cyber risk. In fact, it is broadly recognized that it is impossible to have impenetrable defenses (although every reasonable effort should be made to harden them). That mandates increased attention to the timely detection of those who have breached the defenses, as well as the capability to respond at speed.
  • Because it is impossible to eliminate risk, a decision has to be made (by the board and management, with advice and counsel from IT, information security, the risk officer, and internal audit) as to the level of risk that is acceptable. How much will the organization invest in cyber compared to the level of risk and the need for those same resources to be invested in other initiatives? The board members did not like to hear talk of accepting a level of risk, but that is an uncomfortable fact of life – they need to get over it and deal with it!

The National Association of Corporate Directors has published a handbook on cyber for directors (free after registration).

Here is a list of questions I believe directors should consider. They should be asked of executive management (not just the CIO or CISO) in a session dedicated to cyber.

  1. How do you identify and assess cyber-related risks?
  2. Is your assessment of cyber-related risks integrated with your enterprise-wide risk management program so you can include all the potential effects on the business (including business disruption, reputation risk, inability to bill customers, loss of IP, compliance risk, and so on) and not just “IT-risk”?
  3. How do you evaluate the risk to know whether it is too high?
  4. How do you decide what actions to take and how much resource to allocate?
  5. How often do you update your cyber risk assessment? Do you have sufficient insight into changes in cyber-related risks?
  6. How do you assess the potential new risks introduced by new technology? How do you determine when to take the risk because of the business value?
  7. Are you satisfied that you have an appropriate level of protection in place to minimize the risk of a successful attack?
  8. How will you know when your defenses have been breached? Will you know fast enough to minimize any loss or damage?
  9. Can you respond appropriately at speed?
  10. What procedures are in place to notify you, and then the board, in the event of a breach?
  11. Who has responsibility for cybersecurity and do they have the access they need to senior management?
  12. Is there an appropriate risk-aware culture within the organization, especially given the potential for any manager to introduce new risks by signing up for new cloud services?

I welcome your thoughts, perspectives, and comments.

Cybersecurity is broken

April 11, 2015 6 comments

At least, that is what one expert has to say in a provocative piece in SC magazine.

Here are some excerpts, but I recommend you read the short article.

The author, the CEO of a software vendor of cybersecurity products, starts with these points:

…user-driven technology has progressed so rapidly that it has significantly outpaced technology’s own ability to keep data protected from misuse and guarded from cyber vulnerabilities…….

A lack of reliable security is the price we’ve paid for this eruption of amazing new cloud-based services and keeping vital data out of the wrong hands is an uphill battle.

He then spells out a truth that we should all acknowledge:

Anyone who tells you that your data is secure today is lying to you. The state-of-the-art that is cybersecurity today is broken. There must be a better way. But don’t lose hope, there is.

The article then takes a new direction (at least for me):

CIOs today need to adopt an entirely new security philosophy – one that hinges on the fact that your files and information will be everywhere……..

If we can build a new security approach from the ground up based on the premise that data will escape, and are then able to secure everything no matter where it is, we end up debunking the concept of the “leak” entirely.

I do agree that the traditional, exclusive, focus on preventing an intrusion cannot continue. He says:

That’s why my biggest frustration coming out of the recent Sony and Anthem hacks is companies opting for reactive solutions to fortify firewalls and secure siloed tunnels of information. For example, there was a major uptick in company-wide email-deletion policies in the wake of the Sony attack. Now that’s just dumb. Those are band-aid strategies that fail to address the heart of the problem.

He continues to press his point:

Maintaining a level of security in a boundaryless world means security and policy follow exactly what you’re trying to protect in the first place — the data……

Usable security, where users can choose how they want to access, store and share data, can only be made possible by providing a seamless user experience, so security is integrated into the daily work of everyone. A great user experience is one major obstacle security vendors (and arguably, all enterprise services) have yet to conquer. If we can do it, we will move away from panic-inducing scare tactics used to encourage adoption, and instead empower users with a solution they actually like to secure data…..

In order to be a security company, enterprises need to rethink a few things. First, users have to be in control of their data at any given point in time and should be able to revoke access when they want by utilizing familiar technology. They should have complete peace of mind that their data truly stays theirs. Second, in a cloud and mobile world there are no real controlled end-points anymore, unless we want to take a step back into the stone ages. And third, the firewall model is broken and trying to extend the perimeter out simply doesn’t work anymore. It’s about protecting the information, wherever it is, and not about locking everything down where it’s hard to access, use and share for your employees and partners.

So he is presenting a new cybersecurity world where the security follows the data, using encryption and other methods.

I think that is something that every organization should consider – especially encryption.
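To make the idea of security that follows the data a little more concrete, here is a minimal sketch (my illustration, not the author’s) using the Python cryptography package’s Fernet recipe: the data itself is encrypted, so it remains protected wherever the file travels. Key management, which is the hard part in practice, is deliberately left out.

```python
# Minimal illustration of data-centric protection: encrypt the data itself
# so it stays protected wherever the file ends up. Requires the third-party
# 'cryptography' package (pip install cryptography). Key management, the
# hard part in practice, is deliberately omitted here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, generate, store and control keys carefully
cipher = Fernet(key)

plaintext = b"Customer list: Alice, Bob, Carol"
token = cipher.encrypt(plaintext)     # the token can be stored or shared anywhere
recovered = cipher.decrypt(token)     # recovery is only possible with the key

assert recovered == plaintext
```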

But is it enough?

For a start, how secure is encryption in the face of the sophisticated attacker? Maybe it is reasonably secure now, but we cannot be sure it will remain secure. Consider how encryption was broken by researchers, with the story told in this 2013 article.

I think you need at least three levels of protection: prevention, encryption, and detection, followed by response.

We can no longer assume that the bad guys cannot get in, and I am reluctant to assume that my encryption will not be broken if they have time.

So, we need the ability to detect any intruders promptly – so we can shut them down and limit any damage.

Too few have sufficient detection in place. Just look at how long hackers were inside JPMorgan, and then how long it took the company to expel them!

I welcome your views.

New information and perspectives on cyber security

March 21, 2015 10 comments

The world continues to buzz about cyber security (or, perhaps we should say, insecurity). Now we have the Chinese government apparently admitting that it has a cyberwarfare capability: not just one unit, but three. Other nations, including the United States, Japan, and several European countries, are talking about their ineffective defenses and the need to develop an offensive capability.

What can the targets – not only every public or private company, but each of us as individuals (yes, our personal devices are constantly under attack) – do about this?

The first step is to get our collective heads out of the sand and understand that we are all, collectively and individually, at risk. The number of successful attacks is enormous (a billion records with personal information were hacked in 2014, according to IBM, as reported here). According to a survey discussed in Fortune, 71% of companies admit they were hacked last year and the majority expects to be hacked this year. However, nearly a quarter of companies, according to Fortune, have not only kept their heads in the sand but do so with unbelievable confidence; they think a successful cyber attack is “not likely” in the next 12 months. The trouble is that very often successful attacks are not detected! It took a long time before JPMorgan Chase found out they had been hacked, and even longer before they knew the extent of the damage.

Organizations need to be ready to respond effectively and fast!

The JPMorgan Chase article reports that “The people with knowledge of the investigation said it would take months for the bank to swap out its programs and applications and renegotiate licensing deals with its technology suppliers, possibly giving the hackers time to mine the bank’s systems for unpatched, or undiscovered, vulnerabilities that would allow them re-entry into JPMorgan’s systems.”

All is for naught if successful intrusions are not detected and responses initiated on a timely basis. In the Target case, reports say that the security monitoring service detected suspicious activity but the company did not respond. According to ComputerWeekly.com, many companies make the mistake of “Over-focusing on prevention and not paying enough attention to detection and response. Organisations need to accept that breaches are inevitable and develop and test response plans, differentiating between different types of attacks to highlight the important ones.”

Another insightful article discusses the critical need for pre-planned response capabilities. IT cannot do it all by itself; business executives need not only to be involved but to work actively to ensure their operations can survive a successful intrusion.

What else should we do?

We have to stop using passwords like ‘password’, the name of our pet, or our birthday. Password managers are excellent tools (see this article on the top-rated products) and merit serious consideration. I have one. (By the way, I don’t plan to replace it with the latest idea from Yahoo of one-time text messages, although I do like the fingerprint authentication on my iPhone.)

A risk-based approach to cyber security is the right path, in my view. But that does mean that organizations have to continuously monitor new and emerging risks, or new observations about existing risks. An example is a new article on insecure mobile apps – both from in-house developers and from external sources.

Organizations need to allocate resources to cyber and information security commensurate with the risks, and individuals have to take the time to update the software on their personal devices. Internal audit departments should make sure they have the talent to make a difference, providing objective evaluations and business-practical suggestions for improvement.

Companies and individuals both need to make sure they apply all the security patches released by software vendors. Patches address the vulnerabilities most often targeted, and when there is a breach it is very often because the patches have not been applied.

As individuals, we should have a credit monitoring service (I do), set up alerts for suspicious activity on our bank accounts, and use all the anti-virus and spam protection that it is reasonable to apply.

Finally, as individuals and as organizations, we need to make sure we and our people are alert to the hackers’ attempts through malware, social engineering, and so on. It is distressing that so many successful intrusions start with somebody clicking where they should not be clicking.

Here are a couple of articles worth reading and a publication by COSO (written by Deloitte) on how their Internal Control Framework can be used to address cyber risks.

Cybersecurity in 2015: What to expect

Cybersecurity Hindsight And A Look Ahead At 2015

COSO in the cyber age

As always, I welcome your comments.

New E-Book on Segregation of Duties: A Review

November 12, 2014 1 comment

I congratulate Larry Carter on his new e-book, “Segregation of Duties and Sensitive Access: Leveraging System-Enforced Controls”, published by Compliance Week.

This is a timely discussion and explanation of a difficult topic and it includes useful information on the differences between manual and automated controls, preventive and detective controls, and more.

I believe it will be a useful read for internal auditors and application developers who are relatively new to the area, and a reminder to more experienced individuals of some of the key points to consider when designing automated controls to prevent individuals from having more access than they need – which can lead not only to fraud, but disruption, errors, and accidents.

For example, when I was leading the internal audit and SOX programs at Maxtor Corporation, the external auditor asked for access so he could examine some of the SAP configurations as part of his control testing. IT inadvertently provided him not only with the access he requested (read access to the tables involved) but also with the ability to change the accounting period. Without realizing what he was doing, the auditor closed the accounting period while our financial team was still posting quarter-end journal entries!

Larry makes the excellent point that we need to consider not only inappropriate combinations of access privileges (i.e., Segregation of Duties, or “SOD”) but also inappropriate access to a single capability. He calls the latter Sensitive Access, although the more common term is Restricted Access (“RA”).

As he points out, it is good business practice to limit everybody to the access they need to perform their job. Although it may be easier to establish the same access ‘profile’ (a set of access privileges) for several people, care has to be taken to ensure that nobody has more access than they need. If they do, that creates a risk that they may deliberately or inadvertently use that access and create a problem.

Some years ago, my internal auditors found that an individual in Procurement had the ability to create a vendor in the system and approve payment, as well as approve a purchase order. This creates a risk of fraud. The IT manager said there was a control: “We don’t tell people what access they have”. As you might imagine, we didn’t accept that argument.
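To make that kind of conflict concrete, here is a minimal sketch of how an automated SOD check might flag it. This is my illustration, not something from Larry’s e-book; the privilege names, the conflict list, and the user data are all hypothetical, and a real check would pull access assignments from the ERP or identity-management system.

```python
# Hypothetical sketch of a segregation-of-duties (SOD) conflict check.
# Privilege names, conflict pairs, and user assignments are illustrative only.

# Combinations of privileges that should not be held by the same person,
# each tied to the risk the rule is meant to address.
SOD_CONFLICTS = [
    ({"create_vendor", "approve_payment"}, "fictitious vendor fraud"),
    ({"create_vendor", "approve_purchase_order"}, "unauthorized purchasing"),
    ({"post_journal_entry", "approve_journal_entry"}, "unreviewed postings"),
]

def find_sod_violations(user_access):
    """Return (user, risk) pairs where a user holds a conflicting combination."""
    violations = []
    for user, privileges in user_access.items():
        for conflict_set, risk in SOD_CONFLICTS:
            if conflict_set <= privileges:  # the user holds every privilege in the pair
                violations.append((user, risk))
    return violations

# Example: the Procurement employee described above.
access = {
    "procurement_clerk": {"create_vendor", "approve_payment", "approve_purchase_order"},
    "ap_supervisor": {"approve_payment"},
}
for user, risk in find_sod_violations(access):
    print(f"SOD violation: {user} – {risk}")
```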

This brings me to the critical topic of risk.

Larry makes the excellent and key point that you need to design your controls to address risk. You don’t design and operate controls for any other reason. With SOD, the primary reason for limiting inappropriate combinations of access is to prevent fraud. As he says, it is important to perform a fraud risk analysis and use that to identify the SOD controls you need.

When it comes to controls relating to sensitive or restricted access, the controls you need should also be determined by risk. For example, you will probably want to ensure that only a limited number of people have the ability to approve journal entries, not only because of the risk of fraud but because you want an appropriate review and approval process to occur before entries are posted. Similarly, you will want expenditures over a certain value to be approved by a more senior manager, and that is enforced through a restricted access control.
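As an illustration of a risk-driven restricted access control, here is a minimal sketch of a threshold-based approval check. Again, this is my example rather than Larry’s; the role names and approval limits are assumptions for illustration only.

```python
# Hypothetical sketch of a restricted-access (RA) control: expenditure
# approvals above a threshold require a more senior role. Role names and
# limits are illustrative assumptions.

APPROVAL_LIMITS = {
    "buyer": 10_000,               # can approve expenditures up to $10,000
    "purchasing_manager": 100_000,
    "cfo": float("inf"),
}

def can_approve(role, amount):
    """Return True if the role's approval limit covers the amount."""
    return amount <= APPROVAL_LIMITS.get(role, 0)

assert can_approve("buyer", 5_000)
assert not can_approve("buyer", 50_000)            # must be escalated
assert can_approve("purchasing_manager", 50_000)
```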

While Larry makes it clear that risk should drive the determination of what controls you need, I wish that had been how he designed his process for identifying necessary SOD and RA controls. Instead he identifies the total population of potential controls and only then considers (although it is less clear than it should be) whether the risk justifies having a control.

In fact, sometimes there are other controls (other than automated SOD or RA controls) that mitigate or even eliminate the risk. When the design of internal controls is based on a risk assessment that considers all the available controls, you are more likely to be able to design a more efficient combination of controls to address important risks. For example, let’s say you have a risk that individuals with inappropriate access to the spare parts inventory might use that to steal materials critical to manufacturing. At first blush, a control to ensure only authorized people have access might seem mandatory – and it would certainly be good practice. But if the warehouse manager has an inventory taken of that area twice each day, the personnel working there can be relied upon to challenge anybody entering the space, and cameras detect any access, then the value of an automated RA control is significantly diminished.

A related issue that Larry unfortunately doesn’t mention is the need to limit the access capabilities of the IT staff – not only to functions within applications, but to functions within IT business processes. For example, you need to limit who can change application code or bypass all your controls using “superuser” capabilities.

Another area that is often overlooked is the need to limit ‘read-only’ access to confidential information. Restricting access privileges that would allow unauthorized individuals to view customers’ or employees’ personal information, or confidential corporate information, may be required to comply with laws and regulations, as well as to address the risk of theft or misuse of that information.

Overall, this is an e-book with a lot of useful information and it is an easy read.

Norman Marks is a semi-retired internal audit executive, author of World-Class Internal Audit and How Good is your GRC? (both are available on Amazon), and a frequent blogger on the topics of governance, risk management, internal audit, and the effective use of technology in running the business. He can be reached at nmarks2@yahoo.com.