The effective practitioner in action

April 19, 2019 3 comments

Some years ago, a number of department heads asked for my help.

Part of the organization was strongly considering engaging a new agent to sell our gasoline and related products in Central and South America. The potential for enhanced revenue was considered high, as the Mexican agent had extensive contacts in the region. In addition, there were tax and tariff advantages (at that time).

But the department heads who approached me were concerned about the potential for harm (which we called risk).

For example, there was a high degree of concern about the integrity of the agent. There were stories that the agent stole funds from his principals. While these were unproven, the legal department (in particular) was worried.

There were also concerns about compliance by the agent with not only US laws, but those in the neighboring countries. Reputation risk was a potential issue.

They sought my help because the people supporting the initiative had the ear of senior management and the department heads did not believe management was open to hearing their concerns.

Each group had considered the potential effects of a decision to sell products through this agent. Each had a bias: one towards the opportunity and the other to the potential for harm.

Both had assessed the magnitude and likelihood of the potential consequences, but each had focused on different consequences. One had focused on the reward with little regard to the potential for harm, while the other had done the opposite.

I asked everybody to meet and discuss the proposal. One of my direct reports led what we might consider a risk workshop, although we just talked about it as a facilitated discussion.

Each group was surprised by the other’s concerns. They thought that they understood the other side, but found out that their understanding was skin deep.

When they considered all the potential consequences, both the upsides and downsides, they were able to make an informed and intelligent decision as a management team. (It doesn’t matter for our discussion which way they went, but they decided not to proceed for the moment while they explored ways to minimize the downside without jeopardizing the upside. Eventually, they decided not to move forward.)

There are a few points I want to emphasize:

  1. Risk (harms) and opportunity (rewards) are NOT two sides of the same coin. It’s not one or the other; more often than not, both exist.
  2. An informed and intelligent decision considers all the things that might happen. On balance, should we do this, that, something else, or nothing?
  3. The practitioner can assist in a number of ways, including helping management use comparable methods and tools so that potential upside and downside consequences can be assessed and compared on a consistent basis.
  4. The practitioner can add immense value by being an independent and objective facilitator of discussions like the one above.
  5. If the practitioner only focuses on harms, he or she is not only looking at part of the picture but may actually be distorting the view available to management!

Does this scenario resonate with you? Have you had similar experiences?

I welcome your thoughts.


Selecting a framework for managing risk

April 13, 2019 11 comments

Carol Williams has a web site, ERM Insights, where she writes about risk management (I prefer to talk about the management of risk, rather than risk management, to ensure we are talking about how the organization addresses what might happen, i.e., risk, rather than talking about a function or team).

Recently, she shared her advice on frameworks and standards in ISO 31000 VS. COSO – Comparing And Contrasting The World’s Leading Risk Management Standards.

I like what she has to say (maybe because she quotes me) and recommend that you read and consider it.

Let me add to her discussion.

As Carol says, “the overarching goal of your risk-related activities should be to support decision-making by helping identify and properly assess both risks and opportunities to achieving strategic objectives”.

So the first step should be to understand how your organization makes decisions. Is decision-making centralized or distributed? Are employees empowered or limited?

You should also consider:

  • At what speed and frequency does the path ahead seem to change (i.e., how volatile is risk both from internal and external sources)?
  • What business are you in, and what are the sources of risk? For example, I would consider different processes for managing a loan portfolio, customer credit, major projects, derivatives trading, and cyber.
  • How do your decision-makers consume information about what might happen? In fact, what do they need to make intelligent and informed decisions?

The last point is the most important: what information do people need to make intelligent and informed decisions?

The point before that is also important, as you may need different guidelines and processes in different areas of the business.

While the management of risk should be both continuous and dynamic (as risk is created or changed with every decision), on a periodic basis it is wise to take stock and see whether you are on track. Are you still likely to achieve enterprise objectives, taking everything (within reason) into account?

So another question that needs to be answered is how to collect all the information you have about sources of risk around the extended enterprise to provide a big picture view to top management and the board.

Carol correctly points out that selecting a risk management standard or framework is not like going to a clothing store and finding an off-the-rack suit that fits perfectly. Some, maybe a lot, of customization is going to be required. Tuck in the sleeve around the cyber joint, but extend the hem of the leg that carries the weight of personnel-related sources of risk.

I welcome your thoughts.

Is internal audit being distracted by consultants bearing sparkling new toys?

April 2, 2019 4 comments

Over the years, PwC has provided great value through their annual commentaries on internal auditing.

However, in their 2019 State of the Internal Audit Profession Study, they are advising internal auditors to adopt approaches and practices with which I disagree.

The subtitle to their report is “Elevating Internal Audit’s Role: The digitally fit function”.

PwC starts quite well, acknowledging that disruptive technology and the need to address it has been around for decades.

Organisations are rapidly rolling out digital initiatives in an arena defined by more data, more automation, sophisticated cyberattacks, and constantly evolving customer expectations. In some ways — for internal audit functions — the situation is not new: technology risks and controls have already been on their agendas for decades, and most can reliably deliver a technology audit.

But then they go wrong.

But digital rollouts heighten risks beyond the technology itself.

I cannot comprehend this statement. The risk has always been the effect of a technology-related issue on the business! There’s nothing new here at all!

This has been true for as long as I have been around auditing (and that’s a very long time). PwC says:

Internal audit needs (1) the dexterity to pivot quickly and to keep up with the digital pace of the business, and (2) the knowledge and skills to provide advice and strategic assurance in this new arena.

But this is not a ‘new arena’!

Forty percent of the internal audit team I led 20+ years ago were IT auditors, including individuals with as much or more technical knowledge than IT’s own technical staff.

Why? Because that is where the greater risks were, just as they very often are today. I hired people with the skills necessary to address those greater risks.

PwC defines the ‘digitally fit function’:

The definition is twofold: (1) having in place the skills and competencies to provide strategic advice to stakeholders and to provide assurance with regard to risks from the organisation’s digital transformation and (2) changing the function’s own processes and services so as to become more data driven and digitally enabled so the function can align with the organisation’s strategic risks and thereby anticipate and respond to risk events at the pace and scale that the organisation’s digital transformation requires.

As I said, the first part of the definition is nothing new. The second part is an area that internal audit should approach with caution.

Some internal audit functions have become the owners and operators of detective controls. They have implemented analytics that test the data rather than assessing whether management has the right controls.

There are times when it is appropriate for internal audit to test the data. For example, when my team identified several major control deficiencies that represented a significant vulnerability to accounts payable fraud, my IT team developed a series of ACL reports. The team was able to analyze all payments made in the last year or so and confirm that nobody had taken advantage of the control weaknesses.
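To make the idea concrete, here is a minimal sketch (in Python, not the actual ACL reports my team built, and with hypothetical file and column names) of the kind of full-population payment test I mean: flagging possible duplicate payments and payments to vendors that are not on the approved vendor master.

```python
# Sketch only: hypothetical file and column names, not the actual ACL reports.
import pandas as pd

payments = pd.read_csv("payments_last_year.csv", parse_dates=["paid_date"])
vendors = pd.read_csv("approved_vendors.csv")

# Possible duplicate payments: same vendor, invoice number, and amount
dupes = payments[payments.duplicated(
    subset=["vendor_id", "invoice_no", "amount"], keep=False)]

# Payments to vendors that are not on the approved vendor master
unapproved = payments[~payments["vendor_id"].isin(vendors["vendor_id"])]

print(f"Possible duplicate payments: {len(dupes)}")
print(f"Payments to unapproved vendors: {len(unapproved)}")
```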

It can also be useful to analyze the data to understand the business. One of my teams saw that every software contract between the company and our customers was getting the same level of review, even though some contracts were for a few thousand dollars and others were for over a million. Using Business Objects analytics, they were able to stratify the population of contracts and recommend the point above which a contract merited a full review and below which a more streamlined review was sufficient.
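Again, a minimal sketch rather than the Business Objects work the team actually did: assuming a hypothetical extract of contracts with a value column, you can band the population and see how much of the total dollar value each band represents, which is the basis for recommending the full-review threshold.

```python
# Sketch only: hypothetical extract of software contracts with a value column.
import pandas as pd

contracts = pd.read_csv("software_contracts.csv")  # contract_id, value_usd

bands = [0, 10_000, 50_000, 250_000, 1_000_000, float("inf")]
labels = ["<10K", "10K-50K", "50K-250K", "250K-1M", ">1M"]
contracts["band"] = pd.cut(contracts["value_usd"], bins=bands, labels=labels)

summary = contracts.groupby("band", observed=True)["value_usd"].agg(
    count="size", total="sum")
summary["pct_of_total_value"] = 100 * summary["total"] / summary["total"].sum()
print(summary)  # basis for recommending a full-review threshold
```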

I have long been a believer in the power of analytics as an internal audit tool. I used them myself when I was in public accounting (for both financial and ITGC auditing) and later made sure my internal audit teams had access to such tools. In fact, I believe all auditors should have the tools on their laptops or tablets.

But auditors should not fall into the trap of buying a hammer and then looking for nails.

I visited a large internal audit function some time ago. Following the advice of consultants, they had established a data mining team. The team had acquired powerful analytics tools and was now studying the data to decide where to deploy them.

They had bought a hammer (analytics tools and the people to deploy them) and were looking for nails.

What the intelligent internal audit team does is understand where the enterprise risks are and where they need to provide assurance, advice, and insight.

Once they know the target, they can decide what tools are right for the job. Maybe it’s analytics and maybe it’s not.

One of the problems with investing in technology is that when you take an enterprise risk-based approach (as we all should), the target is highly likely to change each year. This is especially true in these dynamic times, when (to quote PwC’s own report) you need “the dexterity to pivot quickly and to keep up with the digital pace of the business”.

If technology is only used once, then there may not be a sufficient return on the investment of time and money.

Until recently, the consultants (including PwC) had been advising internal audit teams to use analytics – without first advising them to determine whether there is a need (a risk where analytics would add value in providing assurance). Now they are pushing something called RPA. This is what PwC says:

When it comes to using emerging technologies within their function, many internal audit functions struggle to find the fit. For example, 54% of internal audit functions are either unsure of or do not plan to use AI within the next two years. Even RPA use is questioned: 49% do not plan to use RPA or are unsure how they will use it. But not Dynamics: 37% use RPA currently, and another 45% plan to do so within two years. [PwC uses the term ‘Dynamics’ to refer to the audit functions that meet PwC’s vision of digitally fit.]

RPA stands for robotic process automation.

PwC is not the only consulting firm to push RPA for internal audit. Deloitte has a paper, Adopting automation in internal audit. KPMG has shared Intelligent automation and internal audit.

The problem is that while these bots can detect an error, detecting errors is a management role, not an internal audit role.

They are detective controls!

Internal audit functions should not limit themselves to auditing past (or even current) transactions.

They should be auditing the controls that provide assurance that current and future transactions will be handled properly.

They should be providing assurance that management has controls in place to address risk, not performing the controls themselves.

They should provide assurance, advice, and insight on today and tomorrow rather than the past.

Consider the example cited by PwC:

For one company, testing to see whether terminated employees’ system access rights were being removed in a timely manner was a highly manual process. It required using a lookup function from three disparate data sources for each IT application, which took the audit team 100 hours to test 20 instances of the control. With RPA, a bot was built in 40 hours that performs in seven hours the previously manual processes. By automating many stages of the test except human review, testing hours greatly reduced, and coverage expanded from a sample basis to full populations, which provides greater assurance.

This company confirmed that terminated employees no longer had system access rights.

But did they assess whether management had appropriate controls in place that were operating effectively? No.

Did they assess whether the rights were removed in a timely manner? No.

Just because the data was clean doesn’t mean that the right controls were in place to ensure it stayed clean.

It is possible that a manager scrubbed the employees’ access rights 30 minutes before the auditors ran their test.

Any member of my internal audit team would have asked how management ensured that employees’ access rights were removed promptly upon termination. They would then have assessed and tested those controls.

If they felt the need, perhaps because the controls were not strong, to develop analytics (or RPA) to test access, they would have passed that technology on for management to use on a continuing basis – as a detective control.
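For illustration only, here is a rough sketch of the kind of test such a bot or analytic performs, assuming hypothetical extracts of HR terminations and application access lists (the three disparate data sources PwC mentions). Whoever runs it, internal audit or management, it is a detective control.

```python
# Sketch only: hypothetical extracts of HR terminations and application access.
import pandas as pd

terms = pd.read_csv("hr_terminations.csv", parse_dates=["term_date"])

exceptions = []
for extract in ["app1_access.csv", "app2_access.csv", "app3_access.csv"]:
    access = pd.read_csv(extract, parse_dates=["disabled_date"])
    merged = terms.merge(access, on="employee_id", how="inner")

    # Access still active after termination, or disabled more than 3 days late
    late = merged[
        merged["disabled_date"].isna()
        | (merged["disabled_date"] > merged["term_date"] + pd.Timedelta(days=3))
    ].assign(source=extract)
    exceptions.append(late)

findings = pd.concat(exceptions, ignore_index=True)
print(f"{len(findings)} access rights not removed promptly")
```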

There is some good material in the PwC report, not only repeating what we have learned in the past but stressing what everybody should be doing moving forward. For example, they say:

  • Internal audit leaders universally agree that annual plans and annual assessments are antiquated.
  • “Our products, services and/or business model can significantly change within six months. So I don’t know what I’ll need in two years. I don’t have a three-year audit plan. My one-year plan changes every three months.”

But let’s get some things straight:

  1. Internal audit’s job is to provide assurance, advice, and insight – not to perform detective controls
  2. Internal audit needs to identify the risks to address and only then the tools appropriate for the task – and not the other way around

I welcome your thoughts.

Are we taking risk, making a decision, or gambling?

March 30, 2019 16 comments

We gamble all the time, but we don’t think of it that way.

We think we are making decisions, not gambling – and often don’t see it as taking risk either.

But we are.

The key is whether we are making what we consider a “sure bet”, where we believe the outcomes of our decisions are more likely than not to be (net) favorable, considering both the upside and downside – especially compared to the alternatives.

 

When I quit my job with Coopers & Lybrand in the UK and decided to move to the US, I was gambling.

  • I had no assurance that I would get a job in the US (although I was fairly confident), and certainly had no assurance it would be a job I would want. As it happened, while I wanted to move back to Atlanta, the job I was offered was in Los Angeles.
  • I didn’t think I had a future in my current position. As it happened, I had been tagged as potential partner material.
  • I was also gambling that I would enjoy life in the US. I had spent nine months in the US and had made many friends, but would I be happy in this foreign country? Would I miss the safety of being close to my family in England?
  • I made what I considered to be an intelligent and informed decision. As it happened, my assessment of the facts was partially incorrect (I probably did have a future if I had stayed), but it turned out well for me.

Every time after that, when I took a new job I was gambling.

  • In most cases, my old job was disappearing due to downsizing or an acquisition. But in some cases, I had been offered a position with the acquiring company. My assessment was that it was better to leave than stay.
  • While I had done as much research as I could on my new company, I didn’t have certainty about its prospects or the people I would work with. In one case (Solectron), nothing was as it appeared during the hiring process – but that’s another story.
  • Again, I made what I considered an intelligent and informed decision but had no certainty that it would turn out well.

 

When we gamble, whether we call it making a decision or taking a risk, it is crucial that we try to do so intelligently and with all the quality information we can obtain.

 

At college, I played poker with a lot of success. But I didn’t consider it gambling as I knew that I was one of only two players at the table who knew what they were doing. I was taking risk, but my assessment was that I was far more likely to win than lose and my potential loss was smaller than my potential gain.

 

Quality information informs and enables quality decisions.

 

The military planners deciding whether to send troops to rescue hostages in Iran (under Carter) or to capture Al Qaeda leaders (under Obama) would have had to assess:

  • The likelihood of loss of personnel and equipment. There was a range of possible levels of loss, from the embarrassment of a failed mission to the loss of the whole team. Each level of loss had its own likelihood.
  • The likelihood of success. That also was a range, from partial (such as rescuing a few hostages) to full (bringing them all home). Each level of success had its own likelihood.
  • The possibility that their assessments of loss and success were incorrect.
  • Whether the likelihood of success warranted taking the risk of failure. That was the gamble they made.

Were they gambling when they decided to go ahead? There was no certainty about either the potential and likelihood of loss or the potential and likelihood of success.

 

What does this mean for the risk practitioner?

Their job is to help the decision-makers make informed decisions and take risks with the knowledge that they are more likely to succeed than fail.

After all, it is only by taking risks that any organization can achieve its objectives and succeed.

The risk practitioner has the ability to help decision-makers assess the extent and likelihood of a range of potential outcomes, both potential losses and gains.

The risk practitioner can improve the likelihood of quality decisions and therefore of success.

 

Is it gambling when you have what you believe to be reliable information and are making an intelligent decision?

It’s certainly gambling when decisions are made in haste without reliable information on the extent and likelihood of what might happen.

 

I welcome your thoughts.

 

Assessing the effectiveness of your risk management program

March 23, 2019 6 comments

The IIA has published a new Practice Guide, Assessing the Risk Management Process. In IIA-speak, this is recommended but not mandatory guidance for its members.

A previous Practice Guide from December 2010, Assessing the Adequacy of Risk Management Using ISO 31000, is still available.

I much prefer the earlier version, especially as it talks about meeting the needs of the organization (which is critical) and how management needs to know what risks to pursue, not just avoid or mitigate, so that it can achieve its objectives. It also includes the famous “fan”, indicating which risk management roles are appropriate for internal auditors.

 

The new PG has some good content, including (my highlights):

  • Risk management is driven by more than regulations and external forces. Implementing efficient and effective risk management benefits organizations of any type and size by helping them to achieve operational and strategic objectives and to increase value and sustainability, ultimately better safeguarding their stakeholders.
  • Benchmarking the current state of the organization’s risk management against a risk management maturity model is a good place to start this type of assessment. Benchmarking may help the internal audit activity communicate with senior management and the board about the organization’s level of risk management maturity and about aspiring to improve the process and advance in maturity.
  • A mature risk management process typically demonstrates benefits, such as: enabling risk-based decision-making and strategy-setting [and] increasing the likelihood the organization will meet its strategic objectives.
  • If management believes that the risk management process is a bureaucratic exercise that is not worth the resources needed to execute it, then recommending large-scale improvements may be premature and received with skepticism or rejected completely.

I also like the fact that the PG recommends identifying and considering risks to the risk management process itself, a concept I invented in World-Class Risk Management (unfortunately not referenced in the PG).

But both PGs fail to focus on whether the risk management program helps organizations achieve their objectives. They only consider the potential for harm.

 

Consider this.

In 2008, when so many financial institutions were in trouble, the UK banks decided to stop making loans. They brought their ‘risk appetite’ down to very low levels.

If their risk management program had been assessed using either of these PGs (or, frankly, any of the major frameworks, standards, or guides), it would have been rated highly.

Their level of risk was within their desired range, their risk appetite.

But what happened from a business point of view?

They had next to no revenue and cash flow was severely impacted.

It was not sustainable.

What they should have been doing (and I assume they turned to this) was taking an appropriate level of risk that gave them an acceptable likelihood of achieving their short and longer-term objectives.

To repeat what the PG correctly says: “effective risk management benefits organizations of any type and size by helping them to achieve operational and strategic objectives and to increase value”.

In order to achieve your objectives, you have to take risks. The question is whether you are taking the right level of the right risks, with quality information about what might happen!

 

Avoiding failure is a recipe for failure.

 

So how should you assess the effectiveness of risk management?

You do it by assessing whether it meets the needs of the organization. Those needs include:

  • Enabling intelligent and informed decisions, both strategic and tactical, anticipating what might happen
  • Being confident that the right level of the right risks are being taken to achieve enterprise objectives, balancing the potential for both harm and reward
  • Having an acceptable likelihood of achieving (or surpassing) enterprise objectives

When your executives say that the management of risk helps them set and then execute on strategies (paraphrasing a Deloitte survey and report, in which less than 20% said it did), you probably have effective risk management.

There are multiple approaches to assessing the effectiveness of risk management. One is to determine whether management complies with its policies and standards and whether its risk register is complete and its assessments are ‘correct’; this has some, but limited, value. Another is to see whether the principles in ISO 31000 (I prefer those in the 2009 version) are achieved; this has more value. But I prefer what I suggested above: seeing whether the executives believe the management of risk is essential to their own and the organization’s success.

I like the maturity model approach and included a few (all of which I prefer to the one in the 2019 PG) in my book, World-Class Risk Management.

But any maturity model has to avoid a focus that is limited to identifying, assessing, and managing the potential for harm. It has to include whether both potential harms and rewards are considered (in a disciplined and reliable manner) in decision-making.

Building on the discussion in the new PG about risk to the risk management process, in an effective program the likelihood that the information provided is significantly wrong is low (at an acceptable level).

What do you think?

The Wonder and Joy of Internal Auditing

March 17, 2019 2 comments

More than 17 years ago, The IIA’s magazine published an article of mine, The new age of internal auditing.

I made some provocative comments, including (with highlights today):

Members of the profession have a unique opportunity to become major contributors to their organizations and embark on radical change.

Technology has accelerated the rate of change dramatically, and many organizations are struggling to keep up. As Steve Case, chairman of AOL Time Warner, recently stated, “There is probably going to be more confusion in the business world in the next decade than there has been in any decade in history.”

Internal auditors can thrive in the midst of this confusion and, in fact, are needed more than ever before. As our organizations sail to the new world of e-business, auditors can be at their side. We can provide necessary advice and counsel as our clients embark on new explorations.

To meet the needs of our clients in today’s business environment, however, internal auditors must be able to keep up with change and adapt to the increasing speed of business. In the words of management guru Tom Peters, “We are in the most profound revolution in over 500 years, and this revolution places over 90 percent of the white-collar worker jobs in jeopardy over the next decade. … The 10 percent who survive will make it because they have reinvented their work to be full of passion, excitement, emotion, and dreams.” Auditors must embrace change or risk going the way of the dinosaur. We will survive and thrive if, as Peters suggests, we can reinvent our work.

In response to changing business demands, audit departments of the future are likely to be different in several key ways. For instance, we will audit faster and place more emphasis on real-time risk and controls consulting. Staffing will change accordingly, with more IT-proficient auditors. Instead of focusing on a list of audits from an audit schedule, we will be concerned primarily with assurance: providing peace of mind to our clients that business risk is being managed effectively – even, or especially, during turbulent times. Most importantly, however, we will need to start looking further ahead and rethinking our traditional approach to audits.

When continuous change and transformation occurs, continuous risk assessment is needed. As auditors, we will need to make sure our eyes remain on the areas of greatest risk. The days of an annual audit plan, where projects are set in stone, will disappear. Risks can change rapidly and with little warning, as Cisco found when its sales plummeted and forced the company to write off $2.5 billion in inventory. Auditors will need to challenge their schedules constantly to ensure that present and future risks are being addressed – not the risks of the past.

Our audits will be future-looking projects, rather than audits of history, and our mantra will be “assurance through prevention.”

Auditors need to be loud. We need to voice our concerns when it comes to understanding and assessing business risk in turbulent times. This takes courage, especially when management is racing to install the latest technology and our message is one of caution — of heightened risks because of missing controls and security, or hastily tested code —

The rock stars of the new age of internal auditing must step up to the challenges that lie ahead. They need to throw out the crutch of standard audit programs and old auditing textbooks and instead rely on their knowledge of basic control theory, their intellect, and their imagination. To be rock stars, internal auditors must be able to take some risks and leave their traditional thinking behind.

It can be so much fun when your internal audit team are doing all of this. There are great opportunities for personal and professional growth, as well as making a huge contribution to the success of your organization.

Looking back, I am convinced that my advice was sound. Some progress is being made, for example:

Internal Audit groups having the most impact and influence in their organizations also tend to be the most innovative. Not content with doing the same things in the same ways, they learn how to deliver the assurance, advice, and risk anticipation that stakeholders need, when they need it, and they use whatever new methods and technologies they need to do that.

The traditional audit planning process is of limited value in assessing risks in today’s disruptive environment. Continuous risk monitoring, assessment, and tracking can help Internal Audit to direct its resources to where they’re most needed—a valuable departure from rotational audit plans.

  • Protiviti has also been advocating change. In Embracing the Next Generation of Internal Auditing, Brian Christensen (EVP, global internal audit) is quoted as saying: “There needs to be a fundamental rethinking of the design and capabilities of the internal audit function to be more forward-looking and help improve the business”. The report also says:

Three out of four internal audit groups are undertaking some form of innovation or transformation effort.

Next-generation internal audit methodologies are designed to equip organizations with more efficient, flexible, risk-focused, real-time and impactful ways of conducting their activities. These methodologies, which also apply to reporting and collaboration activities, generally include continuous monitoring, high-impact reporting, an agile audit approach, and dynamic risk assessment.

 

But the profession has not (yet) met the challenge I set in 2001.

I still see:

  • A lack of interest from audit committees (according to the Protiviti study, only 16% are very interested) in audit function transformation. I suspect they don’t know what they are missing!
  • Traditional annual (ugh!) audit plans. They may have a contingency to add “special projects”, but few have moved to agile internal auditing, where the planning is continuous and projects focus on the risks of today and tomorrow.
  • The maintenance of “audit universes” when we should have “risk universes”. We need to audit controls over the enterprise risks of today and tomorrow, not risks to a location or process.
  • Too few audit functions assessing whether the management team has processes around what might happen (risk) that meet the needs of the organization. Some are performing a compliance audit to see whether risk management is performed consistent with policies and so on, but that is not even the start of addressing whether management manages the risk of not seeing the bus heading their way. (Note: the bus may be an opportunity or a threat.)
  • Audit reports that say what the auditor wants to say rather than what the stakeholder needs to know. (See my April 2018 article in Internal Auditor magazine, Information Distillation; the link is available only to IIA members.)
  • A lack of passion and excitement in our work (echoing Peters’ words from my article).

 

Some seem to think that internal audit work is boring. Recently, one individual wrote that “SOX is killing the Internal Audit profession”. A lot of people ‘liked’ his article, but is it SOX that is killing internal auditing (if indeed it is a dying profession)?

I challenged the gentleman on Twitter, saying that if people are bored by their SOX testing it is because of a failure of leadership by the CAE and his or her management team.

It is the job of every manager to ensure his or her employees are motivated. Giving them boring work is awful. The manager has a duty to make it interesting.

Recently, Richard Chambers paid tribute (on the 10th anniversary of his appointment as President of the IIA) to the great Bill Bishop. Bill was President of the IIA for many years and I can still picture him talking about his internal audit tattoo and bleeding internal audit blood.

Internal audit leaders need to (and the best do) have passion for internal audit and the value it brings to the organization.

 

If you start with the idea that SOX testing is boring, it will be very boring indeed.

But there is no reason that it should be boring.

 

I’m a big fan of Tom Peters and his concept (and book) The Pursuit of Wow! In 2001, I made a presentation to the SuperStrategies conference on The Gospel According to Tom Peters: Making Internal Audit a WOW! Department (click on the link to download my PowerPoint).


The idea is that a great leader can make almost any project a Wow! project. In the 2001 presentation, I quoted Tom Peters’ description of a Wow! project:

It is dynamic, stimulating, a major bond builder among co-workers, a source of buzz among customers, and inspiring, exhausting, hot, cool, sexy, where everyone wants to be.

It confronts an important issue head-on… redefines it in such a way that participants will be remembered ten years later

 

How does a great CAE make SOX exciting, something for which an auditor can have passion?

My team already knew that our job was not to find fault, but to help management succeed. Of course, when controls failed we reported that, but with an eye to helping them upgrade to processes and controls that were both efficient and effective in managing risk.

When we tested controls over financial reporting (and I did some of the testing myself), we considered:

  • Are these the right controls to include in scope?
  • Do they address the financial reporting risk?
  • Are there better controls?
  • Are there better ways to address the risk, perhaps making use of technology?
  • Are there redundant controls that can be eliminated?
  • Is there too much control?
  • Do the people have not only the information, training, responsibility, and experience to perform the controls (per AS5), but also the time to do them well?
  • Is supervision and review effective and appropriate?
  • Will management know when there are problems performing the controls?
  • Can the processes be upgraded?

In other words, we were essentially performing not only a compliance audit but an operational audit as well.

Management recognized quickly that we were there to help (without losing our objectivity). Their welcoming attitude enhanced our experience as SOX testers.

Another aspect of our work was that we gave the auditors the time to do the job well. I have heard of some organizations where the auditors are hounded to complete the work. There’s no joy under those circumstances – and no opportunity to add value.

 

If you believe internal audit work can be fun, you can make SOX testing fun and challenging as well.

But it starts with the right attitude.

BTW, don’t tell me this is good in theory and not in practice until you have tried it!

 

Your thoughts?

 

Talking about software for GRC

March 10, 2019 1 comment

The Open Compliance and Ethics Group (OCEG) recently published the 2019 OCEG GRC Technology Strategy Report.

Written by French Caldwell, who has been involved in the ‘GRC’ world as an analyst with Gartner and others for many years, it has some interesting content.

It also reminded me of the problem I have with so-called GRC solutions and platforms.

 

Let me start with the challenge of the acronym, GRC.

 

Before I can talk about technology for GRC, I need to explain my views on what GRC means.

I joke that it stands for Governance, Risk, and Confusion.

Why?

Because while everybody seems to be able to explain that the letters in GRC stand for Governance, Risk, and Compliance, very few can explain what the whole term means.

I credit (if that is the right word) Michael Rasmussen with inventing the term. While others (including Scott Mitchell, the Founder and Chairman of OCEG) have laid claim to it from time to time, Michael coined the term to describe the basket of functionalities in the software he was assessing and reporting on for Forrester Research.

Michael and I are two of the first three to be honored by OCEG as Fellows (along with Brian Barnier) for our thought leadership on GRC, and we both like OCEG’s definition of GRC. I think it’s the only definition that makes sense, with a practical and useful meaning.

French refers to the OCEG definition in his report. But, here is a more complete description from OCEG (see here for details, including a discussion of the problems of fragmentation and silos that inhibit the optimization of an organization):

GRC is the integrated collection of capabilities that enable an organization to reliably achieve objectives, address uncertainty and act with integrity.

GRC as an acronym denotes GOVERNANCE, RISK, and COMPLIANCE — but the full story of GRC is so much more than those three words.

The acronym GRC was invented as a shorthand reference to the critical capabilities that must work together to achieve Principled Performance — the capabilities that integrate the governance, management and assurance of performance, risk, and compliance activities.

This includes the work done by departments like internal audit, compliance, risk, legal, finance, IT, HR as well as the lines of business, executive suite and the board itself.

 

It’s all about setting and then achieving the objectives that will deliver value!

Governance includes the setting of objectives and strategies, managing the organization through informed and intelligent decision-making, measuring and monitoring performance, and much more (such as the board, Legal, and Internal Audit).

The journey to success has to include the anticipation and handling of what might happen (Risk) while acting with integrity (Compliance).

Every part of the organization has to work together, in harmony and with shared objectives, if the potential of the enterprise is to be realized.

I have previously shared my guidance for assessing how well this is done at your organization.

 

Here is my problem with technology for GRC.

Very few self-described GRC solutions and platforms have any significant functionality around setting and communicating objectives and strategies, let alone integrating risk into the measurement of performance against those objectives and strategies.

In other words, they don’t really (for the most part; I am sure there must be exceptions) provide leadership with information on how well we are doing so far against each of our targets, plus where we anticipate ending up, considering what might happen.

This is more than adding KRIs to a report alongside KPIs.

It’s about understanding how likely we are to achieve our objectives.

 

I describe this lack of functionality by saying that when it comes to GRC, the G is silent.

 

This is all very apparent in French’s report for OCEG.

 

Even if it were possible to have one piece of software that included everything in GRC (have you seen functionality for Legal, Strategy, Performance Management, Policy Management, Risk Management, EH&S, Safety, Ethics, Investigations, Board oversight, Trade Compliance, and so on in one product?), very few companies claim to have integrated their related technologies.

Most think of GRC functionality as addressing needs related to a subset of GRC, such as the combination of:

  • Risk management
  • Policy management
  • Some aspects (but rarely all) of Compliance
  • Ethics
  • Internal Audit

 

Then there’s the question of whether it makes business sense to integrate functionalities, even just for these five areas.

I am not persuaded there is great value in integrating software for policy management and internal audit, for example.

 

This is what I recommend:

  1. Get the software that meets your organization’s needs, not necessarily the one labeled GRC and rated highest by the analysts. Your organization’s needs are unlikely to be the same as the criteria used by the analysts.
  2. Understand how you want the various business processes to function in both the short and longer-term and then how they might be improved by technology. Do that by focusing first on individual functions (such as risk management) before seeing where multiple functions can use the same technology.
  3. Recognize that while you don’t want the disparate parts of the organization to function in silos, the place they come together is around achieving objectives and strategies.
  4. Where it makes sense to purchase a solution that meets the needs of more than one function, where integration has a clear value, do so. But don’t pursue integration at the expense of the efficiency and effectiveness of the individual parts.
  5. Don’t allow functions to have undue influence on the acquisition of technology. The owners of those parts of the organization where the technology would add most value to the business as a whole should have the greatest influence. (I have seen situations where the lack of functionality for internal audit has torpedoed the acquisition of the best technology for risk management.)

 

What do you think?

What are your takeaways from the OCEG report?