Risk Assessment Danger

Every so often, we hear about a military mission where something went wrong. The intelligence might have said, for example, that a targeted individual was thought to be in a certain location – so the military attacked that location but did not find the sought-after person.

In the same way, business leaders make decisions based (at least in part) on information about risks and opportunities.

If a risk assessment is unreliable, wrong decisions may be made with serious effects.

For example, if the risk that a competitor will shortly release an advanced version of a competitive product is assessed as ‘high’, the management team may decide to accelerate the launch of its own product even though its development team says it is not quite ready.

On the other hand, if the competitive product release risk is assessed as ‘low’, then management may wait and spend more time on product quality.

If the risk assessment is faulty and leads management to make the wrong decision, there may be severe damage.

Going to market too early with a less than perfect product can lead to customer dissatisfaction and longer-term revenue losses.

Going to market too late allows competitors to steal market share and leads people to question the company’s ability to be a market leader.

Are risk officers (CROs and their teams) confident in the risk assessments they make or facilitate?

If a risk (of any type) is assessed as, let’s say, ‘high’ (whatever that means), how confident is the CRO and/or the management team in that assessment?

Are they 100% confident? I doubt it.

How about 90% or 80%?

In fact, I doubt that many CROs think about how likely it is that the risk assessments they make or facilitate are reliable.

I believe that CROs need to understand the likelihood that each risk assessment is or is not reliable.

Related risk factors may include:

  • Cognitive bias. See previous posts: Understand your own bias as a practitioner and Are your business decisions failing because they are biased?
  • Incomplete information, including not involving all the people who have relevant information and insights
  • Information that is out of date
  • Inaccurate information, for example portraying risk as a point instead of a range (see the sketch after this list)
  • Information that is hidden or difficult to find and use. For example, I understand some organizations have a risk matrix with more than 50 columns, let alone the number of rows. How can decision-makers be expected to find the nuggets of actionable information they need in such a mess of data?
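
A minimal sketch in Python of the ‘point instead of a range’ problem above, using entirely hypothetical numbers: the same risk reported first as a single expected value and then as a range of outcomes.

    # Hypothetical risk: an assumed 30% likelihood of the event and a lognormal
    # impact (median around 1M) if it occurs. All figures are invented.
    import numpy as np

    rng = np.random.default_rng(seed=42)
    n = 100_000
    p_event = 0.30
    impact = rng.lognormal(mean=np.log(1_000_000), sigma=0.6, size=n)
    loss = np.where(rng.random(n) < p_event, impact, 0.0)

    # The single number a matrix cell hides behind...
    print(f"Expected loss (the 'point'): {loss.mean():,.0f}")

    # ...versus the range a decision-maker actually needs (most simulated years
    # show no loss at all, but the tail is several times the expected value).
    p50, p90, p99 = np.percentile(loss, [50, 90, 99])
    print(f"50th / 90th / 99th percentile loss: {p50:,.0f} / {p90:,.0f} / {p99:,.0f}")

The figures are invented; the only point is that a single number can hide a very wide spread of possible outcomes.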

Of course, many factors may lead to risk assessments that need to be taken with a grain, a pinch, or a bucket of salt.

The issue is whether the CRO understands the level of salt required. Should management make business decisions based on the available risk assessments?

If the likelihood of error in a risk assessment is unacceptable, should the decision be delayed until improvements are made – if that is even possible?

What do you think?

There are other dangers in risk assessment, which I will discuss in a later post.

  1. John J Brown
    July 31, 2022 at 2:07 PM

    Norman, I could not agree more! Many risk assessments suffer from multiple biases (see the cover article in the April 2020 RIMS Magazine), as well as a misunderstanding that risks exist on a continuum of likelihood-impact pairs — e.g., a low likelihood-high impact AND a high likelihood-low impact (and many points in between). These flaws can lead to focusing on the wrong risks, which can be worse than not focusing on risks at all. I have also seen risk managers run intricate Monte Carlo analyses on risks that likely have a +/- 50% accuracy on likelihood and impact, in the search to quantify VaR.
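
    To illustrate that last point: a minimal sketch in Python with made-up inputs (none of these figures come from the comment), showing how a +/- 50% error in the likelihood and mean-impact inputs moves the 95% VaR a Monte Carlo run reports.

        # Illustrative only: the same Monte Carlo VaR calculation run with the
        # assessed inputs and with both inputs off by 50% in each direction.
        import numpy as np

        rng = np.random.default_rng(seed=1)

        def var_95(p_event, mean_impact, n=50_000):
            """Simulate one risk's annual loss and return the 95th-percentile loss."""
            occurred = rng.random(n) < p_event
            impact = rng.lognormal(np.log(mean_impact), 0.5, n)
            return float(np.percentile(np.where(occurred, impact, 0.0), 95))

        print(f"VaR with assessed inputs : {var_95(0.20, 2_000_000):,.0f}")
        print(f"VaR if inputs are -50%   : {var_95(0.10, 1_000_000):,.0f}")
        print(f"VaR if inputs are +50%   : {var_95(0.30, 3_000_000):,.0f}")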

  2. Ammar Ahmed
    August 1, 2022 at 12:02 AM

    In this article, Norman, you treat risk assessment as perhaps the primary filter for business decisions. In fact, when significant business decisions are taken, no organization relies on a simple risk assessment matrix or the ordinary subjective risk assessment techniques used by risk professionals – those serve only an oversight role at the top level. Multi-disciplinary quantitative and qualitative (subjective) data and assessments are sought so that organizations use all their resources to reach the right decisions (as far as they can be foreseen), incorporating cost-benefit analysis and the probabilities of the various factors affecting the decision. The quality of this decision-making improves as the organization's size, and therefore its resources, grow, allowing complex data analysis, forecasts, third-party consulting reports, etc. to feed into the decision.

    I understand that you have long been a proponent of applying a risk management mindset to business decisions at every level and using it to find opportunities instead of only focusing on risks, and I would agree with that approach. However, with that approach we face a wide spectrum: at one end, risk assessments are applied in an undocumented manner to very small and insignificant (often routine) decisions; at the other end are very high-level business decisions (such as acquisitions or introducing a new product or technology). As we move from the former to the latter end of that spectrum, decisions are backed by all sorts of analysis within the resources and capabilities of management.

    If they err in their feasibility studies and assessments, which are subjective in many ways, that is the best they can do. They can certainly work on their limitations and improve their decision-making continuously, but this in no way shows that risk assessments done by business decision-makers are faulty in their current state, because no one uses subjective ratings displayed on likelihood and impact boxes to decide the fate of a business decision lying in front of them.

    • Norman Marks
      August 1, 2022 at 6:29 AM

      Ammar, thank you for the response.

      I agree that simplistic lists of risks or heat maps are of little value.

      Where I disagree is in calling the more rigorous analysis of what might happen something different from risk assessment. A feasibility study, for example, includes a study of what might happen, its possible effects and the likelihoods of those effects.

      I believe you are describing more effective risk assessments.

      My point remains, though.

      Even the more sophisticated data analyses and such can be in error.

      I have described in my books how Tosco Refining Company did an excellent analysis when considering and then planning a new unit. However, they didn’t update it as business conditions changed and therefore built a less-than-optimal plant.

  3. August 1, 2022 at 3:51 AM

    Very important topic. A few points to add:
    – good CROs back-test all their risk models, so we actually measure and reduce error in risk assessments, knowing all too well what a significant issue it is. If you are a CRO reading this and you don’t back-test your risk analysis, then you are not a good CRO :))
    – there are many more sources of error in risk assessments, starting with framing and, most important of all, model error. Basically, if you use P x I and plot it on a heat map, your error is worse than chance and you are better off not doing any risk analysis at all. Scientific fact :))
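
    For readers unfamiliar with back-testing, the simplest version just compares how often actual losses exceeded the model's threshold with how often the model said they should. A sketch in Python with made-up data (illustrative only, not the commenter's method):

        # Illustrative back-test of a daily 97.5% loss threshold over one year of
        # trading days: the model implies roughly 2.5% exceedances. All data made up.
        import numpy as np

        rng = np.random.default_rng(seed=7)
        days = 250
        model_threshold = rng.normal(100, 5, days)   # the model's daily 97.5% loss threshold
        actual_loss = rng.normal(60, 25, days)       # realised daily losses

        observed = int(np.sum(actual_loss > model_threshold))
        expected = 0.025 * days                      # about 6 exceedances expected

        print(f"Observed exceedances: {observed}, expected: {expected:.1f}")
        # A persistent gap in either direction is measurable evidence of model error.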

    • Norman Marks
      August 1, 2022 at 6:33 AM

      Alex, you mention perhaps the greatest source of error: asking the wrong question! Any analysis is limited by how it is framed.

    • Norman Marks
      August 1, 2022 at 7:03 AM

      Alex, what is your level of confidence in your models and analyses? Is that an acceptable level? Does management know?

      • August 1, 2022 at 7:09 AM

        Management sets the confidence level for our risk models and for us it was 97.5% unless stated otherwise. Confidence level is stated in the first sentence of any risk analysis output. In RM2 world that’s basic hygiene.

  4. August 1, 2022 at 9:38 AM

    Hi Norman

    There are thousands of risk assessment methods and therefore many answers.
    Balanced decision-making requires the right ingredients or inputs. I like to use the term “optimum” rather than the “best information” referenced in 31000. Competence and human factors are vital even if reliable models are being used: crap in, crap out.

    Speaking of hygiene, do we need to consider using “threat and opportunity” rather than “risk and opportunity”?
    “In the same way, business leaders make decisions based (at least in part) on information about risks and opportunities.”

    • August 1, 2022 at 9:43 AM

      I can think of 3 methods (https://riskacademy.blog/what-is-a-risk-its-not-what-you-think-it-is/); what are the other thousands?

      • August 1, 2022 at 3:53 PM

        In Health & Safety alone there are many methods; 31010 mentions others but is not exhaustive.

        • August 2, 2022 at 6:39 AM

          Most of the things in 31010 are pseudoscience and astrology; for the rest of the techniques referenced there, the underlying math is the same – the 3 things I mentioned in the article above. Risk assessments are pretty basic mathematical exercises.

          • August 2, 2022 at 7:03 AM

            That sounds like (or is) a put-down, but hopefully my response will be useful to some of Norman’s readers.

    • Norman Marks
      August 1, 2022 at 9:43 AM

      Sean, good question. I would use the language of the business.

      • August 1, 2022 at 3:56 PM

        Good idea, Norman, but I think it is important that participants understand the nuances.

  5. Bruce McCuaig
    August 2, 2022 at 4:29 AM

    Norman,
    It seems to me that one useful role audit might pursue, if they can pry themselves away from SOX, is to ensure critical business decisions are based on reliable information. The skills and knowledge required are substantially different, and audit standards and paradigms would need to change, but it might be possible to add or directly contribute to business economic value. That is not the case today. It’s too late to audit after decisions are made. The audit universe should become the decision universe. Assurance should be measured by performance.

    • August 2, 2022 at 6:37 AM

      Please no :)) Auditors don’t have the competencies to understand risk management, let alone decision science

    • Norman Marks
      August 2, 2022 at 7:31 AM

      Bruce, despite what Alex says, this is an audit I have done with my team and suggest everybody consider it.

      We talked to each of the executives at a newly acquired company and asked them what information they used in running the business. We then assessed the controls over that information’s completeness, accuracy, currency, and timeliness.

      • Maria Gutierrez
        August 2, 2022 at 8:38 AM

        As an internal auditor, I’m very curious about the results/findings from the audit exercise you mention.

        • Norman Marks
          August 2, 2022 at 8:48 AM

          Maria, I was surprised to find that my team’s assessment was that there were adequate controls over the information used by the executives.

          • Norman Marks
            August 2, 2022 at 8:51 AM

            Having said that, I also think that internal audit can look at other aspects of this. For example, I have had my team audit the controls over the reports provided to the board each quarter. Another potential audit is to look at the decision-making processes around major decisions. I did that for the selection of suppliers, and it could be useful to look at the decisions around product pricing.

      • brucemccuaig1
        August 6, 2022 at 2:24 AM

        Norman
        That sounds like a progressive approach if it can be sustained and expanded. As for the comment from Alex, I know that very few auditors today have any significant decision science skills, and none are required by the IIA. But sound business decisions today are not driven by the historical accounting data auditors focus on. In particular, strategic decisions and capital allocation are driven by forward-looking projections of economic, social, political, competitive, scientific, technological and other events and conditions.

  6. August 2, 2022 at 11:49 AM

    Norman, I feel very differently about this. Auditors look at a) control effectiveness (most decision processes are not formalised and certainly not risk-based yet, so there is nothing for auditors to test until the 2nd line finishes integrating risk management) and b) control adequacy – basically whether the business process is adequate to the task, the risk appetite, etc. (here auditors have no competency to comment or make recommendations unless proper decision scientists are involved). In the future, decision-making will certainly become an auditable process, but only after the 2nd line has finished and RM2 is implemented.

    • Norman Marks
      August 2, 2022 at 2:05 PM

      Alex, I have had the pleasure of working with many internal auditors who would do a good job here. Sadly, that is not your experience. I will probably write about this next week.

  7. August 3, 2022 at 1:54 PM

    Great article!
    In addition, potential weaknesses of the traditional two-dimensional risk assessment approach can be addressed. Beyond the existing ‘effect’ and ‘likelihood’ dimensions, ‘speed’ and ‘duration’ can be added. For details, please see our article at
    https://dergipark.org.tr/tr/download/article-file/2410279
    I would be happy to hear your comments on whether the classic two dimensions are sufficient.

    • Norman Marks
      August 3, 2022 at 2:29 PM

      I agree that two dimensions are insufficient, even if a range of potential effects is considered. There’s the likelihood of multiple events, the speed of onset, the duration, the time to assess what the heck happened, and much more
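
      One way to picture those extra dimensions is as explicit fields on each risk scenario; a hypothetical sketch in Python (the field names are illustrative, not taken from the article or the linked paper):

          # Hypothetical record carrying more than the classic two dimensions.
          from dataclasses import dataclass

          @dataclass
          class RiskScenario:
              name: str
              likelihood: float          # probability over the assessment horizon
              impact_low: float          # a range of impact, not a single point
              impact_high: float
              onset_days: int            # speed of onset
              duration_days: int         # how long the effect persists
              assessment_lag_days: int   # time to work out what actually happened

          print(RiskScenario("Competitor launches early", 0.4, 500_000, 3_000_000, 30, 180, 14))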
