Operational Risk 101: Tackling Basel II

In the first three articles of this series, we temporarily put Basel II aside and set the stage for a more practical approach to operational risk management. Rather than following the Accords blindly, we chose to define operational risk in terms of operational performance, not losses. We took this rather bold step because of one simple fact – implementing the Accords verbatim leads to a host of operational hurdles and definitional roadblocks that were just too hard to overcome.1

So, instead of becoming a 21st century Sisyphus fighting an endless uphill battle, we tried a different and more intuitive tack altogether.

We began by first establishing the critical relationship between operational performance, cost, and risk. For this, we used the Fundamental Operational Objective, which states that the goal of every operation is to provide the desired level of performance at the lowest cost possible, operating within an acceptable level of risk. Following this path, we were then led to a natural generalization of the Basel definition of operational risk. Specifically, we defined operational risk in terms of an institution’s ability to achieve its performance and cost targets. This also resulted in formally defining the KPI as the institution’s performance and cost metrics.

During our analysis, we noticed that certain conditions had to be met in order for an institution to achieve its KPI targets. This subsequently led to a formal definition of quantitative critical success factors for each of the KPI, which we defined as KRI.2

At this point, we had the framework to measure performance and risk, but not the specific metrics. The problem became how to select the right KPI and KRI out of an almost infinite amount of operational and financial data.

For this, we adopted a well-accepted performance measurement technique, the Balanced Scorecard.3 With some minor modifications, we were able to use the Balanced Scorecard to establish a formal mechanism for identifying the meaningful KPI and KRI for the various components of the operation.

Finally, we were able to link all the KPI and KRI to the overall corporate objectives using the Efficient Operations Hypothesis. Importantly, this provided us with the means to systematically measure performance across the entire enterprise and produce a consistent measure of performance and risk.

And yet, in spite of all this good work, one big question still remained: does our approach comply with Basel II?

Just what are the AMA Minimum Standards?

Although there are a number of conditions that a financial institution must meet in order to qualify for the Accord’s Advanced Measurement Approach (AMA), it essentially boils down to demonstrating that it can perform the following quantitative functions:4

  1. Estimate expected and unexpected operational losses within a given level of confidence
  2. Identify and track key operational risk factors reflecting the business environment and internal controls
  3. Perform scenario analyses to simulate possible operational losses and loss events, as defined by the Accords, incorporating both internal and external data in the analysis

Therefore, it seems reasonable that in order for our approach to be consistent and compliant with the Accords, we must be able to show that it meets these three criteria.

Ok then, let’s start with Point 2 since it is the easiest

It is clear that our KPI and KRI are business environment and internal control risk factors by virtue of the simple fact that we defined operational risk specifically in terms of the KPI and KRI. In other words, our risk factors are a direct by-product of our approach.

Point 3 is really not much more difficult. We only need to show that we can model operational losses and loss events using our KPI and KRI.

Fortunately, we can use fairly standard statistical analyses to associate the KPI and KRI with the specific Basel II loss event categories. Moreover, once we have made this association, we can simulate various values of KPI and KRI to estimate the resulting impact on operational losses using both internal and external data.5 Hence, our approach also meets the requirements of Point 3.
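As an illustration of the kind of analysis described in endnote 5, here is a minimal sketch in Python. The KPI/KRI readings and losses are simulated stand-ins, an ordinary least-squares fit stands in for a full generalized linear model, and none of the variable names come from the Accords or from our framework.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical history: 250 daily observations of 4 KPI/KRI readings and the
# operational losses (for one Basel II event category) booked on each day.
n_obs, n_factors = 250, 4
X = rng.normal(size=(n_obs, n_factors))            # stand-in KPI/KRI readings
true_beta = np.array([1200.0, 0.0, 800.0, 300.0])  # unknown in practice
y = X @ true_beta + rng.normal(scale=500.0, size=n_obs)  # stand-in losses

# Step 1: associate losses with the KPI/KRI (OLS as a stand-in for a GLM fit).
X1 = np.column_stack([np.ones(n_obs), X])          # add an intercept column
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Step 2: capture the dependencies among the KPI/KRI with their covariance.
mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)

# Step 3: simulate KPI/KRI scenarios and estimate the resulting losses.
scenarios = rng.multivariate_normal(mu, cov, size=10_000)
simulated_losses = np.column_stack([np.ones(len(scenarios)), scenarios]) @ beta_hat

print(f"Mean simulated loss:  {simulated_losses.mean():,.0f}")
print(f"99.5% simulated loss: {np.quantile(simulated_losses, 0.995):,.0f}")
```

In practice, internal and external loss data would replace the simulated history, and the scenario engine would draw on stressed rather than purely historical KPI/KRI values.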

That leaves Point 1. To show our approach is compliant with Basel II, we only need to demonstrate that we can adequately estimate expected and unexpected losses. Now, you might be thinking, “sure, this is where everybody hits the wall.” But don’t worry, it really won’t be that hard.

However, before we can do this, we first need to take a closer look at just what is an operational loss.

For Want of a Nail…

On the surface, the concept of an operational loss seems pretty straightforward: an operational loss is a (presumably) monetary loss resulting from an operational failure, end of story. There are only three problems with this statement – what precisely is an operational failure; what precisely is a monetary loss; and how, precisely, do operational failures cause monetary losses?

This may sound a bit facetious, but to really understand the true scope of these problems, let’s look at a simple example.

Let’s say that a network card fails in a server – a typical operational failure. Clearly, the cost of discovering the faulty card and its replacement should be considered as operational losses. Also note that a single loss event (the network card failure) can lead to more than one operational loss, in this case, the cost of discovery and the repair costs. Sounds pretty straightforward so far.

Now, let’s further assume that the network card fails just before the Fed Wire closes and the bank is buying one billion US dollars of one week money. Because the bank can’t communicate with the Fed, the transaction fails. This results in the bank being short one billion at the Fed – not a good thing, to say the least. At this time of day, the bank will have no choice but to buy the necessary funds at the Fed’s discount window at a substantially higher rate of interest (i.e. the overnight discount rate as opposed to one week money) – hence, another loss event and financial loss.6

After a review of the incident, the bank, with a little prompting from the Fed, might decide that the overall treasury technology infrastructure lacks sufficient redundancy and undertake a two-year, $5m upgrade project – yet one more loss event and a series of financial losses which sum to $5m.7

So, much like the children’s nursery rhyme in which the war was lost for the want of a nail, a simple network card failure, one costing all of $150, resulted in a $5m operational loss.

Unfortunately, through this example we see that the commonly held notion that a single loss event leads to a single operational loss is plain wrong. The painful truth is that a single loss event often causes numerous subsequent operational failures, each of which, in turn, may lead to numerous operational losses – a real headache for any operational risk manager to track.8

This example helps illustrate some of the other shortcomings of the Basel II definitions as well.

When is a Loss not a Loss?

Notice that while the network card failed at a single point in time, the subsequent loss events, such as the discovery and repair of the problem, occurred over a span of time. In fact, the last loss, due to the infrastructure upgrade, took place over two years. The same is true of the operational losses themselves, which can also occur over time. Therefore, we cannot simply consider loss events and losses as points in time, but rather as spans of time. This creates all kinds of computational headaches.9

Another major problem with Basel is the rather cavalier assumption that we all know exactly what a loss is. If we simply assume that a loss is a financial loss as reflected in the balance sheet and income statement, an approach taken by many operational risk managers, we will fall into a hopeless quagmire of accounting treatments (and worse) that have absolutely nothing to do with operational losses.

For instance, operational losses may be capitalized (as in the case of technology or facility improvements) or simply expensed. They might be posted to a single ledger or spread across multiple ledgers in the GL. Some losses may be accelerated for tax purposes while others are deferred. And the list goes on and on. Financials are prepared solely for corporate performance and tax reporting, not to support operational risk management.

Therefore, in order to account accurately for operational losses, we must capture the true monetary cost of the operational failure (as opposed to the realized cost) and track it independently of the institution’s overall financials.

Another major problem associated with the Basel II definition of losses is how they should be allocated.

In our example, it is not clear how much of the $5m infrastructure project should really be allocated to the network card failure. While it may have been the final straw that forced the bank to upgrade its technology infrastructure, the network card failure could be just one in a long line of failures which occurred over a period of time. This leads to the rather counterintuitive possibility of a significant loss without a direct loss event – have you grabbed for the Tylenol yet?

Fortunately, our approach eliminates most of these Basel II problems, as well as many others. But you’ll just have to sit on the edge of your chair a bit longer while we take a look at one of the more interesting, and extremely important, aspects of Basel II.

What is an Expected Unexpected Loss Anyway?

Quite rightly, the Accords differentiate between expected and unexpected operational losses. Actually, they go so far as to propose a capital charge for only unexpected losses, provided the bank can demonstrate it can “adequately capture expected losses in its internal business practices”.10

In other words, the Accords correctly assume that expected losses, since they are expected, will have been budgeted by the bank, which will therefore have the necessary capital to cover them.11 Unfortunately, many operational risk managers have failed to grasp this critical nuance.

For the most part, expected and unexpected losses have been defined by risk professionals strictly in terms of probability distributions and confidence levels. Typically, they estimate operational losses by looking at historical losses over some time span. Then, using standard statistical techniques, they forecast future expected losses by aggregating all the losses within a high degree of confidence, say 99.5 per cent. The unexpected loss is simply the remaining tail of the probability distribution.12
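Here is a rough empirical sketch of that conventional calculation, assuming a made-up lognormal sample in place of a fitted loss distribution; the 99.5 per cent threshold and the summary statistics are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a fitted operational loss distribution: 100,000 simulated
# annual aggregate losses from a heavy-tailed lognormal (purely illustrative).
annual_losses = rng.lognormal(mean=13.0, sigma=1.2, size=100_000)

confidence = 0.995
q = np.quantile(annual_losses, confidence)     # the 99.5% loss threshold

# "Expected" loss under this treatment: everything aggregated up to the
# confidence level (the part of the distribution below the threshold).
expected_loss = annual_losses[annual_losses <= q].mean()

# "Unexpected" loss: the remaining tail beyond the threshold, summarised
# here as the average excess over it.
unexpected_loss = annual_losses[annual_losses > q].mean() - q

print(f"99.5% loss threshold: {q:,.0f}")
print(f"Expected loss:        {expected_loss:,.0f}")
print(f"Unexpected tail loss: {unexpected_loss:,.0f}")
```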

While this may be consistent with estimating a market risk VaR, it ignores a very important fact. The Accords state that if expected losses are captured as part of an institution’s “internal business practices” (e.g. budgeted),13 such losses do not have to be included in the capital calculations. In other words, even when losses have been budgeted for, under the common statistical definition of expected losses a financial institution will still have to set aside regulatory capital against them – ouch!

Clearly, the right approach is to exclude true expected losses from any statistical analysis at the outset. This would allow us to compute the expected unexpected losses, the expression of operational risk exposure. While this may sound a bit strange, in the next article, we will show just how easy and intuitive this really is.

So Where does this Leave Us?

Before we got sidetracked on semantics, we only had to show that we could estimate expected and unexpected operational losses within a given degree of confidence in order to be compliant with Basel II. Given the above discussion, this comes down to showing that we can correctly account for budgeted and non-budgeted losses. Fortunately, using our approach to operational risk management, this is actually fairly straightforward.

Remember, we defined operational risk specifically as the risk that we might not meet one or more KPI targets. Normally, institutions are willing to live with some degree of inefficiency. This is commonly reflected as an error tolerance around the KPI targets. For example, while we may have a goal of zero settlement breaks, we might be willing to live with 0.5 per cent of all daily settlements failing.

Now, if we meet or exceed our KPI target, then there is no loss and all is well. If we just miss our target but are within the error tolerance, we should expect an operational loss, but an acceptable one. Remember, we can always reset the error tolerances to keep the losses acceptable. Hence, such losses should be budgeted and be part of our internal business practices – hmm, sounds familiar.

That’s right, you might have already guessed that such losses meet the Basel II definition of exempted expected losses. Sure enough, we shall define losses that result from missing our performance targets, but within our error tolerances, as expected losses.

But what about the losses that result from a big performance failure, the ones outside our tolerances? Presumably, we had controls in place to prevent such errors; hence, they were unexpected. So, we will simply define unexpected losses as those losses due to performance which falls outside the error tolerance for a given KPI. Yes, it was that easy.
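To make the rule concrete, here is a minimal sketch in Python of that classification for a single KPI; the function name, the "lower is better" convention, and the numbers are all hypothetical.

```python
def classify_kpi_outcome(actual: float, target: float, tolerance: float) -> str:
    """Classify one KPI observation against its target and error tolerance.

    Example: a target failure rate of 0.0 with a tolerance of 0.005 means up
    to 0.5% of daily settlements may fail before a loss becomes unexpected.
    Assumes 'lower is better' for the KPI, purely for illustration.
    """
    if actual <= target:
        return "no failure"             # target met or exceeded: no loss
    if actual <= target + tolerance:
        return "expected loss"          # target missed, but within tolerance
    return "unexpected loss"            # target missed, outside tolerance


# The settlement example from the text: 0.7% of settlements failed today.
print(classify_kpi_outcome(actual=0.007, target=0.0, tolerance=0.005))
# -> "unexpected loss"
```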

Can You Run that by Me Again?

To recap…

  1. In order to capture both expected and unexpected losses, we first set the error tolerances for each of the KPI.
  2. Next, we monitor actual performance, noting each time performance meets or exceeds our target (no failure), falls within the error tolerances (an operational failure which results in expected losses), or falls outside the error tolerances (an operational failure which results in unexpected losses).
  3. For each operational failure, we assign a monetary value (not an accounting value).
  4. We then sum the expected losses and unexpected losses across all KPI to report overall operational performance.
  5. Using a historical time series of unexpected losses, we compute the expected unexpected losses to estimate our operational risk exposure (a toy version of these steps is sketched below).

Q.E.D.
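As promised, here is a toy, self-contained Python sketch of the five steps for a single KPI, using the settlement example. The daily history, the monetary values, and the use of a simple mean in step 5 are all assumptions made purely for illustration; a real implementation would run over every KPI and a much longer history.

```python
import numpy as np

# Step 1: set the target and error tolerance for this KPI.
target, tolerance = 0.0, 0.005           # goal: zero fails; tolerate 0.5%

history = [                              # (daily failure rate, monetary cost)
    (0.000,      0.0),
    (0.003,  4_000.0),
    (0.004,  5_500.0),
    (0.009, 60_000.0),
    (0.012, 95_000.0),
]

expected, unexpected = [], []
for actual, cost in history:             # steps 2-3: classify and cost
    if actual <= target:
        continue                         # target met: no failure, no loss
    elif actual <= target + tolerance:
        expected.append(cost)            # within tolerance: expected loss
    else:
        unexpected.append(cost)          # outside tolerance: unexpected loss

# Step 4: sum the expected and unexpected losses.
print(f"Total expected losses (budgeted): {sum(expected):,.0f}")
print(f"Total unexpected losses:          {sum(unexpected):,.0f}")

# Step 5: estimate the risk exposure (the 'expected unexpected loss') from
# the historical series of unexpected losses; a simple mean is used here,
# though a high quantile could equally be reported for a confidence level.
print(f"Expected unexpected loss:         {np.mean(unexpected):,.0f}")
```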

Wow, We Made It!

By simply generalizing the definition of operational risk by expressing it in terms of performance rather than losses, we were able to construct a systematic method to compute operational risk exposure. Moreover, this method allows us to dramatically lower the Basel II capital charge as well as provide an effective means to measure and improve both the quality and value of the institution’s operations.

Now, that is pretty cool.

Before we start opening the Champagne, however, this is still just theory. No matter how clever it is, for our approach to have real value, we have to show that it can work in a typical financial institution.

Of course, this is exactly what we are going to do in the next article.

Note: The next article in this series will be published in mid-March

****

1 Not the least of which is a precise definition of a loss, but more on that in a little while.

2 a.k.a. key risk indicators.

3 See Kaplan, Robert S. and Norton, David P., “The Balanced Scorecard – Measures That Drive Performance”, Harvard Business Review, January-February 1992.

4 See AMA quantitative standards, page 144, “International Convergence of Capital Measurement and Capital Standards: a Revised Framework”, Basel Committee on Banking Supervision, Bank for International Settlements, June 2004 (Basel II).

5 In mathematical terms, we can parameterize both operational losses and events in terms of the KPI and KRI. Using generalized linear regression techniques, we can then estimate the coefficients of each of the KPI and KRI, as well as construct a standard covariance matrix to account for the dependencies among the KPI and KRI.

6 This will also, most likely, upset the Fed, which takes a very dim view of borrowing from the discount window.

7 Some of you may not agree that this project should be considered a loss event associated with the network card failure. However, Basel II does not exclude such losses; it only really excludes the opportunity costs associated with a business handicapped by an operational failure.

8 In mathematical terms, we would classify the relationship between loss events and operational losses as being many-to-many – a particular loss event may lead to a number of operational losses, and a particular loss may be the result of numerous loss events. And since a single loss event can create an ever-branching tree of loss events, each of which may spawn any number of operational losses, we can see that a single loss event may produce lots of additional loss events and losses.

For you physicists, the best method of modeling loss events and losses is to apply the concept of timelines from general relativity. It correctly accounts for the interdependencies and span-of-time issues. Plus, it leads naturally to the concept of loss event horizons, making the whole exercise much simpler.

9 For you mathematicians out there, get your Lebesgue integration text out, you are going to need it.

10 “International Convergence of Capital Measurement and Capital Standards: a Revised Framework”, Basel Committee on Banking Supervision, Bank for International Settlements, June 2004 (Basel II), page 144.

11 The authors understand that there is a big difference between having the money and budgeting the money; after all, we see this every year when Congress debates federal spending.

12 Specifically, we integrate the operational loss probability distribution to find the total losses up to the confidence level.

13 See endnote 10.
