
Operational Risk - Key Problems with the Advanced Measurement Approach

The new Basel Accord seeks to apply to operational risk the statistical techniques that have been developed for the measurement of market risk and credit risk, and to replicate for operational risk the success of applying these techniques to calculating capital requirements. Banks are encouraged to adopt advanced statistical approaches under the ‘advanced measurement approach’ (AMA) described in CP3 (the Basel Committee, 2003). This immediately raises a number of important issues, some of which are discussed in this article.

Fundamental Differences in Operational Risk

1. Difficulties in Risk Modelling

The most obvious issue is that credit risk and market risk share important properties. Both are characterised by the concept of ‘risk exposure’ and both are subject to industry-wide standards for assessing and rating the probability of a loss event2. Operational risk exhibits neither property, largely because there is no systematic, consistent, industry-standard method for collecting and collating the data. The 56-cell matrix (eight lines of business by seven categories of operational risk) described in CP3 provides an initial framework under which to collate the data, but the available data remains sparse and largely anecdotal.
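
As an illustration of how sparse this framework can be in practice, here is a minimal Python sketch of the CP3 business line/event type matrix used as a collation structure. The loss amounts are invented; the cell names follow the Basel II categories.

```python
from collections import defaultdict

# The eight Basel II business lines and seven loss-event types that
# define the 56-cell matrix described in CP3.
BUSINESS_LINES = [
    "Corporate Finance", "Trading & Sales", "Retail Banking",
    "Commercial Banking", "Payment & Settlement", "Agency Services",
    "Asset Management", "Retail Brokerage",
]
EVENT_TYPES = [
    "Internal Fraud", "External Fraud",
    "Employment Practices & Workplace Safety",
    "Clients, Products & Business Practices",
    "Damage to Physical Assets",
    "Business Disruption & System Failures",
    "Execution, Delivery & Process Management",
]

# Collate losses into the matrix: each cell accumulates the loss
# amounts reported for that (business line, event type) pair.
matrix = defaultdict(list)

def record_loss(business_line, event_type, amount_eur):
    if business_line not in BUSINESS_LINES or event_type not in EVENT_TYPES:
        raise ValueError("unknown business line or event type")
    matrix[(business_line, event_type)].append(amount_eur)

# Hypothetical loss events for illustration.
record_loss("Retail Banking", "External Fraud", 25_000)
record_loss("Retail Banking", "External Fraud", 60_000)
record_loss("Trading & Sales", "Execution, Delivery & Process Management", 140_000)

# Sparsity in practice: most of the 56 cells remain empty.
populated = sum(1 for cell in matrix if matrix[cell])
print(f"{populated} of {len(BUSINESS_LINES) * len(EVENT_TYPES)} cells populated")
```

Even with a standardised grid like this, most cells in a real institution hold too few events to support a credible fit.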

Risk modellers wishing to fit loss distributions therefore face a challenge: there is little consistent data with which to work, there is no agreed definition of what ‘risk exposure’ means, and there are no standards for assessing the probabilities of loss events.

2. Data Collection Issues

In the case of market risk and credit risk, all the data is explicitly available in electronic form. This data can be collected, collated and analysed through automated systems, which means that historical records of loss data are complete, consistent and homogeneous. With regard to operational risk the situation could not be more different. Each loss event is the result of a complex interaction between many potential causal factors, and a significant loss event can usually be analysed retrospectively into the ‘unlucky’ alignment of many minor factors, each of which at an individual level would be considered insignificant3. This effect is made clear through the detailed post-event analysis of aircraft accidents4, in which the combination of such diverse factors as unusual weather conditions, a distraction to an air traffic controller, a mistake by a maintenance engineer and the fact that the pilot has just had an argument with his or her spouse can all add up to a catastrophe. The collection, collation and analysis of this type of data is almost impossible to automate, requiring in most instances detailed manual activity by experts, which in itself makes the process difficult, error-prone and expensive.

Another problem is that while this analysis can be done historically for events that have occurred, the complexity of the potential dependency trees that would need to be developed to apply this analysis for forecasting purposes is beyond the economic scope of current modelling. Identifying and calculating correlation between apparently independent factors is also a difficult issue. Contrary to the assertion made by Haas and Kaiser5, there is a real possibility that a fraud case in London is correlated to an earthquake in San Francisco – what better time to carry out fraud than when management attention is focused elsewhere? So what correlation factor should be used?
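
To see why the choice matters, the following sketch simulates two loss streams whose drivers are coupled with an assumed correlation. The severity parameters and the Gaussian coupling are illustrative assumptions, not a calibrated model.

```python
import math
import random

def simulate_aggregate(rho, n=20_000, seed=42):
    """Simulate aggregate losses from two streams (say, fraud in London
    and earthquake damage in San Francisco) with lognormal severities
    whose underlying drivers have correlation rho. Illustrative only."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        # Couple the second driver to the first with the chosen correlation.
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        loss_a = math.exp(10 + 0.5 * z1)   # hypothetical severity parameters
        loss_b = math.exp(10 + 0.5 * z2)
        totals.append(loss_a + loss_b)
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return mean, var

_, var_indep = simulate_aggregate(rho=0.0)
_, var_corr = simulate_aggregate(rho=0.8)
# The assumed correlation factor directly inflates the aggregate risk.
print(var_corr > var_indep)
```

The point is not the numbers but the sensitivity: the aggregate figure depends heavily on a correlation parameter that, as argued above, cannot reliably be estimated from the available data.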

We can learn important lessons from past experience, but we cannot generalise the techniques to predict future complex loss event types not yet experienced. The unexpected losses in the tail of the loss distribution are all likely to be of this complex type. In effect, we are denied access to the potential portfolio of operational risks that exist because they are not explicitly known and cannot realistically be identified or predicted6.

The data collection process is also subject to the negative effects of a traditional risk culture in banking institutions in which employees have been encouraged to hide errors, poor decisions and criminal activities rather than report them7. Finally, there is another major hindrance in that banks usually truncate the data collection at around €10,0008, 9. Events with losses below this value are discarded because their high volume and low individual value make them very expensive to collect relative to the losses involved. While this means that only ‘significant events’ are recorded, it leaves a huge hole in the data for the underlying causal effects that are characteristic contributors to major losses. Thus any loss distributions developed are based on incomplete data that is biased in an unknown way.
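
The effect of the collection threshold can be shown directly: a naive fit to data truncated at €10,000 misstates the parameters of the underlying distribution. The sketch below assumes a hypothetical lognormal loss process; the parameter values are invented for illustration.

```python
import math
import random
import statistics

rng = random.Random(7)

# Hypothetical 'true' loss process: lognormal with mu=8, sigma=1.5
# (median loss around EUR 3,000, so most losses fall below EUR 10,000).
true_mu, true_sigma = 8.0, 1.5
losses = [rng.lognormvariate(true_mu, true_sigma) for _ in range(5_000)]

def fit_lognormal(sample):
    """Naive maximum-likelihood fit that ignores any truncation."""
    logs = [math.log(x) for x in sample]
    return statistics.fmean(logs), statistics.pstdev(logs)

mu_full, _ = fit_lognormal(losses)

# Collection threshold: losses under EUR 10,000 are discarded.
reported = [x for x in losses if x >= 10_000]
mu_trunc, _ = fit_lognormal(reported)

# The naive fit to the truncated sample overstates mu, because the
# discarded small losses were never seen; a correct fit would need a
# truncated likelihood that conditions on the EUR 10,000 threshold.
print(round(mu_full, 2), round(mu_trunc, 2))
```

The bias here is at least of a known direction because the threshold is known; the deeper problem described above is that the missing small events also carry the causal information behind the large ones.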

3. Homogeneity of Loss Data

Another issue is that of the homogeneity of the loss data. Despite the standardisation of the 56-cell matrix described above, there is no attempt within that framework to identify the causes of the loss events. In order to fit a loss distribution to given data it is essential that the data elements are all drawn from the same homogeneous distribution, and this leads to the need to define homogeneous cells at an appropriate level in the hierarchy of the business line/event type matrix10.

Part of the problem is the context dependency of operational risk11. The size of the loss and the probability of the event differ considerably according to the circumstances surrounding the event. The business context, the nature of the operational infrastructure and the threat scenarios all change over time, in some cases quite quickly, and so historical data collected in one context may not be applicable in the current context. This brings into question the relevance of historical loss data. Even within the one-year period over which Basel II requires accumulated losses to be covered by capital allocation, the context of those losses can change dramatically, which means that the size of potential losses and the probability of their occurrence are a continuously moving target. How useful, then, is the assumed loss distribution?

4. Hidden Operational Risks

Operational risks are by definition embedded in the operational processes, although their causes may be external or may be related to the failure of the resources that support the execution of business processes, such as people and systems. There are thus at least two obvious sources of hidden operational risk: unforeseen external events (of a type not previously experienced or known); and events embedded in the supporting process resources in such a way as to be difficult to identify and monitor. The first category will always be problematic, unless the art of clairvoyance takes some major leaps forward, but the second category is more within our potential control. Perhaps the starkest examples of this second group are those risks embedded in the documentation that supports business processes, and among those document types, contracts must be one of the areas of greatest concern12.

The main issue here is that the production of contracts is still a cottage industry. Contract lawyers have templates for regular contract types, but each individual contract is then handcrafted, rather like a pot on a potter’s wheel, starting with the standard template as the raw material. The problem from an operational risk perspective is that the important process-related risks are most likely to be embedded in the handcrafted parts of the document. They include elements such as renewal dates, delivery dates, payment dates and a whole host of ‘contract events’ (when this happens you must do that by then). A large bank will typically have between 20,000 and 40,000 contracts13, and no matter how much of an attempt has been made to standardise these contracts, the individual needs for event and date tracking are enormous and complex. Teams of lawyers are employed to transcribe these elements by hand into databases that can be used for automated tracking. This manual approach is both expensive and error-prone.
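
What the lawyers transcribe is, in effect, a table of dated obligations. A minimal sketch of the kind of automated tracking store that could replace the manual diary, with hypothetical contract identifiers and event types:

```python
from datetime import date

# Hypothetical contract-event store: each record is one tracked
# obligation of the kind currently transcribed by hand (renewal
# dates, delivery dates, payment dates, other 'contract events').
contract_events = [
    {"contract": "C-1042", "event": "renewal",  "due": date(2025, 3, 1)},
    {"contract": "C-1042", "event": "payment",  "due": date(2025, 1, 15)},
    {"contract": "C-2331", "event": "delivery", "due": date(2025, 2, 10)},
]

def due_within(events, today, horizon_days):
    """Flag events falling due within the horizon -- the automated
    tracking step that replaces manual diary-keeping."""
    return sorted(
        (e for e in events if 0 <= (e["due"] - today).days <= horizon_days),
        key=lambda e: e["due"],
    )

upcoming = due_within(contract_events, today=date(2025, 1, 10), horizon_days=30)
for e in upcoming:
    print(e["contract"], e["event"], e["due"].isoformat())
```

The hard part, of course, is not the tracking shown here but the reliable extraction of these elements from handcrafted documents in the first place.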

Potential Solutions

1. Business Process-Based Approach

To ensure that the assessment of operational risk provides accurate models of both the size and probability of loss events and is relevant to the real business operations, it must be based upon an up-to-date and accurate model of the business processes, rather than on an abstract taxonomy of risk types14. In order to use this fundamental business process-based approach it is necessary to have a framework that allows the development of the concept of ‘risk exposure’ in operational risk terms, and also a standardised method for measuring the probability of risk events classified according to type.

A novel approach to achieving this is described by Peter Hughes in his paper15. The basis of Hughes’ proposal is that an operational environment in a financial institution supports business transactions, and that operational risks are associated with failure of those transactions, either individually or in bulk. The ‘value at risk’ is related to the volume of the transaction being processed, and the probability of transaction failure is related to a number of standardised risk factors that characterise the transaction type.
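
In that spirit, a simple sketch of a transaction-based exposure calculation follows. The risk factor names and weights are hypothetical assumptions for illustration, not Hughes’ actual calibration.

```python
# Illustrative sketch of a transaction-based exposure measure:
# exposure comes from the value and volume of transactions processed,
# and failure probability from standardised risk factors attached to
# the transaction type. Factor names and rates below are invented.

RISK_FACTOR_WEIGHTS = {          # hypothetical per-factor failure rates
    "manual_intervention": 0.0030,
    "cross_border":        0.0010,
    "new_product":         0.0020,
}

def transaction_failure_prob(factors):
    """Combine the standardised risk factors of a transaction type into
    a single failure probability (independence assumed for simplicity)."""
    p_ok = 1.0
    for f in factors:
        p_ok *= 1.0 - RISK_FACTOR_WEIGHTS[f]
    return 1.0 - p_ok

def expected_operational_loss(volume, avg_value, factors):
    """Value at risk scales with transaction volume and value; the
    probability of failure comes from the transaction type's factors."""
    return volume * avg_value * transaction_failure_prob(factors)

# e.g. 10,000 cross-border payments of EUR 50,000 needing manual steps
el = expected_operational_loss(10_000, 50_000,
                               ["manual_intervention", "cross_border"])
print(round(el, 2))
```

The appeal of this framing is that volumes, values and transaction-type attributes are routinely available in electronic form, unlike the loss histories the AMA depends on.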

The focus on business process analysis is also more likely to provide insights into the complex interdependence of minor factors that combine through dependency trees to produce major loss events, and while forecasting of this type is unlikely to become an exact science, systematic analysis of the process elements will provide a much improved environment in which to foresee unwanted chains of events. This type of process-based approach also removes many of the problems described above, most of which are inherent in the application of loss distributions to fit collected data.

It seems that there is much work yet to be done in this area of qualitative evaluation of operational risks derived by analysis of business processes, and that this may prove to be a more profitable line of enquiry than the continuing work on attempting to develop sophisticated statistical models based on loss distributions. This does however imply that much of the current work on the development of AMA techniques may be redundant and wasted.

2. Full Automation

The current state of operational risk management is characterised by one major drawback – the degree to which it relies upon manual and partially automated processes. Many individual systems and software tools provide partial automation, but data capture, risk aggregation, information integration and reporting to regulators and internal management dashboards remain areas where full automation is an elusive goal. A fully integrated enterprise risk management system must be the objective if the reduction in total cost of ownership and the desired levels of efficiency are to be achieved. An approach to reaching these goals is described in a white paper entitled Integrated Compliance Management16.

There are also still some individual areas where automation has a long way to go, in particular the industrialisation of contract authoring and embedded risk tracking17. Automated contract management systems are beginning to appear on the market and are destined to be a growth area in the near future. These and many other automation solutions will almost certainly draw on XML-based technologies; another example is the automation of reporting using XBRL (eXtensible Business Reporting Language), already proposed by the FSA for regulatory reporting. Process mapping and the subsequent identification and assessment of operational risk will also benefit from automation tools18.


References

Frachot, Antoine, Roncalli, Thierry and Salomon, Eric, 2005, ‘Correlation and Diversification Effects in Operational Risk Modelling’ in Operational Risk: Practical Approaches to Implementation.

1 The Basel Committee on Banking Supervision, 2003, The New Basel Capital Accord – Third Consultation Paper, April 2003.

2 Hughes, Peter, 2005, ‘Using Transaction Data to Measure Operational Risk’ in Operational Risk: Practical Approaches to Implementation.

3 Kalhoff, Agatha and Haas, Marcus, 2004, ‘Management Based on the Current Loss Data Situation’ in Operational Risk Modelling and Analysis: Theory and Practice, page 10.

4 Smith, Martin, 2004, Keynote Presentation at InfoSec, Nicosia, Cyprus Computer Society, October.

5 Haas, Marcus and Kaiser, Thomas, 2004, ‘Tackling the Insufficiency of Loss Data for the Quantification of Operational Risk’ in Operational Risk Modelling and Analysis: Theory and Practice, page 18.

6 Currie, Carolyn V., 2004, ‘Basel II and Operational Risk – An Overview’ in Operational Risk Modelling and Analysis: Theory and Practice, page 74.

7 Haas, Marcus and Kaiser, Thomas, 2004, ‘Tackling the Insufficiency of Loss Data for the Quantification of Operational Risk’ in Operational Risk Modelling and Analysis: Theory and Practice, page 14.

8 Haas, Marcus and Kaiser, Thomas, 2004, ‘Tackling the Insufficiency of Loss Data for the Quantification of Operational Risk’ in Operational Risk Modelling and Analysis: Theory and Practice, pages 15–22.

9 Kalhoff, Agatha and Haas, Marcus, 2004, ‘Management Based on the Current Loss Data Situation’ in Operational Risk Modelling and Analysis: Theory and Practice, page 11.

10 Sheikh, Ahraz and Gavin, John, 2005, ‘Tail Dependency in Operational Risk Models’ in Operational Risk: Practical Approaches to Implementation, pages 14–16.

11 Currie, Carolyn V., 2004, ‘Basel II and Operational Risk – An Overview’ in Operational Risk Modelling and Analysis: Theory and Practice, pages 74–75.

12 Vares, Kristiina, 2004, ‘Contract Management and Operational Risk’ in Operational Risk Modelling and Analysis: Theory and Practice.

13 Vares, Kristiina, 2004, ‘Contract Management and Operational Risk’ in Operational Risk Modelling and Analysis: Theory and Practice.

14 Sherwood, John, 2005, Solution Brief: Operational Risk Management, London: Nimbus EMEA.

15 Hughes, Peter, 2005, ‘Using Transaction Data to Measure Operational Risk’ in Operational Risk: Practical Approaches to Implementation.

16 Kind, Chris, 2004, Integrated Compliance Management: Turning an Obligation and an Expense into an Opportunity and a Value, London: idRisk.

17 Vares, Kristiina, 2004, ‘Contract Management and Operational Risk’ in Operational Risk Modelling and Analysis: Theory and Practice.

18 Sherwood, John, 2005, Solution Brief: Operational Risk Management, London: Nimbus EMEA.
