Ten Pitfalls to Developing Effective Internal Ratings Based Systems

An effective internal rating system (IRS) provides a mechanism for understanding the credit quality of individual assets as well as the overall portfolio. Information gleaned from an IRS is key to making appropriate decisions about pricing, limit setting, trading, charge-offs, provisioning, and capital levels, all important factors in managing risk.

Several pitfalls can interfere with the effectiveness of an IRS. They may be encountered anywhere from the beginning of the design phase through the testing and implementation phases. Some may be inherent in a financial institution’s organization, and some may be myths or misconceptions that stand in the way of truly understanding and revising the way the institution makes credit decisions. Here are the top 10.

Pitfall #1: “We’ve designed an effective internal rating system; that’s the toughest task.”

While it’s difficult to design a system that meets all of an institution’s needs, implementation is bound to be the toughest task. If implementation is not done properly, the system is unlikely to achieve the goals established in the design phase.

A key aspect of implementation is education. It is important to acclimate users to the new system. In addition, “buy-in” at all levels of the organization helps ensure that users implement the system as intended. Developing “buy-in” really begins prior to implementation. If users have input into the development of the system, they are more likely to be open to its implementation.

In the early stages, implementation needs to be monitored carefully, which entails ongoing validation and periodic “tweaking” or recalibrating of the system if problems arise. This process also may require reeducation about the system if any misunderstandings result in implementation problems.

Pitfall #2: “We have a strong credit culture; assessing credit risk is our strength.”

How do you know this is true? Is this assertion based on reputation and/or relatively benign loss statistics, or has the institution really done some sort of testing or benchmarking?

Relying on loss statistics can be deceiving. The goal of an IRS is not just loss avoidance. Loss statistics constitute a blunt instrument that doesn’t capture the many degrees of performance and nonperformance.

For example, if credit risk is underestimated at the inception of a transaction and pricing is based on this assessment, the institution will not be appropriately compensated for the risk it is undertaking. The internal rating process has not worked very well, but, as long as the obligor does not default and no losses are incurred, this problem will not show up in the loss statistics. Definitional issues concerning recovery also can impact the comparability of loss statistics across institutions, limiting their usefulness as a measure of the strength of the credit culture.

Back testing and benchmarking ratings are better ways of determining whether an institution is appropriately assessing credit risk. Back testing means examining the actual behavior of companies in specific rating categories to determine whether they behave as the assigned ratings describe. It could entail comparing actual defaults over time with the assigned probability of default, or comparing actual ratings transition behavior with historical experience. If historical data is not available, benchmarking internal ratings against those of a third party can provide insight into how well the organization assesses credit quality.
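To make the back-testing idea concrete, here is a minimal sketch in Python, assuming a simple one-year cohort per rating grade; the grades, PDs, and counts below are hypothetical. It compares the realized default rate in each grade with the assigned PD and flags grades falling outside a rough binomial confidence band:

```python
import math

def backtest_default_rates(buckets):
    """Compare realized default rates per rating bucket with the PD
    assigned to that bucket, flagging buckets whose realized rate
    falls outside a rough 95% binomial confidence band.

    `buckets` maps a rating label to (assigned_pd, n_obligors, n_defaults).
    """
    results = {}
    for rating, (pd_assigned, n, defaults) in buckets.items():
        realized = defaults / n
        # Normal approximation to the binomial standard error at the assigned PD
        se = math.sqrt(pd_assigned * (1 - pd_assigned) / n)
        z = (realized - pd_assigned) / se if se > 0 else float("inf")
        results[rating] = {
            "assigned_pd": pd_assigned,
            "realized_rate": realized,
            "z_score": round(z, 2),
            "flag": abs(z) > 1.96,  # outside ~95% two-sided band
        }
    return results

# Hypothetical one-year cohort: rating -> (assigned PD, obligors, defaults)
cohort = {"A": (0.001, 4000, 9), "BBB": (0.005, 2500, 11), "BB": (0.02, 1200, 38)}
for rating, result in backtest_default_rates(cohort).items():
    print(rating, result)
```

A flagged grade does not prove the ratings are wrong; it signals that realized experience warrants investigation, for instance into whether the grade’s assigned PD needs recalibration.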

Pitfall #3: “Our credit staff speaks a common language.”

Often the same word or phrase has different connotations within an institution. While most institutions assume that terminology is used consistently within the organization, this assumption may not be correct. Misunderstandings around definitions can cause inconsistencies in the application of an IRS.

For example, a major challenge confronted by financial institutions is understanding exactly what is meant by “default” and “loss given default.” Does “default” refer simply to nonpayment, to payment after a grace period, or to a covenant violation? How does the institution treat restructurings or distressed exchanges? In the case of loss given default, is the calculation based on the trading price 30 days after default or on ultimate recovery following emergence from bankruptcy, and at what discount rate?
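The definitional gap can be material. As an illustration, the sketch below computes LGD for the same hypothetical defaulted exposure under the two conventions just mentioned (post-default trading price versus discounted ultimate recovery); all figures are invented:

```python
def lgd_market_price(price_30d_after_default, par=100.0):
    """LGD proxied by the instrument's trading price ~30 days after default."""
    return 1.0 - price_30d_after_default / par

def lgd_ultimate_recovery(exposure_at_default, recoveries, discount_rate):
    """LGD from discounted workout cash flows.
    `recoveries` is a list of (years_after_default, amount) pairs."""
    pv = sum(amount / (1.0 + discount_rate) ** t for t, amount in recoveries)
    return 1.0 - pv / exposure_at_default

# The same hypothetical defaulted loan under the two conventions:
print(lgd_market_price(42.0))                                      # ~0.58
print(lgd_ultimate_recovery(100.0, [(1.5, 30.0), (3.0, 45.0)], 0.10))  # ~0.40
```

Here the two conventions yield LGDs of roughly 58% and 40% for the same exposure, exactly the kind of spread that makes loosely specified definitions dangerous when results are compared or aggregated.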

Another common inconsistency is caused by misunderstandings about what ratings in fact measure. If the time horizon or definition of what the ratings measure (probability of default, loss given default, or expected loss) is not applied consistently throughout the organization, the ratings generated by the IRS will not be consistent and the organization will not be able to aggregate credit risk effectively. Gradual changes in definitions can occur not only through time, but also from one business unit to another. If consistency is not maintained, the risk being measured and aggregated may not be comparable.
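To see why consistent definitions matter for aggregation, consider the standard decomposition EL = PD x LGD x EAD. The sketch below, with hypothetical figures, sums expected loss across two business units; the total is only meaningful because both units measure PD over the same one-year horizon and use the same default definition:

```python
def expected_loss(pd, lgd, ead):
    """Expected loss. pd and lgd must be measured over the same horizon
    and default definition for the product (and any aggregation of it)
    to be meaningful."""
    return pd * lgd * ead

# Hypothetical exposures from two business units, both using a one-year
# PD and a shared default definition, so the ELs can be summed.
book = [
    {"unit": "corporate", "pd": 0.02, "lgd": 0.45, "ead": 1_000_000},
    {"unit": "sme",       "pd": 0.05, "lgd": 0.60, "ead": 250_000},
]
total_el = sum(expected_loss(x["pd"], x["lgd"], x["ead"]) for x in book)
print(f"Aggregate one-year EL: {total_el:,.0f}")  # 9,000 + 7,500 = 16,500
```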

Pitfall #4: “Our credit staff makes independent rating decisions.”

The structure of the organization can have a major impact on the independence of rating decisions. Management of financial institutions should ensure that the structure of the organization facilitates independent credit decisions while encouraging communication between the origination, credit, and workout groups.

Often, there are business reasons to extend credit with the purpose of advancing a relationship. However, this situation does not negate the need to independently analyze credit and to assess the potential impact of each credit extension on the overall relationship. When the credit process is altered to suit other business objectives, the goal of producing objectively derived ratings may not be achieved.

Independence also depends on how the credit staff is compensated. Do the incentives encourage credit staff merely to avoid losses, or to assess credit risk appropriately? Only after examining both the structure and the incentives can a financial institution know whether its organizational structure actually encourages independent rating decisions.

Pitfall #5: “Our system is well documented.”

When new systems are introduced, they are typically well documented. However, as policies, procedures, and criteria change over time, the documentation may not be updated to reflect these changes. Documentation is an important training tool for new employees and is a key information source for regulators. Maintaining good documentation is critical to an effective IRS.

A system cannot be considered well documented unless an outsider can review the documentation and appropriately assign a rating. In the case of models, this means assessing your system documentation for both the theory supporting the model and the model usage. Documentation for models should include the following:

  • A description of what the model measures.
  • Results of the periodic validation test performed on the model.
  • Model strengths and weaknesses.
  • Types of obligors for which the model is appropriate.
  • Data and ratio definitions.

For expert-based judgment systems, documentation should include specification of all analytical guidelines and actual rating decisions. In addition, hybrid systems should be accompanied by documentation of how the two aspects of the system (expert-based judgment and models) interact.
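Purely as an illustration, some institutions keep these items as a structured record alongside each model so that completeness can be checked mechanically. The sketch below uses hypothetical field names, not any standard schema:

```python
from dataclasses import dataclass

@dataclass
class ModelDocumentation:
    """Hypothetical documentation record mirroring the items listed above."""
    measures: str                     # what the model measures (e.g., one-year PD)
    validation_results: list[str]     # summaries of periodic validation tests
    strengths: list[str]
    weaknesses: list[str]
    eligible_obligors: str            # types of obligors the model covers
    data_definitions: dict[str, str]  # data and ratio definitions

doc = ModelDocumentation(
    measures="One-year probability of default",
    validation_results=["2023 back test: realized defaults within 95% band"],
    strengths=["Stable performance across industries"],
    weaknesses=["Thin data for very large exposures"],
    eligible_obligors="Middle-market corporates",
    data_definitions={"leverage": "total debt / EBITDA, trailing 12 months"},
)
```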

Pitfall #6: “Why?… Because we have always done it this way.”

The review or redesign of an IRS provides an opportunity to question practices and assumptions. Markets and portfolios change over time. When reviewing or redesigning an IRS, seize the opportunity to take a fresh look at all aspects of the ratings process. Question the appropriateness of what needs to be done for the existing lines of business and expected future lines of business.

Pitfall #7: “We can’t challenge them; they’re the experts.”

Every organization employs both internal and external experts to accomplish various tasks. Internal experts include account officers, credit staff, and those in originations with input into the process. Outsiders, such as consultants and vendors, also need to be held accountable. While experts provide invaluable insights, their decisions should not be accepted simply because they are considered expert on a specific topic. Experts should be required to justify their decisions.

Many institutions use expert-based judgment systems for the assessment of credit risk. While these types of systems rely on analysts or account officers to use their knowledge and experience to assign appropriate ratings, they should not be “black-box” systems. The rationale behind a specific rating decision should be understood and documented.

In the case of consultants or vendors, it pays to question their suggestions before accepting recommendations or purchasing products or systems. The review or redesign of your IRS provides a good opportunity to ensure that mechanisms are in place to justify and document expert decisions. This concept applies to products and services purchased from third parties, internal credit processes and procedures, and individual rating decisions.

Pitfall #8: “The business units are responsible for that…”

Business units should have significant input into the design or redesign of the IRS, but rating systems require centralized oversight to ensure that they achieve consistency and meet the goals of the organization. This applies to both model-driven and expert-based judgment systems.

Pitfall #9: “Business units perform different types of activities, so the rating systems must be different.”

Again, acknowledge and accommodate the different needs of the business units. However, the output of the IRS must be comparable across business units. If the output is not comparable, credit risk cannot be effectively aggregated across the institution.
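One common way to make outputs comparable is to map each unit’s PD estimates onto a shared master scale defined by PD bands. The sketch below uses an invented scale; the grade labels and band boundaries are hypothetical:

```python
# Hypothetical master scale: (grade label, maximum one-year PD for the grade)
MASTER_SCALE = [("1", 0.0025), ("2", 0.01), ("3", 0.05), ("4", 1.0)]

def to_master_grade(pd_estimate):
    """Map any business unit's PD estimate onto the shared master scale."""
    for label, ceiling in MASTER_SCALE:
        if pd_estimate <= ceiling:
            return label
    raise ValueError("PD estimate exceeds 100%")

# Two units with different internal scales become comparable after mapping:
print(to_master_grade(0.004))  # corporate desk estimate -> grade "2"
print(to_master_grade(0.03))   # SME scorecard estimate  -> grade "3"
```

With a mapping like this in place, exposures rated by different units can be aggregated and compared on a single scale, which is the prerequisite for institution-wide risk measurement.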

Pitfall #10: “Of course we have data.”

While an institution may have captured relevant historical data, it is of little use if it is not readily accessible and cannot be manipulated as needed. The completeness of the data also is critical. Many financial institutions believe they have been collecting the data necessary for building models and testing the performance of their IRS. In practice, however, the data may be spotty, inaccessible, or stored in ways that cannot be easily manipulated. Credit files, history, and databases are rarely in the condition that management believes, and data is often inconsistent, as procedures differ from collection point to collection point. Mergers and acquisitions exacerbate this problem.

Common Sense Helps

The pressure to design and implement an effective IRS may tempt organizations to seek quick, off-the-shelf solutions. However, a sound IRS should be designed to fit the needs of the organization, not shoe-horned into place. There is no “one-size-fits-all” solution.

Most important, the intent of a good system is to provide structure and consistency around the ratings process. Markets and portfolios change over time. To ensure that the IRS is working effectively and continues to meet the needs of the institution, the system should be validated on a continual basis and updated as needed.
