The current state of risk management is as if we sit below a volcano, knowing the probability of the volcano exploding and having a decent guess at the devastation it would cause. But wouldn't we rather sit next to a seismometer, continually measuring the rising volcanic activity, and perhaps evacuate before it explodes?
Regulators expected that the provisioning of capital for extreme losses would sustain financial enterprises in periods of stress. Amid the current financial crisis, perhaps a more appropriate view of these capital measures is as the ruler by which an organization counts down to failure, not the system that proactively prevents it. Did regulators truly believe these capital rules would prevent financial institutions from failing?
To be fair to regulators, they did expect to see the coincident evolution of a risk culture within these institutions, along with the development of a risk exposure measurement system to capture key operating metrics that could affect their operational risk profiles. Taken together, and with regulatory oversight, it was anticipated that the new risk regime would do just that: prevent failures, or at least give an early warning of pending doom.
However, whether by abdication, by pushback from the industry, or simply because there was not sufficient time to evolve in a natural way, we stopped the risk management process at capital provisioning. And we certainly failed in risk oversight. We need to get on with risk measurement, along with capital measurement and more rigorous oversight, so we can manage risk.
Risk management has always been an intuitive management skill that was, and still is, expected of all business managers. Business managers manage their revenue and costs through performance management systems. They manage their risk through analysis of various operating metrics, gauging the impact through experience and judgment. The problem with this approach is that it cannot be measured and aggregated in any systematic way. It is left to a wide range of relatively subjective analyses performed by internal and external auditors around Sarbanes-Oxley; Committee of Sponsoring Organizations (COSO) reviews; annual financial audits; cost analysis teams performing unit costing; business process reengineering and Six Sigma exercises; and risk managers applying scorecards and risk control self-assessments.
It was, and still is, wrongheaded to believe that a historical, mathematically modeled view of past losses, manifested in capital provisioning, would prevent too much risk from being taken. Financial transactions entered into in real time have the potential for risk exposures cascading far beyond their notional values, and certainly far beyond capital provisioned against past loss events.
The industry has not yet found a way to identify operational exposures and put a consistent and comparable value on them. Operational risk, in all its diversity and complexity, is thought to be unmeasurable. In the absence of such a direct exposure measurement metric, the industry has looked to loss history as the only objective source of information on operational risks. So what would be an approach to observing the risk of loss in an operating environment?
Contrary to conventional thinking, operational risk can be measured. Just look at all the diversity of the human condition represented in a FICO score for measuring retail credit, or the diversity of corporate cultures distilled into credit rating categories, or the complexity of trading strategies across multiple geographies and products synthesized into a market value-at-risk calculation.
An answer to measuring operational risk is found in the evolution of FICO scores and credit ratings. Credit reporting was born more than 100 years ago, when small retail merchants banded together to trade financial information about their customers. Lenders eventually began to standardize how they made credit decisions by using a point system that scored the different variables on a consumer’s credit report. Credit granting took a huge leap forward when statistical models were built that considered numerous variables and combinations of variables around these point systems. Today, credit analysis uses a well-defined set of inputs from the historical set of key risk indicators accumulated from many years of refining intuition into predictors of loss.
If we move over to the commercial side of credit ratings, we find a similar history and methodology at the major credit rating agencies. Their methods, also refined over a century, map commercial credit scores into A-B-C rating systems where, for example, a confidence level between 99.96 and 99.98 percent has been calibrated as equivalent to the insolvency rate expected for an AA credit rating.
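The arithmetic behind that calibration is simple: the insolvency rate implied by a confidence level is its complement. A minimal sketch (the 99.96-99.98 percent band is from the text above; the function name is ours):

```python
def implied_insolvency_rate(confidence_level: float) -> float:
    """Annual insolvency rate implied by a capital confidence level."""
    return 1.0 - confidence_level

# The AA calibration cited above: 99.96-99.98% confidence corresponds to
# an expected insolvency rate of roughly 0.02-0.04% per year.
for conf in (0.9996, 0.9998):
    rate = implied_insolvency_rate(conf)
    print(f"{conf:.2%} confidence -> {rate:.2%} implied insolvency rate")
```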
We start to solve the problem of determining such a metric for measuring operational risk of loss by returning to the roots of the operational risk capital charge: the measure of the potential for losses derived from processing transactions, for that, in the main, is what financial institutions do. We then observe that all operational processes in a financial institution are driven by transactions interacting with human, automated and data-dependent activities. Thereafter we dissect each of these pillars into a finite number of subcomponents of standardized activities that reflect key risk indicators known intuitively by business managers to cause losses (see the “Examples of Mapping Causes of Losses” flow chart below).
This is a critical observation, in that each of these “pillars” of activities represents actionable elements in a transactional process. This matters if risk measurement systems are to support management decisions to mitigate exposures before they become losses and capital charges.
We then map transactions, categorized by product type, to standardized risk-weighted activities; risk-weight each of their categories and subcomponents using standardized scales and best-practice optimized weightings (see “Example of a Risk-Weighted Products/Transactions Matrix,” below); and then tie the transaction process to a scaled measure of the financial values associated with each transaction (see “Transaction Value Band Weighting Table,” below).
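As a concrete illustration of this mapping, here is a minimal Python sketch. The activity names, weights, product mappings and value bands are invented stand-ins; the actual matrices and best-practice weightings described in the article are not reproduced here.

```python
# Illustrative stand-ins for standardized activities and their
# best-practice risk weights (higher = more loss-prone). Values invented.
ACTIVITY_WEIGHTS = {
    "manual_data_entry": 0.8,
    "automated_matching": 0.2,
    "manual_reconciliation": 0.6,
}

# Each product type maps to the standardized activities its processing touches.
PRODUCT_ACTIVITIES = {
    "fx_spot": ["automated_matching", "manual_reconciliation"],
    "otc_derivative": ["manual_data_entry", "manual_reconciliation"],
}

# Transaction value bands scale the risk weight with the money at stake.
VALUE_BAND_WEIGHTS = [
    (100_000, 1.0),       # transactions up to 100 thousand
    (10_000_000, 2.0),    # up to 10 million
    (float("inf"), 4.0),  # everything larger
]

def value_band_weight(amount: float) -> float:
    """Return the scaling weight for a transaction's value band."""
    for ceiling, weight in VALUE_BAND_WEIGHTS:
        if amount <= ceiling:
            return weight
    raise ValueError("unreachable: last band is unbounded")

def inherent_risk(product: str, amount: float) -> float:
    """Sum the activity risk weights for a product, scaled by value band."""
    activity_score = sum(ACTIVITY_WEIGHTS[a] for a in PRODUCT_ACTIVITIES[product])
    return activity_score * value_band_weight(amount)

print(round(inherent_risk("otc_derivative", 5_000_000), 4))  # (0.8 + 0.6) * 2.0
```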
We perform this analysis using the enterprise’s personnel and documentation in a structured process that first allows for an understanding of the exposures inherent in the operating environment in which the business exists, then translates this knowledge into risk weights. We then use these values to calculate a forward-looking measure of risk exposure, a scaled inherent risk value, and a risk-mitigating best-practice control value. A set of standardized risk metrics is then calculated representing inherent risk, risk mitigation effectiveness and residual risk (see “Example of Calculated Risk Exposure Measures,” below).
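The relationship among the three metrics can be sketched under an assumed residual-risk formula, in which controls scale down inherent exposure. This is an illustration of the idea, not the patented method itself:

```python
from dataclasses import dataclass

@dataclass
class RiskExposure:
    """One process's exposure. Field names and the residual formula
    are illustrative assumptions, not the article's proprietary method."""
    inherent: float               # scaled inherent risk value
    control_effectiveness: float  # best-practice control value, 0.0..1.0

    @property
    def residual(self) -> float:
        # Residual risk: the inherent exposure not absorbed by controls.
        return self.inherent * (1.0 - self.control_effectiveness)

exposure = RiskExposure(inherent=2.8, control_effectiveness=0.75)
print(round(exposure.residual, 4))  # prints 0.7
```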
These risk metrics, applied at the transaction level, can then be aggregated to provide departmental, divisional, subsidiary and group-wide views, as well as views by category (i.e., product, geography, business unit and risk type).
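Aggregation across any of these views reduces to grouping and summing the transaction-level metrics. A minimal sketch, with invented records and illustrative field names:

```python
from collections import defaultdict

# Invented transaction-level records; departments, products and values
# are hypothetical.
transactions = [
    {"department": "fx_ops", "product": "fx_spot", "residual_risk": 0.5},
    {"department": "fx_ops", "product": "fx_forward", "residual_risk": 1.0},
    {"department": "derivatives_ops", "product": "otc_derivative", "residual_risk": 1.5},
]

def aggregate(records, key):
    """Sum residual risk over any category: department, product, geography, etc."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[key]] += rec["residual_risk"]
    return dict(totals)

print(aggregate(transactions, "department"))  # {'fx_ops': 1.5, 'derivatives_ops': 1.5}
print(aggregate(transactions, "product"))
```

The same `aggregate` call serves every roll-up view; only the grouping key changes.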
This method of calculating risk exposure provides a view of residual risk that is dynamically updated when changes in causal factors occur. In this way the potential for statistical correlation between measurements of risk exposure and loss history is created, which, over time, will cause the risk metrics generated through this new method to become inherently predictive. This is quite different from, but complementary to, the backward-looking capital calculations that financial institutions rely upon today to gauge the largest unexpected loss that may occur within a given confidence level and time horizon.
More importantly, it is built from the ground up, allowing the intellectual property of operating management to be embedded in the very fabric of the risk measurement system. Institutionalizing such knowledge into the operational risk activity creates credibility and actionability, the most critical components in enabling a risk culture to evolve and continual risk mitigation to be its outcome. Without a measure of risk exposure, and a dynamic mechanism for seeing it build up, we cannot take preventive action. Without it we will forever be destined to sit below a volcano of impending financial crisis and potential collapse, never forewarned of the increasing pressure building up so that we can mitigate the consequences of an explosion.
Allan D. Grody is a partner in ARC Best Practices Ltd, a risk consultancy and software firm that owns a pending patent on the methods and system described in this article. He is also the founder of Financial InterGroup Holdings Ltd. and a former partner of Coopers & Lybrand (now PWC) where he founded their Financial Institutions Consulting Practice.