Model Risk Management: Driving Data Quality and Auditable Processes
Financial models are pervasive across the industry, but controls over how they are used are often far less mature. With that in mind, I thought I would look at the benefits of applying enterprise data management techniques to control model-derived data and address model risk management challenges.
Financial modelling is a cornerstone of the capital markets industry. Traders use models to price assets and gauge technical factors; analysts and economists use them to study fundamentals; investors use them to construct and rebalance their portfolios; and risk managers use them to predict the likelihood of adverse scenarios and guard against their impact.
On an enterprise level, firms then look to roll up individual models – across asset classes and business lines – to get a better understanding of aggregated exposures, run stress-test scenarios and ensure enough capital is on hand to absorb heavy losses even under the most severe stressed market conditions.
An article by McKinsey entitled “The evolution of model risk management” found that the number of models used by banks is growing by 10 to 25% per annum. Given the importance and proliferation of financial models, one might therefore expect firms to have well-established processes for model validation and model risk management. Yet that is not always the case. McKinsey’s article outlined three key stages in the maturity of the model risk management function, noting that many European banks are still at the first stage of evolution, with their US counterparts mostly at the second. The article anticipated that the real benefit of model risk management would only come at the third stage.
Reducing the risk of manual task errors… by using manual tasks?
Many of the risks associated with using models are operational in nature. Models that reside in spreadsheets or end-user computing (EUC) environments are more difficult to control. They are often more susceptible to manual task errors, pose version control issues, lack auditable processes, provide little visibility into the quality of data inputs, create key-man risk (particularly where documentation is lacking), and make it easier for a firm’s IP to walk out of the door.
These are significant risks that could, for the most part, be addressed by having the right technical solution and workflows in place. Unfortunately, most banks still support the model validation function with manual processes. While these processes may help to back-test model performance, they do nothing to address the operational risks outlined above. In fact, they carry operational risks of their own: they make the validation process more opaque and introduce further scope for manual errors, which can produce incorrect validation results.
Business Benefits from Model Risk Management (the Carrot)
In order to evolve their model risk management function, firms need to recognise what they stand to gain. McKinsey breaks out these benefits into three key categories: loss avoidance, cost reduction and capital improvement.
i) Loss avoidance
When it comes to loss avoidance, the stakes are significant. As an example, JP Morgan’s post-mortem into the circumstances surrounding its $6 billion ‘London whale’ trading losses uncovered a number of faults related to model risk management:
- “VaR computation was being done on spreadsheets using a manual process and it was therefore ‘error prone’ and ‘not easily scalable’.”
- Operation of the model involved “a process of copying and pasting data from one spreadsheet to another.”
- “In addition, many of the tranches were less liquid, and therefore, the same price was given for those tranches on multiple consecutive days, leading the model to convey a lack of volatility”.
The thought that simple operational improvements could have helped avert billions of dollars in trading losses makes a compelling case for ensuring model inputs and outputs are run through proper validation tests.
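To make that concrete, below is a minimal sketch of one such validation test: a staleness check that flags instruments whose daily marks repeat for several consecutive days, the very symptom highlighted in the ‘London whale’ post-mortem. The function name, data shape and threshold are illustrative assumptions rather than a reference to any particular system.

```python
# Hypothetical staleness check: flag spans where the same mark repeats for
# more than `max_repeats` consecutive days. Names and threshold are
# illustrative assumptions.
def find_stale_prices(prices, max_repeats=3):
    """Return (start_index, run_length) spans of repeated identical prices."""
    stale_spans = []
    run_start, run_length = 0, 1
    for i in range(1, len(prices)):
        if prices[i] == prices[i - 1]:
            run_length += 1
        else:
            if run_length > max_repeats:
                stale_spans.append((run_start, run_length))
            run_start, run_length = i, 1
    if run_length > max_repeats:  # close out the final run
        stale_spans.append((run_start, run_length))
    return stale_spans

# Four identical consecutive marks on an illiquid tranche get flagged.
tranche_marks = [101.2, 101.2, 101.2, 101.2, 101.5, 101.5, 101.7]
print(find_stale_prices(tranche_marks))  # [(0, 4)]
```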
ii) Cost reduction
Good model risk management also helps reduce operational costs by streamlining the model validation process. Adoption of standardised and automated tools clearly contributes to greater efficiency. So too does access to validated, cleansed model inputs (market data and analytics), which reduces the time and effort to test and optimise models.
iii) Capital Improvement
Finally, well-optimised models play a clear role in driving capital efficiency. A better understanding of market and credit risks, asset returns and correlations not only helps fuel better-informed business decisions but also optimises those decisions from a regulatory capital perspective.
Regulatory Mandate (the Stick)
Although there are clear business benefits to improving the model risk management function, firms will also face pushback internally. Individuals responsible for developing models will naturally be inclined to claim control of their creations, and will not easily cede operational oversight.
However, regulators are taking an increasingly active role in mandating, or at least promoting, best practices around model risk management. They have long recognised the risks that models can pose, and their efforts to address model risk equally recognise how much model accuracy depends on data quality.
The Federal Reserve was one of the first to act in this regard. Its supervisory letter SR 11-7: Guidance on Model Risk Management was issued in April 2011 and noted that “rigorous assessment of data quality and relevance” is one of the key factors required for sound development and use of models. The same document went on to say that “all model components—inputs, processing, outputs, and reports—should be subject to validation.”
More recently, the ECB initiated a process known as the Targeted Review of Internal Models (TRIM), and published a guide to TRIM in February 2017 that includes a chapter dedicated to data quality. The TRIM guidelines on data quality suggest that banks ought to have a ‘data quality framework’ in place that is applied to all data sets (internal, external and pooled) and covers all criteria for quality (completeness, accuracy, consistency, timeliness, uniqueness, validity, availability and traceability). The guidelines also suggest firms should have clearly outlined data processing procedures (including collection, storage and validation), formalised systems for measuring and reporting on data quality and clear roles and responsibilities for data owners.
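To give a flavour of what a few of those criteria might look like in practice, here is a minimal, hypothetical sketch of some TRIM-style data quality rules (completeness, validity and timeliness) applied to a single model input record. The field names, thresholds and record layout are my own assumptions, not something prescribed by the guide.

```python
# Hypothetical TRIM-style data quality checks for a single model input record.
# Field names, thresholds and record layout are illustrative assumptions.
from datetime import date, timedelta

REQUIRED_FIELDS = ("instrument_id", "price", "as_of_date", "source")

def check_record(record, today):
    """Return a list of data quality exceptions for one input record."""
    issues = []

    # Completeness: every required field must be populated.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"completeness: missing {field}")

    # Validity: prices must be positive.
    price = record.get("price")
    if price is not None and price <= 0:
        issues.append("validity: non-positive price")

    # Timeliness: inputs older than a week are flagged for review.
    as_of = record.get("as_of_date")
    if as_of is not None and today - as_of > timedelta(days=7):
        issues.append("timeliness: data older than 7 days")

    return issues

record = {"instrument_id": "XS1234567890", "price": 99.4,
          "as_of_date": date(2019, 3, 1), "source": "vendor_a"}
print(check_record(record, today=date(2019, 3, 12)))
# ['timeliness: data older than 7 days']
```

Criteria such as uniqueness and traceability operate at the level of the whole data set and its lineage, so they would sit in separate checks alongside record-level rules like these.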
As part of its review, the ECB published its second update on TRIM outcomes in March 2019. This status update included a number of concerns relating to data quality practices applied to credit risk models. The ECB concluded that concerns relating to data quality were “present in all institutions investigated” (emphasis added). These concerns covered a variety of sub-topics, including data quality frameworks and policies, the allocation of roles, responsibilities and ownership, and the metrics and workflows for monitoring, reporting and remediating data quality issues. “The topic of data management and data quality processes presented the greatest share of findings with higher severity,” noted the report.
In parallel, the UK’s Prudential Regulation Authority (PRA) also issued its own model risk management principles for stress testing (SS3/18), effective from 1 June 2018. The use of data falls under Principle 3.2, which outlines the factors required to support robust model development and implementation. The paper notes that “data used to develop a model should be assessed for quality and relevance.” Banks are also encouraged to document and justify any adjustments, use of proxies or approximations where data is not representative of the bank’s portfolio or asset mix.
Furthermore, with regards to model validation, the paper notes that “all model components” – including “inputs, calculations and reporting outputs” – should be subject to independent validation, irrespective of whether they were developed in-house or by a vendor.
Finally, another way regulators can address model risk is by curtailing the use of internal models. In December 2017, the Basel Committee’s oversight body reached an agreement to limit the benefit of using internal risk models. Although the limit will be phased in over five years, it will eventually result in an ‘aggregate output floor’ ensuring that risk-weighted assets (RWAs) calculated using internal models can be no lower than 72.5% of the RWAs calculated via the standardised approach.
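The arithmetic of the output floor is straightforward. As a simple worked illustration (with made-up figures):

```python
# Worked illustration of the Basel 'aggregate output floor' (figures made up).
OUTPUT_FLOOR = 0.725  # fully phased-in floor ratio

def floored_rwa(internal_model_rwa, standardised_rwa, floor=OUTPUT_FLOOR):
    """Internal-model RWAs cannot fall below `floor` times standardised RWAs."""
    return max(internal_model_rwa, floor * standardised_rwa)

print(floored_rwa(60e9, 100e9))  # 72.5 billion: the floor binds
print(floored_rwa(80e9, 100e9))  # 80.0 billion: the internal model result stands
```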
What the Future Holds
The model risk management function within financial institutions looks set for further evolution over the coming years. Models will continue to grow in number and sophistication. Various technological trends – including big data, alternative data, digitisation, the Internet of Things (IoT), artificial intelligence and machine learning – are adding to that complexity. With a greater variety of data inputs and more multifaceted models to oversee, the process of validating model inputs and outputs will need to be ‘industrialised’.
Industrialisation will mean different things to different institutions. A production-line or cookie-cutter approach sounds idealistic; in reality, markets are inherently difficult to model. They constantly evolve and enter new paradigms, and very few financial models stand the test of time or can be used at every stage of the economic cycle. Firms therefore need to see model validation as a continuous feedback loop: one that starts with an initial round of model validation, but continues with ongoing monitoring, investigation of exceptions, model optimisation, and further testing and validation of the optimised model.
Incorporating standardised, automated validation and monitoring processes into the model risk management function will help support that feedback mechanism and keep firms focused on the end goal: improving model accuracy. Teams charged with validating models will need easy access to gold copy market data and analytics, along with an automated rules / workflow engine to flag any anomalies in model performance that require investigation.
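As a rough illustration of the kind of rule such an engine might apply, the sketch below compares model outputs against gold copy benchmark values and raises an exception whenever the relative deviation exceeds a tolerance, or a value is missing altogether. The names, tolerance and data shapes are illustrative assumptions, not a description of any particular product.

```python
# Hypothetical validation rule: flag model outputs that deviate from gold copy
# values beyond a relative tolerance, or are missing altogether.
# Names, tolerance and data shapes are illustrative assumptions.
def flag_exceptions(model_outputs, gold_copy, rel_tolerance=0.02):
    """Return (id, reason) pairs for outputs that need investigation."""
    exceptions = []
    for key, gold_value in gold_copy.items():
        model_value = model_outputs.get(key)
        if model_value is None:
            exceptions.append((key, "missing model output"))
        elif abs(model_value - gold_value) > rel_tolerance * abs(gold_value):
            exceptions.append((key, f"deviation {model_value} vs {gold_value}"))
    return exceptions

model_outputs = {"EUR_5Y_swap": 0.0121, "USD_10Y_swap": 0.0260}
gold_copy = {"EUR_5Y_swap": 0.0118, "USD_10Y_swap": 0.0258, "GBP_2Y_swap": 0.0091}
for exception in flag_exceptions(model_outputs, gold_copy):
    print(exception)
# ('EUR_5Y_swap', 'deviation 0.0121 vs 0.0118')
# ('GBP_2Y_swap', 'missing model output')
```

Flagged exceptions would then feed the investigation and optimisation steps of the feedback loop described above, rather than ending the process.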
As firms mature their model risk management function, they may find that many of their technical requirements can be fulfilled by enterprise data management platforms, which have been specifically developed to support validation and cleansing of market data.