Spreadsheets and Operational Risk
In this blog we look at the main risks posed by using spreadsheets for derivative and fixed income valuation, and explain some of the regulators’ motivations in addressing this source of operational risk.
Spreadsheets are a tremendously powerful tool for the valuation of fixed income, complex derivative and over-the-counter (OTC) instruments because of their openness and flexibility. That same flexibility, however, means they do not lend themselves easily to regulatory requirements such as IFRS, BCBS 239, Dodd-Frank and Solvency II, which demand far greater control, consistency and transparency over the input and output data used in instrument valuations. Yet precisely because spreadsheets are flexible, creative and easy to use, they continue to find their way into control and valuation processes.
There are many examples of spreadsheets playing a role in high-profile trading losses, from the UBS rogue-trader scandal to the JP Morgan ‘London Whale’. JP Morgan’s post-mortem into the circumstances surrounding its $6 billion London Whale trading losses uncovered a number of faults related to spreadsheet risk:
- “VaR computation was being done on spreadsheets using a manual process and it was therefore ‘error prone’ and ‘not easily scalable’.”
- Operation of the model involved “a process of copying and pasting data from one spreadsheet to another.”
- “In addition, many of the tranches were less liquid, and therefore, the same price was given for those tranches on multiple consecutive days, leading the model to convey a lack of volatility.”
Obviously, a lack of controls around spreadsheets can be exploited by individuals prepared to go beyond the risk limits set by their employers. But we also see standard valuation processes such as independent price verification (IPV) – itself a regulatory requirement – performed in spreadsheets, outside of a managed and controlled model valuation framework.
Against this background, the manner in which many financial institutions currently use and manage spreadsheets within the valuation process is simply not fit for purpose.
[Figure: spreadsheet version proliferation between people and over time]
As illustrated above, spreadsheet-based valuations are vulnerable to inaccuracy, inconsistency and manual error because of the way they handle the following:
- Market Data Capture – data is typically loaded directly into spreadsheets using data vendor APIs, which require vendor-specific knowledge of instrument IDs, field names, scaling factors and quote idiosyncrasies (a sketch of a more controlled capture process follows this list).
- Market Data Changes – if any market data (or any other input) is changed in the spreadsheet, there is no record (audit trail) of the change, and the new values are not easily available to other users or systems.
- Analytic/Calculation Changes – similarly, if any spreadsheet functions or formulas are changed, whether intentionally or by accident, there is no record of the change; if the spreadsheet is subsequently saved, the original values are overwritten and lost.
- Data Persistence – All data needed for the valuations is contained within the spreadsheet or linked spreadsheets, effectively making the spreadsheet the de facto production ‘database’.
- Data Fragmentation – consolidating information (e.g. risk exposure) often means attempting to aggregate or combine data from many large spreadsheets scattered across many locations; this is impractical, cumbersome and error-prone.
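By way of illustration, here is a minimal sketch of what a more controlled capture process might look like: vendor-specific field names and scaling factors are resolved once at the point of capture, and every quote is appended to a store with its source and a timestamp, so later corrections leave an audit trail rather than overwriting history. The instrument ID, vendor name and table layout are hypothetical, and SQLite stands in for whatever market data store an institution actually uses.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import sqlite3

@dataclass
class Quote:
    """One normalized market data observation, ready for any consumer."""
    instrument_id: str   # internal ID, mapped from the vendor's ticker
    field: str           # normalized field name, e.g. "MID_PRICE"
    value: float         # already rescaled into internal conventions
    source: str          # vendor name, kept for data lineage
    captured_at: str     # UTC capture timestamp, kept for audit

def store_quote(conn: sqlite3.Connection, q: Quote) -> None:
    """Append-only insert: corrections add new rows, nothing is overwritten."""
    conn.execute(
        "INSERT INTO quotes (instrument_id, field, value, source, captured_at) "
        "VALUES (?, ?, ?, ?, ?)",
        (q.instrument_id, q.field, q.value, q.source, q.captured_at),
    )
    conn.commit()

conn = sqlite3.connect("market_data.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS quotes ("
    "instrument_id TEXT, field TEXT, value REAL, source TEXT, captured_at TEXT)"
)
store_quote(conn, Quote(
    instrument_id="XS0123456789",  # hypothetical ISIN
    field="MID_PRICE",
    value=101.25 / 100.0,          # vendor quotes percent of par; rescale once, here
    source="vendor_a",
    captured_at=datetime.now(timezone.utc).isoformat(),
))
```

Because the store is append-only and every row carries a source and timestamp, any downstream spreadsheet or system can reproduce exactly which data a valuation used – which is precisely what the copy-and-paste workflow cannot do.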
Putting aside the regulatory motivations for formalizing or reducing manual spreadsheet processes in financial markets institutions, the lessons of events such as the JP Morgan ‘Whale’ are reason enough to keep improving controls over end-user computing. For valuations and the IPV process in particular, a centralized and controlled approach to sourcing data and performing the reconciliation ensures an auditable trail for compliance with internal and regulatory standards; a sketch of what that reconciliation might look like follows.
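This is a minimal sketch of such a reconciliation, assuming hypothetical instrument names and an illustrative 25 basis-point tolerance; real IPV policies set thresholds per asset class.

```python
TOLERANCE_BP = 25  # illustrative threshold, not a prescribed standard

def reconcile(trader_marks: dict[str, float],
              independent_prices: dict[str, float]) -> list[dict]:
    """Compare front-office marks to independent prices; return exceptions."""
    exceptions = []
    for instrument, mark in trader_marks.items():
        independent = independent_prices.get(instrument)
        if independent is None:
            exceptions.append({"instrument": instrument,
                               "reason": "no independent price"})
            continue
        diff_bp = abs(mark - independent) / independent * 10_000
        if diff_bp > TOLERANCE_BP:
            exceptions.append({"instrument": instrument, "mark": mark,
                               "independent": independent,
                               "diff_bp": round(diff_bp, 1)})
    return exceptions

# BOND_A breaches the tolerance (~39 bp) and is flagged; BOND_B passes.
print(reconcile({"BOND_A": 101.9, "BOND_B": 99.5},
                {"BOND_A": 101.5, "BOND_B": 99.4}))
```

Run centrally, with inputs drawn from a controlled store and exceptions logged, the same comparison that analysts perform cell by cell becomes repeatable and auditable.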
Regulators are increasingly aware of the over-use and misuse of spreadsheets and end-user computing, to the extent that they now address the issue explicitly in regulation intended to reduce operational risk, increase automation and improve underlying data quality. To meet these challenges, business users and technologists need to find ways to deliver the agility and user-friendliness that spreadsheets offer without the operational risks that accompany them.
The first step should be a solution that can take the models already developed in Excel and port them to an environment with proper data validation checks and enterprise data management (EDM) processes, as sketched below. In time, a more sustainable solution should follow, in which models are developed in a regulatory-approved manner and run on technology that is robust, auditable and offers appropriate access controls, given the critical nature of the calculations.
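As an illustration of that first step, the sketch below reads curve inputs out of a workbook and applies validation checks before they can reach a pricing model. The workbook name, sheet layout and plausibility bounds are all assumptions made for the example; openpyxl is one of several libraries that can read Excel files from Python.

```python
from openpyxl import load_workbook  # pip install openpyxl

# Assumed layout: tenors (in years) in column A, zero rates in column B,
# data starting at row 2 of a sheet named "Curve".
wb = load_workbook("discount_curve.xlsx", data_only=True)  # values, not formulas
ws = wb["Curve"]

tenors, rates = [], []
for tenor, rate in ws.iter_rows(min_row=2, max_col=2, values_only=True):
    if tenor is None:
        break
    tenors.append(float(tenor))
    rates.append(float(rate))

# Checks a free-standing spreadsheet would not enforce on its own.
assert all(t1 < t2 for t1, t2 in zip(tenors, tenors[1:])), "tenors must be strictly increasing"
assert all(-0.05 < r < 0.25 for r in rates), "rate outside plausible bounds"

# Only validated inputs are handed on to a centrally managed pricing model.
print(f"Loaded {len(tenors)} validated curve points")
```

Migrating one model at a time this way preserves the logic analysts already trust while moving the data, and the checks on it, out of individual workbooks.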
It is a big challenge, but one that, if addressed successfully, can unlock real business value.
-----
If you are interested in the issues raised in this article, the topic is explored in more detail in our white paper, ‘Spreadsheets and Their Role Within Operational Risk: Instrument Valuation Issues, Data and Regulation’. Alternatively, get in touch to speak to one of our consultants about how to reduce your exposure to the risks associated with spreadsheet-based calculations.