Benchmarks Regulation: Data Integrity, Validation and Auditable Processes
The European Securities and Markets Authority (ESMA) earlier this month published a consultation paper outlining technical standards under the EU Benchmarks Regulation, which governs the way indices and other benchmarks are calculated. The consultation is open until 9 May 2020, at which point ESMA will consider feedback; it expects to publish a final report by 1 October 2020.
The technical standards cover five key areas of the regulation – Governance Arrangements, Methodology, Reporting of Infringements, Mandatory Administration and Non-Significant Benchmarks. From a data management perspective, the most significant provisions are contained in the Reporting of Infringements section.
This section specifically relates to monitoring input data to detect potential market manipulation. However, such surveillance ought also to be consistent with the validation checks required to ensure data accuracy. The Level 1 text of the Benchmarks Regulation states that administrators need controls that include “a process for validating input data, including against other indicators or data, to ensure its integrity and accuracy.” Controls of this kind lend themselves equally well to detecting potential instances of manipulation.
Is Automation Necessary?
One of the questions being asked as part of the consultation is whether automated controls are necessary to monitor for data integrity. The current draft proposals state:
“Provided that the level of monitoring is appropriate for and proportionate to the nature, scale and complexity of the benchmark, administrators should not necessarily be required to have an automated system to detect potential manipulation. For complex and sophisticated activities an automated system for monitoring may seem necessary. Administrators should also be able to explain upon request why the level of automation chosen is appropriate in respect to their benchmark production.”
We would argue that some level of automation is always preferable. Fortunately, validation checks designed to detect market manipulation will be similar to processes designed to ensure data accuracy, so the same automated framework should be able to satisfy both requirements. An automated approach not only ensures that benchmark administrators can codify the way that they validate data, but also safeguards against manual errors in carrying out that process.
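To illustrate what codifying validation might look like in practice, here is a minimal sketch; the rule names, thresholds and framework below are our own illustrative assumptions, not a reference to any particular product or to the consultation text. The point is that checks expressed as small, uniform functions serve accuracy and manipulation surveillance at the same time:

```python
from typing import Callable, List, Optional

# A validation rule takes an observed price and its recent history and
# returns None on success, or a human-readable failure reason.
Rule = Callable[[float, List[float]], Optional[str]]

def non_positive(price: float, history: List[float]) -> Optional[str]:
    # Basic accuracy check: prices must be positive.
    return "price is not positive" if price <= 0 else None

def stale_price(price: float, history: List[float]) -> Optional[str]:
    # Flags inputs that have not moved for several sessions -- useful
    # both as an accuracy check and as a crude manipulation indicator.
    if len(history) >= 3 and all(p == price for p in history[-3:]):
        return "price unchanged for 3 sessions"
    return None

RULES: List[Rule] = [non_positive, stale_price]

def validate(price: float, history: List[float]) -> List[str]:
    """Run every codified rule; return the list of failure reasons."""
    return [reason for rule in RULES
            if (reason := rule(price, history)) is not None]
```

Because every input passes through the same codified rule set, there is no scope for a manual reviewer to apply the checks inconsistently or skip one by accident.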
Combining Automation and Human Judgment
When it comes to monitoring the integrity of data inputs, it is worth noting ESMA’s view that “the most effective form of surveillance will likely be a combination of automated and human controls.”
We would agree entirely. Automation is ideal for systematically running multiple validation tests across a range of data inputs. Equally, those same tests can be applied to validate the calculations derived from those inputs – the benchmark value itself. However, when tests flag potential anomalies, human judgment is required for further investigation.
For example, let’s assume I calculate a speculative-grade corporate credit index. I run automated checks to validate the data inputs (which could be either CDS or bond prices) to ensure the accuracy of my price sources. Some of those checks flag when an asset’s price has moved disproportionately relative to others of a similar credit rating.
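Here is a sketch of one way such a peer-relative check could work. The robust z-score approach, the five-peer minimum and the threshold of five MADs are illustrative assumptions of ours, not anything prescribed by the regulation:

```python
from statistics import median
from typing import Dict, List, Tuple

def peer_relative_flags(
    returns: Dict[str, float],   # one-day return per instrument
    rating: Dict[str, str],      # credit rating per instrument
    threshold: float = 5.0,      # hypothetical tolerance, in MADs
) -> List[Tuple[str, float]]:
    """Flag instruments whose move is disproportionate to same-rating peers.

    Uses a robust z-score: distance from the peer median, scaled by the
    median absolute deviation (MAD), so a single wild price cannot mask
    itself by dragging the mean.
    """
    flags = []
    for name, r in returns.items():
        peers = [returns[p] for p in returns
                 if p != name and rating[p] == rating[name]]
        if len(peers) < 5:       # too few peers to judge against
            continue
        m = median(peers)
        mad = median(abs(p - m) for p in peers) or 1e-9
        score = abs(r - m) / mad
        if score > threshold:
            flags.append((name, score))
    return flags
```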
In the recent market environment, many such red flags are likely to have been raised. Systems can only alert when something is unusual; human judgment is needed to determine whether there is a logical explanation for the anomaly. The prices of bonds issued by airlines and hotel groups have seen unprecedented falls in recent weeks, but that is because their underlying businesses have been devastated, not because anyone has manipulated the prices.
We architect our systems and processes with this in mind. Typically, a set of rules is established to validate input data, and exceptions to those rules are generated for further review. Human operators can then investigate those exceptions and choose how to proceed.
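As a sketch of that workflow (the class and field names here are hypothetical), exceptions can be first-class records that carry the rule that raised them, a pending status, and the operator’s eventual decision and rationale:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Resolution(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"    # anomaly has a logical explanation
    EXCLUDED = "excluded"    # input dropped from the calculation

@dataclass
class ValidationException:
    instrument: str
    rule: str
    detail: str
    raised_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    resolution: Resolution = Resolution.PENDING
    rationale: str = ""

def resolve(exc: ValidationException,
            decision: Resolution, rationale: str) -> None:
    """Record the human operator's decision alongside the exception."""
    exc.resolution = decision
    exc.rationale = rationale
```

Under this scheme, the airline bond from the earlier example might be resolved as ACCEPTED with a rationale of “sector-wide repricing”, while a genuinely suspect input would be EXCLUDED from the calculation, with the reasoning recorded.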
Auditable Processes
Having auditable processes in place is another key requirement, and one with which we agree. Within the Methodology section of the consultation paper, the technical guidance states:
“An audit trail of each calculation of the benchmark is required including the input data used and also the data that were not selected for a particular calculation. Further the reasoning behind such exclusion should be clearly stated. Indeed, this audit trail ensures that the benchmark is calculated in a consistent way.”
Again, this is how we would architect our systems – every process is captured and forms part of an audit trail that can be further analysed. This includes the data validation rules applied, the outcome of applying those rules (including any exceptions generated) and how those exceptions were handled. Given that benchmarks can have a significant impact on the financial outcomes of contracts, it is understandable why such scrutiny exists over their calculation and data inputs. From a systems and controls perspective, we would agree with the proposed standards, which help to codify best practice when it comes to validating data inputs and analytical processes.
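To make the audit-trail requirement concrete, here is one minimal way such records could be kept, as an append-only log of JSON lines. The file name, schema and example entries are illustrative assumptions only, not our production format:

```python
import json
from datetime import datetime, timezone

def append_audit_record(path: str, record: dict) -> None:
    """Append one immutable audit record as a line of JSON.

    Each record captures what the audit trail asks for: the rule applied,
    the input concerned, the outcome, and -- where an input was excluded
    from a calculation -- the stated reasoning behind that exclusion.
    """
    record = {"timestamp": datetime.now(timezone.utc).isoformat(), **record}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example entries (illustrative values only):
append_audit_record("benchmark_audit.jsonl", {
    "rule": "peer_relative_move", "instrument": "AIRLINE_BOND_X",
    "outcome": "exception", "resolution": "accepted",
    "reasoning": "sector-wide repricing; no evidence of manipulation",
})
append_audit_record("benchmark_audit.jsonl", {
    "rule": "stale_price", "instrument": "HOTEL_CDS_Y",
    "outcome": "exception", "resolution": "excluded",
    "reasoning": "quote unchanged for 3 sessions; excluded from today's fix",
})
```

An append-only structure like this means each benchmark calculation can be replayed and defended after the fact: which inputs were used, which were excluded, and why.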