Market Data Quality in Financial Services
This article forms part of a series of educational resources designed to help anyone starting out in the financial information industry. Earlier posts defined market and reference data and explained the principles of market data management. This article describes the key attributes of market data quality, explains how quality requirements vary according to use case, and outlines the key processes used to enhance and assure market data quality.
Within the financial industry, market data quality can sometimes be taken for granted. Given the high cost associated with market data products and services, consumers may simply assume that data quality issues have been resolved by the time they reach financial professionals. Yet that is an over-simplification.
Market data needs to be fit-for-purpose. That means the purpose for which data is being used determines its quality requirements. This blog defines key characteristics of market data quality, explains why different consumers define market data quality differently, and describes some of the processes used to assure market data quality within an enterprise.
Defining market data quality
Many factors go into defining market data quality. Given that market data is used to support investment decisions (we defined market data, reference data and the difference between them in an earlier blog), it is important to note that not all investment decisions are the same. Different data consumers will therefore have unique data quality requirements depending on how they use the data. Below, we outline five key attributes relevant to market data quality and explore how they differ depending on use case:
Timeliness

Timeliness can be a crucial quality attribute for market data, and different data consumers have different requirements when it comes to it (otherwise known as latency tolerance). Some strategies, such as high-frequency market making or statistical arbitrage, require the lowest-latency data. For these, consuming applications are hosted as close as possible to exchanges’ matching engines, and every aspect of the data processing infrastructure is optimised for speed: the highest-bandwidth, lowest-latency network connections and the fastest processing capabilities (often using hardware acceleration techniques). Data consumed by humans via market data terminals typically tolerates higher latency, with data often being conflated: updates are consolidated so that prices on screen change only a few times per second, in line with human reaction times. Market data consumers outside the front office (e.g. risk or valuation services) may only require a single closing price for each day, or a snapshot taken at a specific time of day.
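The conflation described above can be sketched as a simple throttle that always stores the latest tick but only publishes at a fixed cadence. The `Conflator` class, its method names, and the 250 ms interval are illustrative assumptions, not a real vendor API:

```python
import time


class Conflator:
    """Conflate a tick stream so consumers see at most one update
    per interval per symbol (hypothetical sketch, for illustration)."""

    def __init__(self, interval_s=0.25):
        self.interval_s = interval_s   # ~4 updates/sec suits human screens
        self.latest = {}               # symbol -> most recent price seen
        self.last_emit = {}            # symbol -> time of last publish

    def on_tick(self, symbol, price, now=None):
        """Record the tick; return a price to publish, or None if the
        update is absorbed into the next conflated publish."""
        now = time.monotonic() if now is None else now
        self.latest[symbol] = price
        if now - self.last_emit.get(symbol, float("-inf")) >= self.interval_s:
            self.last_emit[symbol] = now
            return self.latest[symbol]
        return None
```

Note that intermediate updates are not lost so much as superseded: the next publish always carries the most recent price, which is exactly the trade-off a terminal display makes versus a tick-by-tick feed.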
Accuracy

Accuracy of market data is often taken for granted, particularly for highly liquid markets. For such markets, timeliness can itself be a contributing factor to accuracy. For example, if hundreds of orders are being processed per second, any significant delay in sourcing and processing data can mean prices are no longer valid by the time they reach consumers.
For illiquid OTC markets (instruments that do not trade frequently), accuracy is more of a challenge simply because up-to-date market prices are unavailable. In such cases, data consumers may be forced to rely either on internal resources (e.g. front-office marks and/or model-derived prices) or on evaluated pricing services from vendors.
Completeness

Completeness is another attribute of market data quality that can differ depending on the consumer. For example, an algorithmic trading application may require every tick across the full order book, a professional investor terminal may display the best bid on a conflated basis (with a limited amount of market depth), while a retail portal may be satisfied simply with the last traded price. Completeness can also be a key factor when considering market data for illiquid instruments. When an instrument has not traded, it is important that stale prices are not used, as they can distort volatility calculations and perceptions of risk. Instead, an alternative model-derived price should be calculated (typically using a more liquid proxy), or an evaluated price sourced from a vendor.
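A minimal sketch of the stale-price handling described above, assuming a simple age cutoff and a ratio-based adjustment from a more liquid proxy. The five-day threshold and the function names are illustrative; evaluated-pricing models used in practice are far richer:

```python
from datetime import date, timedelta

# Illustrative threshold, not an industry standard
STALE_AFTER = timedelta(days=5)


def is_stale(last_trade_date, as_of):
    """True if the last observed trade is older than the cutoff."""
    return as_of - last_trade_date > STALE_AFTER


def proxy_price(last_price, proxy_then, proxy_now):
    """Roll a stale price forward using the move in a more liquid
    proxy (e.g. a sector index), via a simple ratio adjustment."""
    return last_price * (proxy_now / proxy_then)
```

The point of the sketch is the decision, not the model: a quality process first detects that a price is stale, then substitutes something defensible rather than silently reusing the old mark.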
Consistency

The consistency of market data is particularly important in an enterprise context, where data from the same markets may be sourced from different vendors for different purposes. In such cases, it is important to ensure that instruments can be uniquely identified, that the creation of duplicate records is avoided, and that data fields are defined and mapped consistently.
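A toy illustration of duplicate avoidance via a vendor-neutral master key. Keying on ISIN plus MIC (exchange code) is a deliberate simplification here; real symbology mapping across vendors is considerably more involved:

```python
def master_key(record):
    """Build a vendor-neutral key from normalised identifiers.
    ISIN + MIC is a simplification chosen for illustration."""
    return (record["isin"].strip().upper(), record["mic"].strip().upper())


def deduplicate(records):
    """Collapse records from multiple vendors that describe the same
    instrument, keeping the first-sourced record for each key."""
    merged = {}
    for rec in records:
        merged.setdefault(master_key(rec), rec)
    return list(merged.values())
```

Normalising the identifiers before keying is what prevents `gb00bh4hks39` from one vendor and `GB00BH4HKS39` from another becoming two records for one instrument.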
Availability

The availability of market data does not refer to the frequency of price updates; it is a more technical notion: how easily data can be ingested and integrated by consuming systems. In that sense, it is important for any enterprise data management system to cater to a range of requirements, making data available in a variety of files and formats to suit each application.
Improving market data quality through enterprise data management, price mastering, and risk factor mastering
Enterprise data management systems are primarily concerned with improving data quality – in particular accuracy, completeness, consistency and availability. Timeliness is not necessarily an attribute that can be improved, although enterprise data management systems do need to cater for different requirements from consuming applications (including real-time or delayed snapshots, end-of-day and historical data).
For the purposes of this article, we will talk about three key processes relevant to improving market data quality – notably data validation, price mastering and risk factor mastering.
Data validation

To enhance and assure market data quality, it is important that data is properly checked and validated. This involves running data through a set of validation rules to detect anomalies that require investigation (and possible correction). To do so, teams need the right tools to narrow down the focus of their exception handling efforts. This can include using adaptive data validation rules (for example, looking for price spikes relative to an index rather than on an absolute basis), or applying other rules to flag high priority items.
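The adaptive rule mentioned above, flagging price moves relative to an index rather than on an absolute basis, can be sketched as follows. The 5% tolerance band and the function name are illustrative assumptions:

```python
def flag_spikes(returns, index_return, tolerance=0.05):
    """Flag instruments whose daily return deviates from the index
    return by more than `tolerance`.
    `returns` maps symbol -> daily return as a decimal."""
    return {sym: r for sym, r in returns.items()
            if abs(r - index_return) > tolerance}
```

The benefit over an absolute threshold is fewer false positives on volatile days: if the whole market moves 8%, an instrument moving 8% is unremarkable, whereas one moving 8% on a flat day deserves investigation.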
The output of data validation is a master, or ‘gold copy’, of critical data elements. These can include securities and derivatives prices, or other elements used to calculate those prices, such as curves (for rates, credit, FX instruments, etc.) and volatility surfaces.
Securities and derivatives price master
Maintaining a price master for securities and derivatives is crucial for a range of functions. Accurate pricing data supports investment decisions, portfolio valuations, and margin and risk calculations, among many others. By properly vetting price data, organisations can avoid using stale or erroneous prices and thereby ensure the integrity and accuracy of key investment processes.
Risk factor mastering
When measuring risk, organisations are exposed not only to instrument prices; other factors can be equally important. It is therefore vital to maintain centralised and validated master records for key risk factors, including curves, surfaces, and cubes, and enterprise data management systems need to be adept at managing these objects.
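As a small example of why these objects need first-class treatment, even reading a single rate off a mastered yield curve involves interpolating between validated points. A minimal sketch, assuming simple linear interpolation on zero rates (real systems support multiple interpolation schemes per curve, and the tenor grid here is illustrative):

```python
from bisect import bisect_left


def interpolate_zero_rate(tenors, rates, t):
    """Linearly interpolate a zero rate from a mastered curve.
    `tenors` in years (ascending), `rates` as decimals.
    Flat extrapolation beyond the first and last pillars."""
    if t <= tenors[0]:
        return rates[0]
    if t >= tenors[-1]:
        return rates[-1]
    i = bisect_left(tenors, t)
    w = (t - tenors[i - 1]) / (tenors[i] - tenors[i - 1])
    return rates[i - 1] + w * (rates[i] - rates[i - 1])
```

If any pillar on the curve is stale or erroneous, every rate read off it inherits the problem, which is why curves, surfaces, and cubes warrant the same validation and mastering discipline as individual prices.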
Market data quality is in the eye of the beholder: data needs to be fit-for-purpose. Although all applications benefit from accurate and consistent data, different consumers will have different requirements when it comes to other aspects of market data quality (timeliness, completeness, and availability). Equally, when making sure data is fit-for-purpose, there are regulatory considerations to take into account, with a range of regulations and standards having implications that vary according to the type of market participant.