Xenomorph Market Data Quality provides automated data quality workflows to ensure that the market and reference data that your business needs are accurate, complete and conform to internal data policy standards.
The system comes with a wide variety of pre-configured validation tests for data cleansing and the remediation of data exceptions. Users can also customise their own tests in line with the organisation’s data quality standards. Tests are triggered automatically as part of defined data management workflows: cleansing rules remediate data automatically, while exception management dashboards allow exceptions to be remediated manually. All of these operations are audited, and no coding is needed to configure the required behaviour.
Xenomorph Market Data Quality offers a range of exception management and audit functionality. Exceptions can be corrected in bulk and annotated, and pre-sets can be created to rationalise and automate the process. Dashboards present data quality exceptions ready for remediation, ranked in order of severity (according to customisable parameters) and displayed using a “traffic light” system of red, amber and green flags. The columns in this view are also customisable, so you can configure the screen to your requirements.
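Conceptually, the severity ranking behind such a traffic-light view might look like the following sketch. The thresholds, field names and sample exceptions here are illustrative assumptions, not Xenomorph’s actual data model or API.

```python
# Hypothetical sketch of ranking exceptions for a traffic-light dashboard.
# Thresholds and fields are illustrative assumptions, not the product's API.
from dataclasses import dataclass

@dataclass
class DataException:
    instrument: str
    test: str
    deviation_pct: float  # how far the value breached its tolerance

def flag(exc, amber_limit=5.0, red_limit=10.0):
    """Map an exception to a red/amber/green flag using customisable limits."""
    if exc.deviation_pct >= red_limit:
        return "red"
    if exc.deviation_pct >= amber_limit:
        return "amber"
    return "green"

exceptions = [
    DataException("GBPUSD", "stale price", 12.0),
    DataException("DE0001102341", "spike check", 6.5),
    DataException("AAPL", "missing field", 1.0),
]
# Rank most severe first, as a dashboard would display them.
ranked = sorted(exceptions, key=lambda e: e.deviation_pct, reverse=True)
```

In a real deployment the limits would come from the customisable parameters the document mentions, rather than being hard-coded.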
Exceptions pass through two stages of approval – “four eyes” approval – meaning every corrected exception is validated by two people. Comments can be added (and made mandatory if required) to annotate corrections and support the audit trail.
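The two-stage approval described above can be sketched as a small state machine. The state names, mandatory-comment rule and independence check are assumptions made for illustration; the product itself is configured without code.

```python
# Illustrative sketch of a two-stage ("four eyes") approval flow.
# State names and rules are assumptions, not the product's actual model.
class Correction:
    def __init__(self, value, comment, author):
        if not comment:
            raise ValueError("comment is mandatory for audit")
        self.value, self.comment, self.author = value, comment, author
        self.status = "pending_first_approval"
        self.approvers = []

    def approve(self, user):
        # Enforce independence: the author cannot approve their own fix,
        # and the same reviewer cannot approve twice.
        if user == self.author or user in self.approvers:
            raise ValueError("an independent reviewer is required")
        self.approvers.append(user)
        self.status = ("pending_second_approval"
                       if len(self.approvers) == 1 else "approved")

c = Correction(101.25, "corrected stale close", author="alice")
c.approve("bob")
c.approve("carol")
# c.status is now "approved"; alice could not approve her own correction
```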
Data lineage information includes a history of the tests run on the data point, audit information on changes to the data and the processes applied to it, and an interactive view of the data dependencies associated with it.
For any item object (instrument, curve or surface) there is a full range of drill-down views for accessing more information about the validation tests it has undergone, alongside a log of processes (status histories and process audit), data audit (source information) and dependencies. The process audit records all the processes that have been run against the object.
Different data sources can trigger different validation and cleansing logic, and each source can be assigned its own data quality measures. These measures can then be used when the data is merged (consolidated) as part of the data management workflow.
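A minimal sketch of how source-specific quality measures might drive consolidation follows. The source names, quality scores and record layout are hypothetical, chosen only to illustrate the idea of merging validated values by source quality.

```python
# Hypothetical sketch: quality-weighted consolidation of multi-source data.
# Source names and quality scores are illustrative assumptions.
quality = {"VendorA": 0.9, "VendorB": 0.7, "Exchange": 1.0}

def consolidate(candidates):
    """Pick the value from the highest-quality source that passed validation."""
    passed = [c for c in candidates if c["valid"]]
    if not passed:
        return None
    return max(passed, key=lambda c: quality[c["source"]])

candidates = [
    {"source": "VendorA", "price": 101.2, "valid": True},
    {"source": "VendorB", "price": 101.6, "valid": True},
    {"source": "Exchange", "price": 101.3, "valid": False},  # failed a test
]
best = consolidate(candidates)  # -> the VendorA record
```

Here the Exchange record is nominally the best source but failed its validation test, so the merge falls back to the next-best source that passed.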
A rich set of analytics and reporting supports data profiling, which forms an early part of the end-to-end gold copy creation process. Venue, data aggregator, timestamps, gaps and more can all be analysed to estimate data quality and shape the workflow.
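One simple profiling measure mentioned above is gap analysis on feed timestamps. The sketch below assumes a feed expected to tick at least once a minute; the timestamps and threshold are invented for the example.

```python
# Illustrative data-profiling sketch: detecting timestamp gaps in a feed.
# The one-minute expectation and sample ticks are assumptions.
from datetime import datetime, timedelta

def find_gaps(timestamps, expected=timedelta(minutes=1)):
    """Return (start, end) pairs where the feed was silent longer than expected."""
    ts = sorted(timestamps)
    return [(a, b) for a, b in zip(ts, ts[1:]) if b - a > expected]

# Ticks at 09:00, 09:01, 09:02, then nothing until 09:07 and 09:08.
ticks = [datetime(2024, 1, 2, 9, 0) + timedelta(minutes=m) for m in (0, 1, 2, 7, 8)]
gaps = find_gaps(ticks)  # one gap: 09:02 to 09:07
```

Gap counts and durations like these could then feed into per-source quality scores used elsewhere in the workflow.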