Managing Market Data in the Cloud
Less Fluff. More Concrete Use Cases.
Anyone attending last week’s FISD Technology Forum in London will have been treated to a very enjoyable afternoon. Not only was the event well attended, offering a great networking opportunity (special thanks to the team at FISD and host sponsors JP Morgan); but the content being debated was also first class (particular kudos to David Anderson).
For anyone who couldn’t attend, we thought we’d summarise some key themes (in an anonymised manner, of course, given that discussions were held under the Chatham House Rule).
As the title of this blog suggests, a large part of the day’s discussions focused on managing market data in the cloud. This topic is by no means new – as many speakers acknowledged. However, it was refreshing to see how discussions had evolved over the years. The debate was no longer theoretical. We were no longer asking whether ‘financial institutions were ready to manage market data in the cloud.’ Instead we were talking about tangible, real-world use cases already in production.
This is particularly reassuring to us at Xenomorph, given that we have been advocates of managing data in the cloud for many years (offering our data management platform on Microsoft Azure since 2013) and won last year’s Enterprise Data Management Initiative of the Year Award (hosted by Inside Reference Data) for supporting Mizuho America with the implementation of our cloud-based data management platform, again leveraging Azure.
Choosing the Right Use Cases
Cloud architectures cannot necessarily be applied to every imaginable use case. Nor is it economically viable to lift and shift any software and have it run in the cloud. These two points were underlined more than once over the day.
In order to truly unlock the benefits of cloud, you first need to rule out use cases that may not be appropriate. For example, you won’t be looking to run ultra-low-latency market-making technologies on multi-tenant, software-defined infrastructure, which by definition offers less deterministic performance.
However, there is nothing to say you can’t support real-time applications in the cloud. One use case discussed in detail related to a real-time market data distribution platform specifically architected for the cloud (offering elastic scalability) that comfortably supported update throughput rates more than sufficient for the majority of financial applications.
It was also noted that systems hosted in the cloud (whether public, private or hybrid) would ideally be “cloud-native.” That means they would take advantage of architectural techniques such as microservices, stateless protocols, containerization and the decoupling of data from applications. Equally, firms would make use of appropriate cloud development and deployment methodologies – including agile and DevOps, as well as continuous integration, deployment, delivery and compliance.
However, the practical reality facing most institutions is that years of investment have gone into established applications that currently run well on dedicated hardware but may be poorly suited to deployment in the cloud, so re-architecting them may prove uneconomic. One example particularly relevant to the world of market data is the use of multicast transport protocols, which are widely used for streaming data delivery but not well suited to the cloud.
Many of the original blockers to cloud adoption (concerns over security, auditability, performance, data sovereignty, vendor lock-in etc.) have now been removed. However, as firms look to move legacy systems to the cloud, they will need to find workarounds to architectural stumbling blocks – such as the use of multicast. This is already happening: more than one vendor on the day described how they were able to use point-to-point feeds as an alternative data delivery mechanism.
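To illustrate the shape of that workaround: in a data centre, a publisher can send one UDP multicast packet that reaches every subscriber, but most public cloud networks do not support multicast, so a cloud-hosted distribution layer instead maintains explicit point-to-point connections and sends each update once per subscriber. The sketch below is illustrative only – the names (`FanOutPublisher`, `subscribe`, `publish`) are our own, not any vendor’s API, and callbacks stand in for real per-subscriber TCP or WebSocket connections.

```python
from typing import Callable, List


class FanOutPublisher:
    """Delivers each update over separate point-to-point channels.

    This replaces a single multicast send with one send per subscriber,
    which is the pattern cloud-hosted market data platforms fall back on
    where multicast is unavailable.
    """

    def __init__(self) -> None:
        self._subscribers: List[Callable[[bytes], None]] = []

    def subscribe(self, deliver: Callable[[bytes], None]) -> None:
        # Each subscriber gets its own delivery channel (one connection
        # each, rather than one shared multicast group).
        self._subscribers.append(deliver)

    def publish(self, update: bytes) -> int:
        # One send per subscriber: O(n) sends instead of multicast's
        # single send - the cost the distribution layer must absorb,
        # typically by scaling out horizontally.
        for deliver in self._subscribers:
            deliver(update)
        return len(self._subscribers)


if __name__ == "__main__":
    received: List[bytes] = []
    pub = FanOutPublisher()
    pub.subscribe(received.append)
    pub.subscribe(received.append)
    pub.publish(b"EURUSD 1.0842")
```

The trade-off is clear from the loop: fan-out work grows with the number of subscribers, which is exactly where the elastic scalability of cloud infrastructure earns its keep.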
AI / ML Evolving in Tandem
Another theme to emerge from the day’s discussion was the increasing variety of use cases leveraging artificial intelligence and machine learning algorithms. Interestingly, cloud adoption is serving as a key enabler of this trend, given that many of the leading cloud service providers offer the tools needed to facilitate adoption of AI/ML analytics.
The Gravity of Data
As more data migrates to the cloud, some speakers also observed that getting data into the cloud can be easier and less expensive than getting it out. As such, it has been interesting to note another architectural shift. With more data now available in the cloud (as more vendors set up points of presence and rapidly configurable software-defined connectivity), there is a tendency to migrate workloads to the data, rather than moving the data to applications. This ‘gravity’ of data and its potential to accelerate cloud adoption is something we have discussed in an earlier blog.
Unique Data Management Challenges of FRTB
Finally, one of the day’s presentations highlighted some unique data management challenges posed by the Basel Committee’s Fundamental Review of the Trading Book (FRTB), which now forms part of Basel III and is due to be implemented in 2022. This is another area we are studying carefully, and we look forward to sharing our solution to some of these challenges shortly – so watch this space.