I went along to a good event at Sybase New York this morning, put on by Sybase and Platform Computing. While some of Sybase's ideas in this space are competitive to Xenomorph's, others are very complementary, and I like their overall technical and marketing direction in focusing on the management of data and analytics within financial markets (given that direction I would, wouldn't I?…). Specifically, I think their marketing pitch based on moving away from batch towards intraday risk management is a good one, but one that many financial institutions are unfortunately (?) a long way away from.
The event started with a decent breakfast and a wonderful sunny window view of Manhattan, and then proceeded with the expected corporate marketing pitch for Sybase and Platform – this was OK, but to be critical (even of some of my own speeches) there is only so much you can say about the financial crisis. The presenters described two reference architectures that combined Platform's grid computing technology with Sybase RAP and the Aleri CEP engine, and from these two architectures they outlined four use cases.
The first use case was strategy back-testing. The architecture for this looked fine, but some questions were raised from the audience about the need for distributed data caching within the proposed architecture to ensure that data did not become the bottleneck. One of the presenters said that distributed caching was one option, although data caching (involving "binning" of data) can limit the computational flexibility of a grid solution. The audience member also added that when market data changes, this can cause temporary but significant cache-consistency issues across a grid as the change cascades from one node to another.
Apparently a cache could be implemented in the Aleri CEP engine on each grid node, and the Platform presenter said that it was also possible to hook a client's own C/C++ solution into Platform to achieve this, and that their "Data Affinity" offering was designed to assist with this type of issue. In summary, their presentation would have looked better with the distributed caching illustrated explicitly, in my view, and it raised the question of why they did not have an offering or partner in this technical space. To be fair, when asked whether the architecture had any performance issues of this kind, they said that for the use case presented it did not – so on that simple and fundamental point they were covered.
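The consistency issue the audience member raised can be sketched with a minimal versioned per-node cache in Python. Everything here is hypothetical for illustration – the central store, the `NodeCache` class and the instrument key are my own, not part of any Sybase, Aleri or Platform API:

```python
import threading

class NodeCache:
    """Per-grid-node market data cache with version checks.

    Each entry carries the version it was read at. When the central
    version for a key advances (e.g. a new tick arrives), the node's
    copy is stale until its next read -- exactly the transient
    inconsistency window that cascades across nodes as data changes.
    """

    def __init__(self, source):
        self._source = source   # central store: {key: (version, value)}
        self._local = {}        # node-local copies: {key: (version, value)}
        self._lock = threading.Lock()

    def get(self, key):
        central_version, central_value = self._source[key]
        with self._lock:
            cached = self._local.get(key)
            if cached is None or cached[0] < central_version:
                # Stale or missing: refresh from the central store.
                self._local[key] = (central_version, central_value)
            return self._local[key][1]

# Central store shared by all nodes; two nodes hold their own caches.
central = {"EURUSD": (1, 1.4210)}
node_a, node_b = NodeCache(central), NodeCache(central)
node_a.get("EURUSD")            # both nodes warm their caches at version 1
node_b.get("EURUSD")

central["EURUSD"] = (2, 1.4250)  # a market data update arrives
# Until each node re-reads the key, its cached copy is one version behind.
```

A real grid cache would push invalidations rather than check versions on read, but the sketch shows why the window of inconsistency exists at all.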
The second architecture had three use cases: intraday market risk, counterparty risk exposure and intraday option pricing. On the option pricing case, there was some debate about whether the architecture could "share" real-time objects such as zero curves, volatility surfaces etc. Apparently this is possible, but again it would have benefited from being illustrated as an explicit part of the architecture.
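To make concrete what "sharing" a real-time object might mean, here is a minimal zero curve sketch in Python – a single read-only object that many pricing tasks on a grid could reference. The representation (linear interpolation between pillar points, continuous compounding) is my own simplification, not how Sybase or Aleri model curves:

```python
from bisect import bisect_left
from math import exp

class ZeroCurve:
    """A minimal zero curve: tenors in years mapped to zero rates,
    linearly interpolated between pillar points. Illustrative only."""

    def __init__(self, tenors, rates):
        self.tenors = list(tenors)
        self.rates = list(rates)

    def rate(self, t):
        # Clamp outside the pillar range, interpolate linearly inside it.
        if t <= self.tenors[0]:
            return self.rates[0]
        if t >= self.tenors[-1]:
            return self.rates[-1]
        i = bisect_left(self.tenors, t)
        t0, t1 = self.tenors[i - 1], self.tenors[i]
        r0, r1 = self.rates[i - 1], self.rates[i]
        return r0 + (r1 - r0) * (t - t0) / (t1 - t0)

    def discount(self, t):
        # Continuously compounded discount factor for maturity t.
        return exp(-self.rate(t) * t)

# One curve instance shared (read-only) across many pricing tasks.
curve = ZeroCurve([0.25, 1.0, 5.0], [0.010, 0.015, 0.025])
r = curve.rate(3.0)        # interpolated between the 1y and 5y pillars
df = curve.discount(1.0)   # discount factor at the 1y pillar
```

The interesting architectural question from the audience was the update path: when the curve re-strikes intraday, every pricing task should see the new object consistently, which is the same cache-coherence problem as with raw market data.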
There was one question about applying the architecture to transactional problems, and as usual for an event full of database specialists there was some confusion as to whether we were talking about database "transactions" or financial transactions. I think it was the latter, but the answer wasn't very clear – though neither was the question, I guess. Maybe they could have explained the counterparty exposure use case a bit more to see whether it met some of the audience member's needs.
The transactions question above sparked a conversation about resiliency within the architecture, given that the Sybase ASE database engine is held in-memory for real-time updates whilst the historic data resides on shared disk in Sybase IQ, their column-based database offering. Again, full resilience is apparently possible across the whole architecture (Sybase ASE, IQ, Aleri and the Symphony Grid from Platform), but this was not illustrated this time round.
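The hot/cold split they described – real-time data in memory, history in a column store on disk – can be sketched as a simple tiered store. This is entirely illustrative of the pattern, not the ASE/IQ implementation; the class and method names are my own:

```python
class TieredStore:
    """Hot writes land in an in-memory dict (standing in for the
    in-memory ASE tier); a flush appends them to a 'cold' list
    (standing in for the on-disk IQ column store). Reads check the
    hot tier first, then fall back to cold history."""

    def __init__(self):
        self.hot = {}    # key -> latest value (in-memory tier)
        self.cold = []   # append-only (key, value) rows (historic tier)

    def write(self, key, value):
        self.hot[key] = value

    def flush(self):
        # Persist hot rows to the cold tier, then clear the hot tier.
        self.cold.extend(self.hot.items())
        self.hot.clear()

    def read_latest(self, key):
        if key in self.hot:
            return self.hot[key]
        # Scan history newest-first for the most recent flushed value.
        for k, v in reversed(self.cold):
            if k == key:
                return v
        return None

store = TieredStore()
store.write("EURUSD", 1.4210)   # real-time tick lands in memory
store.flush()                   # periodic move to the historic tier
store.write("EURUSD", 1.4250)   # newer tick shadows the flushed history
```

The resiliency question is then obvious from the sketch: anything still in `hot` is lost if the node dies before a flush, which is why replication or logging of the in-memory tier matters in the real architecture.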
Overall, a good event with some decent questions and interaction.