Unifying Risk with Numerix, Tabb and Microsoft
Numerix ran a great event on Thursday morning over at Microsoft’s offices here in New York. “The Road to Achieving a Unified View of Risk” was introduced by Paul Rowady of the TABB Group. As at our holiday event last December, Paul is a great speaker, and getting him to stop talking is the main (positive) problem of working with him (his typical ebullience was heightened further by his appearance in the Wall Street Journal on Thursday, apparently involving nothing illegal he assured me, and about which his mother even phoned him during his presentation…). Paul started by saying that in the end-of-year review with his colleagues Larry Tabb and Adam Sussman, he had suggested that TABB Group needed to put more into developing its risk management thought leadership, which led to today’s introduction and the work TABB Group have been doing with Numerix.
Having been involved in financial markets in Chicago, Paul is very bullish about the risk management capabilities of the funds and prop trading shops of the exchange-traded options markets of old, and said that these risk management capabilities are now needed by, and indeed coming to, the mainstream financial markets. Put another way, post-crisis the need for a holistic view of risk has never been stronger. Considering bilateral OTC derivatives and the move towards central clearing, Paul said that he had been thinking that calculations such as CVA would eventually become as extinct as the dodo. However, using data from the DTCC trade repository, he found that there are still some $65 trillion notional of uncleared bilateral trades in the market, and that these will take a further 30 years to expire. Looking at swaptions alone, the uncleared notional was $6 trillion, so his point was that bilateral OTC trades and their associated risks will be around for some time yet.
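(For anyone unfamiliar with the calculation Paul was referring to, a very rough sketch of a unilateral CVA number follows. The structure is the standard textbook approximation; every input is a made-up illustrative figure rather than anything presented at the event.)

```python
# Minimal sketch of a unilateral CVA approximation:
# CVA ~ LGD * sum over time buckets of (discounted expected exposure * marginal default probability).
# All inputs below are invented purely for illustration.

def cva(expected_exposures, discount_factors, default_probs, lgd=0.6):
    """Approximate unilateral CVA from per-bucket expected positive exposures,
    discount factors and marginal counterparty default probabilities."""
    assert len(expected_exposures) == len(discount_factors) == len(default_probs)
    return lgd * sum(ee * df * pd
                     for ee, df, pd in zip(expected_exposures, discount_factors, default_probs))

# Example: four annual buckets on a small uncleared bilateral position.
expected_exposures = [1_000_000, 900_000, 700_000, 400_000]  # expected positive exposure per bucket
discount_factors   = [0.98, 0.95, 0.92, 0.89]                # risk-free discount factors
default_probs      = [0.010, 0.012, 0.015, 0.018]            # marginal default probability per bucket

print(f"Approximate CVA: {cva(expected_exposures, discount_factors, default_probs):,.0f}")
```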
Paul put forward some slides showing back, middle and front offices along different siloed business lines, and explained that back in the day, when margins were fat and times were good, each unit could be run independently, with no overall view of risk possible given the range of siloed systems and data. In passing, Paul also mentioned that one bank he had spoken to had 6,000 separate systems to support on just the banking side, let alone capital markets. Obviously post-crisis this has changed, with pressure to reduce operational costs being a key driver at many institutions, and with only valuation/reference data (+2.4%) and risk management (+1.2%) seeing increased budget spend across the market in 2013. Given operational cost pressures and regulation such as the CVA capital charge, risk management is having to move from being an end-of-day, post-trade process to being a pre- and post-trade process at intraday frequency. Paul said that not only must consistent approaches to data and analytics be taken across back, middle and front office in each business unit, but an integrated view of risk across business units must now be taken (echoes of an earlier event with Numerix and PRMIA). Considering consistent analytics, Paul mentioned his paper “The Risk Analytics Library” but suggested that “libraries” of everything were needed: not just analytics, but libraries of data (data management anyone?), metadata, risk models etc.
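(Paul’s “libraries of everything” point can be pictured as nothing more exotic than a set of shared registries keyed by domain, used consistently across business units. The sketch below is purely my own illustration of that idea; the names and structure are assumptions, not anything TABB Group or Numerix have specified.)

```python
# Illustrative sketch of "libraries of everything": one shared registry per domain
# (analytics, data, metadata, risk models), all with the same lookup interface,
# so business units reuse central definitions rather than re-implementing them per silo.

class Library:
    def __init__(self, domain):
        self.domain = domain
        self._entries = {}

    def register(self, name, item, owner=None):
        # Keep ownership/provenance alongside the item itself.
        self._entries[name] = {"item": item, "owner": owner}

    def lookup(self, name):
        return self._entries[name]["item"]

# One library per domain, shared across front, middle and back office.
analytics_lib = Library("analytics")
metadata_lib  = Library("metadata")
model_lib     = Library("risk_models")

analytics_lib.register("simple_interest_pv",
                       lambda notional, rate, years: notional * rate * years,
                       owner="central quant team")
print(analytics_lib.lookup("simple_interest_pv")(10_000_000, 0.02, 5))
```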
Paul asked Ricardo Martinez of Deloitte for an update on the current regulatory landscape, and Ricardo responded by focusing on the derivatives aspects of Dodd-Frank. He first pointed out that even after a number of years the regulation around collateral and clearing was not yet finalized. A good point he made was that whilst the focus in the market at the moment is on compliance, he feels that the consequences of the regulation will ripple on over the next five years in terms of margining and analytics.
Some panel members disagreed with Paul over the premise that bilateral exotic trades will eventually disappear. Their point was that the needs of pension funds and other clients are very specific and there will always be a need for structured products, despite the capital cost incentives to move everything onto exchanges/clearing. Paul countered by saying that he didn’t disagree with this, but that the reason for suggesting the exotics industry may die is the difficulty of finding institutions able to warehouse the risk of such trades.
Satyam Kancharla of Numerix spoke next. Satyam said that two main changes struck him in the market at the moment. One was the adjustment to a mandated market structure, with clearing, liquidity and capital changes coming through from the regulators. The other was the drive for increased operating efficiency at investment banks. Whilst it is probable that no investment bank will ever reach the operational efficiency of a retail business like Walmart, this was nonetheless the direction of travel, with banks looking at how to optimize collateral, optimize trading venues etc.
Satyam put forward that computing power is still adhering to Moore’s law, and that as a result some things are possible now that were not before; a centralized architecture built on this compute power is needed, but just because it is centralized does not mean that it is too inflexible to deal with each business unit’s needs. Coming back to earlier comments made by the panel, he put forward that a lot of quants are involved in simply reinventing the wheel, to which Paul added that quants were very experienced in using words like “orthogonal” to confuse mere mortals like him and to justify duplicating business functionality that is already available (from Numerix obviously, but more of that later). Satyam said that some areas of model development were more mature than others, and that quants should not engage in innovation for innovation’s sake. Satyam also made a passing reference to the continuing use of Excel and VBA as the main tools of choice in the front office, suggesting that we still have some way to go in terms of IT maturity (a hobby-horse topic of mine, for example see this post).
Prompted by an audience question around data and analytics, Ricardo said that the major challenge to sharing data was not technical but cultural. Against a background where maybe 50% of investment in technology is regulation-related, he said that there was no shortage of business ideas for P&L in the emerging “mandated” markets of the future, but many of these ideas required wholesale shifts in attitude at the banks in terms of co-operation across departments and from front to back office.
Satyam said that he thought of data and analytics as two sides of the same coin (could not agree more, but then again I would say that) in that analytics generate derived data which needs just as much management as the raw data. He said that it should be possible to have systems and architectures that manage the duality of data and analytics well, and these architectures did not have to imply rigidity and inflexibility in meeting individual business needs.
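(One way to picture this duality is to treat every derived number as first-class data that carries the lineage of the analytic and inputs that produced it. The sketch below is my own illustration of the idea; the names and structure are assumptions, not a Numerix or Xenomorph design.)

```python
# Illustrative sketch: derived data treated as first-class data, stored together
# with the lineage of the analytic and raw inputs that produced it, so it can be
# managed (versioned, audited, queried) just like the raw data itself.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable, Dict

@dataclass
class DerivedValue:
    value: Any
    analytic: str            # which analytic produced the value
    inputs: Dict[str, Any]   # the raw data it was derived from
    as_of: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def run_analytic(name: str, func: Callable[..., Any], **inputs) -> DerivedValue:
    """Run an analytic and capture its result together with its lineage."""
    return DerivedValue(value=func(**inputs), analytic=name, inputs=inputs)

# Example: a toy present-value analytic applied to raw market data.
pv = run_analytic("discounted_cashflow",
                  lambda cashflow, discount_factor: cashflow * discount_factor,
                  cashflow=1_000_000, discount_factor=0.97)
print(pv)
```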
There was then some debate about trade repositories for derivatives, where the panel discussed the tension between the US regulators wanting competition in this area and, as Paul suggested, the fragmentation that competition between DTCC, ICE, Bloomberg, LCH.Clearnet etc. brings. As such, Paul put it that the regulators would need to “boil the ocean” to understand the exposures in the market. Ricardo also mentioned some of the current controversy over who owns the data in a trade repository. One of the panelists suggested that we should also keep an eye on China and not get totally tied up in what is happening in “our” markets. The main point was that a huge economy such as China’s could not survive without a sophisticated capital market to support it, and that China was not asleep in this regard.
A good audience question came from Don Wesnofske, who asked how best to cope with the situation where an institution is selling derivatives based on one set of models while the client is using another set of models to value the same trade. If the selling institution then decides to buy or build a model similar to the client’s, Don wondered how a single analytics library practically helps a situation where a desk might price with one model and report P&L using another. One panelist responded that it was mostly the assumptions behind each model that determined differences in price, and that heterogeneous models, and hence prices, were needed for a market to function correctly. Another concurred and suggested there needed to be an “officially blessed” model within an institution against which valuations are compared. Amusingly for the audience, Steve O’Hanlon (CEO of Numerix) piped up that the problem was easy to resolve in that everyone should use Numerix’s models.
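(Don’s question essentially comes down to reconciling the same trade priced under different models. A hedged sketch of what comparing valuations against an “officially blessed” model might look like follows; the model functions, trade and tolerance are all invented for illustration.)

```python
# Illustrative sketch: valuing the same trade under several models and flagging
# differences against an "officially blessed" benchmark model.
# The model functions, trade and tolerance below are invented for illustration.

def blessed_model(notional, rate, years):
    # Hypothetical house-approved valuation.
    return notional * rate * years

def desk_model(notional, rate, years):
    # Hypothetical desk variant with slightly different assumptions.
    return notional * rate * years * 1.015

def client_model(notional, rate, years):
    # Hypothetical counterparty model.
    return notional * rate * years * 0.99

trade = {"notional": 10_000_000, "rate": 0.02, "years": 5}
benchmark = blessed_model(**trade)

for name, model in [("desk", desk_model), ("client", client_model)]:
    price = model(**trade)
    diff_bps = (price - benchmark) / benchmark * 10_000
    flag = "REVIEW" if abs(diff_bps) > 100 else "ok"
    print(f"{name:>6}: {price:,.0f} ({diff_bps:+.0f} bps vs blessed) {flag}")
```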
Mike Opal of Microsoft closed the event with his presentation on data, analytics and cloud computing. Mike started by illustrating that the number of internet-enabled devices passed the human population of the world in 2008, and that by 2020 the number of devices would reach 50 billion. He showed that the amount of data in the world was 0.8ZB (zettabytes) in 2009, and is projected to reach 8ZB by 2015 and 35ZB by 2020, driven primarily by the growth in internet-enabled devices. Mike also said that the Prism project, so much in the news of late, involves the construction of a server farm near Salt Lake City with 5ZB of capacity, so what the industry (in this case the NSA) is trying to do is unimaginable if we were to go back only a few years. He said that Microsoft itself was utterly committed to cloud computing, with 8 datacenters globally and 20 more under construction, at a cost of $500 million per center (I recently saw a datacenter in Redmond, totally unlike what I expected, with racks pre-housed in lorry containers, and the containers just unloaded within a gigantic hangar and plugged in – the person showing me around asked me who the busiest person at a Microsoft datacenter was, and the answer was the truck drivers…).
Talking of “Big Data”, he first gave the now-standard disclaimer (one I acknowledge having given myself) that he disliked the phrase. I thought he made a good point that Big Data is really about “Small Data”, in that a lot of it is about having the capacity to analyze at a tiny, granular level within huge datasets (maybe journalists will rename it? No, don’t think so). He gave a couple of good client case studies, one for Westpac and one for Phoenix, on uses of HPC and cloud computing in financial services. He also mentioned the Target retailing story about Big Data, which if you haven’t caught it is worth a read. One audience question asked him again how committed Microsoft was to cloud computing given competition from Amazon, Apple and Google. Mike responded that he had only joined Microsoft a year or two back, and that in part this was because he believed Microsoft had to succeed and “win” the cloud computing market: cloud is not the only way to go for these competitors, whereas Microsoft (being a software company) has to succeed at cloud (so far Microsoft have been very helpful to us in relation to Azure, but I guess Amazon and others have other plans).
In summary, a great event from Numerix with good discussions and audience interaction – helped for me by the fact that much of what was said (centralization with flexibility, duality of data and analytics, libraries of everything etc.) fits with what Xenomorph and partners like Numerix are delivering for clients.