Xenomorph Blog


A few recent news articles out from yours truly.

First off, one about the Chief Data Officer in Money Management Executive magazine.

Second, one about data trends in EDM in the Wall Street Letter.

Thirdly, something on data quality and the CFTC getting more aggressive, in Markets Media.

And as if you didn't know, today is the last day to vote for Xenomorph in the FTF News Technology Innovation Awards, so please (pretty please!) take a minute to do so. You know it makes sense (and a big thank you if you do have time...)

Vote by clicking here.

 

 


Posted by Brian Sentance | 25 March 2015 | 6:04 pm


Facebank

Looks like the "Facebank" idea I joked about in 2007 is effectively becoming reality with Facebook's move into payments. I am sure that reality will end up stranger than fiction.

Posted by Brian Sentance | 19 March 2015 | 4:31 pm


Please vote for Xenomorph! The FTF News Technology Innovation Awards 2015

Pleased to say that Xenomorph has been nominated in the FTF News Technology Innovation Awards 2015 in the Best Enterprise Data Management Solution category.

Great to see some recognition for the hard work the development team have put in recently: improving our clients' efficiency by extending our workflow, cleansing and validation functionality; enabling easier access to reports and visualizations through our new APIs and connections to business intelligence solutions; and offering greater deployment flexibility with TimeScape EDM now available on the Microsoft Azure Cloud.

Your vote would be very much appreciated and you can get to the voting page directly here.  

Many thanks!

Brian and the Xenomorph team. 

 

Posted by Brian Sentance | 13 March 2015 | 11:21 am


TabbFORUM Video: The Interdependence of Data, Analytics, and Visualization

Quick plug for an interview I did recently with Paul Rowady on the Tabb Forum - you can get access to the video here, and a brief summary of what we talked about is below. As ever, Paul has his humourous angle on things, and this time my green socks got the "Oompa Loompa" treatment (unfortunately you have to watch to the end to catch that one!). Last time it was my likeness to the lead singer of an Australian band. And for the record, we did also have a good conversation on data management and BI/visualization.

"As firms increasingly apply analytics to massive volumes of raw data, the amount of derived data is growing exponentially, and the need to apply strict governance to this derived data is more important than ever. To satisfy regulatory demands, the full data trail – including models and calculations – needs to be auditable, remarks Brian Sentance, CEO, Xenomorph. Unfortunately, there often is a disconnect between the validation of the raw data and the governance of the middle tier of derived data or analytics, he notes. Sentance and TABB Group’s Paul Rowady, principal and director of data and analytics research, examine the breakdown of data governance best practices, the risks involved, and the role of visualization tools in identifying data quality and data management shortfalls."

Posted by Brian Sentance | 10 March 2015 | 3:16 pm


PRMIA/Bloomberg Event - How low can yields go?

PRMIA and Bloomberg held a joint event at Bloomberg HQ yesterday evening entitled "How low can yields go?". Tom Keene of Bloomberg News proved himself to be a very dry, amusing and competent moderator, and the panel he moderated comprised Harley S. Bassman of PIMCO (there were a number of humourous jibes at Harley for having caused the financial crisis, due to his involvement in credit derivatives), James Sweeney, Chief Economist at CS, and Henson Orser of Nomura.

Tom asked the panel the obvious question: "How low can yields go?". One response from the panel was that almost every market event is clouded in "deflation hysteria", looking at events such as the recent drop in oil prices; things will change when this hysteria weakens. Another point made was that current policies (QE) are making safe assets unattractive in order to release cash into the economy, but that negative interest rates are inherently destabilizing. There was an attitude put forward of "we will survive this": looking back at the Great Depression, folks pulled money from banks, whereas that did not happen in 2008/9. There was some talk of how shorting the bond market 14 months in a row has been wrong, but that with 3 1/4% out at 10 years along the US yield curve, this shorting has had no effect on macro policy.

Tom asked whether time, "theta" as he put it, was the real healer here. The panel responded that the government's policies of 2009 worked, regardless of your opinion of where the same policies might be taking us long term. A period of balance sheet repair has followed during the 7 years since the crisis, and on many measures of debt this repair is "mostly" done. Looking forward, the yield curve is priced for a rate hike and the Fed wants one, but this will not occur until we are nearer full employment at 5-6%, not 8-9% levels. Wage inflation is likely to take off nearer full employment, and growth will start to slow at the same time. There are signs that this is already occurring, and core inflation in the US is not really that low, only say 30bp less than average.

The panel also discussed the issue that many working on Wall Street have not experienced a tightening economy over the past 10 years, so maybe there should be some concern over how they deal with the transition. The panel envisage more FX and rate volatility as this occurs. Against this background, regulation has left Wall Street with fewer and smaller players to help provide liquidity into the volatile market to come. One of the panelists pointed out issues for equities, with cash flows being discounted at the current (very low) curve whilst returns look weak. One potential scenario put forward from this was a rapid increase in inflation over 3-6 months.

Tom asked "How long is history" wanting to establish what timeframe we should be assessing the success of policy. One of the panelists said that baby boomer generation retiring may affect fund flows as they get out of equities and buy bonds and that rates behaviours may have changed for good with markets used to yield curve inversions at around 5/5 1/4% but now moving to 3/3 1/4%. Another panelist mentioned that due to regulation the flow of funds from mortgages and their securitization to sophisticated investors was broken. Again the issue of Wall Street having less capacity due to regulation was mentioned.

On the subject of FX, the panel thought it a very difficult market to forecast. Dollar strength looks set to continue, with the possibility of an 85c EUR. The Eurozone may strengthen economically as exports benefit from a weak EUR. Tom asked where investors could capture yield, and Brazil was suggested as a good target given its currently high rates. One of the panelists suggested that the world was taking part in a co-ordinated currency war, but this was not accepted by all. Japan's 2015 GDP growth is likely to be good, supported by lower oil prices and some wage inflation. The Japanese Government cannot buy any more JGBs since the supply is running out; however, the panel think inflation is about to take off there. In summary they thought Abenomics had "worked".

"Stability" and how to recognize it was the next topic from Tom. Firstly the panel thought that whatever is to come in the transition to stability, the world would not unravel. The panel said that stability ex-post was much easier to recognize than ex-ante. One of the panelist put forward a potential scenario in which the Fed could not tighten rates with a very strong dollar, China doing worse/US doing better and therefore everyone wants treasuries. 

Audience Q&A - There were a few audience questions. The first was on demographics - asking the panel about the effects on rates and the economy of birth and retirement rates. The panel thought a key issue was whether the cohort of retirees was being replaced by a similar cohort of workers. The US is balanced in this regard but other countries such as Germany, Italy and Japan are not. In the 1980's, Japan did very well economically and had 13 retirees per 100 workers and now this was 48 per 100. However, even for the US then increased longevity of the retiring population was another key issue to address. 

Another audience question focused on QE/fiat currencies and whether today's governments were printing money faster than the economy has been growing. In summary the panel seemed to think of QE as an experiment that had not gone wrong yet - not to say that it might not, nor how long it might take to go wrong.

One audience member wondered whether Francis Fukuyama's "End of History" now applied to the fixed income and hedge fund industries. The response from the panel was that it is never "different this time", and that greed/ego/hubris had caused problems and would cause problems again. However Wall Street is not dead, and it has the plumbing and machinery to convert granny's savings into funding for an app developer. The last piece of advice from one panel member was to go to the bar and think pleasant thoughts.

So we did.

 

Posted by Brian Sentance | 5 March 2015 | 9:22 pm


PRMIA Risk Year in Review 2014

PRMIA put on their Risk Year in Review event at the New York Life Insurance Company on Thursday. Some of the main points from the panel, starting with trade:

  • The world continues to polarize between "open" and "closed" societies with associated attitudes towards trade and international exposure.
  • US growth at around 3% is better than the rest of the world, but this progress is not yet seen by/benefitting a lot of the population.
  • This is against an economic background of Japan, Europe and China all struggling to maintain "healthy" growth (if at all).
  • Looking back at the financial crisis of 2008/9, it was the WTO rules in place that kept markets open and prevented isolationist and closed policies from really taking hold - although such populist inward-looking policies are still a major issue and risk for the global economy today.
  • Some optimistic examples of progress on world trade recently, however:
  • US Government is divided and needs to get back to pragmatic decision making
  • The Federal Reserve currently believes that external factors/the rest of the world are not major risks to growth in the US economy.

James Church of sponsor FINCAD then gave a brief presentation on their recent experience and a recent survey of their clients in the area of valuation and risk management in financial markets:

  • Risk management is now considered a source of competitive advantage by many institutions
  • 63% of survey respondents are currently involved in replacing risk systems
  • James gave the example of Alex Lurye saying risk is a differentiator
  • Aggregate view of risk is still difficult due to siloed systems (hello BCBS239)
  • Risk aggregation also needs consistency of modelling assumptions, data and analytics all together if you are to avoid adding apples and pears
  • Institutions now need more flexibility in building curves post-crisis with OIS/Libor discounting (see FINCAD white paper)
    • 70% of survey respondents are involved in changes to curve basis
  • Many new calculations to be considered in collateralization given the move to central clearing
  • 62% of survey respondents are investing in better risk management process - so not just technology, but people and process as well

James was followed by a discussion on market/risk events this year:

  • Predictions are hard, but 50 years ago Isaac Asimov made 10 predictions for 2014 and 8 of them have come true
  • Bonds and the Dollar are still up but yields are low - a result of the relatively poor performance of other currencies and the inward strength of the US economy. The US is firmly post-crisis economically and markets are anticipating both oil independence and future interest rate movements.
  • Employment level movements are no longer a predictor of interest rate moves; the balance of payments matters more now
  • October 15th saw a 40bp movement in yields in 3 hours (a 7 standard deviation move) - this was more positioning/liquidity risk in the absence of news, and an illustration of how regulation has moved power from banks to hedge funds (a back-of-envelope on sigma moves is sketched after this list)
  • Risk On/Off - trading correlation is very difficult - oil price up should mean demand is up, but there has been a 30% dive in price over the past 6 months - the correlation has changed
  • On the movie Interstellar: on one planet an astronaut sees a huge mountain, but another sees that it is a wave larger than anything seen before - it all depends on forming your own view of the same information as to what you perceive or understand as risk
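As an aside, the "7 standard deviation" description of that October 15th move is easy to sanity-check. A minimal sketch, assuming (hypothetically) a history of daily 10-year yield changes with a standard deviation in the 5-6bp range - the numbers below are made up for illustration, not market data:

```python
# Back-of-envelope sigma-move calculation; the daily changes are invented so
# that the arithmetic lands near the "7 sigma" figure quoted at the event.
import statistics

daily_yield_changes_bp = [3.5, -2.5, 7.5, -9.0, 1.0, -5.5, 6.5, -2.0, 3.0, -7.0]
sigma = statistics.stdev(daily_yield_changes_bp)  # sample standard deviation in bp

move_bp = 40.0  # the intraday move under discussion
print(f"sigma ~ {sigma:.1f}bp, so a {move_bp:.0f}bp move is {move_bp / sigma:.1f} standard deviations")
```

Whether 7 sigma is the "right" number depends entirely on the window and frequency used to estimate sigma, which is part of the panel's point about positioning and liquidity rather than news.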

Some points of macro economics:

  • Modest slow down this quarter
  • Unemployment to drop to 5.2% in 2015 from 5.8%
  • CS see the Fed hiking rates in mid-2015 followed by 3 further hikes
    • The market does not yet agree, seeing a move in Q3 2015
  • Downside risks are inflation, slow US growth and anaemic wage growth
  • Upside risks - the oil price boost to spending, reducing the cost of gas from 3.2% to 2.4% of disposable income

Time for some audience questions/discussions:

  • One audience member asked the panel for thoughts on the high price of US Treasuries
  • Quantitative Easing (QE) was (understandably) targeted as having distorting effects
  • Treasury yields have been a proxy for the risk free rate in the past, but the volatility in this rate due to QE has a profound effect on equity valuations
  • Replacing maturing bonds with lower yielding instruments is painful
  • The Fed are concerned not to appear to lose control of interest rates, nor do they want to kill the fixed income markets, so rate rises will be slow.
  • One of the panelists said that all this had a human dimension, not just markets: effectively non-existent interest rates yet negative equity still in Florida; no incentive to save, so money heads into stocks, which is risky; low rates of little benefit to senior citizens, etc.
  • Taper talk last year saw a massive sell-off of emerging market currencies - one problem in assessing this is defining which economies are emerging markets - but key is that current account deficits/surpluses matter, which the US escapes as the world's reserve currency but emerging markets do not.
  • The emerging market boom of the past was really a commodities boom; the US still leads the world's economies, and current challenges may expose the limits of authoritarian capitalism

The discussion moved onto central clearing/collateral:

  • Interest rate assets for collateral purposes are currently expensive
  • Regulation may exacerbate volatility with unintended consequences
  • $4.5T of collateral set aside currently set to rise to $12-13T
  • Risk is that other sovereign nations will target the production of "AAA" securities for collateral use that are not really AAA
  • Banks will not be the place for risk, the shadow banking system will
  • Futures markets may be under collateralized and a source of future risk

One audience member was interested in downside risks for the US and couldn't understand why anyone was pessimistic given the stock market performance and other measures. The panel put forward the following as possible reasons behind a potential slow down:

  • Income inequality meaning benefits are not throughout the economy
  • Corporations making more and more money but not proportionate increase in jobs
  • Wages are flat and senior citizens are struggling
  • (The financial district is not representative of the rest of the economy in the US however surprising that may be to folks in Manhattan)
  • The rest of the US does not have jobs that make them think the future is going to get better

Other points:

  • Banks have badly underperformed the S&P
  • Regulation is a burden on the US economy that is holding US growth back
  • Republicans and Democrats need to co-operate much more
  • House prices need more oversight
  • Currently $1.2T in student loans, and students are not expecting to earn more than their parents
  • Top 10 oil producers are all pumping full out
    • The Saudis are refusing to cut production
    • Venezuela funding policies from oil
    • Russia desperately generating dollars from oil
    • Will the US oil bonanza break OPEC - will they be able to co-ordinate effectively given their conflicting interests?

Summary - overall a good event, with a fair amount of economics to sum up the risks for 2014 and on into 2015. Food and wine tolerably good afterwards too!

 


Posted by Brian Sentance | 23 November 2014 | 9:38 pm


Data Management Summit NYC from the A-Team

The A-Team put on another good event at DMS New York yesterday. Lots of good stuff talked about, and here are a few takeaways that I remember, after a photo of Ludwig D'Angelo of JPMorgan:

[Photo: Ludwig D'Angelo of JPMorgan]

  • Data Utilities - One of the presenters said that "Data Utility" was a really overused term, second only to "Big Data". My comment would be that a lot of the managed services folks seem to want to talk about "Data Utilities", seeming to prefer that term to a plain description of what they are - maybe because they perceive it as better marketing, and/or maybe because they hope to be anointed/appointed (how, I don't know) as an industry "Data Utility". Anyway, for me they fail to address the issue of client-specific data and its management very well, much to the detriment of their argument imho - although SmartStream did say that client data can be mixed into the data services they offer.
  • Andrew Gets Literaturally Physical - Andrew Delaney of the A-Team expressed a preference for "physical" books when talking about why the A-Team also prints the Regulatory Data Handbook 2 as well as making it available online. I have to agree that holding a book still beats my Kindle experience, but maybe I am just getting old. Andrew should check out this YouTube video on how the book was first introduced...
  • FIBO - The Financial Industry Business Ontology (FIBO) was discussed in the context of trying to establish industry standards for data. As ever, the usage of words like "Ontology" I suspect leaves a lot of business folks looking for the nearest double shot of espresso, but that aside, it seems the EDM Council are making some progress on developing this standard. The main point from the event was that industry adoption is key. I found some of the comments during the day a bit schizophrenic, in that some said the regulators should not mandate standards (i.e. leave it to industry adoption and principles) but then in the next breath discussed the benefits (or otherwise) of the LEI (ok, not mandated, but specific and coming from the regulators). Certainly the industry needs "help" (is that a strong enough word?) to get standards in place.
  • Data Quality - Lots on data quality, with assessing the business value of data quality initiatives being a key point. On the same subject, Predrag of element-22 said that the EDM Council will soon be announcing adoption of the Data Quality Index, which could be used to correlate data quality with operational KPIs for the business.
  • Regulation (doh!) - It wouldn't be a data management event without lots of discussion on regulation - a key point being that even those regulations that are not directly/explicitly about data still imply that data management is key (take CVA calcs for example) - and on a related note it was suggested that BCBS239 should be considered as a more general data management template for any business objective.
  • Entity Hierarchies/LEI - Ludwig D'Angelo of JPMorgan gave a great talk and said that vendors were missing a massive opportunity in delivering good hierarchy datasets to clients, and that the effort expended on this at firms was enormous. Ludwig said that the lack of hierarchies in the Legal Entity Identifier (LEI) is a gap that the private sector could and should fill. Ludwig also seemed initially to be thrown when one of the audience suggested that there were multiple "golden copies" of hierarchies needed, since definitions of ownership can differ depending on which department you are in (the old battle of risk and finance departments again). Good discussion later of how regulation is driving all systems to be much more entity-centric rather than portfolio-centric, emphasising the importance of getting entity hierarchies right.
  • DCAM - John Bottega did a great presentation on the Data Management Capability Model (DCAM). John asked Predrag of element-22 to speak about DCAM, and he said that unlike previous models (DMM), this framework would not only assess where you are in data management but also show you where you need to go. DCAM covers data management strategy / operations / quality / business case / data architecture / tech architecture / governance / program. From what I could see it looked like a great framework - it appeared common sense and obvious, but that is in itself difficult to achieve, so good effort I think. Element-22 will offer an online service around DCAM that will also allow anonymous benchmarking of data management capabilities as more institutions get involved (update: the service is called pellustro).
  • BCBS239 - Big thanks to John M. Fleming of BNY Mellon and Srikant Ganesan of Risk Focus for taking part in the panel with me. Less focus on spreadsheet use and abuse on this panel, unlike the London panel from last month. John had some very practical ideas, such as the use of wikis to publish/gather data dictionary information, and that with a large legacy infrastructure you are better off documenting differences in definitions across systems than trying to change the world from day one. Echoing some of the points from DMS London, it was thought that making the use of internal data standards part of project sign-off was very pragmatic data governance, but also that some systems should be marked/assessed as obsolete/declining and hence blocked from any additional usage in new project work. Bit of a plug for some of our recent work on data validation and exception management, but the panel said that BCBS239 needs to encompass audit/lineage on calculations/derived data/rules in addition to just the raw data - a minimal sketch of the kind of lineage record this implies follows this list.
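A minimal sketch of the kind of audit/lineage record implied by that last point - this is a generic illustration, not Xenomorph's or any vendor's actual data model:

```python
# Hypothetical lineage record for a derived number: capture the inputs, the
# rule applied and the production time so the value can be audited later.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DerivedDataRecord:
    name: str            # what was derived, e.g. a cleansed curve point
    value: float
    rule: str            # identifier/version of the calculation or validation rule
    inputs: list         # references to the raw data points consumed
    produced_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = DerivedDataRecord(
    name="ust_curve_10y_cleansed",
    value=0.0235,
    rule="outlier_filter_v2+spline_v1",
    inputs=["quote:UST10Y:vendorA:2014-11-04", "quote:UST10Y:vendorB:2014-11-04"],
)
print(record)
```

With records like this persisted alongside the derived values themselves, "what did we calculate, from what, and using which rule?" becomes a query rather than an archaeology project.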

You can get more on the day by taking a look at my feed @TheLongSentance and at the hashtag #DMSNYC.

 


Posted by Brian Sentance | 6 November 2014 | 12:44 am


Banking Reloaded from Capco and Zicklin

Great event by Capco and the Zicklin Business School at Baruch College in NYC yesterday. Topics ranged right through high frequency trading, systemic risk, wealth management and bitcoin. The agenda is here and you can see some of the highlights on twitter at #BankingReloaded.

Posted by Brian Sentance | 29 October 2014 | 7:42 pm


TabbForum MarketTech 2014: Game of Smarts

A great afternoon event put on by TabbFORUM in New York yesterday, with a number of panels and one-on-one interviews (see agenda). You can see some of what went on at the event via the hashtag #TabbTech or via the @XenomorphNews feed.

[Photo]

"Death of Legacy" Panel Discussion

Posted by Brian Sentance | 16 October 2014 | 10:50 pm


A-Team DMS London Event and BCBS239 Panel

Good day at the A-Team's DMS London event last Wednesday. The day started with Tom Dalglish doing a pretty passable impression of a stand-up comedian in the morning keynote to open the day - not exactly an easy thing to do if 1) you are asked to do it very much at the last minute and 2) this is data management, not the subject most comedians would immediately reach for. So due kudos to Tom, and some of the comments he made about technology architects and technology builders were funny and resonated with the audience, such as this quote coming from a technologist: "How can I give you the requirements, I haven't finished the code yet?" (I think we have all been there on that one a few times in our careers...).

You can find some of the main points from the various panels via @XenomorphNews or more generally at #dmslondon (you could also find out a bit via my twitter account @TheLongSentance, so long as you don't mind the odd photograph and a few bits of personal baggage now and again).

BCBS239 Panel - I took part in the panel on BCBS239 on risk data aggregation and reporting, something which I have written about before, and obviously a prime example of how regulation is influencing (dictating?) financial markets institutions to take data management seriously. Dennis Slattery of EDMWorks moderated the panel, and on the panel with me were Sally Hinds of DCMS and Mikael Soboen, head of risk systems at BNP Paribas.

[Photo]

BCBS239 Panel at DMS London

Dennis started by outlining the four pillars of BCBS239:

  • Pillar 1: "Overarching governance and infrastructure"
  • Pillar 2: "Risk aggregation" capabilities
  • Pillar 3: "Risk reporting" capabilities
  • Pillar 4: "Supervisory review, tools and cooperation"

Regulatory Chicken - Dennis started by asking the panel whether BCBS239 was another game of regulatory "chicken", where the approach of "principles" means 1) the banks do the minimum and wait for the regulators to inspect and tell them what they specifically have to do, and 2) the regulators don't really want to be more specific beyond principles because they themselves are unsure of what is needed and want to learn from what different banks have done. The general consensus from the panel debate was that firms were not doing as much as they could, but that banks needed to show at least that they had a program in place and running by the January 2016 deadline or face big issues with the regulators (so the conclusion seems to be that the game of regulatory chicken is "on"). Mikael Soboen added that he was unsure whether his regulator would have the time to conduct a BCBS239 review given the workload the regulators currently face.

The End of Spreadsheets? - Dennis asked whether BCBS239 and the requirements for having a clear data lineage meant this sounded the bell for the end of spreadsheet usage at banks. I said not - I personally feel that a lot of folks in technology underestimate how difficult using software is for many business users and tools that make manipulating data easy like spreadsheets will have a role for the foreseeable future. I suggested that spreadsheets are a great adhoc reporting and analysis tool, and things mainly go wrong when they are used as a personal, "siloed" desktop database.

BCBS239 does not itself preclude the usage of spreadsheets and end user computing, but rather, like a lot of regulation, says that their usage must be taken seriously - in my view there is a tendency for some in IT to regard spreadsheets as someone else's problem, which is understandable but problematic for any CDO. Also there are approaches to spreadsheet usage that can help maintain data lineage, such as what Microsoft offers with web provision of spreadsheet dashboards using Power View and Power BI (used in our TimeScape MarketPlace offering), folks such as Cluster7 with their "closed circuit TV" for spreadsheet monitoring, and indeed Xenomorph with our SpreadSheet Inside approach of including centralised spreadsheet-like calculations as a supported data type within the audited data management process.

Data Dictionary - Mikael said that one responsibility he had was to represent the investment bank within the wider data dictionary initiatives due to BCBS239 at the retail bank, and said that this was challenging given the different terminology sometimes used. 

Is BCBS239 a Project or Data Governance? - The panel thought the best approach was to use BCBS239 as a framework for compliance with current regulation and regulation to come, though this is obviously subject to having the budget to do so. There were some general comments on how the data management needs of the front office and risk are converging. Standards such as FIBO were also discussed, with the feedback being that they are desirable but that it is early days, and their immaturity means they are often used only for specific areas such as modeling counterparty data.

Overall a good panel (I hope!) with a good amount of audience questions and participation. Again, you can find some of the main points from the various panels via @XenomorphNews or more generally at #dmslondon.

[Photo]

A bit of fun - Brian looking up to Ron Wilbraham at DMS London

Posted by Brian Sentance | 14 October 2014 | 12:19 am


A-Team DMS Awards 2014 - Xenomorph on the Cloud

Voting in the A-Team’s DMS Data Management Awards closes on the 26th of September, so if you haven't already, please vote for Xenomorph!

Xenomorph on the Cloud - First of a few lookbacks at what we have been doing over the past year, starting with a short animation about one of our major initiatives: cloud provision of data management, and a new venture into cloud-based data publishing with the TimeScape MarketPlace.

So it would be fantastic if you could support Xenomorph by voting here.

Thank you!


Posted by Brian Sentance | 11 September 2014 | 7:21 pm


A-Team DMS Data Management Awards 2014

Very pleased to announce that we have been nominated again this year in the A-Team’s DMS Data Management Awards. The categories we’ve been selected for are: 

  • Best Sell-Side Enterprise Data Management Platform
  • Best Buy-Side EDM Platform
  • Best EDM Platform (Portfolio Pricing & Valuations)
  • Best Risk Data Aggregation Platform
  • Best Analytics Platform.

Last year we were delighted to win the Best Risk Data Management/Analytics Platform award – even more so as the awards are voted for by our clients and industry peers.

So if you would like to support us again this year the voting is open now:

http://referencedatareview.hs-sites.com/data-management-summit-awards-2014-survey

and runs through to the 26th September. The award winners will be announced at A-Team’s Data Management Summit, at the America Square Conference Centre in London on October 8th.

Posted by Kerry Johnson | 5 August 2014 | 12:09 pm


NoSQL Document Database - Manhattan MarkLogic

A bit late in posting this up, but given I did something about RainStor I thought I should write up my attendance at a MarkLogic event day in downtown Manhattan several weeks back - for some context, their NoSQL database is used to serve up content on the BBC web site. They are unusual in the NoSQL "movement" in that they are a proprietary vendor in a space dominated by open source databases and the companies that offer support for them. The database they seem to compete with most in the NoSQL space is MongoDB, where both have origins as "document databases" (managing millions of documents is one of the most popular uses for big data technology at the moment, though not so much publicized as more fashionable things like swallowing a Twitter feed for sentiment analysis, for example).

In order to cope with the workloads that need to be applied to data, MarkLogic argue that data has escaped from the data centre, in terms of needing separate data warehouses and ETL processes aligned with each silo of the business. They put forward the marketing message that MarkLogic allows the data to come back into the data center, since it can be a single platform where all data lives and all workloads are applied to it. As such, it is easier to apply proper data governance if the data is in one place rather than distributed across different databases, systems and tools.

Apparently MarkLogic started out with the aims of offering enterprise search of corporate data content but has evolved much beyond just document management. Gary Bloom, their CEO, described the MarkLogic platform as the combination of:

• Database
• Search Engine
• Application Services

He said that the platform is not just the database but particularly search and database together, aligned with the aim of not just storing data and documents but with the aim of getting insights out of the data. Gary also mentioned the increasing importance of elastic compute and MarkLogic has been designed to offer this capability to spin up and down with usage, integrating with and using the latest in cloud, Hadoop and Intel processors.

Apparently one of the large European investment banks is trying to integrate all of their systems for post-trade analysis and regulatory reporting. The bank apparently tried doing this by adopting a standard relational data model but faced two problems: 1) the relational databases were not standard, and 2) it was difficult to get to and manage an overarching relational schema. On the schema side of things, the main problem seemed to be one schema changing and having to propagate that change through the whole architecture. The bank now seems to be having more success since switching to MarkLogic for this post-trade analysis - from a later presentation it seems things like trades are taken directly from the Enterprise Service Bus, saving the data in the message as-is (schema-less).

One thing that came up time and time again was their pitch that MarkLogic is "the only Enterprise NoSQL database", with high availability, transactional support (ACID) and security built in. Gary criticized other NoSQL databases for offering "eventual consistency" and said that they aspire to something better than that (to put it mildly). I thought it was interesting over a lunch chat that one of the MarkLogic guys said that "MongoDB does a lot of great pre-sales for MarkLogic" - meaning, I guess, that MongoDB is the marketing "poster child" of NoSQL document databases so they get the early leads, but as the client widens the search they find that only MarkLogic is "enterprise" capable. You can bet that the MongoDB team disagree (and indeed they do...).

On the consistency side, Gary talked about "ObamaCare", aka HealthCare.gov, which MarkLogic were involved in. First came some performance figures: handling 50,000 transactions/sec with 4-5ms response times for 150,000 concurrent users. This project suffered from a lot of technical problems, which really came down to running the system on a fragile infrastructure with weaknesses in network, servers and storage. Gary said that the government technologists were expecting data consistency problems when things like the network went down, but the MarkLogic database is ACID and all that was needed was to restart the servers once the infrastructure was ready. Gary also mentioned that he spent 14 years working at Oracle (as a lot of the MarkLogic folks seem to have) but that it was not really until Oracle 7 that they could say they offered data consistency.

On security, again there was more criticism of other NoSQL databases for offering access to either all of the data or none of it. The analogy used was one of going to an ATM and being offered access to everyone's money, having to trust each client to only take their own. Continuing the NoSQL criticism, Gary said that he did not like the premise put around that "NoSQL is defined by Open Source" - his argument being that MarkLogic generates more revenue than all the other NoSQL databases on the market. Gary said that one client hosted a "lake of data" in Hadoop, but that Hadoop is a great distributed file system which still needs a database to go with it.

Gary then talked about some of the features of MarkLogic 7, their current release. In particular, MarkLogic 7 offers scale-out elasticity but with full ACID support (apparently achieving one is usually thought to preclude the other), high performance and a flexible schema-less architecture. Gary implied that the marketing emphasis had changed recently from the "big data" pitch of a few years back to covering both unstructured and structured data within one platform - dealing with heterogeneous data being a core capability of MarkLogic. Other features mentioned were support for XML, JSON and access through a REST API, usage of MarkLogic as a semantic database (a triple store), and support for the semantic query language SPARQL. Gary mentioned that semantic technology was a big area of growth for them. He also mentioned support for tiered storage on HDFS.
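To make the "triple store" idea concrete, here is a tiny example using the open-source Python rdflib library rather than MarkLogic itself - the entity names and namespace below are invented for illustration:

```python
# Minimal triple store + SPARQL sketch using rdflib (pip install rdflib).
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/finance/")  # hypothetical namespace
g = Graph()
g.add((EX.AcmeBank, RDF.type, EX.LegalEntity))
g.add((EX.AcmeBank, EX.hasSubsidiary, EX.AcmeSecurities))
g.add((EX.AcmeSecurities, RDF.type, EX.LegalEntity))

# SPARQL query: which entities are subsidiaries of AcmeBank?
results = g.query("""
    PREFIX ex: <http://example.org/finance/>
    SELECT ?sub WHERE { ex:AcmeBank ex:hasSubsidiary ?sub }
""")
for row in results:
    print(row.sub)   # -> http://example.org/finance/AcmeSecurities
```

The appeal for reference data is that relationships like ownership hierarchies fall naturally out of subject-predicate-object triples without a fixed schema.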

The conversation then moved on to what’s next with version 8 of MarkLogic. The main theme for the next release is “Ease of Use”, with the following features:

• MarkLogic Developer – freely downloadable version
• MarkLogic Essential Enterprise – try it for 99c/hour on AWS
• MarkLogic Global Enterprise – 33% less (decided to spend less time on the sales cycle)
• Training for free – all classes sold out – instructor led online

Along this ease of use theme, MarkLogic acknowledged that using their systems needs to be easier, and that in addition to XML/XQuery programming they will be adding native support for JavaScript, greatly expanding the number of people who can program against MarkLogic. In terms of storage formats, in addition to XML they will be adding full JSON support. On the semantics side they will offer full support for RDF, SPARQL 1.1 and inferencing. Bi-temporal support will also be added, with a view to answering the kind of regulatory-driven questions such as “what did they know and when did they know it?”.

Joe Pasqua, SVP of Product Strategy, then took over from Gary for a more technical introduction to the MarkLogic platform. He started by saying that MarkLogic is a schema-less database with a hierarchical, very document-centric data model, and can be used for both structured and unstructured data. Data is stored in compressed trees within the system. Joe then explained how the system is indexed, describing the “Universal Index” which lists where to find the following kinds of data, as in most good search engines:

• Words
• Phrases
• Stemmed words and phrasing
• Structure (this is indexed too as new documents come in)
• Words and phrases in the context of structure
• Values
• Collections
• Security Permissions

Joe also mentioned that a “range index” is used to speed up comparisons, apparently in a similar way to a column store; a sketch of the idea is below. Geospatial indices are like 2D range indices for how near things are to a point. The system also supports semantic indices, indexing on triples of subject-predicate-object.
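A toy version of the range index idea - keep one field's values in sorted order so that comparison queries become binary searches over the "column" rather than scans over documents:

```python
# Illustrative range index: (value, doc_id) pairs kept sorted by value.
import bisect

index = sorted([(98.7, "doc1"), (101.2, "doc2"), (99.5, "doc3"), (103.4, "doc4")])
values = [v for v, _ in index]

# Range query: all documents with value in [99.0, 102.0)
lo = bisect.bisect_left(values, 99.0)
hi = bisect.bisect_left(values, 102.0)
print([doc for _, doc in index[lo:hi]])   # -> ['doc3', 'doc2']
```

A 2D analogue of the same trick (indexing two coordinates, say) gives you the geospatial "how near is this to a point" queries Joe mentioned.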

He showed how the system has failover replication within a database cluster for high availability, and also full replication for disaster recovery purposes. There were continual side references to Oracle as a “legacy database”.

On database consistency and the ACID capability, Joe talked about MVCC (Multi-Version Concurrency Control). Each “document” record in MarkLogic has a start and end time for how current it is, and these values are used when updating data to avoid any reduction in read availability. When a document is updated, a copy of it is made but kept hidden until ready - the existing document remains available until the update is ready, at which point the “end time” is marked on the old record and the “start time” on the new one. So the system is effectively always appending in serial form rather than seeking on disk, and the start and end times on records enable bitemporal functionality to be implemented. Whilst the new record is being created it is already being indexed, so there is zero-latency searching once the new document is live. A toy sketch of the scheme follows.
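A much-simplified sketch of that MVCC scheme - append-only versions, each stamped with a start and end time, so a read "as of" any time sees exactly one version of the document (my toy illustration, not MarkLogic's implementation):

```python
# Toy MVCC store: versions are only ever appended, never overwritten.
import time

versions = []  # dicts of doc_id, start, end (None = current), content

def update(doc_id, content):
    now = time.time()
    for v in versions:
        if v["doc_id"] == doc_id and v["end"] is None:
            v["end"] = now                      # close off the old version
    versions.append({"doc_id": doc_id, "start": now, "end": None, "content": content})

def read_as_of(doc_id, t):
    for v in versions:
        if v["doc_id"] == doc_id and v["start"] <= t and (v["end"] is None or t < v["end"]):
            return v["content"]

update("trade42", {"qty": 100})
t1 = time.time()
time.sleep(0.01)
update("trade42", {"qty": 250})
print(read_as_of("trade42", t1))           # -> {'qty': 100}
print(read_as_of("trade42", time.time()))  # -> {'qty': 250}
```

Readers never block writers because the old version stays visible until the new one's start time is stamped, and the retained history is exactly what a bitemporal "what did we know then?" query needs.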

One of the index types mentioned by Joe was a “Reverse Index”, where the queries themselves are indexed and each new document coming in is passed over these queries (sounds like the same story as the complex event processing folks), triggering alerts based on which queries the document fits - illustrated in the sketch below.
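The reverse index concept in miniature - register the queries up front, then match each arriving document against them (again a toy illustration, not MarkLogic's internals):

```python
# Standing queries: an alert fires when all of a query's terms appear in a document.
standing_queries = {
    "credit_alerts": {"downgrade", "default"},
    "fx_alerts": {"eur", "intervention"},
}

def ingest(doc_id, text):
    words = set(text.lower().split())
    for name, terms in standing_queries.items():
        if terms <= words:   # subset test: every query term present
            print(f"ALERT {name}: matched by {doc_id}")

ingest("news1", "agency announces downgrade amid default fears")
ingest("news2", "eur slides as central bank intervention rumoured")
```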

In summary, the event was a good one and MarkLogic seems an interesting technology, with a variety of folks using it in financial markets - the post-trade analysis example above (a bit like RainStor, I think, as an archive) and others using it more in the reference data space. I am not sure how real-time capable MarkLogic is - there seems to be a lot of emphasis on post-trade. The event also brought home to me the importance of search and database together, which seems to be a big strength of their technology.


Posted by Brian Sentance | 11 July 2014 | 9:36 pm


Cloud, data and analytics in London - thanks for coming along!

We had over 60 folks along to our event at the Merchant Taylors' Hall last week in London. Thanks to all who attended and all who helped with the organization of the event, and sorry to miss those of you who couldn't come along this time.

Some photos from the event are below starting with Brad Sevenko of Microsoft (Director, Capital Markets Technology Strategy) in the foreground with a few of the speakers doing some last minute adjustments at the front of the room before the guests arrived:

[Photo]

 

Rupesh Khendry of Microsoft (Head of World-Wide Capital Markets Solutions) started off the presentations at the event, introducing Microsoft's capital markets technology strategy to a packed audience:

[Photo]

 

After a presentation by Virginie O'Shea of Aite Group on Cloud adoption in capital markets, Antonio Zurlo (below) of Microsoft (Senior Program Manager) gave a quick introduction to the services available through the Microsoft Azure cloud and then moved on to more detail around Microsoft Power BI:

[Photo]

 

After Antonio, yours truly (Brian Sentance, CEO, Xenomorph) gave a presentation on what we have been building with Microsoft over the past 18 months: the TimeScape MarketPlace. At this point in the presentation I was giving some introductory background on the challenges of regulatory compliance and the pros and cons of point solutions versus having a more general data framework in place:

[Photo]

 

The event ended with some networking and further discussions. Big thanks to those who came forward to speak with me afterwards, great to get some early feedback.

[Photo]

 


Posted by Brian Sentance | 30 June 2014 | 8:05 pm


Xenomorph Participates in American Cancer Society Financial Services Cares Gala for Second Year

"As one person I cannot change the world, but I can change the world of one person" – Paul Shane Spear. As mantras go, not many come better to live by.

According to cdc.gov, each year about 8 million people die from cancer and 14 million people are diagnosed with it. To put things into perspective, twice as many people die from cancer as from AIDS, malaria and tuberculosis combined. The American Cancer Society (ACS) is a voluntary health organization dedicated to the elimination of cancer. Anything that we as individuals, groups or institutions can do to assist is valued, as every effort helps to aid the research and clinical trials that keep alive the hope of cancer cures being found.

Now in its ninth year, the annual American Cancer Society Financial Services Cares Gala will take place on Tuesday, June 24, 2014 at Cipriani 42nd Street in New York City, bringing together leaders in the financial services industry who are invested in the fight against cancer. Xenomorph is proud of its association with the ACS and to be donating a custom men’s suit, for the second year running, to be auctioned at the event.

Visit the ACS Financial Services Cares Gala page to learn more about the event and how you can get involved.

Posted by Naj Alavi | 26 June 2014 | 5:25 pm


Cloud, data and analytics in London. Tomorrow Wednesday 25th June.

One day to go until our TimeScape MarketPlace breakfast briefing "Financial Markets Data and Analytics. Everywhere You Need Them" at Merchant Taylors' Hall tomorrow, Wednesday June 25th. With over ninety people registered so far it should be a great event, so if you can make it please register and come along - it would be great to see you there.


Posted by Brian Sentance | 24 June 2014 | 11:25 am


Cloud, data and analytics in London. Next Wednesday June 25th.

Less than one week to go until our TimeScape MarketPlace breakfast briefing "Financial Markets Data and Analytics. Everywhere You Need Them" at Merchant Taylors' Hall on Wednesday June 25th.

Come and join Xenomorph, Aite Group and Microsoft for breakfast and hear Virginie O'Shea of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be demonstrating the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. 

In the meantime, please take a look at the event and register if you can come along, it would be great to see you there.


Posted by Brian Sentance | 19 June 2014 | 4:34 pm


New Client - Mizuho Securities USA

Very pleased to announce that Mizuho Securities USA has completed a successful implementation of TimeScape; you can see the press release here, and more detail is available in this article on Inside Reference Data. A big thank you to all those involved in making this happen, both at Mizuho and on the Xenomorph team.

Posted by Brian Sentance | 18 June 2014 | 11:12 am


Financial Markets Data and Analytics. Everywhere London Needs Them.

Pleased to announce that our TimeScape MarketPlace event "Financial Markets Data and Analytics. Everywhere You Need Them" is coming to London, at Merchant Taylors' Hall on Wednesday June 25th.

Come and join Xenomorph, Aite Group and Microsoft for breakfast and hear Virginie O'Shea of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be demonstrating the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. 

In the meantime, please take a look at the event and register if you can come along, it would be great to see you there.


Posted by Brian Sentance | 11 June 2014 | 8:53 pm


Clients and Partners. Everywhere You Need Them.

A quick thank you to the clients and partners who took some time out of their working day to attend our breakfast briefing, "Financial Markets Data and Analytics. Everywhere You Need Them.", at Microsoft's Times Square offices last Friday morning. Not particularly great weather here in Manhattan, so it was great to see around 60 folks turn up...

[Photo]

 
Rupesh Khendry of Microsoft (Head of World-Wide Capital Markets Solutions) started the event and set out the agenda for the morning. Rupesh described the expense of data within financial markets, and the difficulties experienced by risk managers in pulling together all the data and analytics they need...

[Photo]
 
...and following Rupesh was Antonio Zurlo (below) of Microsoft (Senior Program Manager), who explained the fundamentals of Microsoft Azure and what services and infrastructure it offers, including public cloud, virtual private cloud and hybrid cloud architectures. Antonio also described a key usage pattern for HPC/grid on Azure being used to "burst to the cloud" when on-premise infrastructure needs to be extended for end/intra-day risk calcs...
[Photo]
 
Sang Lee (below) of Aite Group (Managing Partner) then delivered his presentation "Floating in the Capital Markets Cloud: Moving Beyond Data Storage". Sang's main findings from a survey of 20 financial institutions were that concerns about security and SLAs relating to cloud usage remain, but even those that were concerned said they were planning to start a cloud project within the next 24 months. Cloud technology seems to be becoming more acceptable of late, and Sang said this seems to be due to regulation, cost pressures and the desire to offer better services to clients. Sang confirmed that HPC/grid with "burst to the cloud" is a common usage pattern, and that "Data as a Service" is becoming more popular...
[Photo]
 
Fred Veasley (below) of Microsoft (Tech Solutions Professional) was next, introducing Microsoft Power BI and Office 365. Fred explained how Power BI extends the capabilities of Excel with data search (finding and retrieving publicized data sources both within an organization and over the web), its integration capabilities with standard databases, NoSQL databases, data standards such as OData, and new APIs/sources of data such as Facebook. Once downloaded, the data can be shaped and merged with other datasets (for instance combining data from positions databases/systems with analytics and data from the cloud), and kept up to date automatically. In addition to Power BI, Power View enables great visualizations and interactive dashboards to be created, and once finalized these can be deployed centrally via web pages down to end users...
[Photo]
 
After Fred, Brian Sentance (below), CEO of Xenomorph, explained the origins of the TimeScape MarketPlace. Based on some discussions with Microsoft about 18 months back, the idea was effectively firstly to get TimeScape running in the Microsoft Azure cloud, secondly to turn the data management capabilities of TimeScape "upside-down" by using it as a means to upload and publish data to the cloud, and thirdly to provide one-to-many access to multiple sources of data via web interfaces and key delivery tools such as Microsoft Power BI. Put another way, without any local software or hardware infrastructure, both business users and IT staff can access multiple data sources in the same format and using the same data model wherever the data is needed. In addition to .NET and Java interfaces to the TimeScape MarketPlace via OData, web API delivery into F#, Python, R and MATLAB are all in development...
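As a flavour of what OData access looks like from Python, here is a hedged sketch - the endpoint URL, entity set and field names are invented for illustration; only the OData query conventions ($filter, $select, the JSON "value" array) are standard:

```python
# Hypothetical OData consumption sketch; the marketplace URL and fields are made up.
import requests

BASE = "https://marketplace.example.com/odata"   # placeholder endpoint
params = {
    "$filter": "Ticker eq 'VOD.L'",
    "$select": "Date,Close",
}
resp = requests.get(f"{BASE}/EndOfDayPrices", params=params,
                    headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()
for row in resp.json().get("value", []):   # OData JSON puts results in "value"
    print(row["Date"], row["Close"])
```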
[Photo]
 
...and in addition to downloading data via Power BI, Brian also demonstrated how you could build on the data using Power View to create powerful analytical dashboard functionality that can be built and tested in Excel, then deployed centrally within a browser for access by users outside of Excel. He added that partners are one of the key aspects of the platform, and introduced the TimeScape MarketPlace Partner Program to get data, analytics and model vendors, software and service vendors involved and building on the platform. Andrew Tognela (below) of Microsoft (Worldwide Managing Director) closed the presentations...
[Photo]

Posted by Brian Sentance | 14 May 2014 | 9:51 pm


7 days to go - Financial Markets Data and Analytics. Everywhere You Need Them.

Quick reminder that there are just 7 days left to register for Xenomorph's breakfast briefing event at Microsoft's Times Square offices on Friday May 9th, "Financial Markets Data and Analytics. Everywhere You Need Them."

With 90 registrants so far it looks to be a great event, with presentations from Sang Lee of Aite Group on the adoption of cloud technology in financial markets, Microsoft showing the self-service (aka easy!) data integration capabilities of Microsoft Power BI for Excel, and an introduction to the TimeScape MarketPlace, Xenomorph's new cloud-based data mashup service for publishing and consuming financial markets data and analytics.

Hope to see you there and have a great weekend!

 


Posted by Brian Sentance | 2 May 2014 | 7:34 pm


Xenomorph Releases TimeScape Data Validation Dashboard

Very pleased to announce general availability of the TimeScape Data Validation Dashboard, which we announced this morning - you can find out more here. A big thank you to all the staff and clients involved, who have helped us put this together over the past year.


Posted by Brian Sentance | 30 April 2014 | 4:30 pm


Regulatory, Compliance, and Risk Data Technology Challenges - PRMIA

The New York Chapter of PRMIA hosted "Regulatory, Compliance, and Risk Data Technology Challenges" at Credit Suisse's offices in New York last Thursday, 10th April. Abraham Thomas introduced the panelists, and Don Wesnofske started off by setting the scene for the evening's event.

Don outlined how, in reaction to the 2008 crisis, the regulators now require data retention for up to 10 years or more. Don cited one particular example where data must be reconstructed within 24 to 48 hours for any date up to 7 years back, and said that this kind of "forensic" investigation capability was an important consideration for many financial institutions. He took us through a good presentation slide of his view on data management/risk architecture, and outlined how operational risk comprises people, process, technology and events. Don ended his presentation by taking us through Wikipedia's definition of "Big Data", and in particular talked about how data has a life cycle going through:

  • Production
  • Retention
  • Archive
  • Purged

Don then handed over to Luigi Mercone, a Director of Engineering Strategy & Architecture at Credit Suisse. Luigi started by saying that to the business at CS he is technical support, which involves asking "What is on fire today? And what's going to be on fire tomorrow?" Luigi described how some time back CS had a regulatory enquiry around their equities business which required them to reconstruct data from 2 years back.

The project to do this took around 4-5 months of database administrators' time to reconstruct the world as at that point in time (I guess because tape storage was being used, and this needed restoring to disk/database). This was for an equity order management system that had doubled in size every year for the past 17 years, and at that point CS was only retaining data going back 2 years. Luigi said it was then realized that new regulations requiring the ability to produce forensic evidence as at any point in time would potentially swamp CS's resources unless addressed head on and strategically.

Luigi described the original architecture that they were using being based on an in-memory database for intraday workloads, then standard Sybase (probably ASE I guess) and then Sybase IQ for longer term archiving, taking advantage of the column-store capabilities of Sybase IQ and the resulting data compression possible. He added that the data storage requirements of the system had grown from 150TB to 1.2PB in 4 years.

Luigi then offered a comparison of this original architecture with what he found by implementing RainStor: in the original architecture the Sybase IQ database compressed the data down to 160TB, whereas RainStor improved on this by a further factor of around 10, down to 14TB. He said that RainStor was self-service, providing a standard SQL interface; it eliminated the need for tape storage, reduced the system "footprint" by 90% at CS, cost 1/5 as much, and performed well. (I would caveat here that I know nothing of the original architecture other than the summary Luigi provided, and as such it is hard to judge whether the original architecture was optimal for the data growth experienced, and hence whether this is overall an objective comparison of Sybase IQ's capabilities with RainStor.) Luigi closed by saying that whilst RainStor is a great archive database, its origins were in in-memory databases, and given his experience so far he would encourage RainStor to re-enter that market too.
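The arithmetic behind those figures is worth a quick check (all input numbers as quoted in the talk):

```python
# Compression ratios implied by the quoted storage sizes.
raw_tb       = 1200.0   # ~1.2PB storage requirement quoted after 4 years' growth
sybase_iq_tb = 160.0    # compressed size in the original Sybase IQ archive
rainstor_tb  = 14.0     # compressed size quoted for RainStor

print(f"Sybase IQ vs raw:      {raw_tb / sybase_iq_tb:.1f}x")        # ~7.5x
print(f"RainStor vs Sybase IQ: {sybase_iq_tb / rainstor_tb:.1f}x")   # ~11.4x
print(f"RainStor vs raw:       {raw_tb / rainstor_tb:.0f}x overall") # ~86x
```

So "a further factor of 10" is, if anything, slightly understated on the quoted numbers, and if the 1.2PB requirement is taken as the uncompressed size, the end-to-end reduction is closer to two orders of magnitude.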

John Bantleman, CEO of RainStor, took over and described how RainStor had been designed specifically for the needs of data archiving (talking more, I guess, about what it does now rather than its origins as outlined by Luigi above). He said that RainStor offers a 20-40x storage footprint reduction over traditional database technology and operates efficiently even at PetaByte (PB) scale, based on RainStor's proprietary database technology making use of columnar storage, and is capable of storing data both in relational-style tabular format and in more "document"-style XML and JSON formats using key-value access. John mentioned that not only can RainStor retrieve data as at a point in time, but it can also retrieve the schema in use at that point, for a more complete view of the state of the world at that time. This echoes a couple of past articles that I have penned, one for IRD and one for Wilmott Magazine, on bitemporal regulatory requirements.

John said that regulation was driving the need for data archiving capabilities, with 1,400 regulations added since 2008 (not sure of the source, but believable) and a comment from a Chief Data Officer (CDO) at one financial markets client that if a project isn't driven by regulatory compliance then it isn't going to get done (certainly sounds like regulatory overload). John's opening remarks were really about how regulatory cost, complexity and compliance are the driving forces behind the growth of RainStor in financial services technology; but whilst regulation is the driver, firms should also look at data archiving as an opportunity, creating value from corporate memory and proactively addressing future reporting and analysis needs.

John illustrated the regulatory need for data archiving through the Consolidated Audit Trail (CAT) regulation, whose 7-year data retention requirement will generate around 100PB of data. He also mentioned SEC Rule 17a-4 for broker-dealers as another example of "data retention" regulation, with particular reference to the storage of records in non-rewriteable, non-erasable format. John termed this WORM storage, meaning Write Once, Read Many, and seemed to imply that both the software (RainStor) and the hardware it runs on (e.g. EMC or Teradata etc.) need to be WORM compliant. One of the audience members asked John about BCBS 239, which John said he didn't know (fair enough in my opinion: RainStor's tech is generally about "data" and applicable across many industries, whereas BCBS 239 is specifically about banks, and to my understanding is more about data aggregation and reporting than data retention/archiving - this seems confirmed by a quick scan of that document for "archive" or "retention").
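For the technically minded, WORM semantics are simple to sketch. This is my own illustration only: real WORM compliance is of course enforced at the storage layer, not in application code like this, and all names here are invented.

```python
# Minimal sketch (my own illustration) of WORM semantics:
# Write Once, Read Many - appends allowed, rewrites and deletes refused.
class WormStore:
    def __init__(self):
        self._records = {}

    def write(self, record_id, payload):
        if record_id in self._records:
            raise PermissionError(f"{record_id} already written; WORM forbids rewrite")
        self._records[record_id] = payload

    def read(self, record_id):
        return self._records[record_id]

    def delete(self, record_id):
        raise PermissionError("WORM storage is non-erasable")

store = WormStore()
store.write("trade-1", {"symbol": "VOD.L", "qty": 100})
print(store.read("trade-1"))
try:
    store.write("trade-1", {"qty": 0})   # attempted rewrite is refused
except PermissionError as err:
    print(err)
```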

To finish off the main part of the event (before the drinks and food began) there was a panel discussion. Luigi said that it was best to "prepare for all time, not just specifics" with respect to data retention, and that there were dangers in rolling up data (effectively aggregating and losing granularity to reduce storage needs - see the toy example below). John added that his definition of "Big Data" was "All information, for ever". Luigi added that implementing RainStor had allowed CS to spend more time on interesting questions rather than on database restoration. John proposed that version 1 of Big Data involved the retention of web data, where losing a data point here and there didn't matter, whereas version 2 of Big Data is concerned more with enterprise data, where all data has value and needs to be retained, i.e. lots of high-value data. He added that this was an opportunity for risk and compliance to become an asset.
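A toy example of my own of Luigi's roll-up point (the numbers are invented): once events are aggregated, no query can recover them.

```python
# Toy example (my own): aggregation is lossy.
orders = [("09:01", 100), ("09:05", -40), ("14:30", 200)]  # intraday events
daily_total = sum(qty for _, qty in orders)
print(daily_total)  # 260 - but the 09:05 cancellation is gone for good;
# no query against the rolled-up figure can recover the event sequence
# a regulator might later ask for.
```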

Abraham (second from left), Don (center) and John (second from right)

Overall it was a good event which I found very interesting (but I have to admit to a certain geeky interest in this kind of tech). The event would perhaps have benefitted from another competitive or complementary technology vendor being involved, plus maybe an academic to give a different slant on data retention and on what the regulators hope to gain from this kind of mandated data retention. Not that the regulators have been that good at managing data themselves recently.

Networking afterwards courtesy of Credit Suisse and RainStor

Related articles

Posted by Brian Sentance | 17 April 2014 | 3:06 pm


Financial Markets Data and Analytics. Everywhere You Need Them.

Very pleased to announce that Xenomorph will be hosting an event, "Financial Markets Data and Analytics. Everywhere You Need Them.", at Microsoft's Times Square New York offices on May 9th.

This breakfast briefing includes Sang Lee of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, and how they are applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be introducing the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. More background and updates on MarketPlace in coming weeks.

In the meantime, please take a look at the event and register if you can come along - it would be great to see you there.

Posted by Brian Sentance | 15 April 2014 | 3:57 pm


When Big Data is not Big Understanding

Good article from Tim Harford (he of the enjoyable "Undercover Economist" books) in the FT last week called "Big data: are we making a big mistake?". Tim injects some healthy realism into the Big Data hype without dismissing its importance and potential benefits. The article examines four claims often made about Big Data:

  1. Data analysis often produces uncannily accurate results
  2. Statistical sampling is made obsolete by capturing all of the data
  3. Statistical correlation is all you need - no need to understand causation
  4. Enough data means that scientific or statistical models aren't needed

Now models can have their own problems, but I can see where he is coming from - for instance, claims 3 and 4 above seem to be in direct contradiction. I particularly like the comment later in the article that "causality won't be discarded, but it is being knocked off its pedestal as the primary fountain of meaning."

Also I liked the definition, from one of the academics mentioned, of a big data set as one where "N = All" - and the point that assuming you have "all" the data is an incorrect assumption behind some Big Data analysis. Large data sets can mean that sampling error is low, but sample bias is still a potentially big problem - for example, everyone on Twitter is probably not representative of the human race in general.
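To make the sampling error versus sample bias distinction concrete, here is a little simulation of my own (all the numbers are invented): a biased sample converges ever more confidently to the wrong answer as it grows.

```python
# Illustration (my own, not from the article): a huge biased sample
# converges confidently to the wrong answer.
import random

random.seed(42)

# Invented population: 70% "offline" people (mean opinion 0.2),
# 30% "on Twitter" (mean opinion 0.8). True population mean = 0.38.
def draw_person():
    if random.random() < 0.3:
        return ("twitter", 0.8 + random.gauss(0, 0.1))
    return ("offline", 0.2 + random.gauss(0, 0.1))

def biased_sample_mean(n):
    """Sample only people on Twitter - the bias is unaffected by n."""
    values = []
    while len(values) < n:
        group, opinion = draw_person()
        if group == "twitter":
            values.append(opinion)
    return sum(values) / n

for n in (100, 10_000, 100_000):
    print(n, round(biased_sample_mean(n), 3))
# Sampling error shrinks as n grows, but every estimate sits near 0.8,
# far from the true population mean of 0.38. More data, same bias.
```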

So I will now press save on this blog post, publish it to Twitter and help reinforce the impression that Big Data is a hot topic... which it is, but not for everyone, I guess, is the point.

Related articles

Posted by Brian Sentance | 8 April 2014 | 10:21 pm

