John Bantleman, CEO
We recently hosted a dinner in New York City with 20 technology executives focused on Big Data in Banking and Financial Services. I found the event insightful, so I thought it would be worth sharing some of the perspectives of those who attended.
The first (and closest to my heart) is the separation of the Big Data business problem from the available technology and solution architectures. I am not trying to detract from the applicability and relevance of the Hadoop technology stack, or from the attractive, low-cost scalability that makes it central to solving Big Data challenges for today’s enterprise. Yes, there is an elephant charging straight towards us, but I do feel that, as an industry, we spend far more time focused on the technology itself and far less on the business problems it solves.
Financial services and banking firms have seen themselves as living Big Data for many years, with very high SLAs and stringent availability and security requirements. That was true long before the Big Data market took off, with start-ups now forming almost every day on millions of dollars of venture funding, in addition to serious investment pouring in from public technology providers. If you look at a classic Wall Street financial services organization or bank, there is a big problem. When you have to manage on the order of 50-200 PB of data growing at 40-100%, under increasing pressure and scrutiny from outside regulators, while trying to reduce IT spend, the problem is not just Big, it’s Staggering! The days of consistent double-digit margins, even for large investment banks, are over, and with little to no control over the ability to increase volume on the top end, you have to figure out a way to take a third of your embedded cost out. As one executive put it: “If you are not focused on reducing IT costs right now, you are just not paying attention.”
So is Big Data viewed as an opportunity or a cost? An informal mini-poll of the group came out 80% cost vs. 20% opportunity. The regulatory environment on Wall Street now costs the industry $30 billion a year, and there is simply no avoiding it. By contrast, when we discussed this with retail banks, the desire is to better understand customer behavior, and we saw the balance shift: the business opportunity to better understand the customer across a broad range of products and services (i.e. data sources) is a very compelling proposition for executives and line-of-business owners. The technology driver then became “how can I keep all the history of customer transactions and clicks for longer without increasing infrastructure spend?” In other words: “cheap and deep,” which I covered in a blog a few weeks back.
Interestingly, the respective Big Data solution architectures varied too. Most banks have a Hadoop cluster in a sandbox where a few technical resources are experimenting with it; some are investigating broader usage, and nearly all are interested in its low-cost storage (even cloud, where you can easily provision virtual servers) and, in some cases, Content Addressable Stores (to comply with WORM requirements). By contrast, the classic financial services or investment firm is certainly paying attention to new technology innovation but does not appear to be taking Hadoop as seriously right now, at least in terms of rolling out production clusters that the business will come to rely upon.
There is no question that Big Data is a big deal in financial services. The problem already exists, as opposed to being “anticipated” or “coming.” The requirements are mostly set and very high in terms of enterprise-grade capabilities. For a market sector known as an early technology adopter, it will be very interesting to see how the Big Data technology ecosystem plays out in the coming months and years.