Sometimes you need to throw yourself out of your comfort zone to see what is really happening.
Over the past few months, I’ve been on the road presenting at events and meeting with clients (both current and prospective) across the globe. Now I’m no stranger to travel (having trekked similar international routes back in my data warehousing heyday), but this big data thing that is going on is a wee bit different.
Over the past year, I’ve seen a major shift in the perception and adoption of a big data approach. Coming into the job last year, I was constantly bombarded with complaints: ‘This is just hype…’ ‘…big data is a horrible name…’ ‘What is the 4th V? It should be this, not that…’ ‘Hadoop is big data… Hadoop will take over the warehouse…’ ‘NoSQL means NO SQL and thus NO database…’
For the most part, all rubbish.
A year later (and still getting my fair share of naysayer voices), I ended my tour in Warsaw, Poland, delivering a keynote at the Computerworld Big Data event. Any time you present in English in a country where English is not the native language, you have to take into consideration the speed of your speech, your use of slang, and the overall ability of the audience to follow your words. The week before, I was in Japan pausing for a translator after each sentence. This was my first time presenting in Warsaw, to an audience of over 300, where the majority of sessions were in Polish; mine had a little asterisk next to it on the conference brochure reading ‘in English,’ which did not promote confidence. I assumed that conveying the overall message would be a crapshoot. Boy, was I wrong.
Step back to a TDWI (The Data Warehousing Institute) event a year ago. I asked the audience, ‘How many of you have heard of Hadoop?’ Maybe one or two hands would go up. I asked the same question here, and two thirds of the room raised their hands. Granted, this was a ‘Big Data’ event, but so was the theme of that TDWI event. Times had changed over the past year in a big way.
After my presentation I spoke with numerous conference attendees, and the questions and topics were night-and-day different from what I was hearing a year ago. No more ‘What is Hadoop?’ or ‘What is big data?’ (with some crazy deep dive on the Vs and which ones were missing). It was more along the lines of ‘In the use case you illustrated for data warehouse augmentation, where did the actual loading of data occur?’ or, even better, ‘We have already started using this Hadoop technology, yet we want to get a more real-time perspective on our users. How can we do this?’ Here I was in Poland at the end of winter (yup, cold and snowing), thinking that my talk might miss the mark, and I was engulfed by questions and briefings on actual usages of big data. This was awesome.
We all might not agree on what big data is, or what the big data challenge is at any one organization, but we can all agree that technology has been (and still is being) developed to answer questions and tap into data that was once deemed impossible to use. This is the big data challenge in my eyes.
But while specific technologies (like Hadoop) are fantastic, they are simply one facet of a very complex answer. Business intelligence and reporting are irrelevant without trusted data underneath. A data warehouse appliance can be a foundation for new insights, but on its own it is just a dumb rack of disks. Together, though, all of these technologies combined can offer the next frontier of insight: answers to problems that were once left unsolved.
It brings me back to that discussion in Warsaw that I had with a customer on how to augment their big data solution. They were already neck-deep in using Hadoop to sort through clickstream data on their customers, yet could not leverage it fast enough to make a difference to their customer base. They were looking not just for a 360-degree view of their customers, but also for a way to reach out to their clients (with targeted offers) in real time.
This was the fantastic story that we were telling years ago when we built continuous ingest into the data warehouse and brought InfoSphere Streams to market, yet here (last week) in Poland, a customer was coming to me with the exact same story. Awesome.
As I sit in my office today and finish up my trip reports (really missing that sushi breakfast at the Tsukiji fish market in Tokyo the other week), I’m increasingly excited for 2013 and the advances that we will make this year. Organizations around the globe are making data a priority and leveraging the latest technologies to ensure that they are making better decisions. What I have seen in meetings and briefings so far has been nothing short of astonishing. Clients are coming to the table with some absolutely innovative ideas on how they want to leverage big data technologies, ideas that could (and will) make the front page of Wired magazine one day as they are developed and implemented.
It’s going to be a fun year in big data.
I’m headed out this evening to Saint John, New Brunswick, for the T4G Big Data Congress. I’ll be up there speaking on a panel about our respective big data platforms with associates from a number of Hadoop vendors, including Cloudera and Hortonworks.
It will be interesting to see how the audience is currently interpreting big data and the challenges that they face, and I will share my perspectives from the event in the coming days.
I’m personally looking forward to hearing from Tom Davenport, who will be speaking with us as well. Competing on Analytics came out as I was kicking off my first tour of duty for IBM in the warehousing space, productizing the old ‘balanced configuration unit’ (BCU) into our first data warehouse ‘appliance,’ the Balanced Warehouse. Yeah, it seems like a decade ago. It will be very interesting to hear his current view on big data and the associated application of analytics in his session.
I’ve never actually been up to Saint John, but I have already been warned by the local office that the temperatures will be a bit on the cold side: from 0 to -15, to be exact. Ouch. Reminds me of the MONY tower back in the day in Syracuse; it would always be interesting to make a run to Marshall Street with that negative temperature mark flashing in the sky.
Last week we formally announced and GA’d a slew of new core offerings in our big data platform (there was a reason I had been quiet and offline), and I thought it would be a great time to share them with you all. We started the discussion of our new technologies at the Information on Demand conference at the end of October, but they are now all fully baked in the marketplace.
The 3 new offerings are:
- InfoSphere BigInsights 2.0
- InfoSphere Streams 3.0
- InfoSphere Data Explorer 8.2
I’ll plan on digging deeper into each one of these offerings over the next few posts – but in summary, we are building out a platform portfolio that is unmatched in the world of big data. We are making it easier for organizations of all sizes to leverage and exploit ‘big data’ to make better decisions.
On the Hadoop front, BigInsights 2.0 not only includes the latest support and versions for the Apache Hadoop projects but also starts incorporating technologies from across the big data platform. In addition to making Hadoop more enterprise-ready (rather than a standalone, open source project), BigInsights 2.0 offers a slew of advanced visualizations and tools for users across the organization.
With InfoSphere Streams 3.0, making decisions in real time has just become easier. While Streams has always incorporated a rich programming language (SPL), not every user has had the time to master it on the fly. With version 3.0, InfoSphere Streams incorporates a visual drag-and-drop interface to program your own streams, and yes, that interface also generates the corresponding SPL code so that you can alter and enhance it granularly.
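Streams applications themselves are written in SPL, but the core idea of acting on events as they arrive (rather than in a nightly batch) can be sketched in plain Python. Everything below (the class name, the window size, the event shape) is my own illustration, not a Streams API:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class ClickEvent:
    user_id: str
    ts: float  # event time, in seconds

class SlidingWindowCounter:
    """A toy windowed stream operator: for each arriving event, count
    how many events that user has produced within the last `window` seconds."""
    def __init__(self, window: float):
        self.window = window
        self.events = deque()

    def push(self, event: ClickEvent) -> int:
        self.events.append(event)
        # Evict anything older than the window, relative to the newest event.
        while self.events and event.ts - self.events[0].ts > self.window:
            self.events.popleft()
        return sum(1 for e in self.events if e.user_id == event.user_id)

# A burst of clicks from one user inside the window could trigger an
# immediate action (say, a targeted offer) while the visitor is still there.
counter = SlidingWindowCounter(window=10.0)
for ev in [ClickEvent("alice", 0.0), ClickEvent("bob", 1.0),
           ClickEvent("alice", 5.0), ClickEvent("alice", 12.0)]:
    count = counter.push(ev)
```

The point of the sketch is the shape of the computation: state is bounded by the window, and each event produces a decision-ready result the instant it arrives.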
Last but not least, the Vivisimo acquisition from late spring has already been integrated into the portfolio as the new InfoSphere Data Explorer 8.2 (formerly known as Vivisimo Velocity). It offers fast ROI and minimizes risk by helping customers understand their big data assets and unlock their value, including federated search that leaves data where it is BEFORE you determine whether you are going to use or analyze it.
Yeah, I’m a product guy, and new things are cool, so I get a little nerded out when we release offerings like this. In my next blog installments, I’ll drill into each one of these offerings and show you why we added the features and capabilities that we did (yes, we did listen), and most importantly how they help you with your big data initiatives.
Entering this world of Big Data headfirst, I am overwhelmed by the amount of buzz and hype surrounding the topic. The other day I read the article ‘How Big Data Became So Big’ by Steve Lohr on the NY Times website, and it really set the stage for the world’s Big Data challenge. You know something has hit the big time when Dilbert references it in passing.
Per my previous post, I do not view Big Data as a product (or as a group of products), but instead as a challenge that organizations face in their journey to analyze ALL of the data made available to them to make better decisions. Hadoop is one tool to get there, yet not the only one. Over the years we have gone from machine-readable punch cards to petabytes of data stored on an array of different disk types, from commodity through high-performance solid state.
Great: lots of storage for data means more clutter, just like my email account. It could end up being an episode of Hoarders for techies.
It’s not just the analysis of the data that is important (think of a superfast data warehouse appliance cranking through queries, à la Netezza) but also the determination of whether the data is actually worth being stored. It is like one big garage sale. There is so much to dig through, so many items old and new. You sure as heck are not going to take it all home with you, as most of the items are garbage and not needed; they would just sit around in your house (warehouse, that is), waste premium storage, and perhaps trip you up on the way to the car (or to your analytical appliance) that you have revved up and raring to go.
This is where Hadoop comes in handy. Hadoop sorts through your ‘digital exhaust’ (as well as any other massive load of data) and culls insight or information from it. That result can then be sent to the data warehouse for analysis. It does not have to be sent there, but in most cases I’m assuming that many folks would like to include the new insights in their analytics.
Think customer churn models: if Hadoop were able to determine one or two hidden or unknown traits of a customer segment from, let’s say, web click-through routines (the exhaust), the analysis would be much more accurate and theoretically save (or make) the organization money.
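As a toy sketch of that idea, here is the map/reduce shape of such a job in plain Python. The page names, the ‘visited the cancellation FAQ’ trait, and every identifier are hypothetical, made up for illustration rather than taken from any real dataset:

```python
from collections import defaultdict

# Raw 'digital exhaust': one record per page view (hypothetical data).
clicks = [
    {"customer": "c1", "page": "/support/cancel-faq"},
    {"customer": "c1", "page": "/pricing"},
    {"customer": "c2", "page": "/home"},
    {"customer": "c1", "page": "/support/cancel-faq"},
]

def map_phase(record):
    # Emit (customer, 1) for each visit to a churn-signal page.
    if record["page"].startswith("/support/cancel"):
        yield (record["customer"], 1)

def reduce_phase(pairs):
    # Sum the emitted counts per customer.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

pairs = [p for r in clicks for p in map_phase(r)]
features = reduce_phase(pairs)
# `features` is the condensed insight (e.g. cancellation-page visits per
# customer) that would be loaded into the warehouse to enrich a churn model.
```

The terabytes of raw clicks stay in Hadoop; only the small derived trait travels to the warehouse, which is exactly the augmentation pattern described above.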
There are many ways that Hadoop technologies can be part of your enterprise data warehouse or big data platform; this was just one simple example that I like to use to get my head around the technology.
At the end of the day, Hadoop enables analysis of Big Data problems. It might not answer them all on its own, but it is a key player (if not ‘the’ key player) in Big Data analytics.