Headed out this evening up to Saint John, New Brunswick, for the T4G Big Data Congress. I’ll be up there speaking on a panel with associates from a number of Hadoop vendors, including Cloudera and Hortonworks, about our respective big data platforms.
It will be interesting to see how the audience is currently interpreting big data and the challenges that they face, and I will share my perspectives from the event in the coming days.
I’m personally looking forward to hearing from Tom Davenport, who will be speaking with us as well. Competing on Analytics came out as I was kicking off my first tour of duty for IBM in the warehousing space, productizing the old Balanced Configuration Unit (BCU) into our first data warehouse ‘appliance’, the Balanced Warehouse. Yeah, it seems like a decade ago. It will be very interesting to hear his current view on big data and the associated application of analytics in his session.
I’ve never actually been up to Saint John, but I have already been warned by the local office that the temperatures will be a bit on the cold side. From 0 to -15, to be exact. Ouch. Reminds me of the MONY tower back in the day in Syracuse – it was always interesting to make a run to Marshall Street with that negative temperature mark flashing in the sky.
Last week we formally announced and GA’d a slew of new core offerings in our big data platform (there was a reason I have been quiet/offline), and I thought it would be a great time to share them with you all. We started the discussion of these new technologies at the Information on Demand conference at the end of October – but they are now all fully baked and in the marketplace.
The three new offerings are:

- InfoSphere BigInsights 2.0
- InfoSphere Streams 3.0
- InfoSphere Data Explorer 8.2
I’ll plan on digging deeper into each one of these offerings over the next few posts – but in summary, we are building out a platform portfolio that is unmatched in the world of big data. We are making it easier for organizations of all sizes to leverage and exploit ‘big data’ to make better decisions.
On the Hadoop front, BigInsights not only updates to and includes the latest supported versions of the Apache Hadoop initiatives but also starts implementing technologies from across the big data platform. In addition to making Hadoop more enterprise-ready (rather than a standalone, open source project), BigInsights 2.0 offers a slew of advanced visualizations and tools for users across the organization.
With InfoSphere Streams 3.0, making decisions in real time has just become easier. While Streams has always incorporated a rich programming language (SPL), not every user has had the time or energy to master it on the fly. With version 3.0, InfoSphere Streams incorporates a visual ‘drag-and-drop’ GUI for programming your own streams… and yes, that interface also generates the proper SPL code, so you can still alter and enhance it at a granular level.
Last but not least, the Vivisimo acquisition from late spring has already been integrated into the portfolio as the new InfoSphere Data Explorer 8.2 (formerly known as Vivisimo Velocity). It offers fast ROI and minimizes risk by helping customers understand their big data assets and unlock their value – including federated search – leaving data where it is BEFORE you determine whether you are going to use or analyze it.
Yeah – I’m a product guy, and, well, new things are cool – so I get a little nerded out when we release offerings like this. In my next blog installments, I’ll drill into each of these offerings and show you why we added the features and capabilities that we did (yes, we did listen) – and most importantly, how they help you with your big data initiatives.
Entering into this world of Big Data headfirst, I am overwhelmed by the amount of buzz and hype surrounding the topic. The other day I read the article ‘How Big Data Became So Big’ by Steve Lohr on the NY Times website, and it really set the stage for the world’s challenge with Big Data. You know something has hit the big time when Dilbert references it in passing.
Per my previous post, I do not view Big Data as a product (or as a group of products), but instead as a challenge that organizations face in their journey to analyze ALL of the data made available to them to make better decisions. Hadoop is one tool to get there – yet not the only one. Over the years we have gone from machine-readable punch cards to petabytes of data stored on an array of different disk types – commodity through high-performance solid state.
Great – lots of storage for data – more clutter – just like my email account. It could end up being an episode of Hoarders for techies.
It’s not just the analysis of the data that is important (think of a superfast data warehouse appliance cranking through queries, à la Netezza) but also determining whether the data is actually worth storing. It is like one big garage sale: there is so much to dig through, so many items old and new. You sure as heck are not going to take it all home with you, as most of the items are garbage and not needed – they would just sit around in your house (warehouse, that is), waste premium storage, and perhaps trip you up on the way to the car (or to your analytical appliance) that you have revved up and raring to go.
This is where Hadoop comes in handy. Hadoop sorts through your ‘digital exhaust’ (as well as any other massive load of data) and culls insight or information from it. That result can then be sent to the data warehouse for analysis – it does not have to be sent there, but in most cases I’m assuming many folks would like to include the new insights in their analytics.
Take customer churn models: if Hadoop were able to determine one or two hidden or previously unknown traits of a customer segment from, let’s say, web click-through routines (the exhaust), the analysis would be much more accurate and could theoretically save (or make) the organization money.
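To make that idea concrete, here is a minimal Python sketch in the spirit of a Hadoop Streaming mapper/reducer pass over clickstream logs. Everything here is illustrative and assumed – the log format, the ‘churn-related’ URLs, and the threshold of three clicks are hypothetical, not a real churn model – but it shows the shape of culling a new customer trait from the exhaust before handing it to the warehouse.

```python
# Hypothetical sketch: derive an "at-risk" customer trait from raw
# clickstream logs, MapReduce-style. Field names, URLs, and the
# 3-click threshold are illustrative assumptions.
from collections import defaultdict

def mapper(log_line):
    # Each raw line is assumed to be: "customer_id,url,timestamp"
    customer_id, url, _ = log_line.split(",")
    # Emit a signal when the click touches a churn-related page
    if "/cancel" in url or "/support/complaint" in url:
        yield customer_id, 1

def reducer(customer_id, signals):
    # Aggregate signals per customer; flag the customer once they
    # cross the (assumed) threshold of 3 churn-related clicks
    total = sum(signals)
    if total >= 3:
        yield customer_id, {"at_risk_clicks": total}

def run_job(log_lines):
    # Local stand-in for Hadoop's shuffle/sort between map and reduce
    grouped = defaultdict(list)
    for line in log_lines:
        for key, value in mapper(line):
            grouped[key].append(value)
    results = {}
    for key, values in grouped.items():
        for cust, trait in reducer(key, values):
            results[cust] = trait
    return results

logs = [
    "c1,/home,2012-11-01T10:00",
    "c1,/cancel,2012-11-01T10:05",
    "c1,/cancel,2012-11-02T09:00",
    "c1,/support/complaint,2012-11-03T14:00",
    "c2,/home,2012-11-01T11:00",
]
print(run_job(logs))  # only c1 crosses the threshold
```

The small dictionary this emits is the kind of distilled result you would then load into the warehouse to enrich an existing churn model, rather than shipping the raw exhaust over.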
There are many ways that Hadoop technologies can be part of your enterprise data warehouse or big data platform – this was just one simple example that I like to use to get my head around the technology.
At the end of the day, Hadoop enables analysis of Big Data problems. It might not answer them all on its own, but it is a key player (if not ‘the’ key player) in Big Data analytics.
About a month or so ago, we installed one of our Smart Analytics System 5710s in our executive briefing center here in Raleigh, North Carolina. Having picked up my video camera the day before for my current trip to Nanning, China, I decided to give it a test run during the installation.
Remarkably, the installation took under 30 minutes. We crafted a two-minute (or so) video that walks through highlights of the installation and some actual screenshots of the dashboards and reports that you get right out of the box.
Take a look and let me know what you think: