I just wanted to say I told you so! (Thanks for that; I do feel better now.) But don’t go away — I have a way for you to keep one of your New Year’s resolutions!
Last year I said that “Storage was not Snorage” (here), and was I right?
Well, at the end of the year I found this article, “The 10 Coolest Storage Startups of 2013” (here). Isn’t that great? Firstly, who would have dreamt of a headline about storage a couple of years back? And secondly, they had to choose the 10 coolest ones!!
Silicon Valley has turned back to enterprise IT! For a number of years the buzz was all about the ‘cloud companies’ and everyone wanted to be the next Facebook or Google, but now the money seems to be squarely behind the enterprise. The venture capitalists have placed their bets, and they don’t like to get it wrong!
In the storage space it’s proven to be more interesting than I had predicted! The current ‘battleground’ is between the traditional way of building storage systems and the use of standardised components with intelligent software. The fundamental attribute of storage is resiliency; after all, the purpose of persistent storage is to persist! While the traditional approach to providing ‘no single point of failure’ was to engineer a system with at least two of everything, the new ‘software resilient’ architecture uses smart software running on many standardised systems.
I’m always amazed that we technical people, who should be rational and logical, tend to lose perspective in these times of religious fervour! The issue is that the traditional architectures provide far more than just resilience: they are engineered for a specific purpose and thus deliver against a set of service levels, and they resolve the sharing issues like QoS. On the other side of the divide, the software developers recognised from the get-go that the system has to scale out and be reliable across “not-that-reliable” hardware!
So what is the right way ahead? Well, I have to say it looks like BOTH ways have their pros and cons, and as always in IT, it depends!! So what are we seeing? Firstly, traditional workloads tend to gravitate to traditional architectures, and with advances in flash and flash arrays there are new opportunities and new ways to solve old problems. For example, instead of having multiple physical copies of your database (to separate the loads), today using XtremIO you can run them all on the same array, using the snap copies, and get performance and reliably low latency to boot!
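Why do snap copies make this possible? The key idea is copy-on-write: a snapshot initially shares every block with its source, so it costs almost nothing to create, and only the blocks written afterwards diverge. Here is a deliberately tiny Python sketch of that idea (my own illustration of the general technique, not XtremIO’s actual implementation):

```python
# Toy copy-on-write snapshot: a volume is a map of block number -> data.
# Taking a snapshot copies only the block map, never the data itself,
# so a "copy" of a multi-terabyte database is effectively instant.

class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})  # block number -> data

    def snapshot(self):
        # Copies the map only; both volumes still point at the same data
        return Volume(self.blocks)

    def write(self, block_no, data):
        # Only the written block diverges from the source volume
        self.blocks[block_no] = data

    def read(self, block_no):
        return self.blocks.get(block_no)

prod = Volume({0: "jan-sales", 1: "feb-sales"})
test = prod.snapshot()       # instant "copy" for the test/dev workload
test.write(1, "scrambled")   # test diverges on block 1 only
# prod.read(1) is still "feb-sales"; test.read(0) still shares "jan-sales"
```

On an all-flash array the random-read penalty of sharing blocks largely disappears, which is why running production, test, and reporting copies side by side on one array becomes practical.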
Then at the other end of the spectrum you have Big Data workloads, where keeping the data close to the CPU pays dividends in massive processing jobs! So a system like ScaleIO has some interesting applications.
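“Keeping the data close to the CPU” is the data-locality principle: rather than dragging blocks across the network to the compute, you send each task to the node that already stores its input. A minimal sketch of the idea (an illustration of the general scheduling principle, not ScaleIO’s actual scheduler; the node and block names are made up):

```python
# Data-locality scheduling: assign each task to the node that already
# holds its input block, so large inputs never cross the network.

block_location = {"b1": "node-a", "b2": "node-b", "b3": "node-a"}

def schedule(task_blocks):
    """Map each input block to the node where it already resides."""
    return {block: block_location[block] for block in task_blocks}

plan = schedule(["b1", "b2", "b3"])
# Every task runs where its data lives: zero bulk data transfer.
```

For a job scanning terabytes, avoiding that network hop is where the “massive processing” dividend comes from.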
However, in life the ultimate performance may not always be the best solution once you consider all the other aspects of running a production system. Hence systems like Isilon have become very popular in the Big Data space: a scale-out architecture that implements the Hadoop file system (HDFS). Here, for a small drop in ultimate performance, you gain an enterprise-class storage system that does not take a team of engineers to maintain.
In the end I think the real battleground is how to seamlessly use the right tool for the right job. The challenge is the glue that makes these tools look like one system, with little to no duplication, no added management burden, and automated, optimal placement of data at any point in time. Now that’s interesting!
So when your storage vendor arrives with their hammer and tries to convince you that everything is actually a nail, don’t get religious about it. Using the right tool for the right job is always the best option: not only is the TCO lower and the performance better, but you get to spend nights and weekends with the family. (And wasn’t that one of your New Year’s resolutions?)