By Clive Gold, CTO Marketing, EMC Australia and New Zealand
There has been a recent spate of ‘benchmark’ results released in the storage industry. (EMC is certainly part of this, (here), (here) and (here), so I’m not distancing us from it.) One of these prompted a lengthy blog post from a competitor, who seized on an obvious PR copywriter’s blunder and trashed the result. A commenter pointed out that the typo was obvious from the release itself, since it went on to expand on the headline number (which, by the way, was almost double that vendor’s best effort). After wiping the proverbial egg off his face, the blogger admitted he had not read the whole release — not even the very next sentence, which made the mistake obvious. Wow! It was a lot of fun to watch, but it also got me thinking: how relevant is the pure ‘speed test’ today and going forward?
My answer is yes: done right, these tests are significant, but to be useful they need to provide information that is relevant within your own context. For example:
- Has the test been ‘ratified’ by the industry? I remember, years ago when I worked for a server manufacturer, we continuously ran the TPC-A and TPC-C test suites from the Transaction Processing Performance Council (TPC). These OLTP benchmarks provided a ‘typical’ performance metric for the workload, designed and ratified by end-users.
- Is the ‘result’ metric applicable to my decision? The TPC results reported both maximum transaction throughput and a price/performance figure ($/Transaction). With those two numbers you could see whether a system could do your work and how much it would cost!
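To illustrate how those two TPC-style metrics combine into a buying decision, here is a minimal sketch. All the figures and system names below are invented placeholders, not real benchmark results:

```python
# Toy comparison of two hypothetical systems using TPC-style metrics:
# peak throughput (tpmC, transactions per minute) and price/performance ($/tpmC).
# Every number here is made up for illustration only.

systems = {
    "System A": {"tpmC": 500_000, "price_per_tpmC": 2.50},
    "System B": {"tpmC": 900_000, "price_per_tpmC": 4.00},
}

required_tpmC = 600_000  # the throughput our workload actually needs

for name, s in systems.items():
    meets_need = s["tpmC"] >= required_tpmC
    # The price/performance ratio times throughput implies a rough system cost.
    implied_cost = s["tpmC"] * s["price_per_tpmC"]
    print(f"{name}: meets requirement: {meets_need}, implied cost: ${implied_cost:,.0f}")
```

The point of the two metrics together: the cheaper system per transaction may not meet the throughput requirement at all, and the faster one may cost more than the work justifies.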
Today, a number of storage tests have stood the test of time, like the SPEC benchmarks (here), which, interestingly enough, have become more popular as file protocols are used in virtual environments. Contrast that with benchmarks developed by storage vendors, for storage vendors, to show how massive a number they can generate; I’m not convinced of their applicability to real-world use cases.
New technologies also pose new challenges, for example the use of solid-state storage devices. While their designs work around write-endurance and write-performance issues, they introduce new performance variables. For this reason SNIA’s Solid State Storage Initiative (SSSI) has been working on tests that measure and profile the performance of these solid-state drives. (It’s worth reading: it turns out the way these devices are constructed affects their performance over time, as well as how they perform under different loads.)
Now, being able to evaluate the alternatives with the same yardstick is useful (how many kilometres per litre does this car achieve?), but is it sufficient? I think the idea that performance is everything is becoming outdated, because for most traditional applications and workloads, current-generation machines provide the performance to satisfy the requirement. Two big points to make:
1) I’m not making excuses: EMC’s technologies currently hold the high-water mark in just about every industry-standard benchmark: application performance with Oracle and SAP; throughput with VNX; high-performance computing with VNX; SPEC results; energy efficiency; and so on. (And usually by a large margin!)
2) Not all storage arrays on sale today are current-generation. To me, current technology leverages FLASH (i.e. its full capability, both as read/write storage and as a cache or tier of disk); it embraces the latest CPU technologies; it provides ‘Choice and Control’ (i.e. connectivity, service-level control, etc.); and lastly it is automated (i.e. automated provisioning, FAST tuning, auto-healing, etc.).
Having said that, while speed remains a factor, the other pieces that make up the total cost of ownership are becoming more important:
- The environmental cost of energy consumption and heat output;
- The ‘labour’ cost of end-to-end management of the system;
- The opportunity cost of the flexibility to meet changing organisational needs.
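The TCO components above can be sketched as a simple sum over the life of the system. Every figure below is an invented placeholder, not EMC data; the point is only that the recurring costs can dwarf the purchase price:

```python
# Hedged sketch: a naive total-cost-of-ownership calculation over the
# components listed above. All numbers are made-up placeholders.

acquisition = 200_000          # purchase price of the array
energy_per_year = 15_000       # power and cooling (environmental cost)
labour_per_year = 30_000       # end-to-end management (labour cost)
opportunity_per_year = 10_000  # estimated cost of inflexibility (opportunity cost)
years = 5

recurring = energy_per_year + labour_per_year + opportunity_per_year
tco = acquisition + years * recurring
print(f"{years}-year TCO: ${tco:,}")
print(f"Recurring share: {years * recurring / tco:.0%}")
```

Even in this toy example the recurring costs exceed the acquisition cost over five years, which is why a pure speed benchmark tells you so little about the bill.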
For example, uptake of the EMC VNX has ramped so quickly that it has in some ways taken EMC by surprise. In parts of the world where IT owns the power bill (not in Australia!), the fact that the VNX delivers its performance and storage at about a third of the power consumption of competing manufacturers’ machines has proven a big success. In the ‘virtualisation savvy’ ANZ market, deep integration with the two most-used virtualisation flavours (over 70 points of integration with VMware) makes the solution simpler and more functional, as well as lowering total cost of ownership.
As the basic technologies continue to provide more and more raw performance, I would think it is the intelligence, integration and innovation that go into the product that will matter more than the benchmark result.