Tag Archives: FLASH

Xtrem Hotcakes!


WOW, the talk internally at EMC is that we might have become the market leader in all-flash arrays last quarter!  What’s so impressive? The product went GA halfway through the quarter!! So what’s so Xtrem’ly hot about this product?

Consistency… it simply does what it advertises to do, and it does it all the time!!  Let me explain: it’s like looking at the fuel consumption of cars. You read the specification and it shows the absolute best consumption, which in reality is unachievable (or at least for me).  Well, it turns out that conservative EMC publishes the ‘on-road/real’ numbers, while the others play the theoretical specs.  Like a car, the only way to work out the real numbers is to fill it up and run it for a while (perhaps putting your foot down occasionally)!  That is what you should do when you test an all-flash array:
–  Fill it up: to get an indication of the true usable capacity.
–  Run it for a while: to see what happens to performance over time.
–  Put your foot down: to see what happens!

On the last point, I’m guessing you are looking at all-flash for performance, so you have to stress the box to see what happens. It’s not easy, because you aren’t used to your servers being the bottleneck, so get some beefy servers and load it up!  I am warning you that you might be disappointed at what happens: on some arrays, as the box gets busy, services get shut down and it goes into a catastrophic spiral, bleeding capacity and/or performance.

At SNIA we spent a great deal of time working out ways to test and classify the performance of flash devices so that you can compare them. This work had to be done because of the way flash works as a medium: essentially it’s page-oriented, it needs some sort of garbage collection, it requires wear levelling to improve durability, and so on. If you are interested, SNIA has all the information in the Solid State Storage Special Interest Group (here).
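The heart of that SNIA testing work is the idea of measuring a device only once it has reached “steady state”, after preconditioning has filled it and garbage collection has settled in. Here is a minimal sketch of that check, not the official SNIA code: it assumes the Performance Test Specification’s rough criteria that, over a five-round window, the spread of results stays within 20% of the window average and the best-fit slope drifts by no more than 10% of it.

```python
def steady_state(iops_by_round, window=5):
    """Return the index of the first round whose trailing window
    satisfies the steady-state criteria, or None if none does."""
    for end in range(window, len(iops_by_round) + 1):
        win = iops_by_round[end - window:end]
        avg = sum(win) / window
        # Criterion 1: max data excursion within 20% of the average.
        if max(win) - min(win) > 0.20 * avg:
            continue
        # Criterion 2: least-squares slope excursion within 10% of it.
        xs = range(window)
        x_mean = sum(xs) / window
        slope = (sum((x - x_mean) * (y - avg) for x, y in zip(xs, win))
                 / sum((x - x_mean) ** 2 for x in xs))
        if abs(slope) * (window - 1) > 0.10 * avg:
            continue
        return end - 1
    return None

# A fresh SSD typically starts fast, then settles once garbage
# collection kicks in: the early rounds never pass, the late ones do.
trace = [90000, 70000, 55000, 42000, 35000, 31000, 30500, 30200, 30100, 30000]
print(steady_state(trace))  # settles only near the end of the trace
```

Note how the first few rounds of the trace look spectacular; quoting those is exactly the “theoretical spec” trap above.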

We’ve seen this behaviour before. When I started at EMC I was responsible for introducing a new way to connect to storage, called Storage Area Networks, or SAN for short. (Yes, I’ve been here that long!) EMC, as always, tested out the full configuration, did the eLab job and published the real-world numbers. We got a shock when we saw the competitor’s numbers: a factor of about 100 higher!!  An interesting trick: they had worked out that the Fibre Channel chips had a small buffer in them, so their test wrote a small piece of data to the chip and then read it back again. Fantastic, wasn’t it: absolutely valid, as they were writing to and reading from the array, while at the same time absolutely useless as a measure of what would happen on your site.

I used to work with a guy, George Z, some of you will know him, who had a way of giving you an absolutely accurate answer which was at the same time completely useless. Don’t get caught by this; it could be costly!


Storage is not Snorage This Year!


An Australian journalist who is well known, and one of the doyens of IT news, coined the term “Storage is Snorage” about 10 years ago! For the most part he has been right! EMC and the industry have enhanced the hardware by riding the price/performance curves of the underlying components… disks have grown 1000x, processors have become 10,000x faster and RAM cheaper. However, until recently, the basic architecture has not really changed!!

But that has all changed and two new architectures reach prime-time this year!

The first is scale-out. Yes, I know Isilon has been in the market for over 5 years, but in more niche application areas. Now two trends are converging: firstly, mainstream computing environments are experiencing massive growth in unstructured data and the traditional architectures are creaking; and secondly, Isilon now has ‘Enterprise’ features.

A quick word of warning when you look around: the value is in the architecture, not the fact that there is a single file system! I say this because whenever there is a major advance in technology, you get the ‘horseless carriages’: people who take the old technology, substitute some part of it and think it’s all new. (Or, less kindly: you can put lipstick on a pig, but it’s still a pig!) The reason I make this point is that ‘traditional’ storage, with its RAID groups and LUN size limits (or aggregates), is the source of the management nightmare when you scale to the petabyte level. Putting a wrapper or layer above this does not remove those management overheads. To do scale-out, you need to design from the ground up.

Talking about ground-up design brings me to the second exciting architecture this year: the all-flash array. Once again, all storage designed before this had one design consideration: ‘locality matters!’ Because of mechanical drives, the position of the head is a major determinant of performance and throughput. (Throw random requests at a disk drive and it will perform like a stuck pig; order them and it screams!)

Now start from scratch and design around a storage medium that has no locality (no penalty for writing to or reading from anywhere). That is a major change in design. Secondly, RAID: well, that becomes an absurd notion, as there is no ‘disk’ involved, just an address space!

Back in a FLASH!

After a long hiatus it’s time to begin this blog again; like my weight loss regime, I’ve run out of excuses and it’s time to put the effort in!

 There has been so much going on and so many interesting developments in the industry.. it’s time to throw some ideas out there and let’s see what interests you..

How would you build a storage array if you knew nothing about disk drives? I guess you would do what XtremIO has done! I wrote the following for a SNIA newsletter, but you might find it interesting..

“Adoption of new technologies always seems to follow the same maturation cycle. History shows that we first think of new technologies with the old mindset; remember the “horseless carriage”? Then eventually we re-assess the use-cases and take full advantage of the innovation. Imagine a world today without motorised transportation.

Today the same pattern is playing out in the storage industry!  For a few years now, leading storage vendors have been incorporating FLASH storage into their arrays; the horseless carriage. The FLASH devices are being used instead of mechanical disks, but with the ‘old’ mindset. Even so, doing this has created tremendous value for the market, because there has been a fundamental shift: use the FLASH to deliver performance, while mechanical disks provide the low-cost capacity. With the right amount of automation the cost of storage systems can be reduced, in some cases by around 30%.
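To see why a small flash tier changes the economics, compare an array sized entirely on spindle count with a tiered one. All the figures below are illustrative assumptions (they are not EMC pricing or sizing data); the point is the shape of the calculation, not the numbers.

```python
import math

REQUIRED_IOPS = 50_000
REQUIRED_TB   = 100

# Hypothetical per-drive figures: IOPS, capacity (TB), cost ($).
IOPS_15K,  TB_15K,  COST_15K  = 180,    0.6, 400    # 15k RPM disk
IOPS_SSD,  TB_SSD,  COST_SSD  = 20_000, 0.2, 2_000  # flash drive
IOPS_SATA, TB_SATA, COST_SATA = 80,     2.0, 150    # SATA disk

def all_disk_cost():
    # Without flash, the IOPS requirement, not capacity, sets the
    # spindle count, so you buy far more disk than you can fill.
    n = max(math.ceil(REQUIRED_IOPS / IOPS_15K),
            math.ceil(REQUIRED_TB / TB_15K))
    return n * COST_15K

def tiered_cost(hot_fraction=0.9):
    # A small flash tier serves the hot 90% of I/O; SATA holds the
    # capacity and the cold remainder of the I/O.
    ssds  = math.ceil(REQUIRED_IOPS * hot_fraction / IOPS_SSD)
    satas = max(math.ceil(REQUIRED_IOPS * (1 - hot_fraction) / IOPS_SATA),
                math.ceil(REQUIRED_TB / TB_SATA))
    return ssds * COST_SSD + satas * COST_SATA

print(all_disk_cost(), tiered_cost())
```

With these made-up drive prices the gap is far larger than 30%; the ~30% system-level figure quoted above also folds in controllers, enclosures and software, which drive-only arithmetic ignores.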

However, is there more benefit to using FLASH as a storage medium and re-thinking the whole thing? The answer is yes, and hence a number of FLASH start-ups have sprung up. These companies are engineering storage products from the ground up to use FLASH as the storage media, and the results are fantastic.

As an example of this change, consider the RAID mechanism for protecting data. One of the fundamental assumptions of the current RAID algorithms is that you are dealing with a set of physical disk drives. The algorithms have therefore been designed to understand the locality of the data and, in essence, to overcome the physical limitations of physical drives, such as seek times.

Now consider FLASH memory: there are no seek times, and in fact there is no concept of ‘devices’; add FLASH and you simply get more capacity! With these factors in mind, the data protection algorithms can be rewritten to provide better protection with less overhead! One vendor has shown how ‘double parity’ protection (equivalent to RAID-6) carries an average overhead of just 1.2 I/Os per write, dramatically different from the reads and writes that have to be done in a traditional RAID-6 system!
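For the comparison, the classic RAID-6 small-write penalty is worth writing down. A hedged back-of-envelope sketch (the 1.2 figure is the vendor number quoted above, not something measured here):

```python
def raid6_small_write_ios():
    """Back-end I/Os a traditional RAID-6 array needs to update
    one block in place using read-modify-write."""
    reads  = 3  # read old data, old P parity, old Q parity
    writes = 3  # write new data, new P parity, new Q parity
    return reads + writes

XDP_AVG_IOS = 1.2  # average per-write overhead quoted for the
                   # flash-native double-parity scheme

penalty = raid6_small_write_ios()
print(penalty, penalty / XDP_AVG_IOS)  # 6 I/Os, i.e. 5x the quoted figure
```

Six back-end I/Os per host write versus 1.2 is the “dramatically different” in the paragraph above.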

This is just the beginning; as we move into the ‘big data’ world, it’s technologies like this which will enable us to make best use of all the data that is available, and make a profound impact on the world we live in!”

Australian Insurance Company realises that FLASH is the new DISK!

By Clive Gold, CTO Marketing, EMC Australia and New Zealand.

A large insurance company in Australia is replacing a popular unified storage system with VNX, because their average utilisation was 30% and could not be improved! In the past I’ve been critical of vendors whose systems do not provide a very good conversion of RAW to USABLE; the engineering decisions made traded off capacity utilisation for other features or functions. However, that is not the case here!!

In this case, the reason the drives could not be fully utilised was performance! This particular vendor has not evolved their architecture and hence has to use striping to beef up performance.  The I/O requirements meant that they had to size the array on the number of spindles, resulting in a massive cost per byte stored.

I’m not here to pick on this particular vendor, but to highlight how important this transition is and why FLASH is critical to your future. Moore’s law means that every 18 months CPUs double in performance. This means you need to provide twice the I/O capability to keep the compute model in balance. In the old world, that meant doubling the number of spindles every 18 months, or wasting the computing power.

If you do the maths, today the average server needs 3912 drives to meet its potential demand. (I know I will get arguments about cache etc.) That is a lot, and it is the cause of this 30% problem today. The kicker is that in five years’ time you will need around 40 thousand drives per server!!   The old model is broken!!
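The maths is easy to check, assuming, as the post does, that I/O demand doubles with CPU performance every 18 months (taking the 3912-drive starting figure as given):

```python
def drives_needed(today, months):
    """Spindles needed after `months`, if demand doubles every 18 months."""
    return today * 2 ** (months / 18)

TODAY = 3912  # the post's "average server needs 3912 drives" figure
print(round(drives_needed(TODAY, 60)))  # five years out: ~39 thousand
```

Five years is 60/18 ≈ 3.3 doublings, so 3912 drives grows roughly tenfold — close to the “40 thousand drives per server” figure.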

Hence when you look under the covers of the VNX you discover the multiple dimensions of scaling that are available to support this rapid evolution into a FLASH rich storage array.  So in summary.. it does seem like FLASH is the new disk… and if you speak to us about protecting your data you soon realise that Disk is the new Tape!!

Do we really know how revolutionary FLASH and FAST are?

By Clive Gold, CTO Marketing, EMC Australia and New Zealand

I’m in Perth for the beginning of this week and by chance got invited to an update meeting with one of our best partners, Dimension Data. Isn’t it funny that these chance meetings normally turn out to be the most valuable?

This was because it was so different from the previous meeting I had. In that meeting the statement is made, “Your product is broken because it doesn’t do it this way!” I then outline the pros and cons of the different approaches and why I think EMC’s engineering decisions produce the best result. Then the person says, “Oh, I’d better have a look at that!”, meaning that they had zero experience in what they were talking about.

On the other hand, Ken from Dimension Data says, “You know, FAST makes SANs simple and affordable for unified communications!”, which I didn’t know, so he explains:

“Unified communications systems like call manager need a fairly high I/O rate, so as you scale, the IOPS required increase dramatically. In the past dedicated servers with DAS were used, with all the inefficiencies that DAS brings. SAN was too expensive, as you had to use more spindles to get the IOPS, but you didn’t need all the space. Now FLASH and FAST allow us to use smaller storage systems with just enough FLASH to provide the IOPS.”
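Ken’s point is a sizing calculation. With purely hypothetical numbers (none of these figures come from the conversation), it looks like this:

```python
import math

required_iops = 8_000  # assumed call-manager workload at scale
required_tb   = 4      # assumed space the application actually needs

def spindles(iops_per, tb_per):
    """Drives needed when one pool must satisfy both IOPS and capacity."""
    return max(math.ceil(required_iops / iops_per),
               math.ceil(required_tb / tb_per))

# All mechanical disk: the IOPS term dominates and most capacity
# sits idle - the expensive SAN Ken describes.
disk_only = spindles(iops_per=180, tb_per=0.6)

# With a small flash tier absorbing the IOPS (say a pair of SSDs
# comfortably covering the 8k IOPS), the disk layer behind it is
# sized on capacity alone.
flash_drives = 2
disk_behind_flash = math.ceil(required_tb / 0.6)

print(disk_only, flash_drives + disk_behind_flash)
```

With these assumptions the disk-only array needs 45 spindles for a 4 TB workload, while flash plus capacity disk needs 9 drives in total; that is the “smaller storage systems with just enough FLASH” effect.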

I don’t know how many times journalists from ARN, CRN and other channel publications ask, “What’s the value the channel adds to this or that?” This sure is an example of the systems integrator’s value in our IT eco-system.

So, how have FLASH and FAST helped you reduce cost and solve a problem?