Monthly Archives: September 2011

EMC’s New Datacentre – While Larry “eats his own dog food”, we would like to think we are “sipping our own champagne”

By Clive Gold, Marketing CTO, EMC Australia and New Zealand.

When I joined EMC we were a ‘mainframe’ customer through and through; how times have changed! EMC has just opened its shiny new, state-of-the-art datacentre in Durham, North Carolina, USA. There are two major changes from past strategy: firstly, it is outside the Massachusetts area, and secondly, it is designed from the ground up to be a cloud datacentre.

This US-scale facility, at half a million square feet, includes both a major processing centre for EMC and a global research and development centre we call a “Center of Excellence.”  Two rationales drove the move: high energy costs, and improving the disaster recovery/business continuity position by no longer having all datacentres in the same state.

Some interesting pieces to this move: to maximise the return over the next 20 years, EMC has made flexibility, the ability to adapt to new technologies with minimal disruption, a priority. As a result, energy efficiency, modularity and low-impact construction were the priorities in the design. Some interesting stuff on the energy-efficiency side:

–          The facility is broken into three 150,000 square-foot modules, each with separate power and cooling/air-handling units.

–          Cold-aisle containment is provided for the high load-density rack rows (12 kW per rack), and hot-aisle/cold-aisle rack-row arrangements are used to increase cooling control and efficiency.

–          The facility also leverages lots of other innovations, including a rooftop water-collection system, free air cooling for most of the year, and UPS flywheel technology to eliminate batteries!

From a ‘cloud’ point of view, all applications run on VMware vSphere, beginning with about 350 applications and 6 PB of data being moved into this datacentre over the next six months.  The hardware? You guessed it: Vblock!  In fact, it took less than a week to stand up the infrastructure for the new SAP ERP (v6) application: an infrastructure that sizes up to 625 GHz of processing, 4.6 TB of RAM and 246 TB of storage (25% of the production capacity), all on a Cisco Nexus infrastructure.  (I believe that EMC is currently moving off the Oracle system onto SAP.) As a comparison, they worked out it would have taken just over three months to build the same infrastructure in the old datacentre, with its overhead cabling design and power and cooling setup; with Vblock, the result was staggering.

So some may think it’s not too radical a change over the last five years or so, taking the EMC IT organisation from a ‘mainframe’ mindset to a ‘cloud’ one. But what it all means, I guess, could be debated for some time to come!


Big Data, Big Hit in Canberra; Thanks to Dr Williamson

By Clive Gold, CTO Marketing, EMC Australia and New Zealand

How often do you attend an event that has everyone engaged and participating right up to, and even after, the end? This morning we had an event in Canberra around the topic of Big Data. The room was packed, Dr Darrell Williamson (here), Deputy Director of the CSIRO ICT Centre, was fascinating, and I managed not to put everyone to sleep!

Maybe this was just good timing, with Senator Kate Lundy having just announced the “Open Technology Foundation” (here), or maybe it is just the right time for us to think about how to use the data we have. Either way, it was a great session.

Darrell covered a wide range of applications and issues in which large amounts of data were the common theme.  In some cases scary amounts of data, such as the Square Kilometre Array Pathfinder project, which will spew out 72 TB of data each second... which pales into insignificance when you consider this is a 1:100 model of the full-size SKA, destined to push out 100 times as much data.

But Big Data is not just about these massive and specialised applications. Dr Williamson also touched on areas such as sentiment analysis, using Twitter feeds to analyse the public’s feelings about the services they receive from government departments. He spoke about matching grain to the micro-climates around Australia, and many more thought-provoking uses of data!

So for EMC, is this just a ‘needs lots of storage’ play? No. EMC has a very interesting stack of capability when it comes to this field. Firstly, there is a storage component, but not just more scale on traditional architectures: new architectures custom-designed for this large and/or numerous data requirement, namely Isilon and Atmos.

Secondly, there is no point keeping all of this stuff if you can’t use it, so combining Greenplum (for the more structured data) with Hadoop (for the less structured data) creates a unique and very interesting analytical capability.
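As a toy illustration of that structured-plus-unstructured idea (in plain Python, not EMC’s actual stack): a structured aggregation of the kind Greenplum would do with SQL, a map-style pass over raw text of the kind Hadoop would do at scale, and a join of the two results. All data and names here are made up.

```python
from collections import Counter

# Structured side: aggregate spend per customer
# (in Greenplum this would be a SQL GROUP BY over billions of rows).
transactions = [("alice", 120.0), ("bob", 40.0), ("alice", 80.0)]
spend = {}
for customer, amount in transactions:
    spend[customer] = spend.get(customer, 0.0) + amount

# Unstructured side: tokenise free text and count mentions
# (in Hadoop this would be a map/reduce job over raw feeds).
feedback = [
    "alice says great service",
    "bob reports terrible delays",
    "alice happy again",
]
mentions = Counter(word for line in feedback for word in line.split())

# Combined view: join structured totals with unstructured mention counts.
combined = {c: {"spend": total, "mentions": mentions[c]}
            for c, total in spend.items()}
```

The point is the join at the end: neither side alone answers “who are my high-value, frequently-mentioned customers?”.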

Lastly, and most importantly, the piece that answers the ‘so what’ question! So what do I do when I’ve worked it out? How do I make changes that enable me to leverage what I’ve found? The top layer of the EMC Big Data stack is Documentum’s xCP. If you are not familiar with this technology, xCP is an accelerated composition platform that allows you to effectively paint out a workflow and instantiate it (i.e. compose an application with no coding required, and run it). In this way, you can effectively implement a new application/workflow within days or weeks.

Not only does xCP allow you to change what people do, but through a connector back into Greenplum you can build analytics into your standard workflows. For example, if you have someone issuing new credit cards, the workflow could run a real-time query across the available datasets to ensure the application is not fraudulent. That could include a search through Twitter feeds, social networks, demographics, etc. (For example, one bank found that a person kept using addresses within one block of where they grew up to obtain multiple credit cards under different names.)
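A minimal sketch of what such an embedded check might look like. The scoring rule (flagging addresses reused under multiple names, echoing the bank example above) and every name in this code are hypothetical; these are not xCP or Greenplum APIs.

```python
from dataclasses import dataclass

@dataclass
class CardApplication:
    name: str
    address: str

def fraud_score(app, history):
    """Risk score: how many *other* names have applied from this
    address. Illustrative rule only, per the reused-address example."""
    return len({h.name for h in history
                if h.address == app.address and h.name != app.name})

def workflow_step(app, history, threshold=2):
    """Decision step a workflow engine might call: route risky
    applications to a human, approve the rest automatically."""
    if fraud_score(app, history) >= threshold:
        return "manual_review"
    return "auto_approve"
```

In a real deployment the history lookup would be the connector’s query against Greenplum (and possibly external feeds), not an in-memory list.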

So as I sit at Canberra airport waiting for a delayed flight, I’ve just been introduced to another big data application: a service that predicts whether a flight will take off on time! How does it do this? It takes the historic flight data and correlates it with a number of other datasets: the past 24 hours’ performance, weather conditions, current delays in the network, major events, etc., and adjusts the prediction appropriately. Unfortunately, I didn’t consult this before leaving for the airport! Hence the longer-than-normal post. Sorry.
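As a rough sketch of how such a predictor might blend those signals (purely my guess at the mechanism, with made-up weights; a real service would fit these from historical data):

```python
def delay_probability(historical_rate, last_24h_rate,
                      weather_risk, network_congestion):
    """Blend a flight's historical delay rate with current signals.
    All inputs are in [0, 1]; the weights below are illustrative."""
    score = (0.4 * historical_rate +
             0.3 * last_24h_rate +
             0.2 * weather_risk +
             0.1 * network_congestion)
    return min(max(score, 0.0), 1.0)  # clamp to a valid probability

# A historically punctual flight on a stormy, congested evening:
p = delay_probability(historical_rate=0.1, last_24h_rate=0.6,
                      weather_risk=0.9, network_congestion=0.8)
```

The design point is the one the service makes: the historical base rate alone would have told me 10%, while the current signals push the estimate far higher.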

Another Vblock Story – Major Australian Law Firm

by Clive Gold, CTO Marketing, EMC Australia and New Zealand

Kevin Bloch, Cisco’s local CTO, and I gave a briefing to a group of CIOs from the legal fraternity about two years ago. As was the custom at the time, we asked everyone how much of their environment was virtualised. Of the eight or so organisations represented, almost half were at or very near 100%. Two years ago! So I wasn’t surprised to learn about one of these major firms implementing two Vblocks to build their private cloud on.

Unlike so many organisations, they had a great opportunity, as both their storage and compute environments were aged and needed to be refreshed. That made the competition (in this case NetApp) falter, as their reference architecture could not stack up against VCE and Vblock on any of the key requirements: reducing risk, lowering TCO and improving agility. (Not by a long shot, once they worked out the impact of the converged infrastructure over its expected life.)

There were two other interesting points to me… firstly, part of this project was the rollout of Exchange 2010. Ah, you say, they were looking to virtualise a mission-critical application! No, it was already virtualised and had been for a while. Which always makes me wonder why so many of the messages you hear in the market today are about “it’s now time to virtualise mission-critical applications”, when so many of EMC’s customers already have. Maybe it’s because ANZ is so far ahead of the rest of the world!

Secondly, the IT group has done this to provide a higher level of service to their customers while reducing cost. How cool is that? I thought that in IT this was an oxymoron! IT generally boils down to a trade-off between two factors: high utilisation or performance, quick or right, etc. So it’s nice to have an IT solution in Vblock that delivers a combination of three: lower risk, lower cost and more agility.