@chuckhollis posted a blog recently which equates IT to a modern factory (here). This is great; as some of you know, I am the analogy king, and I’ve been thinking about this analogy for a while… and Chuck has beaten me to it!
Chuck talks about the functions of a modern factory and how they relate to modern IT, such as “demand forecasting, process optimization, supply chain optimization — and, yes, product quality”. It’s a great article that traces the parallels between the two; however, I have been thinking about a couple of issues that he only touches on.
Agility: this is the catch-cry of the Cloud discussion, but what do we mean by agility? Chuck talks about agility in terms of scale: get bigger or smaller very quickly! But what about agility in terms of ‘flexible manufacturing’, where a factory has tooling that allows it to be re-configured to produce different products? By using robots and CNC (Computer Numerically Controlled) machines, you can essentially re-program a factory to produce a different product, for example, cut the pieces of wood to create different chairs, or a table! (Virtualising the factory!)
Process Optimisation: building on Chuck’s solid foundation, I think we can extend the analogy. One of the optimisation techniques in early factories was ‘worker activity optimisation’ (the time-and-motion studies of scientific management), where an expert would watch the workers on the line and remove redundancy, double-handling, inefficient movements, and so on. The idea was that by watching what was being done, you could recognise the patterns and optimise around them.
Consider: if you could watch everything that everyone you work with does each day, you would notice massive inefficiencies (in the flow of information, the way tasks are performed, and so on). A few small changes would then create massive productivity gains!
Move down a level: these people use systems to perform their various functions. These sophisticated systems offer alternative ways to produce the desired outcome. If we could watch how each person performs what is essentially the same task, we would find that some ways are better than others… and with a bit of training, everyone could adopt the best practice. More productivity!
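To make this concrete, here is a minimal sketch of the idea. All the data, task names, and people here are made up purely for illustration; in a real system, the log would come from the audit trail of whatever system the workers use.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical task log: (worker, task, minutes taken).
task_log = [
    ("alice", "approve_invoice", 4.0),
    ("bob",   "approve_invoice", 11.5),
    ("carol", "approve_invoice", 5.0),
    ("alice", "file_claim",      9.0),
    ("bob",   "file_claim",      8.5),
]

# Group durations per (task, worker).
durations = defaultdict(list)
for worker, task, minutes in task_log:
    durations[(task, worker)].append(minutes)

# For each task, find the worker with the fastest average time:
# a candidate "best practice" to study and spread via training.
best = {}
for (task, worker), mins in durations.items():
    avg = mean(mins)
    if task not in best or avg < best[task][1]:
        best[task] = (worker, avg)

for task, (worker, avg) in sorted(best.items()):
    print(f"{task}: learn from {worker} (avg {avg:.1f} min)")
```

Obviously real process optimisation would look at far more than raw speed, but even this toy comparison shows how patterns emerge once the activity is captured as data.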
And lastly, down another level: those systems are essentially running software. Imagine if you could monitor the flow through the code as everyone uses it. We could understand how to make the code more efficient, see what never gets used, and trace the path to a bug. Now think of the impact of having this metadata about the running system, and the optimisation that could be done!
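This kind of monitoring is exactly what runtime tracing gives you. A minimal sketch in Python (the functions and workload are invented for illustration) uses the standard library’s `sys.settrace` hook to count which functions actually get called; anything with a count of zero is a candidate for dead code under that workload.

```python
import sys
from collections import Counter

call_counts = Counter()

def tracer(frame, event, arg):
    # Record every function call made while tracing is active.
    if event == "call":
        call_counts[frame.f_code.co_name] += 1
    return None  # no per-line tracing needed for this sketch

def apply_discount(price):
    return price * 0.9

def legacy_discount(price):
    # Suspected dead code: does any workload still reach this?
    return price * 0.8

def checkout(prices):
    return sum(apply_discount(p) for p in prices)

sys.settrace(tracer)
total = checkout([10.0, 20.0])
sys.settrace(None)

print(call_counts["apply_discount"])           # called once per item
print(call_counts.get("legacy_discount", 0))   # 0: never used here
```

In production you would use a proper profiler or sampling agent rather than a full trace hook (the overhead is significant), but the principle is the same: the running system emits data about itself, and that data drives the optimisation.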
This, to me, is one of the really exciting aspects of Big Data. And if you think this is very “blue-sky” thinking… it is not as far out as you might believe. For example, there is talk of doing exactly this kind of process optimisation by monitoring the case-flow data that can be obtained from Documentum xCP and using Greenplum analytics to optimise it.