This week we hear from Steve Clarke, in the first of a two-part series setting out his experience of data centre commissioning. Steve is a former CIO with long experience at a senior level running IT operations. In Part 1, Steve talks about his experience moving data centre services for a major telco and broadband provider. Part 2 will focus on the lessons he learned along the way, how he sees data centre procurement changing, and what that means for the future role of CIOs.
Steve, thanks for taking the time to talk with us. Can you tell us a bit about your recent past?
I’ve worked in many industries, but most recently I spent around six years in the ISP/telecoms/internet sector running IT functions. My last role was IT Operations Director at TalkTalk, where I spent most of my time managing IT change programmes against a backdrop of multiple acquisitions and divestments, alongside leading large and diverse operations functions. Right now, I’m running my own business providing fractional IT Director services to SMEs and growing companies, enabling them to make use of enterprise experience for a fraction of the price.
So how many data centres have you commissioned?
I’ve commissioned three data centres, but I’ve also decommissioned five! Mostly in the UK, though one of those I decommissioned was in Luxembourg.
Were these large data centres?
The ones I commissioned were around 60 racks each. Some of the data centres I decommissioned were bigger, but they were quite old and the equipment in them took up a lot more room, so we were able to consolidate quite considerably.
What were the main drivers that prompted the deployments?
The first data centre commission was simply part of an expansion: the old facility was no longer good enough or large enough.
The second data centre was more significant and part of a major separation programme, where I was running an IT programme to separate AOL Broadband from the AOL Inc ‘mothership’ with the aim of running it as a standalone entity within the Carphone Warehouse Group. We were building our infrastructure from the ground up to replace the existing infrastructure owned by AOL Inc, and a data centre was a necessity. Unfortunately, we didn’t get a lot of time to plan and deploy, so that was an interesting experience.
The final data centre was commissioned as part of the separation of TalkTalk from Carphone Warehouse. We had to move many applications, including the main billing platform, out of Carphone Warehouse data centres and into a data centre owned by TalkTalk. This time we had a bit more time to plan, and we located it not far from the other data centre to provide failover capability. During both of those builds, we were able to close a number of other data centres and make considerable savings.
Presumably there were some interesting challenges?
Traditionally it’s always been about getting the right amount of space, but space wasn’t an issue for me. The main concern was getting sufficient power to the right racks, which was sometimes a struggle. Even then, I had some racks that were only half full because of the power draw of the machines in them.
I think getting the network right was the second biggest issue. It’s a big expense, and not something you can get wrong and then re-work cheaply. And it doesn’t stop with the hardware: getting the traffic routing design right is absolutely essential, otherwise it will come back later to bite you on the backside.
Thirdly, particularly with the second data centre, we were very short on time. We got there and it was a success, but it was a very steep learning curve and not one that I’d like to have to climb again!
Can you tell us about the business benefits that resulted from the moves?
In all cases, the data centre was not implemented just for business benefit, but because we had to relocate our equipment. However, we did achieve many benefits for the company. For instance, consolidating servers, either through virtualisation or by running more than one application per physical server, reduced our power requirements. In addition, we shut down a number of legacy platforms rather than migrate them, which reduced our support overhead.
With the closure of older data centres there were tangible cost savings as well, in line items such as rental and security. I would say that the benefits have been realised and will continue to be realised as the technology we implemented continues to pay back.
You mention virtualisation — to what extent did you virtualise?
We did undertake virtualisation to some extent, but more specifically we used a blade environment with a tightly integrated back-end SAN, which provided a significant performance boost for our users.
...and presumably an environmental pay-off?
Yes, the consolidation we achieved meant that, from an environmental point of view, we’d done pretty well at reducing our power footprint. We did look at other ways to reduce cooling or power requirements, but at the time the main focus was completing the projects to stop the fairly large monthly transition-agreement costs, so environmental requirements were some way down the priority list and were achieved only incidentally through the other work we were doing.
A big thanks to Steve for sharing his experience with us. Look out for Part 2, coming soon.
Views expressed are those of the author, and do not necessarily reflect those of Logicalis.