A taxing thought about datacentres…

Here is a question that I hope will tax you a little and (if you have the time) might prompt you to give some feedback…

Should the government(s) – in this case Scotland (where DataVita is based) but this could easily apply to the wider UK and beyond – tax datacentres according to how energy efficient they are?

Well, what do you think?  I started to give this idea some thought whilst putting together a rather “tongue in cheek” presentation for last year’s Scot-Cloud show in Edinburgh.

History is littered with taxation policies implemented by governments in an attempt to align taxes to deal with specific problems.  Here are a few examples –

In most cases, these have not been well received but in all cases, they were trying to address a specific problem.  So what problem do we have with datacentres?

According to a variety of articles I have read recently, datacentres are consuming around 3% – 4% of the global energy supply.  Doesn’t sound like much, does it?  Well, it is, and it is only going to grow.  As a result, datacentres have two problems:

  1. The amount of energy they use
  2. The type of energy they use

The second one can be solved if you are prepared to put the effort into it.  It is now possible in most countries to access 100% renewable energy sources.  We have done it at DataVita.  It wasn’t easy (see my blog article Renewable Power for Datacentres) but we did it.  But doing this before you address the first point is verging on irresponsible – renewable or not, there is no point in consuming more power than you need.

So back to the bigger challenge – driving down the amount of energy datacentres use.  This is a two-stage process.  Stage 1 is to address your IT load (i.e. the amount of power the IT equipment in your datacentre is consuming).  There are lots of ways to address this and I won’t go into them all (a rough sketch of the sort of saving on offer follows the list below).  Good examples are:

  • Virtualisation
  • Consolidation
  • Migrating to fewer (newer, more powerful and energy-efficient) servers
  • Application rationalisation
  • Turning off servers that are not used
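
To give a feel for the scale of a stage 1 saving, here is a back-of-the-envelope Python sketch of consolidating legacy servers onto fewer, newer virtualisation hosts.  Every figure in it is invented purely for illustration – your own server counts and power draws will differ –

  # Back-of-the-envelope saving from consolidating old servers onto
  # fewer, newer hosts via virtualisation.  All figures are made up
  # purely for illustration.

  old_servers = 100
  old_draw_w = 400           # average draw per legacy server, watts

  new_hosts = 10
  new_draw_w = 600           # average draw per virtualisation host, watts

  hours_per_year = 24 * 365

  old_kwh = old_servers * old_draw_w * hours_per_year / 1000
  new_kwh = new_hosts * new_draw_w * hours_per_year / 1000

  print(f"Before: {old_kwh:,.0f} kWh/year, after: {new_kwh:,.0f} kWh/year")
  print(f"IT load saving: {old_kwh - new_kwh:,.0f} kWh/year "
        f"({1 - new_kwh / old_kwh:.0%})")

  # Before: 350,400 kWh/year, after: 52,560 kWh/year
  # IT load saving: 297,840 kWh/year (85%)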

There are many ways to drive down the IT load, so you get the idea.  The second stage is to look at the energy overhead required on top of the IT load to run the datacentre.  This overhead is typically made up of cooling, lighting and power for building management systems, and the ratio of total facility energy to IT load is commonly expressed as the Power Usage Effectiveness (PUE).
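
If PUE is new to you, here is a quick Python illustration of how it translates into overhead energy.  The IT load figure is invented purely for illustration; the PUE values are the ones that crop up in this post and in my Renewable Power for Datacentres post –

  # Rough illustration of Power Usage Effectiveness (PUE).
  # PUE = total facility energy / IT equipment energy, so a PUE of 2.0
  # means the overhead (cooling, lighting, BMS and so on) equals the IT load.

  def overhead_kwh(it_load_kwh, pue):
      # Energy spent on everything that is not IT load.
      return it_load_kwh * (pue - 1)

  it_load = 1_000_000  # kWh per year, purely illustrative

  for pue in (2.0, 1.7, 1.18):
      print(f"PUE {pue}: {overhead_kwh(it_load, pue):,.0f} kWh of overhead "
            f"on top of {it_load:,} kWh of IT load")

  # PUE 2.0: 1,000,000 kWh of overhead on top of 1,000,000 kWh of IT load
  # PUE 1.7: 700,000 kWh of overhead on top of 1,000,000 kWh of IT load
  # PUE 1.18: 180,000 kWh of overhead on top of 1,000,000 kWh of IT load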

Now in researching the background for this blog, I came across the UMBRELLA CLIMATE CHANGE AGREEMENT FOR THE STANDALONE DATA CENTRES and I (for a moment) thought someone had already done this.  I was wrong.  To save you reading 20-odd pages of legal text, skip to the last page and look at the targets set out for this agreement – a baseline PUE of 2.0 with a 15% reduction target by 2020 (so a PUE of 1.7).  That’s a bit like Lewis Hamilton settling for a top 20 finish.

This is where I think Scotland has an opportunity and I hope Ms Sturgeon or someone in her team reads this and takes note.  What if Scotland introduced a meaningful tax scheme that encouraged organisations to build and operate energy efficient datacentres here?  What would the impact be?  Could Scotland become a new datacentre hub and attract billions in investment from the global content and cloud providers like Amazon, Google, Microsoft and the like?

There is of course a precedent for this – Ireland.  In case you are not familiar, take a look at Host in Ireland – an organisation set up expressly to help attract investment in the region.  Sweden has also recently announced a major tax reduction on power for datacentres.

In Scotland, the word Digital is used just about everywhere for anything that looks vaguely like innovation in technology.  That said, there are some amazing home-grown software houses coming up alongside the “unicorns” that always get a mention (and I won’t mention them).  All this is great, but investment from outside Scotland by some of the big content/cloud providers will have a much bigger impact on the economy.  Did you know that 9 of the world’s top 10 ICT companies have a presence in Ireland and that the ICT industry accounts for 25% of its annual turnover?

It’s a simple formula – find a way to attract the big content/cloud providers and they will bring jobs that need digital skills, and in turn they will have a big impact on Scotland’s GDP.

We have a real advantage here – the climate is ideally suited to building highly energy efficient datacentres, so why not use it, Scotland?

Is your datacentre in a wilderness?

Ever get a question that bounces around in your head and no matter how hard you try, the answer just eludes you? I have had just that this last week, so I’m hoping by writing this blog I can get an answer.

So here is my question – What would make a company/organisation move from their own datacentre to a co-location facility like ours?

Firstly, let me add some clarity to that, because some of you are probably thinking this is a bit simplistic. There are loads of reasons! Better uptime, cost savings, the current DC keeps failing, an office move etc. The list could go on. These are all reasons people would give having made the decision and all very valid. They are typically driven by the IT department and are in response to a specific risk/issue/project (sometimes all of these).

What I mean is, how does a lowly (I mean that only in the sense that the datacentre sits low down the stack) datacentre guy like me persuade a senior (often non-IT) member of a company/organisation to consider moving their datacentre when they have no risk/issue/project demanding it? Imagine if an estate agent called you up out of the blue and said “Hi – I’ve got this great house for you, it’s better than your current one, it should save you some money and it’s in a safer neighbourhood” but you were quite happy with your current house and had no desire to move. What would you say? You would probably say “no thanks” and then get back to whatever it is you were doing.

I guess, to follow my estate agent analogy above, if the agent was able to understand your life plans for the next 5 years, they might have a chance. If they (somehow) knew you were planning to have kids and your current house was not in a good place for schools and the new one was, it might just get your attention.

The most likely answer I have come up with so far to my question relates to the title of this blog. Being in your own (single tenant) datacentre in this age of public/private/hybrid cloud must be the equivalent of having a house in the wilderness.  Imagine if, every time you wanted to consume a service that a city dweller takes for granted, such as water, electricity or (god forbid) broadband, you had to pay a fortune to the provider (assuming they would even do it) and wait months to get what you wanted. It would certainly hold back your plans and make you think twice, unless you wanted the isolation and had no need for such services.

Coming back to my point – isn’t being in your own datacentre already affecting your decisions, plans and ability to be agile? Why not just put everything in the cloud and be done with it then? Well, if you can do that then great – problem solved! In reality most organisations need a hybrid model, and co-location is the best place to put your legacy platforms and private cloud.

Being in a co-location datacentre like ours gives you access to a marketplace of service providers, cloud providers, ISVs and numerous other industry specific providers who just happen to host in the same datacentre. It also gives you access to low cost, high speed, secure transit to other datacentres with yet more of the same, and it gives you simple, cost-effective access to peering services that allow you to connect directly to a world of public cloud providers. Oh and all of this is available pretty much on demand when you need it.

Embracing everything digital to drive transformation in your company/organisation means being agile and not being held back by your own datacentre and its location. The days of waiting months to deploy new services, connections and platforms are gone. I think that is a pretty good reason to consider making the move from that in-house datacentre.

Go on, move your datacentre into a bustling metropolis like DataVita and see what happens. It’s all about the future and if you can see yours then we can help you get there.

BTW – if you are wondering about the image on this blog, it was done by Claire Mills @listenthinkdraw at Fintech 2016 in Scotland last week. It was this image and the discussions I had on the day that really got me thinking about this.

“Customer Service” – the key is in the name

I don’t know about you but in my simple existence, on average, I must interact with at least 5 different companies that provide some sort of service to me on a daily basis. When I look at all of these service providers, I rate my satisfaction with them using two straightforward criteria –

  1. Customer service
  2. Price

It’s a pretty simple formula but for me it works.  Customer service/price = good value – and that is what I want.  Don’t we all?  Now if I look at the companies who provide services to me, I can use my formula to easily categorise my service providers into one of two camps – “Keepers” and “Soon to be ditched”.

Excellent customer service = customer retention = profit.  It’s not rocket science. So it’s no coincidence that when you read in the press about a big company struggling and making cuts, it often has a reputation for poor customer service. There’s no better example recently than Npower, who through years of consistently bad customer service are now in a situation where they are having to lay off 2,500 staff. I was (very much past tense) a customer and they not only gave me terrible customer service but just didn’t care. Bet they wish they had now…

So what makes excellent customer service?  Well, in my view, this definition has changed over the last few years thanks to technology.  It’s interesting when I look at my “Keepers”, all have embraced next generation technology to enhance their customer service.  For me, excellent customer service is underpinned by these things –

  1. Self-service – empower customers to serve themselves, after all, they know what they want (it also lowers the service provider’s costs and scales more easily).  A good app or web portal is worth its code in gold when it comes to customer retention
  2. Educate & inform – use social media and apps to push the right information to the right customers at the right time
  3. Be accessible – when self-service and content options can’t help, be available to speak to your customers at a time that suits them, using a medium that suits them (apps, SMS, chat)
  4. Speak their language – by all means deliver services from anywhere in the world but if you choose to speak to your customers directly, don’t create a language barrier.  It just doesn’t work
  5. Pre-empt their needs – use your data to think ahead and figure out what your customers are going to need before they do
  6. Listen – there is no excuse for not getting good customer feedback in today’s environment.  Make it easy for your customers to give it to you and act on it
  7. Have a simple charging model – don’t hide costs in the small print. Yes you can point to the small print and say “we did tell you to read it” but do you really want that kind of relationship with your customers?  Or ex-customers as they will become

So, I recently read an interesting article reporting that 18.6% of UK co-location datacentre customers are ready to ditch their provider.  Even worse – 34.6% are unhappy with their current provider’s support.  You can read it here.  When you look at the reasons, it’s easy to see the link with my list above –

  • Additional and unexpected charges – point 7
  • Skilled personnel are not permanently on site as promised in their contracts – point 3
  • Poor response times – point 3 but point 1 is the solution
  • Additional services not offered – point 5

So why is it so hard for co-location datacentres to provide excellent customer service?  My theory is that many are hampered by ageing facilities and many still live in a world where the datacentre is the datacentre (run by the FM team) and IT is IT (run by the IT team) and never the twain shall meet – i.e. the age-old disjoint between the IT service and the datacentre facility.

So, if you are one of those unlucky customers in one of these co-lo datacentres and looking to move, look beyond the sales pitch and slides to ensure you get what you want next time.  Firstly, look at the age and quality of the facility; after all, if they are fighting a losing battle from the outset then you know where it is going.  Assuming you like the facility, then look for investment in the things that enable excellent customer service, namely –

  • A self-service portal (and even an app)
  • A support service you can interact with 24×7 in a way that suits you
  • The use of next generation media to educate and inform customers
  • Evidence that they listen and innovate
  • A simple and clear charging model

I’ll leave you with one guess as to how we have built our model.  Come June this year you can find out…

Murphy’s Law vs your N+1 Datacentre? My money is on Murphy…

As a datacentre provider I get frustrated/confused/slightly annoyed (take your pick) when I see, almost weekly, datacentre outages in the news and the article talks about N+1 redundancy.  Why is this news?  It shouldn’t be!  Let me explain…

There is an old saying in life – you get what you pay for.  My grandfather taught me that at a very early age and it is one of life’s truisms that everyone just needs to accept.  Nothing in life is free and certainly not when it comes to your datacentre.  So the simple truth I will offer you today is this –

If you pay for a datacentre with N+1 redundancy then expect it to go down at some point.

There, I have said it and I will probably get a bunch of datacentre operators telling me otherwise but I’ll stand by that one.  Let’s face it, there are lots of things that can take your datacentre down, but when it comes to the physical mechanical and electrical infrastructure (i.e. all the bits that deliver power and cooling to your racks), it either comes down to component failure or people.

So now let’s explore this phrase “N+1”.  What does it really mean?  Well according to Wikipedia (https://en.wikipedia.org/wiki/N%2B1_redundancy), it means “Components (N) have at least one independent backup component (+1)”.  The diagram below shows a typical Tier 3 M&E infrastructure at a high level –

Tier 3

So, I’m sure, like many people, you are wondering why I am saying the above will let you down.  Well, in a way it has a lot to do with luck, but as we all know, luck can be good and bad and there is a little thing called “Murphy’s Law” or, as we often refer to it in the UK, “Sod’s law”.  There are some wonderful quotes on Wikipedia again, such as –

  • Whatever can happen will happen
  • Seemingly spiteful behaviour manifested by inanimate objects
  • If anything can go wrong it will

I think you get my point.  I noted an article in the press today (here) about yet another datacentre outage that provided me with a classic example of how Murphy’s Law can take down your datacentre.  It’s the bit about “The engineer on site then assisted with the fitting of the replacement part and a safety mechanism in the device triggered incorrectly, taking down the entire data centre”.  Now I know of these guys from industry talk and they have an excellent reputation, but I also know they use a co-lo datacentre, so you really do have to feel for them.  So what did they do wrong?

Nothing, in theory, other than decide that N+1 redundancy is good enough for their datacentre.  The reality is that it is not.  With N+1 you have one component dedicated to backup throughout the M&E infrastructure; if you also opt for a second power feed to your rack then you should have that to rely on as well.  In the example above, I’m not sure they did.  I might be wrong.

So back to Murphy’s Law.  All datacentre M&E components (switchgear, UPS, generators) need maintaining, and from time to time this maintenance carries a risk of disruption.  It might be scheduled or it might be to fix a problem (as per the example above).  While this maintenance is happening, with N+1, you are temporarily running with no redundancy.  So when Murphy decides it’s your turn (everyone has a turn) and fails a second component, or triggers something that shouldn’t trigger, while this maintenance is happening, your datacentre is going to become a very dark and quiet place.  This is not a good thing – just ask any DC manager who has been in their facility when the power has been lost completely.
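
To put some toy numbers on that, here is a very rough Python sketch of the chance of a total outage during a maintenance window under N+1 versus N+N.  The failure probability is invented, and it assumes independent failures (which real M&E kit doesn’t always oblige with), so treat it as an illustration of the principle rather than a reliability model –

  # Toy model: chance of a total outage while one M&E component is out
  # for maintenance.  Assumes each remaining component fails
  # independently with probability p during the window.  The numbers
  # are invented purely for illustration.

  def outage_prob_n_plus_1(n, p):
      # N+1: the spare is effectively the unit being maintained, so all
      # n remaining components are needed and any single failure takes
      # the load down.
      return 1 - (1 - p) ** n

  def outage_prob_n_plus_n(p):
      # N+N: every component has a dedicated partner, so to a first
      # approximation only the maintained unit's partner matters.
      return p

  n = 4       # components needed to carry the full load
  p = 0.02    # chance a component fails during the maintenance window

  print(f"N+1 during maintenance: {outage_prob_n_plus_1(n, p):.1%} chance of an outage")
  print(f"N+N during maintenance: {outage_prob_n_plus_n(p):.1%} chance of an outage")

  # N+1 during maintenance: 7.8% chance of an outage
  # N+N during maintenance: 2.0% chance of an outage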

So why do so many datacentres (multi and single tenant) use N+1?  The obvious answer is that it balances cost with an acceptable level of redundancy.  At least, that is what people convince themselves of when they sign off the order.  When it does go down, the cost difference between N+1 and N+N (see below) will look like a bargain compared to the cost of the datacentre being offline (even for 9 minutes!).

So what’s the answer?  Well, you can of course argue that the best way to protect against your N+1 datacentre going offline is to have a second DR datacentre.  The reality is something different.  Unless you can justify, and are prepared to pay a lot of money for, your applications being load balanced across two datacentres (and that generally is a lot of money), then failover to DR takes time (hours at least) and has other issues associated with it (which I won’t go into in this post).  Then of course you need to fail back (more issues).  Clearly we all need a DR plan (that works) but in an ideal world we would rather the primary datacentre did not go offline.

So, if I were in your shoes, I would be looking for a datacentre that has N+N redundancy at the M&E level. So every individual component has a backup component (a 1:1 ratio), ideally on both the A and B power feed.  The diagram below shows at a high level what this looks like –

Tier 4

Now the clued-up ones amongst you will notice that this is in fact a Tier 4 infrastructure.  You are correct.  However, there are a few high quality co-location datacentres out there that have decided to go this route.  Actually building out a datacentre to the full Tier 4 specification invariably does not make commercial sense unless you have a very specific customer in mind.  So what these few high quality co-lo datacentres have done is build everything else to Tier 3 but build the M&E to Tier 4.  Some have even gone through Uptime Institute certification for Tier 3 as well, despite having an infrastructure that exceeds this (you can only certify to your lowest point).

So, what is the moral of this story?  The devil is always in the detail.  If you don’t want Murphy to have his fun with your datacentre, then look for a facility that can maintain its independent power feeds while staying online, without risk to your service.  That is N+N and it often doesn’t cost any more than the N+1 guys charge…

Confused.cloud.com?


Well it’s a new year (OK, a bit past now…), so I thought I would break the mould of my short blogging career so far and write about something other than datacentres. Anyone that knows me will know my other big passion is cloud.  In fact I was writing about cloud long before datacentres.  An odd and fairly useless fact, I know!

As part of the launch of DataVita and our shiny new datacentre, we had to decide on our strategy towards cloud.  Do we embrace it fully and try to deliver it ourselves? Do we (as many co-lo datacentres do) ignore it to avoid competing with potential clients? Decisions, decisions!!

Well, with my and my business partner’s history we could not ignore it, but at the same time we didn’t want to fall into the trap that many datacentre providers do – trying to offer cloud services, ending up offering managed services and making a hash of it (brand reputation down the pan…).

Likewise we didn’t see the point of trying to compete with AWS, Microsoft and the like.  So what did we decide?

Well something different, we think anyway.

Everyone seems to love the concept of what the public cloud can deliver – instant on, massive flexibility, hourly billing, low cost and complete freedom, but many organisations can’t utilise it for reasons specific to them.  These reasons often include data sovereignty concerns, regulatory requirements, network latency, integration with legacy systems and even plain old emotional attachment to being able to go and see their kit. There are more of course; security still raises its head, but I would argue that most public clouds are more secure than many in-house systems today.

So, imagine having all the features, flexibility and benefits of the public cloud, but it sits in the same datacentre as your co-lo racks and it offers data sovereignty and compliance with regulatory requirements for industries such as financial services, life sciences, healthcare, the public sector and others.  Oh and it also looks, feels and behaves just like Azure, but with financial and process controls to manage spending and risk.

What do we call it? Public Cloud?  Private/Public? Hybrid?  Take your pick. We call it the DataVita Cloud.

What do you think?  Interested in your views…

Design for the datacentre limits or limit the design

Stop asking me for racks…!

So it’s almost Christmas and there is just enough time for another quick rant about one of my frustrations as a new datacentre provider.

As part of the start-up of DataVita, in Q1 next year I need to start hiring my sales team ready for the launch of the Fortis Datacentre in May.  This got me thinking (there is a point, I promise, so read on…).

Having spent the best part of 22 years selling solutions and services, I would like to think that I have evolved beyond the rather stereotypical approach to selling that goes along the lines of “What do you want to buy and how many do you need?”.  Having built and run sales teams previously, the first thing I look for in a new-hire salesperson is the right attitude.  That is often best represented by a willingness to use one of my favourite words in any sales process – “why”.  Some examples –

  • “Why do you need this?”
  • “Why do you need this many?”

Of course “why” must be preceded by one of my other favourite words – “what”.  Again, some examples –

  • “What are you trying to achieve?”
  • “What is the benefit?”
  • “What is the risk?”

Without wishing to get into a “sales 101” article, you get the picture.  So, back to the theme…

Prior to setting up DataVita, I had the displeasure of occasionally having to reach out to co-location datacentres to ask for prices for racks.  Not once in all the times I had to do this did the sales person (order taker) ask “what” or “why”.  This seems to be a common theme with many datacentre providers, in that they tend to employ order takers rather than sales people.

The result?  Well, you miss the chance to differentiate, upsell and ultimately win business.  It also explains why so many datacentre providers struggle to grow their business when adding more complex product lines such as managed services and cloud services.

Oddly enough, the “WaW” disease (that’s my new name for it!) seems to have spread to customers.  As I have started to engage with prospective customers over the last few months, quite a few have insisted on a standard quote for racks with the same amount of power in each.  The reality is that no two datacentres are the same and unless you match up the limits of the datacentre capability to the design of the equipment going into the racks, you will end up with a solution that is more expensive and complex than it should be.  Just because you need 10 racks in your current datacentre, doesn’t mean you need 10 racks in mine.

The best way to tackle this is to involve your datacentre provider in the design.  Find out what their limits and options are for:

  • The minimum and maximum amount of power available in a standard rack
  • Single or three-phase power
  • Single or dual power feeds
  • The number of PDUs that can be installed per power feed
  • The number of sockets available on the PDUs
  • Standard or smart PDUs
  • The maximum number of rack units available for your use
  • The type of cable management available
  • Whether cables can be securely routed between racks
  • The cost of high speed cross connect services
  • The maximum weight of equipment per rack
  • Whether cold or hot aisle containment is available

All of the above should allow you to optimise your design to ensure that you minimise the number of racks you need, which in turn reduces complexity and saves money.  Yes, odd I know, a datacentre provider who wants you to use fewer racks!
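
As a rough illustration of why this matters, here is a simple Python sketch of how the rack count should fall out of the datacentre’s per-rack limits rather than out of “we have 10 racks today, so we need 10 racks”.  Both the limits and the equipment figures are invented purely for illustration –

  import math

  # Invented per-rack limits for an illustrative datacentre.
  rack_power_limit_kw = 10.0
  rack_units_available = 42
  rack_weight_limit_kg = 1000

  # Invented figures for the equipment being migrated.
  servers = 60
  server_power_kw = 0.4
  server_units = 2
  server_weight_kg = 25

  # Each constraint gives a minimum rack count; the real answer is the
  # largest of them.
  racks_by_power = math.ceil(servers * server_power_kw / rack_power_limit_kw)
  racks_by_space = math.ceil(servers * server_units / rack_units_available)
  racks_by_weight = math.ceil(servers * server_weight_kg / rack_weight_limit_kg)

  racks_needed = max(racks_by_power, racks_by_space, racks_by_weight)
  print(f"By power: {racks_by_power}, by space: {racks_by_space}, "
        f"by weight: {racks_by_weight} -> racks needed: {racks_needed}")

  # By power: 3, by space: 3, by weight: 2 -> racks needed: 3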

My point?  I have walked around too many datacentres and spoken to too many customers where half empty racks are the norm.  It’s a waste for you the customer, for the datacentre provider and for the environment.  So let’s work together to optimise datacentres, else eventually we will run out.

Happy Christmas!

Renewable Power for Datacentres…

Is sustainability obtainable?

Well as it’s nearly Christmas I thought I would share with you a story about our ongoing journey to obtain 100% renewable power for the Fortis datacentre when it launches in May 2016.

You are probably thinking – what is he on about?  Renewable power is everywhere, right?  Everyone is talking about it and Apple has moved to it, so surely it’s easy?  Well, you could not be more wrong, especially here in Scotland.

When we first cut the business plan for DataVita over two years ago, we wanted to ensure that we not only had a compelling proposition but also built a socially responsible company.  So the most obvious area to look at for innovation was power and, in turn, our carbon emissions.

With this in mind, the first thing my business partner (DataVita Dan – the technical brains of the outfit) did was look at how he could design the most energy efficient, reliable datacentre possible.  By using indirect free air cooling and ensuring every single element of the facility, from air flow to lighting to cabling, came under an “energy efficiency” microscope, he has created a datacentre that will achieve an annualised PUE of 1.18, and by using cold aisle containment throughout we can deliver this early in the build-out.

So the next bit was to find a good source of renewable power.  Easy, or so I thought.  This has certainly not been the case.  The first hurdle was kindly provided by the UK government.  Their decision to kill off the subsidies for wind farms in July this year threw the renewable energy market into turmoil.  So it took a number of weeks before these companies would even consider talking to us.

After the dust settled from the government’s bombshell to wind power, we finally got down to discussing power supply contracts with a number of suppliers ranging from a selection of the big names through to niche renewable suppliers.  Here is what we have found so far:

  • Many don’t want to deal with us as we are too big!
  • Renewable does not always mean zero carbon emissions (if it isn’t backed by a REGO certificate then it probably isn’t)
  • Many suppliers make it so complex to engage with them and understand their pricing that you can waste weeks (even months) with them to no avail

So, I am pleased to say that after much work we are now on the road to a contract with at least one supplier, whom I won’t name yet – a press release will follow in time.  They seem able to remove the complexity and get down to basics in days (not weeks or months), they provide a competitive price per kilowatt hour (ppkWh) and they have the capacity to handle our datacentre.  Oh, and they have the most important thing – the REGO certificates.

What I hope you will take from this blog is this – everyone (large or small) should be trying to consume power from renewable sources.  If we can do it then anyone can.  Datacentre providers have a responsibility to do this, in my opinion.  After all, the technology sector is supposed to lead and deliver the future, so we should be powered by future-proof power.

Happy Christmas!

Welcome

Welcome to the DataVita blog!  We thought it would be fun to create a slightly less formal place where we can express ourselves a bit more.  So here it is.  Everything here is based on our opinions and experience and doesn’t set out to offend anyone.  Please give us your views in response.

Don’t be fooled by datacentre tier claims

Or it could be your tears flowing when it lets you down…

As a company that has spent the last year and a half building and bringing online the largest high-quality co-location datacentre in Scotland, I get annoyed when I see other datacentres claim to be “Tier III design” or “Tier III” or even “Tier III certified” when they are not.  Why does this matter to me?  Well to be honest, it doesn’t really, but it should matter to you if you are using or thinking of using a co-location datacentre to host your critical IT services.

Over the last couple of years alone, the IT press has been littered with stories of co-location datacentres going offline and causing major outages for their customers.  Here are just a few –

These are just a few I found in two minutes on Google.  Of course not every outage makes the press.  In fact, while writing this blog, I saw a news article on a Scottish national paper’s website about one of the largest local councils experiencing a major datacentre outage due to a faulty fire suppression system that decided to trigger when no fire was apparent, taking out a number of critical servers (and their services with them).  Disasters in the datacentre do happen!

So to come back to my headline, why does datacentre tiering matter?  Well the answer is very simple.  If a datacentre provider is prepared to spend the money to go through a certification process, such as the Uptime Institute certification, then it shows a commitment to quality, and you should be looking for this as the number one requirement in a provider.  But don’t stop there.

With TUI (there are others) there are three levels of certification:

  • Design – where the provider pays TUI to review the datacentre design and certify it to the correct Tier (1 – 4)
  • Construction – where the provider pays TUI to attend site and test the facility and certify what has been built to the correct Tier (1 – 4)
  • Operational Sustainability – where the provider submits to an audit after 12 months of operations and TUI certifies that the provider is operating the facility to their standard

So as a first point, if a datacentre provider makes any kind of claim to be “Tier xxx” then check it with the certifying body.  If it is TUI (the most popular and recognised), then you can use their website to do this here.  It might surprise you how few are actually certified!  Well, it does cost around $200,000 to go through all three…

Be careful though: many datacentres only go through Design certification and then go no further, allowing them to build something a bit cheaper (and less reliable).  Thankfully TUI have woken up to this and changed their policy.  Any Design awards made after January 1st 2014 now automatically expire after two years.

My view is this.  If you want to be assured that the datacentre you are in or going into is good enough for mission critical services then you need a facility that is Tier III Design and Construction certified as a minimum.  If you want to (as much as anyone can) mitigate the single biggest risk to any datacentre (people) then you need to find one that has gone the extra mile and has the Operational Sustainability certification as well.

Now to confuse you a bit more, I will add one more thought, just to prove that this is not a well-disguised advert for TUI (it’s not!).  Personally I think the tiering system has a flaw.  As an organisation that has spent the last year and a half designing and building a large scale (2,000 racks) datacentre, we felt that Tier III was good but in some cases not good enough.  However, we also found that Tier IV was a stretch too far for many and did not make commercial sense for a co-location provider.

So what did we end up doing?  Well, in the end we settled for certifying to Tier III, as you can only certify to your lowest point.  In fact our Design certificate is due any day now (woohoo!) and the Construction certification process is all booked for early 2016.  We have also committed the funds in our plan to achieving Operational Sustainability by mid-2017.  But there is a twist.  We felt that N+1 redundancy (the requirement for Tier III) through the mechanical and electrical infrastructure (generators, switchgear and UPS) was not good enough for mission critical systems.  So we actually built the M&E infrastructure to the Tier IV standard, although we can only certify to Tier III.  Why?  Well, the requirements and extra cost to achieve full Tier IV just didn’t make sense for a co-location datacentre in Scotland, but we wanted to be able to offer a (genuine) 100% SLA and you simply cannot achieve this with N+1.
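
To put the SLA point in context, here is a tiny Python sketch converting an availability percentage into the downtime it permits over a year.  The percentages are illustrative examples only, not figures from any particular tier or contract –

  # Convert an availability SLA into the downtime it permits per year.
  # The example percentages are illustrative only.

  HOURS_PER_YEAR = 24 * 365

  def allowed_downtime_hours(availability_pct):
      return HOURS_PER_YEAR * (1 - availability_pct / 100)

  for sla in (99.9, 99.99, 99.999, 100.0):
      print(f"{sla}% availability permits {allowed_downtime_hours(sla):.2f} "
            f"hours of downtime a year")

  # 99.9% availability permits 8.76 hours of downtime a year
  # 99.99% availability permits 0.88 hours of downtime a year
  # 99.999% availability permits 0.09 hours of downtime a year
  # 100.0% availability permits 0.00 hours of downtime a year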

So the moral of this tale is this – tiering does matter, but check what you are told with the certifying body and look under the covers.  If there is no certifying body then you have to question why.  The devil really is in the detail.  Don’t be the one shedding tears over a datacentre outage.  Leave that to the post-January 2014 Design-certified datacentres!  (Sorry, terrible pun I know…).