On Friday 21st March 2025, operations at Heathrow ground to a halt. 1,350 flights were cancelled or diverted, impacting 300,000 passengers and halting the transportation of millions of pounds’ worth of goods. The headlines and commentary that followed all concluded the same thing: “lessons must be learned.” Precisely what those lessons are, however, remains up for debate. Crestchic’s Paul Brickman explores what businesses and other sites where power is mission critical can learn from the crisis.

Heathrow: What we know 

The energy secretary, Ed Miliband, has commissioned the independent National Energy System Operator (NESO) to investigate and assess the UK’s energy resilience. Lessons. Must. Be. Learned.

At the time of writing, the latest information suggests that the fire started in a transformer within National Grid’s North Hyde substation. While finger-pointing still abounds, and a national enquiry is just out of the starting blocks, we are yet to learn exactly what happened when the power went down and – critically – why one of the world’s busiest airports went offline for almost a whole day. 

Whatever the root cause, the airport stayed closed for the best part of a day. The incident serves as a huge wake-up call. As well as being a blow to the economy – with the fallout likely to run to billions of pounds – the outage leaves unanswered questions about the resilience of power infrastructure at other critical sites.

What about the backup? 

Sources suggest that the airport had access to emergency backup power, including diesel generators, batteries and a biomass power generator. However, the backup was only enough to power critical systems, such as landing equipment and runway lights. With Heathrow’s electrical demand estimated to be around 40 MW, the fact that backup power couldn’t extend to running escalators, bridges and baggage carousels seems unsurprising. But is it? 

DC Byte has reported that the Virtus London 2 and Ark Union Park data centres, which draw around 50 megawatts (MW), were connected to the same North Hyde substation whose failure brought Heathrow to a halt. Yet neither suffered a loss of power. Why? The data centre sector works to stringent uptime targets. Goals of 99.999% uptime, combined with the extreme costs of downtime, make the industry deeply risk averse. Operators back up their grid supplies 100% – and then some – with enough redundancy to stay operational in the event of a grid outage.
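
To put that “five nines” figure in context – it is a common industry benchmark rather than a number reported for these specific sites – a quick back-of-the-envelope calculation shows how little downtime it actually allows:

```python
# How much downtime does a given uptime target allow per year?
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Maximum downtime per year (in minutes) for a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

print(allowed_downtime_minutes(99.999))  # "five nines": ~5.3 minutes per year
print(allowed_downtime_minutes(99.9))    # "three nines": ~526 minutes (~8.8 hours)
```

Against a budget of barely five minutes a year, an outage lasting most of a day is simply not survivable – hence the layers of redundancy.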

Start with the basics, then level up 

It’s already been widely agreed that lessons must be learned. So, here’s a starter for ten – a takeaway for all businesses: to achieve true power resilience, backup power must be sized to fit. The new Ark data centre is rumoured to have 12 backup generators – suggesting downtime is not a risk it is willing to entertain.

First, identify critical loads. What does your business need to keep operational? Calculate their total power draw, factor in the desired runtime, and add a safety margin to ensure sufficient capacity for unexpected fluctuations. Better still, bring in the experts to do it for you. 
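
As a rough illustration of that arithmetic – using entirely hypothetical figures, not Heathrow’s actual loads – the Python sketch below shows how the pieces fit together: sum the critical loads, add a safety margin, then check the fuel store against the desired runtime.

```python
# Illustrative sizing sketch only - all load figures are hypothetical.
critical_loads_kw = {
    "landing_systems": 800,
    "runway_lighting": 600,
    "it_and_comms": 1200,
    "security_screening": 900,
}

SAFETY_MARGIN = 0.25         # 25% headroom for surges, inrush and future growth
desired_runtime_h = 24       # hours of autonomy required
fuel_burn_l_per_kwh = 0.27   # rough diesel consumption; varies by genset and load

total_load_kw = sum(critical_loads_kw.values())
required_capacity_kw = total_load_kw * (1 + SAFETY_MARGIN)
# Conservative fuel estimate: assume the gensets run at full rated capacity.
fuel_needed_l = required_capacity_kw * desired_runtime_h * fuel_burn_l_per_kwh

print(f"Critical load:      {total_load_kw:,.0f} kW")
print(f"Generator capacity: {required_capacity_kw:,.0f} kW")
print(f"Fuel for {desired_runtime_h} h:      {fuel_needed_l:,.0f} litres")
```

In practice, a specialist would refine each of those numbers from metered data rather than estimates – which is exactly the point of bringing in the experts.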

Once you’ve got your backup power systems in place, it is critical to ensure that they work as they should. An early comment from Ed Miliband suggested the fire “appears to have knocked out a backup generator”. While we are yet to learn whether the backups actually failed, maintaining and testing backup power generators is a piece of the power resilience puzzle that should not be overlooked.

If you want your backup power to work, testing it with a loadbank is critical. 

www.crestchicloadbanks.com