In trying to imagine a worst-case disaster scenario for the data centre, you'd be hard-pressed to pick an event more frightening than Hurricane Sandy in 2012.
On October 29, 2012, Hurricane Sandy turned sharply toward New Jersey's coast. A storm of that size and intensity left little doubt in business owners' minds that flooding was likely, and that its impact on the Northeast would be severe: coastal flooding threatened to disable not only sections of New Jersey but parts of Manhattan as well.
For many data centre operators, Sandy was shaping up to be a nightmare scenario. For the Telx team, however, the storm was an opportunity for data centre personnel to put years of training and planning to the test.
During the hurricane and its aftermath, Telx data centres never went down. Despite massive, long-duration power outages, sketchy communications, impassable roads, rationed fuel, and displaced personnel, Telx data centres in New York and New Jersey remained operational.
Weathering this storm took years of planning and experience, an eye for the unknown, and an understanding that even a small amount of downtime is unacceptable in today's business environment.
How prepared are you for a disaster near your data centre? Evaluate the factors below to find out.
Location, Location, Location
Where is your data centre located? Is it in a flood zone or an area with excessive seismic activity? Alternatively, is all of your data located in one facility or geographic location, or is it spread out over a larger area?
While flooding may be more likely closer to the coasts, there are still coastal locations that lie outside the 500-year flood plain, like Telx's three New Jersey locations. Paying careful attention to location within or outside of flood plains, and within different seismic zones, can give you a better understanding of how likely a natural disaster is at any given location.
The actual location of the data centre isn't all that matters, however. Proximity to infrastructure that may be needed after a disaster is also essential. A location may be outside the flood plain but inaccessible after a flood because surrounding roads aren't.
Ideally, your data would be housed in several different facilities across a variety of locations so that even if something does happen to one location, resources can be quickly diverted to avoid downtime.
Contingency Plan, or Scrambling After a Crisis?
Until around five years ago, the status quo was to have a disaster recovery plan that described how to recover from a disaster or unplanned service interruption. Thanks to virtualisation, cloud computing, and data replication, however, the mindset has since shifted to one of business continuity, in which teams retain access to systems and information from nearly any connected location.
Your business will be in a much better position to avoid downtime if you think more along the lines of business continuity and less along the lines of disaster recovery. The former prioritises diverse connectivity options that safeguard access to computing, as well as precautionary steps to take should a natural disaster appear on the horizon.
For true business continuity, the facility you occupy should not only house important resources in multiple locations; it should also offer generator, cooling, and power redundancy and have technical support in place should you need it. Having mission-critical data backed up before a disaster ensures continuity should disaster ever strike.
Before, During, and After a Disaster, People Are Your Most Important Asset
Your team should be well-versed in the ins and outs of your contingency plan. You should also have plans in place for who must stay on-site and who can work remotely. (This also means verifying that the systems your people rely on for remote work actually function.) What's more, you must have established lines of connectivity between business units so operations can continue normally.
Even in a disaster situation (or perhaps especially then), digital redundancy and technical plans must be backed up by a team on the ground that can ensure full continuity. Telx facilities are staffed 24/7 for assistance when needed.
Last year, Facebook shut down one of its data centres entirely to stress-test its resiliency plan. While you probably don't have to go that far, it's absolutely essential to subject your own plan to regular testing and revision to ensure maximum uptime in the event of a disaster.
If you can test your infrastructure and plan annually or quarterly, you’ll be that much more prepared if disaster ever strikes.
Hope for the Best, Prepare for the Worst
Telx weathered one of the worst storms the East Coast has seen in recent memory by focusing on safe, resilient locations, maintaining a contingency plan with a well-trained team to back it up, and constantly testing and revising to keep its business continuity/disaster recovery (BC/DR) plan current.
After the dust settled and operations went (mostly) back to normal, Michael Allen Seeve, president of Mountain Development Corp., remarked that "Hurricane Sandy was a great reminder that there are professionals with years of experience anticipating and preparing for these events. The smartest of them pre-planned and worked with locals to build infrastructure and connectivity to withstand the unknown."
No one ever wants to face disaster, but having a plan backed by the right resources can help business continue as normal after disaster strikes. Even if your business never has to face another Hurricane Sandy, it never hurts to be over-prepared.
If you'd like to learn more about Telx's Business Continuity and Disaster Recovery services, please visit our BC/DR page. For all other questions, reach out to us via the contact page of our site, or on Facebook or Twitter. We would be happy to answer any of your questions!