
Don MacVittie




Rebuilding After Disaster: DevOps is the first step

[Image: a flooded datacenter. Picture compliments of FS.com.]

Not our flooded DC, but similar.

If you’ve ever stood in the ruins of what was once your datacenter and pondered how much work you had to do and how little time you had to do it, then you probably nodded at the title. If you have ever worked to get as much data off of a massive RAID array as possible with blinking red lights telling you that your backups should have been better, you too probably nodded at the title.

I have experienced both of these situations. A totally flooded datacenter (caused by broken pipes in the ceiling) sent us scrambling to put together something workable while we restored normal function. The water had come through the ceiling and was a couple of feet deep, so the destruction was pretty thorough. In a different role, a RAID array with many disks lost one disk, and before our service contractor could come to replace it (less than 24 hours later), two more went. Eventually, as more people than just us had problems, the entire batch of disks that this RAID device's drives came from was determined to be faulty. Thing is, a ton of our operational intelligence was on those disks in the form of integrations – the system this array served knitted together a dozen or so systems on several OS/DB combinations, and all the integration code was stored on the array. The system was essentially the cash register of the organization, so downtime was not an option. And I was the manager responsible.

Both of these scenarios came about before DevOps was popular, and in both scenarios we had taken reasonable precautions. But when the fire was burning and the clock was ticking, our reasonable precautions weren’t good enough to get us up and running (even minimally) in a short amount of time. And that “minimally” is massively important. In the flood scenario, the vast majority of our hardware was unusable, and in a highly dynamic environment, some of our code – and even purchased packages – was not in the most recent set of backups. That last bit was true with the RAID array also. We were building something that had never been done before at the scale we were working on, so change was constant, new data inputs were constant, and backups – like most backups – were not continuous.

With DevOps, these types of disasters are still an issue, and some of the problems we had will still have to be dealt with. But one of the big issues we ran into – getting new hardware, getting it installed, getting apps onto it, and getting it running so customers/users could access something – is largely taken care of.

With provisioning – server and app – and continuous integration, the environment you need can be recreated in a short amount of time, assuming you can get hardware to run it on, or can run it hosted or in the cloud for the near term.

Assuming you are following DevOps practices (I’d say “best practices”, but this is really more fundamental than that), you have configuration and reinstall information in GitHub or Bitbucket or something similar. Getting some of your services back online then becomes a case of downloading and installing a provisioning tool like Stacki or Cobbler, hooking it to a configuration management tool like Puppet or SaltStack, and pulling your configuration files down to start deploying servers, from RAID configuration up to the application.
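As a minimal sketch of what “configuration in version control” buys you here, consider a Puppet manifest like the one below. The node name, package, and service are hypothetical examples, not from any real environment; the point is that a handful of files like this, kept in Git, let a freshly imaged box converge to a known state instead of being rebuilt from memory.

```puppet
# Hypothetical example manifest (site.pp) kept in version control.
# Node name and package choices are illustrative only.
node 'app01.example.com' {
  # Ensure the web server package is present...
  package { 'nginx':
    ensure => installed,
  }

  # ...and that the service is running and starts on boot.
  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }
}
```

Run against a newly provisioned machine, a manifest like this rebuilds the service layer automatically; the bare-metal install underneath it is what a tool like Stacki or Cobbler handles.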

Will it be perfect? Not at all. If your organization has gone all-in and keeps network configuration information in a tool like Puppet with the Cisco or F5 plugins, for example, it is highly unlikely that the short-term network gear you use while working things out with the insurance company will be configurable from that information. But having DevOps in place will save you a lot of time, because you don’t have to rebuild everything by hand.

And trust me, at that instant, the number one thing you will care about is “How fast can I get things going again?”, knowing full well that the answer to that question will be temporary while the real problems are dealt with. Anything that can make that process easier will help. You will already be stressed trying to get someone – be it vendor reps for faulty disk drives or insurance reps for disasters – out to help you with the longer-term recovery, so the short term should be as automatic as possible.

Surprisingly, I can’t say “I hope you never have to deal with this”. It is part of life in IT, and I honestly learned a ton from it. The few thousand lines of code and the tens of thousands of billable records we lost in the RAID incident were an expensive lesson, but we came out stronger and more resilient. The flooded datacenter gave me a chance to deal with insurance on a scale most people never have to, and (with the help of the other team members, of course) to build a shiny new datacenter from the ground up – something we all want to do. But if you have a choice, avoid it. Since you don’t generally have a choice, prepare for it. DevOps is one way of preparing.

 


More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.