
Don MacVittie


The Age of Ultimation By @DMacVittie | @DevOpsSummit #Cloud #DevOps

Review your automation and make a decision on what to do moving forward

While we were sweating VM bloat, and throwing barbs at each other about VMs versus containers, it snuck up on us. While some were decrying that cloud would end the datacenter, while others laughed and said public cloud was a non-starter, and still more scoffed at hybrid cloud, it snuck up on us. While big-name vendors hawked "data center in a box" solutions that had integration and automation all sewn up, and bare metal automation was growing up, it jump-scared us. While application integration tools and CI/CD were earning their stripes, it stood up and announced "I AM ULTIMATION".

We have reached the point where there is nothing, configuration-wise, that you can't do quickly and reliably. From bare-metal installers like Satellite or Stacki, to full-on application dependency and configuration tools like Puppet and SaltStack, to spinning up lightweight VM alternatives with Kitematic's beta or command-line Docker and Vagrant, the tooling is there.
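As a flavor of what "quickly and reliably" looks like with a configuration tool like Puppet, here is a minimal, hypothetical manifest: you declare the end state and the tool converges the machine to it. The package name and file paths are illustrative assumptions, not anything from a specific deployment.

```puppet
# Hypothetical example: keep nginx installed, configured, and running.
package { 'nginx':
  ensure => installed,
}

file { '/etc/nginx/nginx.conf':
  ensure  => file,
  source  => 'puppet:///modules/nginx/nginx.conf',
  require => Package['nginx'],     # install before managing the config
  notify  => Service['nginx'],     # restart the service when config changes
}

service { 'nginx':
  ensure => running,
  enable => true,
}
```

Run the same manifest against one box or a thousand; the declarative style is what makes "reliable" possible at scale.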

And it is time for us to take advantage of it. The “self-healing network”? Can finally be real and heterogeneous. Automated installation and operation is a 90% thing – nearly all software install/config hurdles can be jumped with other software, leaving hardware failures and the corner-case software issue.

Right Tool for the Job.
We in IT have a tendency to get on a bandwagon and ride along, sometimes getting violently defensive of our choices, even if they aren't the best way to utilize or evangelize the solution at hand. I point to the articles declaring that public cloud spelled the end of the data center as an extreme example. The premise was at best very far off, and at worst ludicrous. Yet people defended it. Don't be that person. Every single technology has strengths to be utilized and weaknesses to be avoided. Evaluate your technology usage in light of today's reality, and adjust accordingly.

For example, containers are great development environments. They're contained, and you can completely destroy the inside of one – it's easy to set back up, and the rest of the network is protected from it. But those who are declaring that each function will be its own container aren't acknowledging the management nightmare such an environment would create, and certainly aren't considering the overhead of that many containers in a datacenter. Containers are lightweight, not zero-weight. Lightweight just means not as expensive as a VM; it doesn't mean free. And when you're talking thousands (or tens of thousands) per app, the cost has implications. As does the network impact of that many containers zipping packets around. That part is easier to manage by grouping containers per machine, but doing so effectively would require some kind of non-IP communication and a thorough way to evaluate communication patterns before deployment (and during, for frequently reused functions). We're not there yet – not even worth considering today.

But Use the Tools
The thing is, find the power of each of these platforms, and use them!

Some VMs exist to do one simple thing and are never visited by IT staff. Do they warrant a full copy of the OS, or could they be put onto a container server and run that way? Most could be containers, saving you resources and even man-hours.

Development should start moving toward containers where it makes sense. They're easier to develop in, and if it's a stand-alone app with a single entry point, they're easier to deploy too. But the real power is "We don't care what your hardware is, our development environment is in this container". An age-old timewaster solved in seconds. You'll have to maintain the versioning, security, etc. in the container, but that's one container you're maintaining, not 500 laptops.
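The "our development environment is in this container" idea can be sketched with a short, hypothetical Dockerfile; the base image tag, the globally installed tools, and the mount path are all assumptions for illustration:

```dockerfile
# Illustrative shared dev image: the team's toolchain is baked in once,
# instead of being maintained on every developer laptop.
FROM node:4

# Pin the build/test tools everyone uses (versions are assumptions).
RUN npm install -g grunt-cli mocha

# Developers bind-mount their local checkout at run time, e.g.:
#   docker run -it --rm -v "$(pwd)":/src devimage bash
WORKDIR /src
CMD ["bash"]
```

Build it once, push it to an internal registry, and any developer on any host OS gets an identical environment with one `docker run`.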

We all know what VMs are good for; that's why we have so many of them (I have six running on work machines right now, and four on my home dev machine... and that's just the ones powered on, and I'm just one technical employee). They're great for separating tasks, protecting machines (it is far easier to back up a VM than a hardware install in most instances), and abstracting hardware. I personally have VMware VMs running on Windows and Linux, and VirtualBox VMs running on Windows, Linux, and Mac, and I guarantee my business is not as large as yours.

But we’re currently using them for everything. Look into ways to use containers for smaller/more specific loads, and VMs for larger workloads. In the development space, I’ve left my Android development environment in a VM, but use Docker for most Node.js, for example. There are still plenty of uses for VMs, but there’s room to re-evaluate if you need them for everything.

Private cloud is good for quick-turnaround solutions, but both historical and anecdotal evidence say don’t put long-term solutions there yet, because upgrades and failures cause so many headaches. It’s coming, and faster than we should expect, but for now, just “easy up, easy down” type apps.

Public cloud is good for public things. This angers many still in the cloud community, but it’s true. Public website? Good choice. Depending upon your taste for “new and different”, you can put your whole IT function out there, but very few enterprises are, even with the sweet selection of ops automation tools being developed by the big three vendors. I’d watch the space and the growth, but I wouldn’t rush to move everything.

Hybrid cloud is, even today, feeling its way along. Lots of organizations want to have the expandability of public cloud, but keep their terabytes of data internal, along with security systems. That’s reasonable, but it makes for a tough implementation cycle for hybrid cloud.

Automation tools make hardware deployment as easy as the other items listed here, but where these days is hardware in use aside from as the underlying infrastructure for the above deployment environments?

Large data processing and other resource-intensive projects. Yes, you can (and some have) put Hadoop in the cloud, but the vast majority of serious users do it on hardware. Some few run it on VMs, and a tiny fraction would even consider the cloud. There is a belief (that is probably accurate) that IoT will change that... assuming IoT blows up beyond the B2B/controls layer, since aggregators are best built in the cloud. Time will tell.

And as mentioned above, you're going to be putting hardware under most of the solutions listed here – all but a subset of the "cloud" category – so I would consider full automation. Particularly since there won't be 40 ways to set up machines: the variation will come from the hardware side, because software-wise, machines will look the same within each group. VMware machines will have mostly the same software on them; OpenStack has only a few architectural variations (yes, there are a ton if you're using all of the available features/options/add-ons, but for the most part we're really not there yet); and the same goes for Hadoop, where your install will require the same base level of software with Hadoop installed over that.

That’s it.

Of course you know what works in your organization, so some of these may be slightly different, or you may have better solutions in mind. The point is that you need to try it, not just think about it.

So I offer you this Ultimation: Review your automation and make a decision on what to do moving forward. If you’re a regular reader, you know that I don’t generally scream “You must do X!!”, and I’m not now. I’m reminding you that looking at some of these technologies 24 – or even 12 – months ago doesn’t necessarily give you an up-to-date picture, and a review of what’s new both in features and use-cases is worthwhile. In fact, this blog may already be out of date.


More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.