
Don MacVittie





The Cloud Is in a Datacenter By @DMacVittie | @CloudExpo #Cloud #Containers

Virtualization, cloud, and containers share one simple premise – make it easier for folks to get machines that do what they need


Funny thing about the never-ending discussions of cloud, virtualization, and containers out here in pundit-land… Most writers blithely ignore the one truth that all of us need to be reminded of on occasion.

Your cloud is built on hardware.

Yes indeed, I did say that out loud. Talk about fifteen layers of server/network virtualization all you like, SDX your way into the 22nd century, but never forget that someone somewhere is racking and stacking to make it happen.

Why does that matter? Well for a lot of reasons, though they can be broken into the usual categories – public and private. On-premises and off.

Virtualization, cloud, and containers share one simple premise – make it easier for folks to get machines that do what they need, when they need it. Which is cool, but easier (and/or cheaper) always results in more. And more means that underlying hardware has to grow. It also means the complexity of the infrastructure grows.

On-Premises
If you’re running VMs, a cloud, or containers on your internal network, increasing volume directly translates to increasing costs – both capital and operational. This matters more than we’d like to admit, simply because the business wants more agility, and even more control in their IT systems, but rarely does a VP or CEO end such a speech with “and we’re going to invest more to get there”.

This leaves an IT organization in the all-too-common situation of "do more with less", but as these technologies have shown (at least virtualization and cloud have – containers are still headed there), "more" tends to be a huge multiplier. Sure, you can cram more onto physical hardware, but only so much more. And sure, without the need for as much hardware administration you can focus more tightly on systems administration, but in the end the growth in usage outstrips the growth in productivity. Search on "VM sprawl" if you don't know what I mean.
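The sprawl arithmetic above can be sketched with a few hypothetical numbers (none of these figures come from the article; they only illustrate how lower per-server effort still loses to a faster-growing server count):

```python
# Illustrative back-of-the-envelope model of VM sprawl. All numbers
# are hypothetical: easier provisioning cuts the admin time per
# server, but the server count grows faster than productivity does.

def weekly_admin_hours(servers: int, hours_per_server: float) -> float:
    """Total weekly administration load for a fleet."""
    return servers * hours_per_server

# Before virtualization: 50 physical servers at 2 hours/week each.
before = weekly_admin_hours(50, 2.0)

# After: per-server effort drops by half, but cheap provisioning
# triples the number of servers ("VM sprawl").
after = weekly_admin_hours(150, 1.0)

print(before, after)  # total load grows despite the productivity gain
```

The point of the sketch is the shape of the curve, not the numbers: any scenario where the server count grows faster than per-server effort shrinks ends with a busier team.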

You can further streamline operations with automation and DevOps configuration tools like Stacki and Puppet, but again, there is a limit to how much time you can gain to reinvest in all the new servers being spun up on the hardware and the maintenance/troubleshooting of those servers.

Clustered applications, such as big data and cloud platforms, make this even more time consuming because (a) they require more cross-functional coordination than other apps, and (b) they tend to grow, precisely because they are seen as useful additions to the offering.

Off-Premises (Service Provider, Cloud)
Off-premises hosting has a different set of considerations: the growth in hardware is somebody else's problem, but the real growth area – the number of servers – is still in your court.

And “somebody else’s problem” is not really a good thing. There are several confounding issues with cloud providers, depending upon their size.

Large providers
Large cloud providers straight-up aren’t interested in your needs. Yes, they’ll be polite about it, but try to get wire-level security information out of them – even propose a way you can get it and filter it to your own machines only – and see how far you get.

I was sitting in the room when one of the world’s largest companies tried to get this exact information out of one of the world’s largest cloud providers. It was ugly. This customer – and not just any customer, but the CIO of a massive customer – even threatened to pull all business if there was zero flexibility from the provider side. There was zero flexibility on the provider side. A whole new category of “cloud app” had to be created at this customer, and those applications shipped off to a provider that would allow them to insert wire-level forensics tools into the communications chain – in a co-hosted environment.

There’s also the question of “it’s so massive, we don’t know”. Scan the AWS forums (NOTE: Just using AWS as an example because I’ve been through it with them… Other vendors have the same problems) for “missing machine” or similar. Inevitably the techs start from a position of “it’s the customer’s fault”. They do that because in the majority of cases they’re right (full disclosure, they were with me). But not always. And honestly, even if they’re right, at the time of customer contact they should acknowledge that they’re probably dealing with a desperate IT person who just wants to know if (s)he can get the server back. When they’re wrong, and AWS really did swallow the machine, the remedies are not very acceptable. Or weren’t when I had reason to go down this path.
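Before opening a “missing machine” ticket, it helps to walk in with evidence rather than a hunch. A minimal sketch of that reconciliation check follows – both sides are hard-coded here for illustration; in practice the provider-side set would come from the provider’s instance-listing API, and the names are hypothetical:

```python
# Sketch of an inventory reconciliation for the "missing machine"
# problem: compare what we believe we own against what the provider
# currently reports. Both sets are hard-coded examples; in real use
# the provider side would be populated from an API listing.

def missing_machines(our_inventory: set, provider_side: set) -> set:
    """Instances we track that the provider no longer reports."""
    return our_inventory - provider_side

ours = {"web-1", "web-2", "db-1"}       # our own change-tracked records
provider = {"web-1", "db-1"}            # what the provider lists today

print(missing_machines(ours, provider))  # {'web-2'} -- open a ticket with this
```

Keeping your own inventory outside the provider is the whole trick: if the only record of a server lives inside the platform that lost it, the support conversation starts from zero.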

Finally, there’s networking. Totally virtualized networking does work astoundingly well for the general use case of “get requests to my machines and responses back”, but when you enter the enterprise environment of specialized requirements, large providers designed for near-consumer-grade provisioning of servers just can’t help you. I’m pretty close to an expert in AWS, and their networking options are great, and yet, I’ve had problems that were unresolvable without control of both sides of the issue – the network infrastructure and the virtualized network. We ended up working around them, and in fairness to AWS, one of them was a corporate policy for connectivity, the other was by virtue of using BIG-IP functionality cross-network… But solutions that worked anywhere else were not an option at the time in AWS, and probably aren’t today (for the same reasons). In short, if it’s below layer 7, they can’t/won’t help you.

With large providers, what they do, how they do it, and what you can expect from them are all pretty well documented a thousand times over – the good and the bad – on the Internet. Accept them for what they are, and keep good backups of cloud data, just in case. And do not forget that at this time it is nearly impossible to get your VMs out of these providers intact. If you are moving providers, hopefully you have some astounding scripts to recreate the servers and copy out the data. Likewise, the top providers are listed half a billion times on the Internet, so I won’t repeat them here. I do suggest that you look at Telcos while considering large providers. They don’t have the name recognition, but they certainly have the offerings and infrastructure of the large cloud providers.

Small Providers
Small providers are a completely different breed. Yes, they will give you better service, and with added functionality/flexibility may even end up less expensive… But you have to be careful about where your trusted data is residing. Flashy web pages and easily provisioned machines are both easy in this day and age, so it is entirely possible that your small provider with great service is located in a basement that could be flooded tomorrow by an overflowing toilet. Or could go out of business next week. I’m not saying all small providers are a problem, I am saying you have to be a little careful.

It’s like the early days of the Internet… I worked for an ISP/message board company back in the day, writing Perl. They were pretty well known, and were well liked. They were also a sweatshop. Only a few of us were regular employees who were treated well; all of the Ops staff were underpaid and overworked, and really didn’t care about the customer… Not to say this is all bad – the company launched the careers of a lot of people who didn’t go to college, because few of us with degrees would work there – but it did have a negative impact on employee attitudes.

In short, with a small provider you need to visit them. See the datacenter and talk to the employees – not the person trying to get you to sign on the dotted line, but the Ops team. Understand their financials if you can. Of course they don’t want to share that kind of information with you, particularly if you’re a hard negotiator – it’s like handing you ammo. But at least get enough information that you’re comfortable they have a reasonable amount of life left in them.

The nice part about small providers is that you can generally put whatever requirements you need into place. Instead of telling you “All our instances cost the same, you’re not special”, they’re likely to work with you (possibly for more compensation, but not always) to achieve your goals.

I do know a couple of astounding small providers, but alas, small providers tend to be regional, and I don’t want to list companies with a small geographic focus in an internationally read blog. Search them out locally, through city government agencies or online.

Mid-Tier Providers (Including Vertical-Specific Providers)
These are the best and the worst of the crowd. They can be like large providers, or like small providers, but the best of them are in that sweet spot in-between. They still have the flexibility to be a trusted partner helping you meet your goals, yet are large enough that revenues are less of a concern. These are the companies that will assign you a person to work with you and get things configured just the way you need them.

Their only unique challenge is growing pains. Like the days of outsourced IT, the price you negotiated when you set up your agreement may have been astoundingly lucrative for them, and they reciprocated by giving you astounding service, but as time goes on, the value of that contract goes down – both because of inflation and because of a growing customer base. Best to renegotiate yearly to make certain both sides are satisfied with the relationship.

And for everything but co-hosting, make certain they’re using a platform you approve of, and that you have an exit plan. Since these providers generally use off-the-shelf solutions like VMware or OpenStack, make sure you can pull your images whenever you like. Hopefully you never need to exercise that right, but even shifting an app that is ill-suited to co-hosting back into your DC is easier with such a clause in place.
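One way to keep that exit plan real, whether or not you can pull images, is to keep your server definitions as data you control, so the fleet can be regenerated at any target. A hypothetical sketch follows – the `provision` command and its flags are placeholders, not a real CLI:

```python
# Sketch of an exit plan built on declarative server definitions:
# the inventory lives with you, not the provider, so provisioning
# commands can be re-rendered for any target. The "provision" CLI
# and its flags below are hypothetical placeholders.

servers = [
    {"name": "web-1", "size": "small", "image": "ubuntu-lts"},
    {"name": "db-1",  "size": "large", "image": "ubuntu-lts"},
]

def recreation_plan(inventory, target="new-provider"):
    """Render one provisioning command per server for the target."""
    return [
        f"provision --provider {target} --name {s['name']} "
        f"--size {s['size']} --image {s['image']}"
        for s in inventory
    ]

for cmd in recreation_plan(servers):
    print(cmd)
```

Tools like the Stacki and Puppet configurations mentioned earlier serve the same purpose at a larger scale: the definition of each machine outlives any one provider relationship.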

And again, either demand they have a backup plan for your servers that meets your organization’s requirements, or institute your own.

I do know one or two mid-tier providers. I’ll mention Canada151 Data Centers as an option (full disclosure: I am friends with some of their management, and intend to use them for a project starting in a month or so – but I trust them with my project, and am willing to pay them, so that says something about their promise). They’re new, but are approaching the market with a fresh outlook and a state-of-the-art datacenter, aiming at both Canadian and international clients.

Others are a little more difficult to find online. If you need one that is local, again look to local government listings of businesses, or search online with a local filter. A general search in your country or region of the world is harder for this category, but possible. Try “managed hosting” or “co-hosting facility” in Google searches; that should get you started at least.

Summary
There is a lot to consider when working with a hosting provider. The companies I deal with typically end up with a combination: a Big Three cloud provider for public websites that need no access to the DC, shared hosting for dynamic sites that do need access to the DC and have specific security requirements, and internal hosting for extremely sensitive apps or apps with heavy interconnectivity requirements with internal systems.

That middle tier is one you need to consider seriously. Take your time, get to know your potential vendors. Yes, I get that you have a project timeline, and it’s likely short. But you’re going to want a longer-term relationship with whomever you choose, unless you like follow-on projects to move things between providers… So breathe for a minute and choose someone who is accessible, responsible, and willing to be your partner rather than your provider. You’ll thank yourself. Negotiate prices less harshly, and focus on level of service more. And make sure it’s a place you can just “walk into”. It’s a business, it needs to be accessible to its customers, even if they don’t want you in the DC unescorted (hope they don’t. If they do, who else is in there – with your servers – unescorted?).

Honestly, if you know a good hosting facility, or have a solid recommendation, start there. I listed ways to find some in case someone is starting from scratch… But recommendations and personal knowledge should obviate the need for that kind of search… Unless you’re looking for comparisons leading up to negotiations.

And remember, in the end, if that infrastructure goes out, no matter where you are hosted, you’re going to have a problem. The bigger vendors handle it better, but still, there have been astoundingly large outages at some of the biggest providers in the world, so expect it, have a plan, know what you’ll do, and what your various providers offer while outages are going on.


More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.