
Don MacVittie




The Rubik’s Cube of IT

Cloud Will One Day Make Your IT Life Easier

Rubik’s Cube was first patented in 1974. The first book describing a solution algorithm was published in 1981. In 2007, computers were used to deduce the smallest number of moves needed to solve a cube, and in 2008 that number was reduced further. That’s 34 years after it was invented. And it’s just a toy.


I’ve danced around this quite a bit, but it’s time to hit it head on. The maturing of server virtualization, the growth of virtual desktops, the introduction of cloud, and the deduplication of at-rest data are all going to have a huge impact on your data center and your IT operations. Unfortunately, at this time it appears that impact is going to be an increase in complexity. Like Rubik’s Cube, to move new stuff into the correct position, you’re going to have to shuffle about some already-solved problems.

Virtual desktops will lighten the load of what must be deployed on each user’s desk, but the hardware will still be on the desk. That means they add overhead in the complexity department while giving you more ability to control licensing and to clean up after an infection, an employee termination, whatever. They also allow you to reduce your desktop budget, but that money will go into the VDI infrastructure, at least early on. The complexity comes from having a desktop machine plus more servers to handle the virtual desktops, plus the network bandwidth to switch your hundreds, thousands, or hundreds of thousands of users to running over the network.

In some ways, these technologies make life easier: it is certainly easier to clean up after a successful attack if you are virtual, because at this point in time you simply dump the virtual and load a clean copy. The problem is that it is only a matter of time before jailbreaking a virtual becomes a reality; then it will be harder, not easier, to clean up after a successful hack, with some outrageous number of virtuals potentially at risk. And that doesn’t even begin to address what it will look like (or whether you will ever know) when the same scenario happens in the cloud. Think I’m full of it? Then you should brush up on your cyber-crime history. Bad guys have been proving that nothing is impossible, and attacking right where you’re gloating has been the blitzkrieg of hacking since about the time Rubik’s Cube was invented.


Cloud will one day make your IT life easier. Today is not one day. You will have to secure connections to cloud providers, do performance testing with the cloud included, install infrastructure to load balance out to the cloud and back, and deploy a global-DNS-style product to direct requests to the correct place, be it the cloud, one of multiple data centers, or a SaaS vendor. You will also have to make certain that your security policy both enables cloud-based applications to access databases in the DC and protects your customers’ and employees’ personal data on par with what you’re doing today. You’re going to have to scramble to stay ahead of departmental cloud usage with standards and requirements for cloud applications, and you’re going to have to review cloud-based departmental apps for compliance with security policy, with an eye to how you would fold them back into the DC if you needed to.
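The "global DNS style" routing decision described above can be sketched in a few lines. This is a hypothetical illustration, not any particular product's API: the destination names and the simple health-and-capacity policy are assumptions made for the example.

```python
# Hypothetical sketch of a global-DNS-style routing decision:
# pick the first destination (data center, cloud, or SaaS) that is
# healthy and has headroom. Names and policy are illustrative only.

DESTINATIONS = [
    {"name": "primary-dc", "healthy": True, "at_capacity": False},
    {"name": "cloud-burst", "healthy": True, "at_capacity": False},
    {"name": "saas-vendor", "healthy": True, "at_capacity": False},
]

def route(destinations):
    """Return the first destination that is healthy and not at capacity."""
    for dest in destinations:
        if dest["healthy"] and not dest["at_capacity"]:
            return dest["name"]
    raise RuntimeError("no destination available")

# Normally traffic stays in the data center...
print(route(DESTINATIONS))  # primary-dc

# ...but when the DC is saturated, requests spill out to the cloud.
DESTINATIONS[0]["at_capacity"] = True
print(route(DESTINATIONS))  # cloud-burst
```

A real global load balancer would fold in geography, latency, and persistence, but the core job is this same prioritized health-based decision.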


Storage vendors are hailing deduplication of primary storage as the greatest thing to hit storage since RAID… And in some respects it is rather awesome. The problem is that the primary storage dedupe I’m hearing about is vendor-specific. It would be cool (that’s Don-speak for “here comes a pipe dream”) if the vendors got together and agreed on a standard mechanism for dedupe that used the storage itself to contain the dedupe store; then you could transfer from one vendor to another without rehydration. Why is this necessary? Well, let’s say you have a 10 TB array holding 8 TB of deduped data. How much space do you suppose you would need to rehydrate that data? Almost certainly more than 10 TB. That means to get off the array, you would either need a dedupe engine that operates as the data is written to disk (some do), or you’d have to buy a monstrously large target array that would most likely end up with about 8 TB of deduped data on it. Not very efficient. So the complexity here lies more in limited options than in operational complexity.
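The back-of-the-envelope math behind that rehydration problem is worth spelling out. The 3:1 dedupe ratio below is an illustrative assumption (real ratios vary widely by workload); the 10 TB array and 8 TB of deduped data come from the scenario above.

```python
# Rehydration math for the migration scenario described above.
array_capacity_tb = 10   # size of the source array
deduped_data_tb = 8      # deduplicated data stored on it
dedupe_ratio = 3         # logical bytes per stored byte (assumed)

# Rehydrating expands the data back to its logical size.
rehydrated_tb = deduped_data_tb * dedupe_ratio

print(rehydrated_tb)                       # 24
print(rehydrated_tb > array_capacity_tb)   # True: it won't fit back
```

Even at a modest ratio, the rehydrated data dwarfs the source array, which is why the escape hatches are either inline dedupe on the target or a much larger target array.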


I agree with the many other writers who point out that this stuff makes for the data center of the future. It absolutely does… Things are going to look rather different because of these technologies, but while we’re getting there, you are going to have more work to do. We at F5 can help with several of these issues, at the cost of another device to administer… so you see the issue: no matter which way you turn, integrating all of this into your existing infrastructure is going to take time, patience, and a little bit of genius. Thankfully, genius is not something IT is short of. Time and patience, well, that’s another story.

And, like Rubik’s Cube, the closer we get to completion, the more steps will be introduced to solve the problem. But eventually we’ll get the 22-move solution, and our lives will be easier. Or so the theory goes.

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.