Virtualization 2.0?
The driving force behind every major wave of change in IT, and perhaps in every major industry, has been solving a “pain point”. LANs emerged as a way to get processing capability to all the users in the organization, not just us propeller heads writing and running the software in the raised-floor sanctuary. The PC extended that to the home user and facilitated the distributed, networked computing era of the 1990s. Storage arrays solved the need to handle the exploding data volumes ushered in with the web. Advanced graphics processors and faster CPUs enabled the GUIs through which we all interface with computers today. Disk-based backups, sophisticated SAN clones, snap copies, etc. all helped keep systems online and gave other systems, such as backup and decision support, access to the business data without interfering with the transaction systems running the business.
So what is driving the Virtualization 2.0 movement? As Dan Kusnetzky points out in his blog:
With the proper planning and correct implementation, the use of virtualization technology can bring the following benefits:
• Higher application availability than can be found on a single industry standard system
• Scalability beyond what can be found using a single industry standard system
• Higher application performance
• Optimization of current environment
• Application agility and mobility
• Streamline application development and delivery
• No need to over provision to obtain reasonable service levels
• “Green computing” (lower power consumption, lower heat generation, smaller datacenter footprint)
I would add to that list:
• Disaster recovery that is a true recovery capability vs. just a plan
• Efficient asset management (retiring servers by moving the server “image” to a new system with minimal interruption)
• Consolidation of distributed data centers by moving the images, not the hardware
But, as a strategist from one of the most capable virtualization companies, I have to echo his question:
Will this be enough?
To the casual observer, this looks like a long list of solutions to many different problems, and adoption should be a “no brainer”. Yet we have seen quite a bit of resistance to changing and adopting this “new technology”. I use the quotes because, IMHO, virtualization has been around as long as I have been in IT… nearly 30 years (see my white paper ‘Virtualization beyond Hypervisors - enabling reliable DR, disrupting best practices’). Many of the reasons are related to the need to separate “processing” from “storage”. Again I use quotes to emphasize something that should be second nature, not something that seems foreign.
Some of what looks like resistance to change may come down to risks, some perceived and some real, in the widespread use of virtualization. Some of these risks we have faced before.
• Increased impact of a system failure – In the late 1990s, enterprise-class UNIX machines became so large that they had to be partitioned to use their capacity effectively. I forbade our company from using them in production due to the risk that a single system failure would take down multiple applications simultaneously. The same is true today with virtualized servers running multiple VMs.
• Troubleshooting complexity – In the open systems world, troubleshooting performance problems is known as “pushing the bottleneck around”. That is, if you are constrained by CPU, add CPUs. Now you may be constrained by RAM, so add RAM. Now it is I/O, so add I/O capacity. Now it is CPU again… In the virtual world, it is far more difficult to pinpoint what (or who) is causing the problem (a rough sketch of the kind of single-host check this starts with appears after this list).
• Lack of mature systems management tools – It took the systems management vendors decades to perfect (if you can make that claim) the tools in use today. Unfortunately, their capabilities assume a static environment, and most often they do not react well to dynamic changes in infrastructure operations and configuration.
• Business process change – ITIL best practices are based on keeping things the same, or managing change at a very granular level to prevent the introduction of problems during changes. Change, configuration and capacity management processes (among others) will have to be revamped to accommodate environments that change dynamically.
• SLA impact – Ensuring that specific machines provide guaranteed response times can become very difficult in an environment where workloads can move between machines while running and all underlying hardware is shared. While it might not mean an outage, it could certainly degrade performance in a noticeable way.
• Machine sprawl – The very problem that VMware supposedly solved is returning in the virtual world, worse than before. x86 machines proliferated so rapidly because they were cheap and easy to deploy. Now we have fewer physical boxes, but the rate of VM proliferation is greater than that of the physical machines before them, because VMs are even cheaper and easier to deploy. System administrators may soon find themselves with more (virtual) servers than they have the ability to manage; even a crude inventory report, like the second sketch after this list, can at least make the sprawl visible.
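To make the “pushing the bottleneck around” point a bit more concrete, here is a minimal sketch, assuming Python with the psutil library, of the kind of single-host check an administrator might run to see which resource is the current constraint. The sample window and the disk-busy approximation are my own assumptions, and the numbers a guest OS reports on a virtualized server can be misleading, which is exactly why pinpointing the culprit gets harder.

```python
# Rough single-host "where is the bottleneck?" check using psutil.
# This only reflects the view from inside one OS image; on a virtualized
# host the hypervisor's counters are what really matter.
import psutil

SAMPLE_SECONDS = 5

io_before = psutil.disk_io_counters()
cpu_pct = psutil.cpu_percent(interval=SAMPLE_SECONDS)  # averaged over the window
io_after = psutil.disk_io_counters()

mem_pct = psutil.virtual_memory().percent
swap_pct = psutil.swap_memory().percent

# Approximate disk "busy" percentage from time spent in reads/writes
# during the sample window (similar in spirit to iostat's %util).
busy_ms = ((io_after.read_time - io_before.read_time) +
           (io_after.write_time - io_before.write_time))
disk_pct = min(100.0, 100.0 * busy_ms / (SAMPLE_SECONDS * 1000))

usage = {"CPU": cpu_pct, "RAM": mem_pct, "Swap": swap_pct, "Disk I/O": disk_pct}
for name, pct in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:8s} {pct:5.1f}%")
print("Most constrained resource:", max(usage, key=usage.get))
```

In a consolidated environment this same check has to be correlated across every guest and the hypervisor itself before you know whose workload is actually pushing the bottleneck.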
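On the sprawl point, even a crude report goes a long way. The sketch below is a toy Python example over a made-up inventory export; the field names, the sample records and the 90-day idle threshold are illustrative assumptions, not any particular vendor’s schema, and a real report would pull from whatever tool actually tracks the VMs.

```python
# Toy "VM sprawl" report over a hypothetical inventory export.
# Field names, records and the idle threshold are illustrative assumptions.
from datetime import date, timedelta

IDLE_THRESHOLD = timedelta(days=90)
TODAY = date(2008, 6, 1)  # fixed date so the example is reproducible

inventory = [
    {"name": "web-01",  "owner": "ecommerce", "last_used": date(2008, 5, 30)},
    {"name": "test-17", "owner": None,        "last_used": date(2007, 11, 2)},
    {"name": "dev-42",  "owner": "qa",        "last_used": date(2008, 1, 15)},
]

# A VM is a sprawl suspect if nobody owns it or it has sat idle too long.
suspects = [vm for vm in inventory
            if vm["owner"] is None or TODAY - vm["last_used"] > IDLE_THRESHOLD]

print(f"{len(inventory)} VMs inventoried, {len(suspects)} sprawl suspects:")
for vm in suspects:
    reason = "no owner" if vm["owner"] is None else "idle > 90 days"
    print(f"  {vm['name']:8s} ({reason})")
```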
While I don’t think any of these risks is insurmountable, together they may explain why the adoption of more widespread virtualization is meeting resistance.