Server virtualization has existed for a number of years now, and healthcare organizations are largely finding that virtualization decreases operating costs because of hardware consolidation. Even so, server virtualization introduces new challenges that might not have necessarily existed when an organization used physical hardware exclusively. As such, it's important for healthcare organizations to look ahead and try to anticipate some of the challenges that they could eventually encounter as a result of using server virtualization.
One of the single biggest challenges that healthcare organizations typically face after virtualizing some or all of their servers is that of performance. Performance typically isn't a serious issue for a physical data center because physical hardware often delivers far greater capabilities than the software actually requires. The whole concept of server virtualization is based on this idea and on the idea that organizations can reduce costs by making better use of existing hardware by consolidating and virtualizing server workloads.
The big problem with this is that once a physical server starts handling multiple virtual workloads, managing and monitoring physical hardware resources becomes significantly more important. If physical hardware resources are overused, multiple virtualized workloads can suffer from performance degradation. This can lead to users reporting poor performance (or even lockups) across multiple applications.
From performance to virtual machine density
First-generation server virtualization products required administrators to spend a lot of time on capacity planning before virtualizing server workloads. This allowed administrators to create virtual machines (VMs) and allocate resources in a way that ensured virtual servers would be provisioned with adequate hardware resources to meet workload demands for the foreseeable future.
Today, hypervisor manufacturers such as VMware and Microsoft have tried to develop hypervisor features that reduce the importance of long-term capacity planning, while making it possible to increase a host server's VM density. Such features as dynamic memory make it possible for VMs to claim additional hardware resources on an as-needed basis, and to release those resources during periods of low utilization.
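The behavior dynamic memory features provide can be sketched in a few lines. This is an illustrative simulation, not any vendor's actual API: the VM's assigned memory tracks demand, but is clamped between a configured floor and ceiling.

```python
# Illustrative sketch (not a real hypervisor API): a VM with dynamic
# memory claims RAM up to a configured maximum as demand rises and
# releases it back to the host when demand falls.

class DynamicMemoryVM:
    def __init__(self, minimum_mb, maximum_mb):
        self.minimum_mb = minimum_mb
        self.maximum_mb = maximum_mb
        self.assigned_mb = minimum_mb  # start at the floor

    def adjust(self, demand_mb):
        """Track current demand, clamped to the [minimum, maximum] range."""
        self.assigned_mb = max(self.minimum_mb, min(demand_mb, self.maximum_mb))
        return self.assigned_mb

vm = DynamicMemoryVM(minimum_mb=512, maximum_mb=4096)
print(vm.adjust(3000))  # demand spike: grows to 3000
print(vm.adjust(8000))  # demand above the cap: clamped to 4096
print(vm.adjust(100))   # idle period: shrinks back to the 512 floor
```

The important detail for what follows is the ceiling: the sum of every VM's maximum can easily exceed the host's physical RAM, which is exactly the over-commitment risk discussed next.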
It's difficult to argue against the benefits of dynamic hardware usage, but those benefits come at a price. Dynamic hardware provisioning makes it easy to over-commit physical hardware resources. When this happens, multiple virtualized workloads on the host server can begin to exhibit performance and/or stability problems.
Because of this, one of the big challenges for administrators going forward will be to achieve the maximum possible VM density in an effort to receive the best possible return on the organization's hardware investment. At the same time, however, these same administrators will have to work diligently to ensure all VMs consistently deliver an acceptable level of performance.
This big challenge is made even more difficult by the fact that VMs rarely experience consistent workloads. The demand placed on them changes throughout the day, based on user activity. For example, virtualized domain controllers typically see a major spike in activity early in the morning when users are first logging on, but they are practically idle throughout the rest of the day.
One more factor that contributes to the difficulty of ensuring adequate virtual server performance is the simple fact that virtual servers are no longer bound to a single physical host. Virtual machines can easily be moved from one host server to another. Live-migrating a VM to an already overloaded virtualization host can deplete the host of hardware resources.
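A simple admission check before a live migration guards against this scenario. The sketch below is hypothetical (the host and VM figures are made up, and real hypervisors perform far more sophisticated placement checks), but it captures the idea: refuse a migration that would leave the destination host without a safety reserve.

```python
# Hypothetical pre-migration check: refuse to live-migrate a VM onto a
# destination host that lacks the memory headroom to absorb it.
# All figures are illustrative, not drawn from any real API.

def can_accept(host_total_mb, host_used_mb, vm_mb, reserve_mb=2048):
    """True if the host can absorb the VM and still keep a safety reserve."""
    return host_total_mb - host_used_mb - vm_mb >= reserve_mb

# A 64 GB host already using 60 GB cannot safely take a 4 GB VM...
print(can_accept(65536, 61440, 4096))  # False
# ...but the same host using only 32 GB can.
print(can_accept(65536, 32768, 4096))  # True
```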
So, how is a virtualization administrator in a healthcare organization to deal with these sorts of challenges? The best bet typically is to take a twofold approach to making sure that VMs always have the physical hardware provisioning they need.
Two-pronged approach to creating VMs
The first of the two approaches is simply to create virtual machines in a responsible manner. If a VM theoretically should never consume more than 2 GB of RAM, there is no reason to configure the virtual memory to allow for a maximum of 8 GB of RAM. Virtual servers sometimes have a sneaky way of consuming the memory that's given to them, even if that memory isn't really necessary. This same approach can also be applied to other forms of hardware allocation. Virtual servers should always be allocated the CPU cores, storage, and memory resources they really need, but they should never be provisioned with more resources than that.
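One way to keep allocations honest is to periodically compare each VM's allocated memory against its observed peak usage. The following is a minimal sketch with made-up inventory data; the VM names and headroom factor are assumptions for illustration.

```python
# Illustrative right-sizing check (hypothetical data): flag VMs whose
# allocated memory far exceeds their observed peak usage.

def overprovisioned(vms, headroom=1.25):
    """Return names of VMs allocated more than `headroom` x observed peak."""
    return [name for name, (allocated_mb, peak_mb) in vms.items()
            if allocated_mb > peak_mb * headroom]

inventory = {
    "web01": (8192, 1800),    # 8 GB allocated, ~1.8 GB peak: oversized
    "db01":  (16384, 14500),  # sized close to its real peak: fine
}
print(overprovisioned(inventory))  # ['web01']
```

A modest headroom factor leaves room for normal variation while still surfacing VMs that were provisioned far beyond what they ever use.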
Some might be quick to argue that virtual servers should be provisioned with some extra hardware resources, just in case the way the VM is used changes. However, from a workload management standpoint, it's often better to create a brand-new virtual server for a new application than to try to run the new application on an existing virtual server. Using each virtual server for a single purpose makes its workload much more predictable.
The second facet of the two-pronged approach is to make use of third-party management and monitoring software. A number of third-party products exist that can monitor large numbers of VMs in real time, and create detailed reports based on how the provisioned hardware is being used. Many such applications will allow administrators to set threshold values and generate alerts when those values are exceeded. For example, an alert might be generated when a host server's total available physical memory falls below 2 GB.
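The threshold-and-alert pattern described above can be sketched in a few lines. This is not any monitoring product's API; the host names, threshold, and message format are all assumptions for illustration.

```python
# Minimal sketch of threshold-based alerting (hypothetical values, not a
# real monitoring product's API): raise an alert when a host's available
# physical memory drops below a configured floor.

LOW_MEMORY_THRESHOLD_MB = 2048  # the 2 GB floor from the example above

def check_host_memory(host_name, available_mb):
    """Return an alert string if the host is below the floor, else None."""
    if available_mb < LOW_MEMORY_THRESHOLD_MB:
        return f"ALERT: {host_name} has only {available_mb} MB free"
    return None

print(check_host_memory("hv-host-01", 1536))  # fires an alert
print(check_host_memory("hv-host-02", 8192))  # healthy: None
```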
In addition to generating alerts, it's sometimes also possible to take automated corrective action when threshold values are exceeded. For example, if a host server experiences 90% CPU utilization across all CPU cores for more than a few seconds, there's a very good chance that the VMs are consuming more CPU resources than the host server can comfortably provide. In such a situation, management software might detect the excessive CPU consumption and automatically live-migrate the VM generating the greatest workload to a less heavily utilized host.
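The decision logic behind that corrective action can be sketched as follows. This is a hedged illustration, not a real management product's algorithm; the cluster data, host names, and per-VM CPU shares are all invented, and a production tool would also verify the destination can absorb the migrated VM.

```python
# Hedged sketch of the corrective action described above: if a host shows
# sustained high CPU use, pick its busiest VM and move it to the least
# loaded host. All names and numbers are illustrative.

def pick_migration(hosts, cpu_threshold=90):
    """Return (vm, source, destination) for the first overloaded host, or None."""
    for name, info in hosts.items():
        if info["cpu_pct"] >= cpu_threshold and info["vms"]:
            vm = max(info["vms"], key=info["vms"].get)  # busiest VM by CPU share
            dest = min((h for h in hosts if h != name),
                       key=lambda h: hosts[h]["cpu_pct"])
            return vm, name, dest
    return None

cluster = {
    "host-a": {"cpu_pct": 95, "vms": {"ehr-app": 40, "file-srv": 15}},
    "host-b": {"cpu_pct": 35, "vms": {"dc01": 10}},
    "host-c": {"cpu_pct": 60, "vms": {"mail": 30}},
}
print(pick_migration(cluster))  # ('ehr-app', 'host-a', 'host-b')
```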
Virtualizing server workloads can help a healthcare organization reduce costs, but it doesn't solve every problem. Such organizations often find that they have to commit additional resources to managing and monitoring servers once they have been virtualized.
About the author:
Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server and IIS. He has served as CIO for a nationwide chain of hospitals, and once was in charge of IT security for Fort Knox. Write to him at email@example.com or contact @SearchHealthIT on Twitter.