
Tip: Next-gen challenges of virtualizing server workloads in healthcare

Server virtualization has existed for a number of years now, and healthcare organizations are largely finding that it decreases operating costs through hardware consolidation. Even so, server virtualization introduces new challenges that did not necessarily exist when an organization used physical hardware exclusively. As such, it's important for healthcare organizations to look ahead and try to anticipate some of the challenges they could eventually encounter as a result of using server virtualization.


One of the biggest challenges healthcare organizations typically face after virtualizing some or all of their servers is performance. Performance typically isn't a serious issue in a physical data center because physical hardware often delivers far more capability than the software actually requires. The whole concept of server virtualization is built on that surplus: organizations can reduce costs by consolidating server workloads and making better use of the hardware they already have.

The big problem with this is that once a physical server starts handling multiple virtual workloads, managing and monitoring physical hardware resources become significantly more important. If physical hardware resources are overused, multiple virtualized workloads can suffer performance degradation. This can lead to users reporting poor performance (or even lockups) in multiple applications.

From performance to virtual machine density

First-generation server virtualization products required administrators to spend a lot of time on capacity planning when virtualizing server workloads. This allowed administrators to create virtual machines (VMs) and allocate resources in a way that ensured virtual servers would be provisioned with adequate hardware resources to meet workload demands for the foreseeable future.

Today, hypervisor manufacturers such as VMware and Microsoft have tried to develop hypervisor features that reduce the importance of long-term capacity planning, while making it possible to increase a host server's VM density. Such features as dynamic memory make it possible for VMs to claim additional hardware resources on an as-needed basis, and to release those resources during periods of low utilization.

It's difficult to argue against the benefits of dynamic hardware usage, but those benefits come at a price. Dynamic hardware provisioning makes it easy to over-commit physical hardware resources. When this happens, multiple virtualized workloads on the host server could begin to exhibit performance and/or stability problems.
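To see how quickly dynamic provisioning can over-commit a host, consider the following rough Python sketch. It simply totals the maximum memory each VM is allowed to claim and compares that figure to the host's physical memory; the VM names and numbers are hypothetical stand-ins for data an administrator would pull from the hypervisor's management tools.

```python
# Hypothetical inventory: maximum dynamic memory (in GB) each VM may claim.
vm_max_memory_gb = {
    "domain-controller": 8,
    "ehr-app-server": 16,
    "file-server": 8,
    "test-server": 8,
}

host_physical_memory_gb = 32

total_committed = sum(vm_max_memory_gb.values())
overcommit_ratio = total_committed / host_physical_memory_gb

print(f"Committed: {total_committed} GB on a {host_physical_memory_gb} GB host "
      f"(ratio {overcommit_ratio:.2f})")

# A ratio above 1.0 means the VMs could collectively demand more memory than
# the host actually has, which is when performance and stability problems begin.
if overcommit_ratio > 1.0:
    print("Warning: dynamic memory maximums exceed physical memory.")
```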

Because of this, one of the big challenges for administrators going forward will be to achieve the maximum possible VM density in an effort to receive the best possible return on the organization's hardware investment. At the same time, however, these same administrators will have to work diligently to ensure all VMs consistently deliver an acceptable level of performance.

This big challenge is made even more difficult by the fact that VMs rarely experience consistent workloads. The demand placed on them changes throughout the day, based on user activity. For example, virtualized domain controllers typically see a major spike in activity early in the morning when users are first logging on, but they are practically idle throughout the rest of the day.

One more factor that contributes to the difficulty of ensuring adequate virtual server performance is the simple fact that virtual servers are no longer bound to a single physical host. Virtual machines can easily be moved from one host server to another. Live-migrating a VM to an already overloaded virtualization host can deplete the host of hardware resources.
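A simple safeguard is to verify a destination host's spare capacity before live-migrating a VM to it. The Python sketch below illustrates the idea as a hypothetical pre-migration check; the thresholds and host figures are illustrative assumptions, and a real check would read its numbers from the hypervisor or a monitoring tool.

```python
def can_accept_vm(host_free_memory_gb, host_cpu_utilization_pct,
                  vm_memory_gb, memory_headroom_gb=2, cpu_ceiling_pct=75):
    """Return True if the destination host can take the VM without
    depleting its own resources (thresholds are illustrative)."""
    enough_memory = host_free_memory_gb - vm_memory_gb >= memory_headroom_gb
    enough_cpu = host_cpu_utilization_pct <= cpu_ceiling_pct
    return enough_memory and enough_cpu

# Hypothetical destination host: 6 GB of free memory, 80% CPU utilization.
if can_accept_vm(host_free_memory_gb=6, host_cpu_utilization_pct=80, vm_memory_gb=4):
    print("Safe to live-migrate the VM to this host.")
else:
    print("Destination host is already too busy; pick another host.")
```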

So, how is a virtualization administrator in a healthcare organization to deal with these sorts of challenges? The best bet typically is to take a twofold approach to making sure that VMs always have the physical hardware provisioning they need.

Two-pronged approach to creating VMs

The first of the two approaches is simply to create virtual machines in a responsible manner. If a VM theoretically should never consume more than 2 GB of RAM, there is no reason to configure the virtual memory to allow for a maximum of 8 GB of RAM. Virtual servers sometimes have a sneaky way of consuming the memory that's given to them, even if that memory isn't really necessary. This same approach can also be applied to other forms of hardware allocation. Virtual servers should always be allocated the CPU cores, storage, and memory resources they really need, but they should never be provisioned with more resources than that.
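To make the "allocate only what's really needed" idea concrete, here is a minimal Python sketch that derives a memory allocation from observed peak usage plus a modest safety margin instead of a generous "just in case" maximum. The usage samples and the 25% headroom factor are assumptions made up for illustration.

```python
import math

# Hypothetical memory usage samples (GB) collected for one VM over a busy week.
observed_usage_gb = [1.1, 1.3, 1.5, 1.4, 1.2, 1.0, 0.9]

peak = max(observed_usage_gb)
headroom_factor = 1.25          # 25% margin over the observed peak
recommended_gb = math.ceil(peak * headroom_factor)

print(f"Observed peak: {peak} GB; recommended allocation: {recommended_gb} GB")
# Here the result is 2 GB, not the 8 GB a "just in case" allocation might grant.
```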

Some might be quick to argue that you should provision virtual servers with some extra hardware resources, just in case the way the VM is used changes. However, from a workload management standpoint, it's often better to create a brand-new virtual server for a new application than to try to run the new application on an existing virtual server. Using each virtual server for a single purpose makes its workload much more predictable.

The second facet of the two-pronged approach is to make use of third-party management and monitoring software. A number of third-party products exist that can monitor large numbers of VMs in real time, and create detailed reports based on how the provisioned hardware is being used. Many such applications will allow administrators to set threshold values and generate alerts when those values are exceeded. For example, an alert might be generated when a host server's total available physical memory falls below 2 GB.
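Monitoring products implement this kind of thresholding in their own way, but the underlying check is simple. The sketch below is a minimal, hypothetical version of the 2 GB free-memory alert described above; the host names and readings are placeholders, and in practice the values would come from the monitoring tool rather than being hard-coded.

```python
LOW_MEMORY_THRESHOLD_GB = 2.0   # alert when free host memory drops below this

def check_host_memory(host_name, free_memory_gb):
    """Emit an alert if the host's free physical memory falls below the threshold."""
    if free_memory_gb < LOW_MEMORY_THRESHOLD_GB:
        print(f"ALERT: {host_name} has only {free_memory_gb:.1f} GB of free memory.")
    else:
        print(f"OK: {host_name} has {free_memory_gb:.1f} GB free.")

# Hypothetical readings from two virtualization hosts.
check_host_memory("hyperv-host-01", free_memory_gb=1.4)
check_host_memory("hyperv-host-02", free_memory_gb=9.8)
```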

In addition to generating alerts, it's sometimes also possible to take automated corrective action when threshold values are exceeded. For example, if a host server experiences 90% CPU utilization across all CPU cores for more than a few seconds, there's a very good chance that the VMs are consuming more CPU resources than the host server can comfortably provide. In such a situation, management software might detect the excessive CPU consumption and automatically live-migrate the VM that is generating the greatest workload to a less heavily utilized host.
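The decision logic behind that sort of automated response can be sketched in a few lines. The example below is a simplified, hypothetical policy: it only prints the migration it would recommend, because an actual live migration would be performed through the hypervisor's own management tooling.

```python
CPU_THRESHOLD_PCT = 90

# Hypothetical snapshot of an overloaded host and its VMs (CPU share in %).
overloaded_host = {"name": "hyperv-host-01", "cpu_pct": 94,
                   "vms": {"ehr-app-server": 55, "file-server": 20, "print-server": 10}}

# Hypothetical candidate destination hosts and their current CPU utilization.
other_hosts = {"hyperv-host-02": 35, "hyperv-host-03": 60}

if overloaded_host["cpu_pct"] > CPU_THRESHOLD_PCT:
    # Pick the VM generating the greatest workload...
    busiest_vm = max(overloaded_host["vms"], key=overloaded_host["vms"].get)
    # ...and the least-utilized destination host.
    target_host = min(other_hosts, key=other_hosts.get)
    print(f"Recommend live-migrating {busiest_vm} from "
          f"{overloaded_host['name']} to {target_host}.")
```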

Virtualizing server workloads can help a healthcare organization reduce costs, but it doesn't solve every problem. Such organizations often find that they have to commit additional resources to managing and monitoring servers once they have been virtualized.

About the author:
Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server and IIS. He has served as CIO for a nationwide chain of hospitals, and once was in charge of IT security for Fort Knox. Write to him at editor@searchhealthit.com or contact @SearchHealthIT on Twitter.

This was first published in March 2013
