Guidelines and best practices for virtual systems in health care IT

What are the guidelines for using virtual systems in health care IT? We offer nine best practices, plus a glimpse into the future of virtual computer environments.

Best practices in health care IT change quickly: A best practice today may become a poor one tomorrow, as hardware and software change. So, my guidelines for using virtual systems are simple:

  • Purchase systems based on the best cost for the performance you need.
  • Use common sense when purchasing any new technology -- use standard hardware and operating systems where possible.
  • Understand the difference between a standard clinical application and an application that’s critical to saving lives, and never try to force virtualization on the latter.

The virtual systems I use are mostly for business applications. I have some virtual systems for clinical applications, but they are smaller, and have limited user access and small databases. Most importantly, all the virtual systems I run have 100% vendor approval. If your vendor tells you it can’t support the system in a virtual environment, you should believe it. If you force a virtual environment on a vendor, you will not get support when you need it the most.

Based on my experiences, I offer nine more best practices:

  1. Know the risks. Remember that not all servers are good candidates for virtualization. If there are high risks associated with a server going down, make sure you build in redundancy if you decide to virtualize it.
  2. Start small. Servers that have low CPU use, low I/O and standard operating systems are the easiest to virtualize. Don’t try to virtualize the largest or the most critical server you manage.
  3. Back up. Back up a virtual server’s clients often. Use the host server’s ability to take snapshots of virtual clients, use the host’s monitoring tools, and plan a standard recovery process. (A minimal snapshot-scripting sketch follows this list.)
  4. Keep your patches up to date. Don’t be lax in installing all the security and OS patches for the virtual environment. It’s better to stay ahead of the curve.
  5. Virus protection. Running a virtual environment doesn’t protect you from viruses. There is always someone out there who will try everything in the book to drop a virus on a virtual system, so protect your virtual servers.
  6. Automate systems management procedures. When the number of virtual hosts and clients starts to grow, it behooves you to automate your systems management procedures. Don’t forget to automate your server recovery procedures as well. It doesn’t help when you are under pressure to rebuild a virtual environment and have to do everything manually.
  7. Monitor. Once you build the virtual environment, make sure you run monitoring tools to ensure that errors are captured and reviewed. Install alarms at thresholds that, if exceeded, would cause performance problems. This means monitoring CPU use, available disk space, memory use and swapping. Sometimes end users get very excited about a specific application and open it up to more users than the configuration was designed to handle. (See the second sketch after this list.)
  8. Reclaim unused resources. Virtual servers cost real money. If you build an elaborate virtual server farm but its resources are not used, resize and rebuild it. No one is served well if the virtual servers are hardly utilized.
  9. Create standards and stick to them. A virtual server environment should have the same level of standards as a regular server environment. Don’t create one-off virtual systems that don’t adhere to standards.
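
As a minimal sketch of the snapshot habit in item 3, the script below takes a named snapshot of every running guest on a host. It assumes a Linux host running KVM with libvirt’s virsh command-line tool available; the naming scheme and the Python wrapper are illustrative assumptions, not a specific vendor’s procedure, and any snapshot routine should still be paired with a tested recovery process.

    #!/usr/bin/env python3
    # Illustrative sketch: snapshot every running guest on a KVM/libvirt host.
    # Assumes the `virsh` CLI is installed; names and scheduling are assumptions.
    import subprocess
    from datetime import datetime

    def running_guests():
        # `virsh list --name` prints one running domain name per line
        out = subprocess.run(["virsh", "list", "--name"],
                             capture_output=True, text=True, check=True)
        return [name for name in out.stdout.splitlines() if name.strip()]

    def snapshot(guest):
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        # Create a named snapshot so it can be listed and reverted to later
        subprocess.run(["virsh", "snapshot-create-as", guest,
                        f"{guest}-{stamp}", "--description", "scheduled backup"],
                       check=True)

    if __name__ == "__main__":
        for guest in running_guests():
            snapshot(guest)
            print(f"snapshot taken for {guest}")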

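The threshold alarms in item 7 can start as something as simple as the sketch below, which polls CPU, memory, swap and disk use on a host. The limits shown and the use of the psutil library are assumptions for illustration; in practice the alarms would feed whatever monitoring console the shop already runs.

    #!/usr/bin/env python3
    # Illustrative sketch: threshold alarms for a virtualization host.
    # The thresholds and the psutil dependency are assumptions, not article details.
    import psutil

    THRESHOLDS = {
        "cpu_percent": 85.0,      # sustained CPU use
        "memory_percent": 90.0,   # physical memory in use
        "swap_percent": 25.0,     # heavy swapping hurts guests badly
        "disk_percent": 80.0,     # host or datastore disk fill level
    }

    def check_host(disk_path="/"):
        alarms = []
        cpu = psutil.cpu_percent(interval=1)
        if cpu > THRESHOLDS["cpu_percent"]:
            alarms.append(f"CPU at {cpu:.0f}%")
        mem = psutil.virtual_memory().percent
        if mem > THRESHOLDS["memory_percent"]:
            alarms.append(f"memory at {mem:.0f}%")
        swap = psutil.swap_memory().percent
        if swap > THRESHOLDS["swap_percent"]:
            alarms.append(f"swap at {swap:.0f}%")
        disk = psutil.disk_usage(disk_path).percent
        if disk > THRESHOLDS["disk_percent"]:
            alarms.append(f"disk at {disk:.0f}%")
        return alarms

    if __name__ == "__main__":
        for alarm in check_host():
            print("ALARM:", alarm)  # in practice, page or email the on-call admin
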
The future of virtual systems

What does the future hold for virtual computing environments? Well, I don’t wish to sound crazed, but the sky really is the limit. Vendors are aggressively building hardware redundancy into the virtual enclosures offered today. This will only become better. I believe some of the major hardware vendors will integrate virtual server environments into their storage area network (SAN) solutions with elaborate central management, auto failover and auto recovery. This could be called a virtual server network system (VSNS).

As with SAN farms -- where the systems manager no longer decides where the spindles are that hold his database -- VSNS farms will have multiple virtual server enclosures. Based on use and resource allocation, the VSNS manager will automatically decide where to build or move virtual systems. It will monitor virtual servers and move virtual clients on the fly to where the best performance and available resources are. It will take automatic snapshots and rebuild virtual servers automatically when they fail. The VSNS manager will know when server hardware components, such as CPUs, memory and network interface cards, start to fail and redirect users automatically to the snapshot-recovered virtual server.

I also believe that the servers hosting virtual systems will become significantly larger in a very short time. We are already using quad-core, 64-bit x86-64 Intel Corp. processors in some of our largest virtual hosts. The larger and faster Intel Itanium processors are becoming available in the latest blade enclosures. These processors will be embraced in future virtual OS environments. I believe Unix vendors will expand virtual capabilities in their operating systems, probably in the next year.

The most difficult future steps, specifically in health care, will be getting application developers to embrace virtual server environments. These application vendors will be slow to move to a virtual environment because to them it is a complete restructuring of their applications. They will find little benefit in verifying their applications in a virtual environment unless they can show performance improvement for less cost. Picture archiving and communication systems will need to go through a completely new and costly U.S. Food and Drug Administration recertification process. The one thing that may move the application vendors is the customer telling them the application has to run in a virtual environment.

Al Gallant is the director of technical services at Dartmouth-Hitchcock Medical Center in Lebanon, N.H. Let us know what you think about the story; email editor@searchhealthit.com.

This was first published in March 2010
