
Virtualization Pulse


January 13, 2012  2:15 PM

Virtual Desktop Security in the Healthcare IT space



Posted by: FlorianB
desktop virtualization, EHR security, himss, HIPAA, HITECH, vdi

When we talk about application and desktop virtualization, we always talk about how virtual desktops and apps can increase the security of application delivery. That’s because the apps, desktops, and associated data live in a centrally controlled datacenter rather than on thousands of distributed endpoints.

We also encourage BYO (bring your own device) initiatives that can help reduce costs at the endpoint. IT departments let users bring their own Mac, PC, tablet, or smartphone to do their work and may provide them with a stipend to help offset the cost.

These are all great points, but these benefits do not come automatically. You have to apply some simple principles to actually secure any virtual computing environment, and this post summarizes my personal thoughts around two key questions:

  • How do I keep my data from walking out of my organization on my employee’s personal device? In the healthcare space, we have to be cautious about patient privacy under HIPAA and HITECH.
  • How do I actually save money by providing BYO?

Before I start, I have to state that I am not asserting or implying that these recommendations will stand up to industry specific security audits – if you’re working in high security environments, you should plan your security practices along with your auditors and security experts. I am also not implying  that you’ll save a specific amount of money with BYO – that all depends on your current and future organizational structure and other factors.

Now, with the disclaimers out of the way, here we go:

  •  The network: Conceptually, it’s pretty simple. Regard your corporate network outside of the datacenter as “dirty”. Think about it: with employees and third-party users plugging in around your offices, you have no way to control network security, enforce antivirus standards, prevent malware, etc. So don’t stress about it – treat that network the same way your favorite coffee shop treats its wireless network: as Internet access only. Because you’re moving all apps and their data into the datacenter, focus your security there and treat the datacenter as “clean” by applying rigorous security and antivirus practices. Put firewalls and gateways between the datacenter and the rest of the corporate network. Seamless access to applications and desktops is provided by Citrix Access Gateway.
  •  Access to the datacenter: Don’t allow wide-open VPN access; allow direct access only to published apps and desktops. No drive mapping, no direct access to file shares from the endpoint.
  •  Access to data: If you prevent client drive mapping at the endpoint, you will effectively prevent users from copying corporate data onto their personal devices. As an additional layer of security, you could disallow any access to file shares other than through the servers that host your virtual apps or desktops (see the conceptual sketch after this list).
  •  Client drive mapping: Don’t allow it – for the reasons stated above.
  •  Client-side printing: Try not to allow it – control all printing via central print servers so that users can’t print to their personal devices. You’ll have to think this through, as printing may be an important part of the workflow for employees who work from home. However, certain things should not be printed outside of a controlled environment (patient or personal financial data, for example).
  • Client USB storage devices: Don’t allow them, to prevent files from being saved to a user’s personal device.
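To make these rules concrete, here is a minimal, purely conceptual Python sketch of the access model described above. The resource names and the policy itself are illustrative assumptions; real enforcement happens in your access gateway, firewalls, and virtualization policies, not in a script.

```python
# Conceptual sketch of the "dirty network / clean datacenter" access model.
# Resource names are illustrative; real enforcement lives in the gateway,
# firewalls, and virtualization policies.

ALLOWED_FROM_ENDPOINTS = {"published_app", "published_desktop"}   # via the access gateway only
ALLOWED_INSIDE_DATACENTER = {"file_share", "print_server"}        # reachable only from hosting servers

def is_allowed(source: str, resource: str) -> bool:
    """source is 'endpoint' (dirty corporate network) or 'hosting_server' (clean datacenter)."""
    if source == "endpoint":
        # No open VPN, no drive mapping, no USB storage, no direct file shares.
        return resource in ALLOWED_FROM_ENDPOINTS
    if source == "hosting_server":
        return resource in ALLOWED_FROM_ENDPOINTS | ALLOWED_INSIDE_DATACENTER
    return False

assert is_allowed("endpoint", "published_desktop")       # users reach published desktops/apps
assert not is_allowed("endpoint", "file_share")          # but data never leaves the datacenter
assert is_allowed("hosting_server", "file_share")        # apps/desktops reach data internally
```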

Now that you have done all that, you may wonder how you prevent users from emailing data directly to their personal email accounts.  This is not necessarily an easy thing to do. Within the virtual desktops and apps, start by blocking common web email sites, file sharing services, etc. You may have to employ common web security technologies to block web proxies that allow your users to simply circumvent some of the restrictions you put in place. Note that because you’ll regard your office network outside of the datacenter as “dirty”, there won’t be a need to block users from personal email access or other Internet resources.

What about offline users? Well, that’s a little bit tricky. Thanks to 3G personal hotspots and the wide availability of Wi-Fi even on airplanes, the times when we’re truly offline are very limited. If you have users who need to be offline, consider handling their data through enterprise-level file sharing services such as Citrix ShareFile. Note that the emphasis is on “enterprise-level”, as popular consumer services allow users to move sensitive data to unmanaged and untrusted cloud services. Citrix XenClient, specifically the XT version, can also provide isolation and security on an offline device.

This is it for security – again, the main goal here is to keep your data from walking out of the organization when an employee leaves the company and takes their BYO endpoint with them.

Speaking of devices:  In order to actually achieve any kind of cost savings with BYO, I recommend that you consider this a mandatory program once you’re ready. When your infrastructure is live, discontinue the issuance and support of corporate devices. You’ll have to do this over a period of time so as to not disrupt current users, but whenever someone’s PC or laptop comes up for renewal, allow the user to choose between a BYO stipend or an IT owned thin client (for those not required to travel).

I’d like to hear your thoughts. Please share your comments…

Florian Becker

Twitter: @florianbecker

My Healthcare blog: https://searchhealthit.techtarget.com/healthitexchange/virtualizationpulse

Citrix Consulting wisdom: http://community.citrix.com/p/consultingsolutions

March 17, 2011  9:00 AM

Virtual Desktops…from the cloud…literally



Posted by: FlorianB
desktop virtualization, EMR, himss, HIPAA

HiMSS 2011 is just behind us and many of us are still digesting all the information, sessions, products, and news we were taking in during that lovely week in Orlando.
Well, it’s time to think about the next conference that is worth attending… and – you guessed it – it’s Citrix Synergy 2011 in San Francisco in May.
Those of you who follow my blogs and updates know that I care deeply about electronic medical record applications and the way they transform the delivery of care to patients. I also care deeply about desktop and application virtualization and cloud computing.
In fact, I will speak at Synergy and lead Session 329 on Desktop Transformation. To give you a sneak preview of what is possible these days, I am showing you how to get a fully functional Windows 7 virtual desktop onto your Apple iPad.
As a self-professed aviation nut, I am taking a slightly different take on the whole “cloud computing” topic, so be sure to check this out:

Desktop Virtualization in the cloud – literally

 
I hope to see you at Synergy!

Florian Becker
Twitter: @florianbecker


February 23, 2011  4:13 PM

On Cloud Computing and Virtualization – Part 3



Posted by: FlorianB

In this final part of my thoughts on cloud and virtualization, I would like to cover the security aspects presented by Feisal Nanji in session 119 at HiMSS 2011. Interested readers should have a look at Part 1 and Part 2 of this series and also download the slides from the HiMSS website.

On slide 8, Mr. Nanji explores some virtualization concerns. Among them, he explains that virtualization increases complexity in the sense that it touches multiple silos in the IT organization (apps, servers, storage, network, backup, and security). I agree with his point and predict that any IT organization that positions itself as a private cloud provider will have an organizational chart that is different from today’s. Organizations adapt with technology, and it’s a good idea to think about this before building a private cloud solution.
On the same slide, Mr. Nanji says that virtualization can cause large-scale failure. Let’s put this one into perspective. Mr. Nanji is correct when he says that the failure of a single physical server can bring multiple virtual servers down. However, thanks to virtualization, the recovery is MUCH faster.  Virtual servers can simply (sometimes automatically, depending on the design) fail over to another physical host and the user would not even notice that anything went wrong. The key here is to plan and design for failure modes and invest in High Availability (HA) and failover  where critical services are virtualized.
On the next slide, Mr. Nanji points out some of the benefits of cloud computing. However, these benefits are not automatically realized when servers are virtualized as his slide title may imply. As I point out in the previous articles in this series, virtualization is just the first step – automation & metering  are essential to converting a virtual environment into a private cloud.

Mr. Nanji is to be commended for his wonderful slides 10 through 16. He does a really nice job classifying clouds and they are a good read – I recommend that you have a look.
I’ll pick the topic back up on slides 16 and 17, where he talks about today’s enterprise security processes and asks what happens to them “in the cloud”. I assume that Mr. Nanji is thinking about a public cloud in this context, so it is important to review the security and audit practices of IaaS or SaaS vendors very carefully. It could very well be that the bulk providers are not adequately prepared to provide the required service levels for the healthcare industry, and it is entirely possible that specialty vendors (think HIPAA certified!) will emerge to serve this market.
Finally, Mr. Nanji asks the audience if it is “cloud ready”, pointing to standardized operating procedures, automated deployment management, and self-service. All those attributes would be important before building out a private cloud environment, but would not be necessary for the consumption of IaaS or PaaS types of services.

So, after having almost completed three articles on the topic this week, what are my conclusions?
 1. Public Cloud (IaaS) is not ready for healthcare any time soon. As I point out in Part 2 of this series, moving the workload that a single physical server can handle to an IaaS vendor who charges $1 per hour per VM would cost roughly $175k per year. So, EMR apps that run 24/7 can probably be provided at a lower cost internally, and the security and privacy concerns (along with audit compliance) are very real. Specialty vendors for healthcare may emerge in this space.
 2. Public Cloud (IaaS) is awesome for environments that are short lived (training, demo environments) where the higher cost over a year is easily offset by the ability to provision environments automatically.
 3. Platform as a Service for Healthcare? The answer is: It depends… on the type of app you’re looking to develop and on the security policies and SLAs that the PaaS vendor has in place. I didn’t do much research on the topic, but I’d be surprised if there were any PaaS vendors out there who can meet the complex audit and security requirements of healthcare.
 4. Software as a Service? Absolutely! In the sense that there are multiple EMR vendors offering fully web-based EMRs that meet the mandated standards, SaaS is the way to go. I would suspect that many of those vendors would not share infrastructure in the backend or allow for your organization to consume this type of software by the hour, but that hardly matters for production EMR applications that run 24/7.
 5. Finally – the Private Cloud for healthcare? Absolutely! Again – if you’re ready to make the investment in system automation and metering, you can act as a cloud provider to your business and functional units. It may enable you to move operational expenses back onto the business and out of IT, and you would control the security and compliance policies. Implementing a private cloud environment is not something you’d be able to do overnight, and it would probably have some impact on your organizational chart, but it may be worth the effort.
 6. One more… Hybrid Clouds? The answer is again “it depends”. Connecting your IT infrastructure seamlessly to a cloud to provide burst capacity or additional functionality is not exactly easy to do and has a lot of moving parts, though more and more vendors and system integrators have solutions in place to cover this case. The same security and privacy concerns that I mention for Public Cloud apply here as well.
 
Thoughts or questions? I would love to hear your comments!

Florian

Twitter: @florianbecker

Citrix Consulting Architects share real world experience: Ask the Architect


February 22, 2011  6:13 PM

On Cloud Computing and Virtualization – Part 2



Posted by: FlorianB

Here at HiMSS 2011 in Orlando, I was wondering about how cloud computing is catching on in the health care industry. Just like when I talk to other industry verticals, I still see a lot of confusion around the topic. Because of that, I decided to define virtualization in this recent blog first.

At HiMSS, I attended Feisal Nanji’s presentation “Securing Health Information in the cloud”. Feisal is an executive director at Techumen, and he did a great job giving an animated presentation about cloud computing. He started by explaining cloud computing and its variants really well, and then proceeded to describe some of the security concerns associated with the cloud. On the latter part, Feisal lost me a little bit because I felt that the concepts of cloud and virtualization were a little mixed up. Hence my attempt at providing the virtualization definition first.

So, let’s recap Feisal’s definition of cloud first:

  • Computing Capability on Demand
  • Resource Pooling – storage, CPU
  • Rapid Deployment and Scaling of IT Services
  • Easy Measurement of what’s been used.

You’ll notice that the resource pooling aspect is really what is enabled by server virtualization as I described in part 1 of this blog. The remaining three aspects need some explanation. In today’s world, you can choose to get virtual servers from so-called Infrastructure as a Service (IaaS) providers. You pick from a menu of preconfigured server specs, provide a credit card, and through a seemingly impossible amount of automation, the virtual servers are ready to go in minutes. Most of the time, you will have to install the OS into those servers and add applications and other configurations. The servers are accessible via public IP addresses and you may have to provide your own lock down and security measures. The service provider will apply sophisticated metering of your system usage so that you only pay for what you use. The time is measured in hours rather than months or years and you may only spend a dollar per hour for a server and a few pennies per GB of storage per day. This is the beauty of cloud computing – it doesn’t require you to worry about adding capacity to your own datacenter and it can be very cost effective when you need to spin up servers for short periods of time (think testing scenarios or training environments).
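To give a feel for how little effort this API-driven provisioning takes, here is a minimal sketch using Amazon EC2 through the boto3 Python SDK (a modern example of the kind of IaaS automation described above; the AMI ID is a placeholder, and configured AWS credentials and a default region are assumed):

```python
import boto3

# Minimal illustration of API-driven IaaS provisioning. Assumes AWS credentials
# and a default region are already configured; the AMI ID below is a placeholder.
ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",     # placeholder machine image
    InstanceType="t3.micro",    # small, hourly-billed virtual server
    MinCount=1,
    MaxCount=1,
)
print("Requested instance:", instances[0].id)
```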

Some service providers started to build on top of an IaaS infrastructure and provided a software developer platform, sometimes called Platform as a Service (PaaS). Google App Engine and salesforce.com’s force.com are two prominent examples. Your application developers can write code that jives with the PaaS provider’s standards, and the app will actually execute on the PaaS infrastructure. These are obviously web-based applications and again – the price point is ridiculously low. Most of the time, you can run up to a certain user or transaction volume for free, and then spend a few dollars for incremental usage. You also never have to worry about system backup or server patching as the PaaS provider does all that for you.

If you’d like to take it one step further, some providers have built an entire software suite on top of their own PaaS infrastructure (think salesforce.com) and call this Software as a Service, or SaaS.

So, as you move up the stack from IaaS to PaaS to SaaS, you have to do less and less admin work and get more and more canned applications out of it.

Feisal also did a good job distinguishing between the different cloud models:

  • Public Cloud. This is a model where your resources are hosted by a hosting provider who will use the same underlying physical infrastructure to serve the needs of other customers. Internally, the service provider has to worry about challenges associated with this “multi-tenancy” approach and isolate the workloads from different customers (“tenants”) from each other. Or maybe not? Before consuming anything from public cloud providers, it is important to read up on their service level agreements (what’s the guaranteed uptime?) and security models (will my workloads and apps be affected by a spike in activity of other tenants on the infrastructure?). These concerns are less prevalent in testing and training environments, which is probably why public clouds are so popular for these use cases. Another thought is on cost. While $1 per hour sounds really cheap, the costs add up for production workloads that need to run 24/7, 365 days per year ($8,760 per year, and that’s just the infrastructure for a single virtual server without any OS, management, patching, etc. If you can run 20 virtual servers on a single physical server (not uncommon at all), you’ll have to compare $175,200 per year to the cost of running a physical server in your own datacenter – see the quick calculation after this list.)
  • Private Cloud. Some savvy CIOs realized that they can learn something from the operational model of public cloud providers. If they templatize their server operating systems and workloads, apply a high degree of automation, and add granular usage monitoring, they will actually be able to act as a cloud provider to their internal customers, who are typically the business units of their employer or even the application groups within IT. So, instead of having to deal with an 8-week lead time to provision a new server to support IT project xyz, the requestor could simply go into some sort of self-service portal, get her servers provisioned within minutes, and pay for their usage based on actual resource consumption. This is a very attractive model to CIOs because it would increase operational efficiencies, enable usage-based charge backs to the business, and increase user satisfaction based on quick turnaround times, but it’s not that easy to pull off. The biggest obstacle to doing this successfully is the degree of automation required and the spare infrastructure and compute cycles you would have to have in order to serve your customer needs. Therefore, private clouds are likely unaffordable for small IT departments, but very attractive to larger outfits and managed service providers.
  • Hybrid Cloud. This is probably where the industry will go, and it’s a mix of traditional, on-premise computing resources managed by corporate IT and additional IT resources in a public cloud. The public cloud can be leveraged for burst capacity, or for on-demand environments for training, demonstration, testing, and new initiatives. The implementation of a hybrid cloud has its own challenges, as you would likely have to extend your network to the cloud provider, establish domain trust relationships, and do other things to make the cloud resources look and feel as if they were in your own network.
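Here is the quick calculation referenced in the Public Cloud bullet, using the example rate of $1 per VM-hour (your provider’s actual pricing will differ):

```python
# Back-of-the-envelope cost of always-on workloads at an example rate of $1 per VM-hour.
RATE_PER_VM_HOUR = 1.00
HOURS_PER_YEAR = 24 * 365            # 8,760 hours

cost_per_vm_year = RATE_PER_VM_HOUR * HOURS_PER_YEAR
vms_per_physical_server = 20         # a common consolidation ratio

print(f"One VM, 24/7: ${cost_per_vm_year:,.0f} per year")                                            # $8,760
print(f"20 VMs (one server's worth): ${cost_per_vm_year * vms_per_physical_server:,.0f} per year")   # $175,200
```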

Now that we have a good overview of the cloud, it’s a good time to remind ourselves that there is a much broader definition of “cloud” going around. And that definition pretty much describes any web-based application as “cloud”. There are numerous web-based EMR systems on the market that are typically aimed at small to mid-size organizations. The sensitive patient data lives on the vendor’s own secure network and there are (hopefully) measures in place to guarantee system uptime and patient privacy. Many health CIOs wonder whether it would be a good idea to implement an EMR in a purely web-based fashion. If there are concerns around having sensitive patient information outside of your own datacenter, let’s look at salesforce.com real quick: Salesforce.com has been hugely successful taking Customer Relationship Management (CRM) apps away from corporate IT and running them on its own highly flexible cloud infrastructure. The CRM data is often highly sensitive to organizations, but there is no apparent concern about running the CRM app on infrastructure that may be shared behind the scenes with the closest competitor. As long as salesforce.com is able to guarantee data security and privacy, it’s all good. Their success speaks for itself.

In healthcare IT and when dealing with patient records, CIOs have to be much more cautious as they are subject to internal and government audits on HIPAA, HITECH, and other regulatory frameworks, so the salesforce analogy is necessary, but not sufficient. I anticipate that the web-based EMR vendors will embrace cloud technologies and will have to address the industry’s security and privacy concerns.

In the next article in this series, I will look at some of the security aspects of the cloud in more detail…

Florian

Twitter: @florianbecker

Citrix Consulting Architects share real world experience: Ask the Architect


February 22, 2011  5:06 PM

On Cloud Computing and Virtualization – Part 1



Posted by: FlorianB

On day 2 of HiMSS 2011, I’ve been frantically searching for some indications that the healthcare industry is looking towards (or at least investigating) cloud computing for EMR delivery.

What I found, though, is a lot of confusion on the topic, so I thought it would be best to describe a couple of the underlying terms first, and then provide some commentary on the topics that I encountered at this year’s HiMSS conference.

 

Before going into cloud, let’s define virtualization:

 

There are really three flavors here, but I’ll start with what is known as Server Virtualization. One of the ways to describe this goes like this:

“We put a computer inside of a computer so that you can compute while you compute. “

 

The initial idea was this: Today’s server hardware is so powerful and cheap that you’d be wasting resources if you only ran one operating system or “workload” on it. Think of a workload as a single instance of an operating system that has access to 100% of the physical resources of the physical server. Traditionally, that’s what people did. Buy a server, rack it, install an OS, install a database for example and that was it. If they needed a web server, they bought a second physical server, installed the OS, etc.

Today’s workloads typically don’t demand all the physical resources that a server has to offer. Through a technology called a hypervisor, it is now possible to share the physical resources (CPU, memory, network, I/O, etc.) of a server among multiple workloads. Those workloads are then called virtual machines or virtual servers, and all they have in common is that they run on the same physical hardware. Today’s hypervisors ensure that there is no spillage of actual data from one workload to another. Check with your hypervisor vendor for details on security best practices. Virtual servers can be limited to consume no more than a certain amount of memory or CPU cycles of the physical server, and a spike in activity on one workload typically doesn’t affect the performance of other workloads on the server. However, this is subject to design and to how densely you pack the virtual servers onto your physical ones.
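As a simple illustration of that density question, here is a sketch that estimates how many virtual servers of a given size fit on one physical host. The host, VM, and overcommit numbers are assumptions for the example, not sizing guidance:

```python
# Rough consolidation estimate: how many VMs of a given size fit on one physical host.
# All figures are illustrative assumptions.
host_cores, host_ram_gb = 32, 256
vm_vcpus, vm_ram_gb = 2, 8
vcpu_overcommit = 4.0                # vCPUs scheduled per physical core (a design choice)

fit_by_cpu = int(host_cores * vcpu_overcommit // vm_vcpus)   # 64
fit_by_ram = int(host_ram_gb // vm_ram_gb)                   # 32
print("VMs per host:", min(fit_by_cpu, fit_by_ram))          # the tighter constraint (memory) wins
```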

 

One of the benefits (other than using the physical servers to their full capacity) is that in case of a physical server failure, the virtual servers running on it can fail over very quickly to another physical server. For this to work, the virtual disks typically need to be independent of the actual server and are often running on a SAN or another shared storage device. The pool of physical servers is smart enough to detect a failure and can restart the virtual machines on another physical host. The downtime is typically counted in minutes compared to days of ordering, racking, and re-configuring a physical server in the old days of “one workload per server”. For some really critical workloads, the failover can even be stateful – a second physical server can mirror any activities that are happening within the virtual servers so that a physical failure of the primary server has no impact whatsoever on users and systems.
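The detection-and-restart logic is something the hypervisor pool’s HA feature handles for you, but conceptually it boils down to a loop like this sketch (the host and VM objects and their methods are hypothetical stand-ins, not a real hypervisor API):

```python
import time

# Conceptual sketch of hypervisor-pool HA: detect a failed host and restart its
# virtual machines on surviving hosts. The host/VM objects and their methods are
# hypothetical stand-ins for what the hypervisor management layer actually does.

def failover(failed_host, hosts):
    survivors = [h for h in hosts if h is not failed_host and h.is_alive()]
    for vm in failed_host.virtual_machines():
        target = min(survivors, key=lambda h: h.load())   # pick the least-loaded survivor
        target.restart(vm)                                # VM disks live on shared storage

def monitor_pool(hosts, heartbeat_timeout_s=30, poll_interval_s=5):
    while True:
        for host in hosts:
            if host.seconds_since_heartbeat() > heartbeat_timeout_s:
                failover(host, hosts)
        time.sleep(poll_interval_s)
```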

 

So, the beauty of Server Virtualization is that it greatly reduces cost and waste and also provides a level of redundancy and failover capabilities.

 

I am not going into the other two variants of virtualization in a lot of detail here, but they are

  • Desktop Virtualization, where desktop operating systems such as Windows XP and Windows 7 run virtually on server hardware in the datacenter and users connect to them very securely and over high-performance protocols from any type of device (PC, thin client device, iPad, tablets, smart phones, etc.)
  • Application Virtualization, where desktop applications run on a Windows server and multiple users can each access an individual instance of that app seamlessly. The app is delivered to the user via a high-speed protocol. Another flavor of app virtualization targets installation conflicts between apps: the application is packaged in a way that makes it think it has its own dedicated portion of the file system and registry, so it executes on the desktop or server as if other applications weren’t there. It gets a little complicated, so I am not going into the topic much as it doesn’t apply directly to cloud computing.

 

 

In Server Virtualization, the big three hypervisors are VMware’s vSphere, Microsoft’s Hyper-V, and Citrix’ XenServer.

Now that the server virtualization topic is out of the way, we can look at cloud computing next.

 

Florian
Twitter: @florianbecker
Citrix Consulting Architects share real world experience: Ask the Architect


February 22, 2011  10:08 AM

First Reports from Orlando – HiMSS 2011



Posted by: FlorianB
arra, himss11, HIPAA, HITECH, vdi, Virtualization

There are allegedly 40,000 people here in Orlando for this year’s gathering of healthcare IT professionals at the HiMSS show. I just got here Monday morning and am sharing my first impressions. Well, my very first impression was a very pleasant flight up here from Fort Lauderdale. When it comes to the fly or drive decision, I always try to go for the former – thanks to a share I have with a group of other pilots in a set of two Piper Archers.

For those of you in aviation, the day started early – departing Pompano Beach in visual meteorological conditions (VMC) while the tower was still closed. Getting my IFR (instrument flight rules) clearance to Kissimmee airport in the air from Miami departure. Cruising at 6,000 ft mostly in calm air, but with clouds and thick patches of ground fog underneath me. Kissimmee was in solid IFR conditions most of the time. 10 miles out, I was told to expect the ILS (Instrument Landing System) runway 15 approach, unless I could see the airport – which I could (although barely…) Cleared for the visual approach a minute later and on the ground after 3 more minutes – visibility had improved to over 5 miles with clear skies – total flight time 1 hr 14 minutes – can’t beat that. (Track the flight on Flight Aware over the next few days).
Anyways – Early at the show I started to roam the exhibit floor and found the usual collection of Electronic Health Record (EHR) Vendors, consultants, device manufacturers, etc.
Last year, “Meaningful Use” and the “HIPAA Extensions” were the big keywords. This year, the industry has moved on to implementing EHRs and demonstrating the meaningful use criteria. Many vendors are trying to solve similar problems – but in different ways (more on that later). The challenges remain the same. First impressions of important topics, based on the exhibitors’ taglines, reveal the same trends as in past years:

  • Secure your data
  • Secure and optimize your networks
  • Enable mobility (last year, the iPad wasn’t even out yet – today it’s everywhere)
  • Don’t get fired and stay out of jail (no kidding – see any topics on the privacy breach rules and notification / audit requirements)
  • There is relatively little on cloud computing yet, but I’ll go to more sessions and report back.
I talked to one System Integrator in the defense space who has “Citrix” as a block on their slideware, which piqued my interest. Talking to the engineer, I was told that “they” (the customer with no name) have a ship 500 miles off the coast of Hawaii with satellite uplinks, which is receiving apps and desktops powered by Citrix with great quality – even without WAN Optimization. Nice one!

More updates later…

Florian

Twitter: @florianbecker

Citrix Consulting Architects share real world experience: Ask the Architect


October 6, 2010  3:30 AM

Transforming Desktop Delivery for Clinical Users



Posted by: FlorianB
desktop virtualization, EMR, HIPAA, HIT, Meaningful use, vdi, Virtualization

 

Have you ever asked yourself how you will implement the “virtualization” everybody is talking about? You have probably wrapped your arms around server virtualization and even implementing a highly complex electronic medical records (EMR) system is not as scary as it used to be.

How about Desktop Virtualization? What does it entail and how do you implement it? It may sound complex, but it’s actually not bad if approached the right way. Let me just start by saying that desktop virtualization is distinctly different from server virtualization, and also from plain VDI (which is a special case of desktop virtualization where an individual’s desktop is simply virtualized and moved to a datacenter).

Many EMR implementations are already running on virtualized platforms – be it a simple terminal services or thin client implementation or a more sophisticated application and desktop virtualization implementation from Citrix Systems.

Different business needs call for different technical solutions. Citrix just released the Desktop Transformation Model, which provides a clear definition of the different phases of desktop transformation and offers a few actionable steps any health CIO can take today to get the thought process started.

Let’s have a look:

The four phases of desktop virtualization are:

  • Traditionally Managed Desktops – nothing new here. This is the plain old distributed computing model.
  • Centrally Delivered: In this phase, organizations deliver virtual desktops or applications from a central location. This meets many security needs and enables the health CIO to more easily comply with the HIPAA provisions that mandate that any patient data be removed from any computing device before the device is decommissioned or discarded.
  • Optimally Managed: The optimally managed virtual desktop provides a clear leap over its centrally delivered predecessor. Organizations introduce single image management, simplification, lower storage requirements and greater system scalability into the environment.
  • Transformed: At the pinnacle of desktop virtualization, organizations provide user self-service, a high degree of process automation and monitoring, and sometimes usage-based charge-back models.

 

For most healthcare implementations, the centrally delivered model alone provides tremendous advantages. Even if it just means delivering the EMR app via Citrix XenApp to the workstations in the offices and on clinical floors, this step can significantly ease the burden associated with application management and regulatory compliance.

Should an organization desire to deliver an entire virtual desktop, it will be best advised to select at least the optimally managed route or aspire to implement that model eventually. Most healthcare desktops require little personalization and run the same set of clinical applications, which makes them ideal candidates to work off the same centrally provided desktop image for all users.

Before we jump to conclusions and get into the technical weeds of the discussion, there are three important first steps that any health CIO should go through:

  1. Establish Business Priorities. This relatively simple exercise maps out various business drivers such as reducing costs, increasing data security, enabling a virtual workforce, etc. and prioritizes them according to your organization’s needs.
  2. Assess the time to value. You will find that desktop virtualization provides values that map directly to the business priorities set earlier. How quickly you can realize those values depends on a number of factors, including careful user segmentation and getting an idea of the scope of the technical implementation.
  3. Develop a technology roadmap: Based on the business priorities, organizations should start with virtualization projects that provide a high business impact at a relatively low time to value. This is a fancy way of saying that organizations should go for quick wins first and then tackle the more complex virtualization projects at a later time.

 

These steps are mapped out in greater detail in a recently published whitepaper.

 

In the case of healthcare EMR implementations, the clinical users are an important group. If the organization rolls out an integrated EMR application that contains all the required modules, the scope of the project may just revolve around a single application.

Doctors and nurses constantly roam from patient room to patient room and may also wish to work from home or on the road. Therefore, “virtual workstyles” will probably be high on the list of priorities. Since EMR implementations are major projects by themselves, many CIOs will try to focus on cost containment or cost reduction relative to a traditional desktop model.

These two attributes lead IT decision makers to pick a centrally delivered, optimally managed virtual desktop environment for clinical users. This can often be accomplished by using Application Virtualization for the EMR app, or by providing hosted virtual desktops that are based either on a shared Windows server or on a common server or desktop image.

These broad attributes enable IT decision makers to assess the time to value, or the time it will take to design and implement the delivery of the EMR app or its desktop to end users. The importance of medical apps requires diligent testing and validation procedures, but the overall effort probably pales in comparison to moving from paper-based charts or outdated green-screen systems to a modern EMR implementation.

 

Medical CIOs can use the experiences gained in the delivery of the EMR desktops to transform the desktop management of many other functional areas as well. Why let this experience go to waste? Once you get a taste for the immense flexibility that virtual desktop computing offers, you will probably not want to maintain a traditionally managed, device centric desktop computing model.

 

Florian Becker

Twitter: @florianbecker

Virtualization Pulse: Tech Target Blog

Ask the Architect – Everything Healthcare


September 30, 2010  9:01 AM

Cloud Computing in Healthcare IT



Posted by: FlorianB
cloud computing, EMR, Meaningful use, Virtualization

As healthcare CIOs are faced with the daunting task of implementing electronic medical record applications and demonstrating their meaningful use, they are inundated by vendor messages and general industry buzz. The term Cloud Computing in particular has gained wider circulation in the past 18-24 months or so, but it’s important to recognize that, the way it is most often used, it doesn’t describe anything new. However, there are aspects that are truly revolutionary… read on for more.

I am proud to report that I was an avid user and provider of cloud computing services in 1995. Here’s some background:
I am reading many articles these days where the cloud buzzword crops up. Most of the time, this goes along with adding “aaS” or “as a Service” to your favorite capitalized letter (think Infrastructure as a Service, Desktop as a Service, maybe even Services as a Service – I am not kidding on the last one, as I have seen a recent blog linking service-oriented architecture (a software development concept) to the cloud.)
So, back in 1995, I started using email (a revolutionary cloud-based Message as a Service concept) and gopher and other information retrieval tools (Google wasn’t invented yet, believe it or not) as cloud-based Knowledge as a Service providers. I provided my friends with scanned pictures from a recent vacation on a hosted site on Beverly Hills Internet as “Florian as a Service”. Quick: What was the name of Beverly Hills Internet after it was acquired? Hint: the site is not operational anymore, but had 30+ million users.
I am sure you get my drift… if you need a full list of cloud taxonomy, Stefan Ried on forrester.com has a blog about the topic.
Sarcasm aside, I think that cloud computing is different from anything else we have seen in the past, and it’s worth exploring. The key to cloud computing is not that some information or software is accessed via the Internet, but that the backend of the system scales dynamically. Uh? That’s right. Instead of standing up individual servers and growing the system into a larger set of interlinked resources that may need to be built and configured manually each time demand changes, cloud computing allows IT organizations to dynamically provision and de-provision computing resources based on demand.
Amazon’s EC2 cloud is an example where computing resources are provisioned instantly and so flexibly that it allows Amazon to charge customers in very small increments of time – a couple of cents per hour of usage.
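To see why per-hour billing matters, here is a deliberately naive sketch comparing elastic provisioning against sizing for peak demand; the demand curve and hourly rate are made-up numbers:

```python
# Naive comparison of elastic, pay-per-hour provisioning vs. fixed capacity sized for peak.
# Demand values and the hourly rate are made up for the illustration.
hourly_demand = [2, 2, 3, 8, 14, 14, 9, 4]   # servers needed during each hour of a busy shift
rate_per_server_hour = 0.10                  # a few cents per server-hour

elastic_cost = sum(hourly_demand) * rate_per_server_hour
fixed_cost = max(hourly_demand) * len(hourly_demand) * rate_per_server_hour

print(f"Elastic (pay for what you use): ${elastic_cost:.2f}")   # $5.60
print(f"Fixed, sized for peak:          ${fixed_cost:.2f}")     # $11.20
```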
By that definition, the often cited salesforce.com is NOT a cloud provider JUST BECAUSE they are providing a web-based CRM tool, but they ARE a cloud provider PRECISELY BECAUSE their stuff runs on the force.com platform that allows salesforce.com to react very flexibly to changing demands on the computing infrastructure.

Why is this important to the CIO? Well, for two reasons:
1. If you apply the commonly used, broad definition of Cloud as anything web-based, you may wish to consider web based solutions instead of building out the on-premise infrastructure to support a particular application. This comes with greater flexibility and lower cost in many cases. Hosted solutions are often not as adaptable to your individual demands, but if you’re a smaller business and can’t afford to run your own datacenters and IT staff, web based solutions now allow you to have the same kind of computing capabilities as your much larger competitors. If the hosted infrastructure is actually a cloud per my definition, the cost to you as the end user may be a lot lower than traditional hosting.
2. More interestingly though, the concept of cloud computing should be understood and considered by CIOs. If enterprise IT organizations start to architect and develop their own “private cloud” services, they will potentially see the same operational efficiencies and flexibility as some of the big hosting providers.
If you think that your organization may not be big enough to implement private cloud services, think again. There can be significant improvements in IT ops if a few simple recommendations are followed:

  • Use server virtualization technology to gain maximum flexibility in terms of assigning the execution of computing workloads to a particular physical server.
  • Standardize your workloads. Carefully balance the desire to minimize the number of available basic server images against business requirements. The more you standardize, the more efficient the private cloud will run.
  • Use Desktop virtualization if you desire to provide a Desktop as a Service to users. Again, standardization will pay off in the long term.
  • Use App Virtualization to provide individual apps as a service.
  • Automation, automation, automation. The more you can leverage the system APIs to do things without having to touch them, the better.
  • Think about the “users” of the cloud. If you need to expose a self-service user interface to IT administrators, that’s fine. If you’d like to expose it to end users, additional automation and adaptation may be required.
  • Adding monitoring and logging capabilities will allow you to implement charge-back models if that’s what you desire (a toy example follows this list).
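As a toy example of the monitoring and charge-back bullet above, here is a sketch that turns metered usage into a monthly charge for a department; the rate card and usage figures are entirely made up:

```python
# Toy charge-back calculation from metered usage. Rates and usage are illustrative only.
RATES = {"vm_hours": 0.08, "storage_gb_days": 0.002, "desktop_sessions": 0.50}

def monthly_charge(usage: dict) -> float:
    return sum(RATES[item] * quantity for item, quantity in usage.items())

radiology = {"vm_hours": 2160, "storage_gb_days": 15000, "desktop_sessions": 800}
print(f"Radiology, this month: ${monthly_charge(radiology):,.2f}")   # $602.80
```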

Citrix Architects share tons of best practices and implementation experiences on that and other topics. Simply point your browser to the Ask the Architect site for a wealth of information.

So, in summary – don’t fall into the buzzword trap and think that “aaS” or “cloud” is really something new. But if you do understand the cloud and embrace it, the possibilities seem endless. Here’s to the next 15 years of cloud computing.

Florian Becker
Twitter: @florianbecker
Virtualization Pulse: Tech Target Blog

Ask the Architect – Everything Healthcare


August 31, 2010  10:42 AM

Charging your departments for the delivery of HIT applications?



Posted by: FlorianB
citrix, EHR, HIT, Virtualization

Delivering healthcare applications to clinical and administrative staff is not an easy feat. IT departments often operate on a shoestring budget and have to build complex and costly infrastructure, and staff it, to deliver these new sets of services to end users. With the advent of better tools and processes to meter use and resource consumption, IT departments are increasingly considering charging the user communities for their services via charge-backs.
Niel Nicholaisen writes about the topic in this article.
Let me add a few of my own thoughts:
• IT departments can count on (or hope for) a small percentage of a company’s annual revenues as a budget for capex and opex. IT is asked to provide literally the entire workspace and infrastructure for all users and often has to do more with less compared to the previous year. In the healthcare industry, that number stands at roughly 3% of revenues in the US and only about 2% in Europe.
• IT departments often get frustrated, because they have to provide expensive and complicated applications to a handful of users that chew up a large portion of resources and expenditures to do so.
• With the dawn of desktop and broader application virtualization, IT departments are tempted to charge for their services on a per-user or per-application basis: $30 per month for a desktop, $20 per month for Internet access, $5 per month for antivirus, etc. (see the quick example after this list).
• The model is obviously tempting for two reasons: it discourages the use of complex and expensive applications and brings the true cost of computing back to the business, and it also holds the promise of increasing the IT budget linearly with the services that are provided.
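As a quick example of the per-user, per-service model from the bullets above, here is what a monthly bill might look like; the desktop, Internet access, and antivirus prices are the figures from the text, and the email price is an assumed add-on:

```python
# Example per-user monthly bill under a per-service charge-back model.
# Desktop/Internet/antivirus prices come from the text above; email is an assumed add-on.
CATALOG = {"virtual_desktop": 30, "internet_access": 20, "antivirus": 5, "email": 10}

def user_bill(services):
    return sum(CATALOG[s] for s in services)

print("Basic bundle: $", user_bill(["virtual_desktop", "internet_access", "antivirus"]))  # $55
print("With email:   $", user_bill(CATALOG))                                              # $65
```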

However, as Niel points out, this can alienate the users. First of all, as a user I may find that I get really shoddy service for the $70 per month or so for basic services. As a business, I don’t have the choice to get my Internet access or email service from someplace else. Sometimes (as a business) I think I can, and I may go to a cloud-based email service or attempt to buy my own backup service, but all of that comes at the cost of increased complexity and expensive integration points.
Keep in mind that IT is just another corporate service. I am not getting charged for payroll processing, legal support, marketing support, etc. Larger companies tend to cross charge for internal consulting services and sometimes for recruiting activities, but that’s pretty much it.

So, here is my recommendation for IT: Go ahead and charge your business units. Be aware of the pushback this may generate. In order to prevent backlash, do the following:
• Be the best in the industry. That’s right. Users will be tempted to compare the service you are providing (at the price you are charging) to consumer-grade services that are available online and that are provided by much larger organizations with better economies of scale. The expectation for the quality of your service goes up as you start charging for it.
• Virtualize applications and desktops. This will not only centralize the data, but make cost more transparent and predictable. If you do this right, you can reduce costs. If you don’t, you can end up driving up your costs, so choose wisely.
• Consider using third party, cloud based services for certain types of apps. Just because you managed something in-house in the past, doesn’t mean that this is the best modality going forward. CRM and web hosting services are examples of apps that have been pushed (or elevated) to the cloud for a while now in the industry.
• Monitor your resource use and utilization to get a grip on the human cost of environment support. The smaller your organization, the more difficult this is going to be. After all, you can’t hire a fraction of a SQL Administrator.
• Ensure that you explain (via your executives) that you have much higher data availability and reliability standards to meet than any publicly available service and that the company is required to provide the services internally to maintain control and ensure compliance.
• Consider implementing a “Bring Your Own Computer” model. We’ve had it at my employer for a while and it’s great. I own the endpoint, and I can manage my computer just fine, thank you very much. I can now have my own desktop, anti-virus, and other consumer grade services to dabble around and get a corporate Windows 7 image (a virtual desktop) from IT with the key apps I need to do my work.
• Expect to get charged by your accountants for the support they may need to lend you as part of this process.

Questions? Comments? Let me know what you think and how you have been managing the cost of providing IT services.

Florian Becker
Twitter: @florianbecker
Virtualization Pulse: Tech Target Blog
Ask the Architect – Everything Healthcare


June 23, 2010  2:37 PM

Top 5 Reasons to virtualize your EMR implementation – 1: Data Centralization



Posted by: FlorianB
citrix, EMR, HIPAA, HIT, HITECH, Virtualization

Following on a recent blog on the three major areas of virtualization, I am going to share some reasons why virtualization is a great idea for EMR implementations. Let’s start with Data Centralization:

I assume that the backend database for your electronic health records resides in a single, centralized datacenter. Through global server load balancing, you may have already implemented site-to-site redundancy, but that’s beside the point for today’s discussion.
So, traditionally, you would have rich client applications or web browsers on the user’s endpoint to consume and manipulate the medical records data. This automatically implies that a lot of health data moves to and from the datacenter and often to remote locations where it is challenging to maintain a tight grip on security.
Application or Desktop Virtualization can solve that problem. Both of these techniques move the client software piece (or web browser) to the datacenter, where it executes securely inside your facility. The health data never even leaves the datacenter. The user interaction happens via a secure, high performance protocol (such as Citrix’ HDX in the XenApp and XenDesktop product lines) and gives the user a snappy interaction with the software, while only exchanging screen updates and keyboard/mouse events between the end user and the datacenter. Additional data streams pertaining to peripherals, printers, USB devices, scanners, and client hard drives are possible, but can easily be disabled to promote further security.
No data ever makes it to the endpoint, which reduces the risk of HIPAA/HITECH-covered security breaches. In addition, user sessions can be audited to establish an independent trail of information in case regulators or courts require a closer look.
If you’re curious, I encourage you to check out Dan Feller’s Ask the Architect site. Dan has a wealth of information on desktop and application virtualization and associated whitepapers and reference architectures.

Next, we’ll look at reducing Network Complexity through virtualization, so stay tuned.

Questions? Comments? Please share your thoughts.

Thanks,
Florian

Twitter: @florianbecker
Ask the Architect – Everything Healthcare

Tech Target Blog – Virtualization Pulse

