If you’re a health care CIO trying to understand your organization’s future storage requirements, you have to consider some data points that CIOs in other industries would never worry about. For example, what is the age of majority in the states in which your organization operates?
According to legal dictionaries, the age of majority is set by statute as the age a person first gains the legal rights and responsibilities of an adult. But for health care CIOs, it also marks the end of the legally required data retention period for patients born in your facility.
The New Hampshire medical center where I work handles approximately 400 births each year. Some of these births require extensive medical imaging diagnostics, such as a computed tomography (CT) study. A typical CT study comprises 256 slices, each a 500 KB image, so a single study requires 128 MB of storage. Under the Health Insurance Portability and Accountability Act (HIPAA) and New Hampshire state law, the images from that study must be retained until seven years after the patient reaches the age of majority -- 18 in New Hampshire -- for a total of 25 years. For an infant born in 2009, that means keeping the study in storage until 2034. How many non-health care CIOs do you know who worry about their electronic health record storage requirements out to 2034?
Now add the rest of that same patient's electronic health record -- multiple diagnostic images, physician orders, prescription lists, progress notes, X-rays, MRI and lab results for every clinical visit -- and the storage requirement for a single EHR quickly reaches gigabytes. Multiply that by the number of patients born each year, and the total can quickly move into terabytes.
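The retention and growth math above can be sketched in a few lines. The slice counts, birth volume and retention periods are the figures cited in this article; the 2 GB per-patient EHR size is an assumed round number for illustration only.

```python
# Back-of-the-envelope retention math using this article's example figures.
SLICES_PER_CT_STUDY = 256
SLICE_SIZE_KB = 500
BIRTHS_PER_YEAR = 400          # this New Hampshire medical center
AGE_OF_MAJORITY = 18           # New Hampshire statute
POST_MAJORITY_YEARS = 7        # retention required beyond age of majority

study_mb = SLICES_PER_CT_STUDY * SLICE_SIZE_KB / 1000    # 128 MB per study
retention_years = AGE_OF_MAJORITY + POST_MAJORITY_YEARS  # 25 years

birth_year = 2009
retention_until = birth_year + retention_years           # 2034

# Assumed figure: if each newborn's full EHR grows to roughly 2 GB of
# images and documents, one birth-year cohort alone adds:
ehr_gb_per_patient = 2.0
yearly_growth_tb = BIRTHS_PER_YEAR * ehr_gb_per_patient / 1000  # 0.8 TB

print(f"{study_mb:.0f} MB per CT study, retained until {retention_until}")
print(f"~{yearly_growth_tb:.1f} TB added per birth-year cohort")
```

Even at a modest 400 births a year, the cohorts stack: by the time the 2009 studies can be purged, 25 more cohorts are in the archive.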
For many health care institutions, that’s a long-term problem. My hospital, for example, began with digital storage of radiology images only. Now we have image storage requirements for cardiology, neurology, cancer, obstetrics, cosmetic surgery, the spine center, orthopedics, the lab and the trauma center, with more and more departments requesting image storage.
The largest image storage requirement we manage is for the neurology center. Our neurology center has a process that synchronizes patient video monitoring with electroencephalography (EEG) imaging captures, allowing the neurologist to study a patient’s physical symptoms as the EEG records neurological events. Some of these studies use continuous monitoring for up to four days. These video images require significant amounts of disk storage. We are managing 8 terabytes (TB) of video storage for approximately six to eight months of patient visits.
These health care video and imaging storage requirements are substantially different from the data retention and storage requirements for banking, tax return and credit card records -- or from what companies like Amazon.com Inc. keep on file about customer purchases.
So where does a health care CIO put all this data? Three places: tiered storage, tiered storage and tiered storage. Image storage is static: once an image is captured, it will not be modified. Typically, an image is captured on tier 1 storage and kept there temporarily during clinical review. At some point, usually within a month, the images are moved to tier 2 storage. After six months, they move to tier 3 or higher, because future clinical review does not require instantaneous access to the medical images. We run these migrations quarterly with scripts, so they take very little staff time.
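A minimal sketch of that age-based migration job might look like the following. The directory paths and thresholds are hypothetical, and a production job would also verify checksums and update the imaging archive's database before relying on the move; this only illustrates the pattern of demoting files once they age past each tier's window.

```python
# Hypothetical quarterly tier-migration sketch: demote image files whose
# last-modified time has aged past the threshold for their current tier.
import shutil
import time
from pathlib import Path

DAY = 86400
# (source tier root, destination tier root, minimum age in days)
POLICIES = [
    (Path("/san/tier1/images"), Path("/san/tier2/images"), 30),
    (Path("/san/tier2/images"), Path("/san/tier3/images"), 180),
]

def migrate(now=None):
    """Move files that have aged out of their tier; return count moved."""
    now = time.time() if now is None else now
    moved = 0
    for src_root, dst_root, min_age_days in POLICIES:
        if not src_root.exists():
            continue
        for f in src_root.rglob("*"):
            if f.is_file() and now - f.stat().st_mtime > min_age_days * DAY:
                dst = dst_root / f.relative_to(src_root)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(f), str(dst))  # preserves subdirectory layout
                moved += 1
    return moved
```

Run from cron once a quarter, a script like this keeps the hot tier small without any per-study staff intervention.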
In Figure 1, you can see an example of how to use tiered storage in a hospital information system. Definitions of tiered storage vary greatly from vendor to vendor and medical organization to medical organization -- please do not take this example as strictly defined tiered storage. This example of tiered storage is based on RAID levels, performance and cost.
• Tier 1: 15,000 rpm or faster, 146 GB Fibre Channel (FC) disks with RAID 5 and shadowing (approximately $15 per gigabyte).
• Tier 2: 10,000 rpm, 300 GB FC disks with RAID 5 and shadowing (approximately $10 per gigabyte).
• Tier 3: 10,000 rpm, 300 GB FC disks with RAID 5 and no shadowing (approximately $5 per gigabyte).
• Tier 4: 1 TB FATA disks with RAID 5 and no shadowing (approximately $3 per gigabyte).
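The per-gigabyte figures make the case for demotion concrete. As a rough illustration, using the article's example rates (2009 prices, not current pricing) and our 8 TB neurology video archive:

```python
# Rough tier cost comparison using this article's example per-GB rates
# (2009 prices, illustrative only).
TIER_COST_PER_GB = {"tier1": 15, "tier2": 10, "tier3": 5, "tier4": 3}

archive_gb = 8 * 1000  # e.g., the 8 TB neurology video archive

for tier, rate in TIER_COST_PER_GB.items():
    print(f"{tier}: ${archive_gb * rate:,}")

# Parking that archive on tier 4 instead of tier 1 avoids:
savings = (TIER_COST_PER_GB["tier1"] - TIER_COST_PER_GB["tier4"]) * archive_gb
print(f"savings vs. tier 1: ${savings:,}")  # $96,000
```

A five-to-one cost spread between the top and bottom tiers is why static image data should never sit on tier 1 longer than clinical review requires.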
The most cost-effective way to manage image storage is with an enterprise storage area network (SAN). Some imaging vendors, especially those that want to manage the entire imaging system, will insist on a direct-attached storage array; they do not want other applications to affect their image systems and feel a closed system gives them that risk assurance. Most imaging vendors, however, recognize the investment a health care institution makes in a SAN and will work with its information systems department to use SAN storage. One important thing to remember when working with image storage vendors: the Food and Drug Administration (FDA) does not require an approval process for disk storage for medical images. If your vendor tells you the storage has to be FDA-approved, feel free to show it the actual regulation.
One last consideration is whether to mix clinical and other data on the same SAN. While some device and medical application vendors will push you away from that, the increasing integration of health care data demands at least some co-mingling. The key is to always make sure your storage for clinical data is delivering the performance you need.
Al Gallant is the director of technical services at Dartmouth-Hitchcock Medical Center in Lebanon, N.H.
This was first published in December 2009