Posted by: Jenny Laurello
Informatics, Laboratory Information Systems, LIS, Pathology, Pathology informatics, Research
Guest post by: Bob Killen, Systems Admin / Architect, University of Michigan Hospital
The management of the research arm of pathology informatics and LIS (laboratory information systems) can be quite difficult. With the ever-present push to discover something new or bring the latest technology to the bedside, a huge burden is placed on the shoulders of IT to support these far-reaching goals. Often this drive is coupled with funding that must be used within a specific window of time. In the end it generally boils down to the researcher asking IT: ‘Will this do what I need it to do, and how soon can I use it?’
With so much pressure behind getting results, more often than not these systems are thrown together. While that approach works at first, it eventually fails: fix after fix must be applied to keep the system going, and each fix places further limitations on the system. It is possible to mitigate these issues, but it does take a bit of forethought to build out the necessary support structure.
The support structure begins with having a point person or personnel assigned the duty of acting as a research ‘liaison.’ Depending on the size of the research department, this may be a clinical systems administrator who is a bit more familiar with the researchers’ needs, a small team of engineers or, if large enough, each individual research group may require its own small IT department. Each research group has its own set of workflows, systems and security needs. For example, one research group may be dealing with a publicly available proteomics dataset and have minimal requirements, whereas another may be working with genomics data that must be Title 21 CFR Part 11 compliant, where more effort must be invested in proper record-keeping. It’s these differences that become increasingly difficult to keep track of without at least one person managing the researchers’ projects and monitoring their needs.
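As a rough illustration of what a liaison ends up tracking, the sketch below models a per-group requirements registry. All group names, fields and values here are invented for this example; a real registry might live in a wiki, a ticketing system or a CMDB.

```python
# Hypothetical registry of per-group requirements a research liaison
# might maintain. Groups, data types and compliance flags are made up.
GROUPS = {
    "proteomics": {"data": "public dataset",
                   "compliance": [],
                   "external_users": False},
    "genomics":   {"data": "patient-derived",
                   "compliance": ["21 CFR Part 11"],
                   "external_users": True},
}

def needs_audit_trail(group):
    """Groups under 21 CFR Part 11 need proper electronic record-keeping."""
    return "21 CFR Part 11" in GROUPS[group]["compliance"]

assert needs_audit_trail("genomics")
assert not needs_audit_trail("proteomics")
```

Even a simple table like this makes the differences between groups explicit instead of living in one person's head.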
Once the group’s base requirements are understood, it then becomes possible to start mapping out their needed compute resources. Part of this process will include creating and assigning various permissions to shared objects. This can be deceptively difficult to do while also minimizing access to clinical resources. In the research environment there are various outside parties that will likely contribute to the project; these parties should not, under any circumstances, have access to any resource outside of their associated research group.
To minimize this possible threat, it is considered best practice to put these users in a separate directory service (DS) and establish a one-way trust with your primary central authentication system. This allows your researchers to have access to both their clinical and research resources while maintaining single sign-on, and limits external personnel solely to the research-based resources. This reduces the chance of a ‘run’ on clinical systems in the event of an account or system compromise.
Having a separate directory service does not mean you have an excuse to be lax on proper security policies. This secondary DS should have the same or very similar policies as the clinical system. Setting this all up correctly at its inception will not only give you a smaller exploitable window, but also greatly reduce the amount of time spent performing compliance audits.
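To make the trust direction concrete, here is a toy model of the arrangement. All usernames and domain names are hypothetical, and in practice the trust is configured in your directory platform itself (e.g. a cross-domain trust in Active Directory), not in application code; this sketch only demonstrates which way access flows.

```python
# Toy model of a one-way trust: the research DS trusts the clinical DS,
# but the clinical DS does not trust the research DS. Names are invented.
CLINICAL_DS = {"dr_smith", "lab_tech_01"}           # primary (clinical) directory
RESEARCH_DS = {"visiting_postdoc", "collab_uni_b"}  # secondary (research) directory

# Maps each domain to the set of domains it trusts.
TRUSTS = {"research": {"clinical"}, "clinical": set()}

def home_domain(user):
    if user in CLINICAL_DS:
        return "clinical"
    if user in RESEARCH_DS:
        return "research"
    raise KeyError(user)

def can_access(user, resource_domain):
    """A user reaches a resource if it lives in their home domain,
    or in a domain that trusts their home domain."""
    home = home_domain(user)
    return resource_domain == home or home in TRUSTS.get(resource_domain, set())

# Clinical staff keep single sign-on into research resources...
assert can_access("dr_smith", "research")
# ...but an external collaborator can never reach clinical systems.
assert not can_access("visiting_postdoc", "clinical")
```

The asymmetry in `TRUSTS` is the whole point: a compromised research account has nowhere to go but the research environment.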
With the authentication system in place and the liaisons acting as intermediaries, aiding with the workflow design, it is possible to begin giving the researchers what they want: their systems can now be provisioned. Depending on the project, this could be as small as deploying a single virtual machine for some moderate analysis, or it could encompass a large data pipeline beginning with instruments and ending with multiple stages of analysis. Virtualization makes this entire process easier. This line will be familiar if you read part one of this series, which focused on clinical systems.
A virtualized environment with certified templates allows for provisioning of systems at a dramatically faster pace. It also opens up the possibility of moving some or all of the components of the workflow off-site. This can occur when another department has become the new ‘owner’ of the project, or when one or more of the systems has outgrown your internal virtualization environment; in that case it becomes necessary to offload the workload to an external cloud provider.
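A minimal sketch of what template-based provisioning buys you: every system starts from a known-good, certified baseline, with only the deltas specified per request. The template names, fields and function are invented for illustration; in a real shop this role is played by your hypervisor's clone/template facilities.

```python
# Hypothetical template catalog; in practice these would be certified
# VM images maintained by IT, not Python dictionaries.
import copy
import itertools

TEMPLATES = {
    "analysis-node": {"cpus": 4, "ram_gb": 16, "packages": ["r-base", "python3"]},
    "db-server":     {"cpus": 8, "ram_gb": 64, "packages": ["postgresql"]},
}

_ids = itertools.count(1)

def provision(template_name, owner, overrides=None):
    """Clone a certified template into a new VM record, applying only
    the requested deviations from the baseline."""
    vm = copy.deepcopy(TEMPLATES[template_name])
    vm.update(overrides or {})
    vm.update({"id": next(_ids), "owner": owner, "template": template_name})
    return vm

vm = provision("analysis-node", owner="proteomics-group",
               overrides={"ram_gb": 32})
assert vm["ram_gb"] == 32 and vm["cpus"] == 4  # baseline kept where not overridden
```

Recording the `owner` and source `template` on every system is also what later makes sprawl auditable: you can always answer who asked for a machine and what it started life as.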
This deployment flexibility does have one common drawback: system sprawl. The ability to deploy systems in minutes instead of hours, days or weeks can lead to many half-built or poorly thought-out system designs, not to mention ‘forgotten’ temporary development or test systems. These forgotten and half-built systems are rarely kept up to date and are ripe targets for exploitation. If any of them are promoted to production status while in this state, the issue is compounded further, as applications may be tied to exploitable libraries.
This is where borrowing strategies from the software development world can be quite beneficial. Using resource and storage quotas to prevent overcommitment of servers, configuration management tools to track system changes, and proper development and testing methodologies can help ensure that when an application is moved into production it will be stable and have a minimal threat profile. Going through these steps during the development cycle will decrease the amount of work needed later to certify the environment and, eventually, to update or patch the system.
While these strategies only scratch the surface of what is needed to properly manage research projects in a clinical environment, they create a means to rapidly provide researchers with the resources they need in a less disruptive fashion. A little groundwork, and the right people to help manage the researchers’ growth and projects, will improve both the researchers’ experience and the effectiveness of the IT group as a whole: IT wastes less time provisioning and managing systems, and the researchers gain access to their systems in a more secure environment.