
Meaningful Health Care Informatics Blog

Mar 4 2012   11:22PM GMT

Speech recognition and CLU are changing EHR interactions



Posted by: RedaChouffani
CLU, dictation, EHR, m-module, NLP, NLU, nuance

During the HIMSS conference this year, I had the chance to visit the booths of two voice recognition providers. I was very interested in seeing the progress and innovations these vendors continue to bring to the health care market. As I continue to receive questions and requests from physicians about how voice recognition can help reduce transcription costs while still maintaining high-quality patient care and satisfaction, I am finding that M*Modal and Nuance are taking it to the next level with some of their newer versions and platforms.

Nuance has begun putting clinical language understanding (CLU) to work. Through its CLU product, the company offers EHR vendors and integrators the ability to convert speech into discrete data that can be stored directly in the EHR. While this technology is not completely new, it offers the considerable value of simplifying and reducing the data entry and scribing that have traditionally been associated with EHRs.

In one of the videos on Nuance’s website, the system transcribes what a physician dictates into a microphone with very high accuracy. But what was most interesting about the video was that after the clinician finished, the Nuance solution extracted specific data from the speech and populated different EHR areas, such as vitals, the medication list and the problem list, without a single click.
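The idea behind that demonstration — turning free-text dictation into discrete EHR fields — can be sketched in very simplified form. The field names and patterns below are hypothetical illustrations, not Nuance’s actual API; real CLU engines rely on trained clinical language models rather than regular expressions.

```python
import re

# Hypothetical sketch: pull discrete vital signs out of a dictated transcript.
# Real clinical language understanding uses trained NLP models; this only
# illustrates the concept of speech -> discrete EHR data.
VITAL_PATTERNS = {
    "blood_pressure": re.compile(r"blood pressure (?:is |of )?(\d{2,3}) over (\d{2,3})"),
    "heart_rate": re.compile(r"(?:heart rate|pulse) (?:is |of )?(\d{2,3})"),
    "temperature": re.compile(r"temperature (?:is |of )?(\d{2}(?:\.\d)?)"),
}

def extract_vitals(transcript: str) -> dict:
    """Map free-text dictation to discrete vital-sign fields."""
    vitals = {}
    text = transcript.lower()
    for field, pattern in VITAL_PATTERNS.items():
        match = pattern.search(text)
        if match:
            vitals[field] = "/".join(match.groups())
    return vitals

print(extract_vitals(
    "Blood pressure is 120 over 80, heart rate 72, temperature 98.6."
))
# {'blood_pressure': '120/80', 'heart_rate': '72', 'temperature': '98.6'}
```

Once fields like these are discrete, they can be written into the corresponding EHR sections rather than buried in a narrative note.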

As we continue to see this type of advancement and innovation in the speech recognition arena, along with the success of products such as Siri and IBM’s Watson, we are certain to see a tangible shift away from the traditional EHR interaction model.

The following is a list of areas that will see a significant change due to advancements in voice recognition and clinical language understanding:

Data capture: One of the challenges physicians face today in deciding to jump on the voice recognition train is concern about the accuracy of the captured data. While precision is not yet at 99.99%, the technological advancements made thus far continue to improve and are spurring a recent increase in adoption.

EHR adoption rates: In working with many physicians who use Dragon NaturallySpeaking and other solutions from M*Modal, I find most tend to dictate notes and letters into a text field or word processor. Very few have had an EHR system that actually supports, or is tightly integrated with, these packages. Among the few that do allow specific discrete data to be captured with Dragon at the point of care are GE, Epic and a handful of others. But as Nuance releases its latest software development kits (SDKs), EHR developers will gain access to new functionality offered as part of natural language processing (NLP). This will add capabilities to EHR products that enable clinicians to capture discrete data and use voice commands to control electronic health records.
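Voice control of an EHR ultimately comes down to routing recognized text to application actions. The sketch below is a hypothetical illustration of that routing layer; the command phrases and handlers are invented for this example, and a real SDK such as Nuance’s would supply the recognized text and grammar support.

```python
# Hypothetical sketch of voice-command routing in an EHR client.
# In practice, the speech recognition SDK delivers the recognized text;
# the application maps it to an action, as sketched here.

def open_problem_list(ctx):
    return f"Opening problem list for {ctx['patient']}"

def open_medication_list(ctx):
    return f"Opening medication list for {ctx['patient']}"

COMMANDS = {
    "show problem list": open_problem_list,
    "show medications": open_medication_list,
}

def dispatch(recognized_text: str, ctx: dict) -> str:
    """Route a recognized voice command to its EHR action."""
    handler = COMMANDS.get(recognized_text.strip().lower())
    if handler is None:
        return "Command not recognized"
    return handler(ctx)

print(dispatch("Show problem list", {"patient": "Jane Doe"}))
# Opening problem list for Jane Doe
```

A production system would add synonym handling and confirmation prompts, but the pattern — recognized phrase in, application action out — is the same.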

System interactions: For many of the voice interactive systems on the market today, a precisely phrased command is a must-have for the system to function properly and answer requests successfully. Many of these are used in banking as customer service auto-attendants to help callers. Unfortunately, these systems have major limitations: they restrict the ability to provide a personal, satisfying interactive experience with patients. But as more speech recognition engines are used in combination with NLP, systems will be able to provide far more services than are currently offered. Patients will be able to ask for refills, request lab results and schedule an appointment simply by interacting with a voice system.
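The difference between a rigid auto-attendant and an NLP-driven one is essentially intent recognition: inferring what the caller wants from free phrasing rather than requiring an exact command. A minimal sketch, using keyword scoring as a stand-in for a real language model (the intents and keywords here are hypothetical):

```python
# Hypothetical sketch of intent matching for a patient-facing voice line.
# Keyword scoring stands in for a real NLP model; the intent names and
# keywords are illustrative only.
INTENT_KEYWORDS = {
    "refill": ["refill", "prescription"],
    "lab_results": ["lab", "results"],
    "appointment": ["appointment", "schedule"],
}

def classify_intent(utterance: str) -> str:
    """Pick the intent whose keywords best match the caller's utterance."""
    text = utterance.lower()
    scores = {
        intent: sum(keyword in text for keyword in keywords)
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_intent("I'd like to schedule an appointment next week"))
# appointment
```

Because the caller no longer has to guess the system’s exact vocabulary, the interaction feels closer to a conversation than a menu tree.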

Interactive clinical decision support tools: After HIMSS 2012 and the many sessions covering the capabilities of IBM’s Watson, many are starting to realize that this supercomputer will be able to assist with patient diagnosis. A system that combines the power of Nuance’s CLU with IBM’s DeepQA (Deep Question Answering) technology will lead to a new generation of computer-assisted decision support tools that help improve diagnostic accuracy.

Mobility factor: While all eyes are on the capabilities and power of natural language understanding (NLU), the fact that Nuance and other speech recognition vendors are deploying their solutions to mobile devices makes them even more valuable. More and more physicians will begin to use smartphones as recorders, microphones and voice controllers that interact with their apps.

Many caregivers have expressed concerns about how EHRs have affected their productivity, a complaint directly connected to the amount of data entry most EHR systems require. It is also well known that patient satisfaction suffers when a physician is distracted, looking at a laptop or PC screen to enter data. With advanced and effective clinical speech recognition systems, as well as the ability through CLU to capture discrete data, providers can get the best of both worlds: the use of electronic health records without overuse of a keyboard and mouse.
