A medical speech recognition system can be a key element of an electronic health record (EHR) implementation for a hospital, small physician group or even a solo physician practice. Once physicians get the hang of using it, documentation moves faster and transcription costs drop.
That, at least, is the dream for such vendors as Nuance Communications Inc., which dominates the market it shares with Agfa-Gevaert NV, Dolbey Systems Inc., M*Modal Technologies, Kurzweil Technologies Inc., MedQuist IP LLC, 3M Health Information Systems and a few others. The reality? It takes training, patience and practice to make speech recognition work.
"The core goal [of meaningful use] really should be to document the care process -- by whatever means necessary -- in a way that when I go in and do my documentation, the next person sees that information right there, rather than waiting for it to appear on the chart two days later," said Ed Babakanian, CIO of University of California San Diego (UCSD) Health Sciences. "Voice recognition becomes a tool to achieving that. It needs to be easy enough -- and fast enough, with enterprise mobility built in -- so providers look at it as a tool that will make their lives easier."
As part of an initiative to eliminate paper throughout the health care workflow, Babakanian is integrating a Nuance medical speech recognition system across multiple campuses and across several EHR installations that use systems from Siemens AG and Epic Systems Corp., as well as one that's homegrown. UCSD Health Sciences has achieved 100% use for its computerized physician order entry and e-prescribing systems, an accomplishment that in part can be attributed to speech recognition.
Several speech recognition implementation options
Health care providers can implement speech recognition software two ways. In a front-end implementation, the physician or nurse activates the software, reviews the notes digitized to text and signs off. With a back-end implementation, the software automatically converts speech to text and routes both audio and text to a transcriptionist for review. In back-end systems, the person who originated the dictation still must review transcriptions for accuracy and sign off on them.
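The two workflows above can be contrasted in a minimal sketch. This is purely illustrative pseudocode-style Python; the class and function names (`Dictation`, `recognize`, `front_end_workflow`, `back_end_workflow`) are hypothetical stand-ins, not part of any real speech recognition product.

```python
# Hypothetical sketch of the front-end vs. back-end dictation workflows.
# All names are illustrative; the recognition and review steps are stubs.
from dataclasses import dataclass

@dataclass
class Dictation:
    author: str          # clinician who dictated the note
    audio: bytes         # raw recording
    text: str = ""       # recognized text
    signed_off: bool = False

def recognize(audio: bytes) -> str:
    """Stand-in for the speech-to-text engine."""
    return "Patient presents with chest pain."  # placeholder transcript

def transcriptionist_edits(d: Dictation) -> str:
    """Stand-in for the transcriptionist's review of audio plus draft text."""
    return d.text  # corrections would be applied here

def clinician_reviews(d: Dictation) -> bool:
    """Stand-in for the originating clinician's accuracy check."""
    return bool(d.text)

def front_end_workflow(d: Dictation) -> Dictation:
    # Front end: the clinician reviews the draft immediately and signs off.
    d.text = recognize(d.audio)
    d.signed_off = clinician_reviews(d)
    return d

def back_end_workflow(d: Dictation) -> Dictation:
    # Back end: audio and draft text are routed to a transcriptionist first,
    # but the originating clinician must still review and sign off.
    d.text = recognize(d.audio)
    d.text = transcriptionist_edits(d)
    d.signed_off = clinician_reviews(d)
    return d
```

Note that in both branches the sketch ends with the clinician's sign-off, mirroring the article's point that back-end routing does not remove the originator's review obligation.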
A review is still required because medical speech recognition isn't foolproof technology. But it is getting there. Speech recognition is more than 90% accurate once a user trains it to his or her voice and accent, according to Ovum analyst Christine Chang.
Of course, in this age of renewed emphasis on patient safety, high medical liability risk, and The Joint Commission's close scrutiny during accreditation surveys of how hospitals handle easily confused drug names and abbreviations, 100% transcription accuracy is a must -- even if it means doctors and nurses have to stop and review transcripts as part of their regular workflow.
Some advanced EHR installations also can incorporate speech recognition systems for voice commands, enabling users to skip through the pages of a record without a mouse or trackpad. This feature has great potential to improve efficiency during patient care -- even among Luddite physicians, Chang said.
"A lot of doctors didn't grow up with computers, and they have trouble navigating with a mouse," Chang said. "But with the speech commands, you can say, 'I want to go to the summary page,' or 'I want to drill down into labs,' and it will do that for you. You don't have to worry about clicking the wrong tab and then finding yourself with five different windows open and not knowing what to do. It helps tremendously."
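Command navigation like the kind Chang describes amounts to matching recognized phrases against a table of destinations. The sketch below is a hypothetical illustration, assuming phrases have already been converted to text; the command phrases and page names are invented for the example and come from no real EHR product.

```python
# Hypothetical sketch: mapping recognized voice-command phrases to EHR pages.
# The phrase table and page names are illustrative only.
COMMANDS = {
    "go to the summary page": "summary",
    "drill down into labs": "labs",
}

def dispatch(utterance: str, current_page: str) -> str:
    """Return the page to display for a recognized utterance.

    Unrecognized commands leave the current view unchanged, so a
    misheard phrase never dumps the user into the wrong record section.
    """
    phrase = utterance.lower().strip()
    for command, page in COMMANDS.items():
        if command in phrase:
            return page
    return current_page
```

The fallback to `current_page` reflects the usability point in the quote: unlike a mis-click, an unrecognized command should do nothing rather than open the wrong window.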
Pilot medical speech recognition, offer incentives
Dr. Steven Schiff is a private-practice cardiologist who oversees IT at his eight-physician practice and also serves as medical director for invasive cardiology and medical informatics chairman for Orange Coast Memorial Medical Center in Fountain Valley, Calif. CIOs who want to facilitate speech recognition software adoption should start with a pilot project, he recommended: Install the application and each participant's custom voice files on the network (as opposed to on desktops), and recruit a handful of physicians to use the technology on headsets throughout the facility, where colleagues can see them.
"You will then have a network of fairly visible users in the hospital who are doing this," Schiff said. "I'm one of those people right now [at Orange Coast Memorial]. People come up to me and say, 'What are you doing? How are you doing that?' They watch, and they say, 'I want that.'"
In the second phase of the Orange Coast Memorial speech recognition implementation, IT leaders sent physicians and nurses home with a copy of the software so they could train it to recognize their voices. (At home, they could practice in a familiar application, such as Microsoft Word.) Once training was complete, the voice profiles were loaded onto the network.
Meanwhile, UCSD Health Sciences sponsors two-hour training sessions for physicians, complete with professional trainers. That's enough, judging by the follow-up help line, which gets few calls, if any, Babakanian said.
The key to a successful speech recognition implementation, however, is not necessarily training, but how well the back end supports the tool, Babakanian said. If the network lets a physician access his voice profile at all the machines he might use in the facility, he is less likely to get frustrated and quit using it. If this piece is done poorly -- or if it's hard to access within the EHR system -- you can "poison" a good software tool that would otherwise positively affect your workflow, he warned.
The last implementation phase is the toughest -- getting reluctant practitioners to give it a try, Orange Coast Memorial's Schiff said. In a typical hospital, some doctors, even with extra training, will not get the hang of it. Even if they do, they may opt to continue typing notes in free-text fields because they're more comfortable doing it that way. Still others, used to dictating notes but unable to grasp the foibles of speech recognition, will hold on to their transcriptionists or employ scribes to maintain the status quo.
In the ongoing UCSD Health Sciences implementation, 60% of the hospital doctors and 30% of ambulatory-practice physicians are using medical speech recognition. Babakanian expects that figure to rise once an Epic Systems rollout among its ambulatory doctors is finished.
UCSD Health Sciences shares some of the dollars saved on transcription services (formerly $1.4 million a year) with the departments that adopt speech recognition and show significant savings in those fees, Babakanian said.
"That's how you have to go at it," Orange Coast Memorial Medical Center's Schiff said. "Will you ever have 100% penetration? No, nothing's ever 100%. But I think with the right combination of incentives, disincentives, teaching and support, you can have high penetration … and it would make for better care."
Obviously, the right combination will depend on the penalties a hospital is willing to impose on physicians who dig in their heels, Schiff added. Administrators must decide whether holdouts will pay their own transcription fees or miss out on incentives for adopting speech recognition -- at the risk of driving doctors away to competing hospitals.
Strategies for choosing a speech recognition system
Because Nuance is the big fish in the small pond of speech recognition technology, it already has integrated its Dragon Medical speech recognition software with systems from a number of EHR vendors, including Epic, Cerner Corp. and General Electric Co. There's also a well-established ecosystem of third-party educators and consultants who can help physicians train the Dragon software on their individual voices and learn how to activate their facility's particular implementation of it in the EHR system.
But don't just stop at Nuance. As part of due diligence -- and to build influence as a buyer -- evaluate several vendors' offerings, Ovum's Chang recommended.
If any software requires much more than a few hours of training, it probably faces a steep learning curve and a lower chance of success. The availability of outside training help can be reassuring, but if the software isn't intuitive enough for users to get going mostly on their own, don't let that training be the deciding factor in picking a vendor.
Training also should focus on accessing the speech recognition system from within the EHR system, rather than on dealing with individual voices. Referring to Nuance, Babakanian said, "Even if you have a strange accent, it picks up quite a lot of your speech."
Let us know what you think about the story; email Don Fluckinger, Features Writer.
This was first published in August 2010