A few of us old souls can remember the early days of the World Wide Web, when you had to remember — or worse yet, guess — the URLs of sites you wanted to get to. “Ford.com” or “Whitehouse.gov” were simple to divine via the Mosaic browser’s address bar, but some of the .net and other locations weren’t so intuitive. Early search engines could find things — eventually — but search results were larded with off-topic hits as well. Worst of all, site content (blinking text!) was so primitive back then that using the Web could be a colossal waste of time. For many early-1990s Web sites, it was like Gertrude Stein said: “There is no there there.”
But before long, Web content improved, and second-generation search engines AltaVista and Google fixed most of the problems with previous tools (does anyone even remember major players like Lycos? WebCrawler? Excite?) and made search “right”: That is, they mostly figured out what people were actually searching for instead of throwing up results that weren’t even close. These newcomers quickly took over the space and crushed the old guard search engines, which still exist but haven’t been relevant since. When it first came out, Google was such a monster of accuracy that the mighty-at-the-time AltaVista died on the vine within a few years.
We’re still in the Excite era with electronic health record (EHR) systems today. EHR vendors are building search functions into their systems, it’s true. But it may take a “medical Google”-type company to search all the data in a hospital’s network to enable true interoperability among departments’ information silos and deliver the quality-of-care boost health IT leaders promise will come with the present push to digitize patient data.
After all, what good is data, digital or on paper, if practitioners can’t find it when they need it?
The next generation of EHR search tools analyzes not just the words that are there but also their context within the patient record. It should be simple, but it’s not, because free-text fields in EHRs are where a lot of the key information resides. Physicians fill out their free-text reports in a variety of ways: dictating to speech recognition software, dictating to transcriptionists or even typing themselves. Variations in language, typing shorthand and abbreviations, and sentence construction might not pose much of a problem to humans reading them; most of the time, our comprehension adjusts well. To a computer scanning millions of words at once, however, the same concept expressed twice with different words (or the same words in a different order) can look like two discrete ideas.
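To make that failure mode concrete, here is a toy sketch (not a real clinical NLP pipeline) showing how two phrasings of the same finding defeat exact-string matching, and how even a crude normalization step — lowercasing, expanding a few hand-picked abbreviations, ignoring word order — lets a program recognize them as one concept. The abbreviation table is invented for illustration; a production system would rely on a curated clinical vocabulary.

```python
# Hypothetical abbreviation table; a real system would use a curated
# clinical vocabulary, not this hand-rolled dictionary.
ABBREVIATIONS = {
    "pt": "patient",
    "hx": "history",
    "mi": "myocardial infarction",
}

def normalize(text):
    """Lowercase, strip periods, expand known abbreviations, ignore word order."""
    tokens = []
    for word in text.lower().replace(".", "").split():
        tokens.extend(ABBREVIATIONS.get(word, word).split())
    return frozenset(tokens)

note_a = "Pt has hx of MI."
note_b = "Patient has history of myocardial infarction"

print(note_a == note_b)                        # exact match fails: False
print(normalize(note_a) == normalize(note_b))  # normalized match: True
```

Real clinical language is far messier than this, of course — negation, misspellings and context all matter — which is exactly why natural-language processing is the hard part of EHR search.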
A new report in the Journal of the American College of Radiology explains how natural-language processing “slices, dices, minces and parses” words in a pilot project at Massachusetts General Hospital (MGH) called QPID (Queriable Patient Inference Dossier). MGH built its own semantic search engine to retrieve data based on clinical concepts. That might sound complicated, but in practical terms it boils down to this: When a patient visits one of the 500 radiologists using the tool, the practitioner can quickly pull relevant data from the patient’s record in order to understand the diagnostic task at hand.
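The idea of retrieving by clinical concept rather than literal query string can be sketched in a few lines. This is a minimal, invented illustration — the concept dictionary and records below are made up, and QPID’s actual engine works against real clinical vocabularies and real charts — but it shows the basic move: expand a concept into its known surface forms, then flag any record containing one of them.

```python
# Hypothetical concept dictionary mapping a clinical concept to the
# surface forms it might take in free text (invented for illustration).
CONCEPT_TERMS = {
    "renal insufficiency": [
        "renal insufficiency",
        "kidney failure",
        "elevated creatinine",
    ],
}

# Toy patient records keyed by record ID.
records = {
    101: "Labs show elevated creatinine; hold contrast.",
    102: "No significant findings on chest film.",
    103: "Known kidney failure, on dialysis.",
}

def find_by_concept(concept):
    """Return IDs of records mentioning any surface form of the concept."""
    terms = CONCEPT_TERMS.get(concept.lower(), [concept.lower()])
    return sorted(
        rid for rid, text in records.items()
        if any(term in text.lower() for term in terms)
    )

print(find_by_concept("renal insufficiency"))  # [101, 103]
```

Note that a literal search for “renal insufficiency” would have found neither record; the concept expansion is what surfaces both.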
Bottom line? The tool has helped MGH meet quality-of-care targets, as well as pay-for-performance goals insurers have set. It has also improved patient safety by helping the gastroenterology department identify outpatients with conscious-sedation risk factors.
Like search engines for the Web — and those on desktop computers that help us figure out what folder houses a long-forgotten file that we suddenly need again to complete a task at hand — EHR search engines have the potential to revolutionize patient care. The question is, who will build the “EHR Google,” and will it emerge from the murky fog of software development sooner rather than later?