Using artificial intelligence to augment a company's human workers is growing in popularity -- even in the risk-averse healthcare industry. At the recent MIT Sloan CIO Symposium, Beth O'Rorke, CIO and senior vice president at Blue Cross Blue Shield of Massachusetts, talked about her bullish approach to AI in security -- and her "cautiously optimistic" approach to AI in healthcare overall.
Editor's note: The following was edited for clarity and brevity.
Did you leverage AI in a way this year that has been helpful to your company?
Beth O'Rorke: AI has been of interest to our company, but, being in healthcare, we have to be cautious. Artificial intelligence -- and making sure it's right for healthcare -- is an important aspect. We do use AI in security. In security, we're obviously correlating a lot of events that happened during the day, making sure we understand what happened and taking action when we need to. In other aspects of our business, on the actual healthcare side of the house, we're experimenting. We're cautiously optimistic about how AI can help us, and we're looking to see if we can use natural language processing and machine learning to help us move our business forward.
What are some of the challenges of using AI in healthcare?
O'Rorke: Using AI in healthcare can be a challenge because you want to be precise. For some of the things that we do, we have metrics and measurements that we have to produce for, say, Medicare. Looking at the information, making sure we eyeball everything -- the human touch -- and figuring out what we need to do has been our go-to plan. But using AI and natural language processing, we can actually automate a lot. So we're experimenting, but I can't say we've made it a hit just yet.
How do you talk about AI and implementing AI to the business?
O'Rorke: Artificial intelligence has challenges, but certainly our C-suite would like to push us and understand how we can use it better. Where there are limitations, we need to make sure we're up front in understanding what that could mean and what the implications are. On our operations side, we need to be 100% accurate in a lot of our metrics. So with artificial intelligence, there's a balance. The balance we continue to look at is weighing the opportunity against the limitation -- and making that match.
How important is security when looking at AI tools?
O'Rorke: For security on artificial intelligence platforms, we really look at the platform itself and where it sits. Is it on-premises or is it in the cloud? How do we make sure it's HIPAA-compliant and HITRUST-certified? We push our vendor partners to be top-notch on the security side because aggregating that data -- if there is a concern or a breach -- exposes us quite a bit. We do a risk assessment and an evaluation, and we make sure there's a certain level of criteria they need to hit; if they don't meet those criteria, we can't go forward with them. But a lot of vendors do meet it, and we're excited to continue to learn more about how we can leverage the cloud and bring things together.