Hacking and fraud escalate concerns about AI in healthcare

While many in healthcare tout the benefits of AI in clinical settings, artificial intelligence isn't immune to misuse, such as hacking and clinical fraud.

Artificial intelligence continues to make headlines for its applications in automation, speech recognition, image processing and risk detection. Despite these advances, some researchers have concerns about AI and warn that malicious use of the technology could have serious implications for healthcare.

Two of the more popular applications of AI in healthcare are data analysis and data mining, where clinical information is processed and the results provide clinical feedback to healthcare professionals. Early results around image analysis to detect cancer or advanced algorithms that match patients to appropriate treatments are examples of AI affecting patients in a positive way. Predicted uses include AI for surgeries, bot-based interactions with patients and advanced data analysis.

However, some in IT have concerns that AI can potentially do more harm than good. The dark side of AI isn't limited to the ill feeling some have toward the technology -- for example, AI's potential for replacing some human workers. Concerns also surround hackers and cybercriminals building or manipulating existing AI systems. Then there are the fears that the technology itself may simply fail.

People make mistakes, but so do machines. AI in healthcare carries a certain amount of risk related to bugs and the potential for error. These concerns about AI have been validated before. A 2015 study, presented at the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, confirmed that AI applications are not error-free. One application, for example, was used to predict which patients would develop complications from pneumonia. It worked well in most cases, but it also made a critical mistake: High-risk asthma patients were sent home because the application ignored some of their data elements, putting them at even greater risk.

AI can attack hospitals and AI systems. Concerns about AI in healthcare also center on cyberattacks in which AI targets hospitals and their systems. AI can perform the complex tasks needed to launch a hacking attempt against a network or system, and hospitals can quickly be overwhelmed with more attacks than they are equipped to handle. These attacks may include AI probing vulnerable systems or individuals, as well as automated spear-phishing campaigns designed to gain access to a system or extract patient information for later use.

Vulnerable smart devices can be manipulated. At the DEF CON hacking conference in 2016, researchers demonstrated how a smart thermostat could be hacked, a warning to hospitals and other health systems that rely on similar smart devices to control their environments. The growing use of smart systems carries significant risk, potentially leaving hospitals, patients and staff physically vulnerable if these AI-based devices are taken over by hackers.

AI can be used to make false patient claims. Cyberattackers can use AI to create fake content for deceptive and manipulative purposes. Activities of this type will increase the frequency of clinical fraud, as scammers impersonating physicians file false claims on behalf of unwitting patients to collect reimbursements from Medicare or private payers. These scams will escalate and grow more sophisticated as AI systems advance. Some researchers already predict that convincing synthesized human voices will be used to gather information over the phone, eliminating the need for a hacker or scammer to place the calls.

Hackers can use AI to quickly detect and exploit vulnerabilities. In the past, hackers relied heavily on scanning the attack surface of a target organization to determine which systems might contain exploitable flaws. With AI, hackers can run far larger sweeps of hospital systems and detect patterns across multiple systems that reveal vulnerabilities and open paths into hospital networks. AI can also be used to collect personal data on individuals to increase the success of phishing emails, and even to guess passwords.

Despite the concerns about AI and its potential misuse, the technology can have a positive impact on healthcare and can address several challenges that humans can't meet without it. Over time, more checks and balances will need to be implemented to safeguard healthcare AI against cybercrime.

This was last published in July 2018
