Posted by: adelvecchio
Guest post by Rick Kam, CIPP/US, president and co-founder, ID Experts
In a recent report from the Ponemon Institute, 70% of the surveyed healthcare organizations and business associates identified employee negligence as a top threat to information security. An article published earlier this year in the Federal Times noted, “Every survey of IT professionals and assessment of cybersecurity posture shows at least 50% of breaches and leaks are directly attributable to user error or failure to practice proper cyber hygiene.”
Now, to anyone who’s been paying attention for the last decade or so, it will come as no surprise that people make mistakes that cause data breaches. To err is human, and that is not going to change. What has changed is the scope of damage resulting from those errors.
A decade ago, a lost laptop or improperly discarded paper records might expose hundreds or even thousands of people to a potential data breach. Today, with massive digitization of medical information, mobile data usage, and widespread system integration, everyday human errors can cause breaches that expose millions of people to potential harm. To cite one example, InfoWorld and CSO reported that the Anthem data breach, which involved 80 million records, was probably caused when thieves infiltrated Anthem’s system using a database administrator password captured through a phishing scheme.
Attack vectors point from people to technology
A recent blog by Napier University professor William Buchanan aptly lists the top three threats in computer security as “people, people, and people.” Buchanan’s post mentions leaving devices unattended, sharing passwords, or accidentally emailing information to the wrong people as typical security errors. Many of the breaches from cyberattacks are also traced back to users unwittingly giving outsiders access to networks.
Whether thieves get users to share personal information via phishing schemes, enter their credentials on a spoofed website, or download apps with embedded malware, tricking people is the easiest route to cybertheft. Yes, hackers can exploit system vulnerabilities once they’re inside a network, but user mistakes give them the foothold. Kevin Mitnick — a notorious hacker in the 1980s and early 1990s — famously told the BBC, “What I found to be true was that it’s easier to manipulate people rather than technology. Most of the time, organizations overlook that human element.”
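One small way technology can backstop people here: spoofed sites often sit on lookalike domains a character or two away from the real one. Below is a minimal, illustrative Python sketch that flags such near-miss domains by edit distance to a trusted list. The trusted-domain list is a hypothetical stand-in; real mail and web filters use far richer signals than string distance alone.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            # Deletion, insertion, or substitution (free if chars match).
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Hypothetical allowlist of an organization's real domains.
TRUSTED = {"anthem.com", "example-hospital.org"}

def looks_spoofed(domain: str, max_dist: int = 2) -> bool:
    """Flag domains that are near, but not equal to, a trusted domain."""
    return any(0 < edit_distance(domain, t) <= max_dist for t in TRUSTED)
```

A domain like `anthern.com` (an `rn` masquerading as an `m`) trips the check, while the legitimate domain and unrelated domains pass.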
Plugging the people gap
Healthcare organizations face challenges in plugging the human security gap. The biggest risk is a lack of awareness on the part of users. Even if an organization has good security processes and training, and employees faithfully follow security procedures at work, they are typically unaware that actions in their private lives can put their employer at risk. A chance comment on Facebook, reuse of the same password on personal and work accounts, or a covertly malicious app downloaded to a personal device that is also used at work can vault criminals right past an organization’s network security. And if employees bring their own devices to work, failure to install an operating system update with important security patches can put the employer’s network at risk.
The second biggest challenge is visibility: employers don’t know and can’t control what websites their employees, customers, and business partners visit, what links they click on in popup windows, or who they chat with online.
Assume that every user is exposed to multiple risks every day. According to a new report from Palo Alto Networks, more than 40% of all email attachments and nearly 50% of portable executables examined by Palo Alto’s WildFire software were found to be malicious. The report also found that the average time to “weaponize” world events — to create phishing or other schemes to capture passwords or deliver malware — is six hours. Just think, within a few hours of an earthquake in Chile or a tsunami in Japan, your well-meaning employees trying to donate to a relief fund can be spoofed into providing information that leads to a data breach.
Improving your odds
Humans can’t be error-proofed any more than technology can, but there are things that can be done to help a workforce, customers, and partners keep an organization and its information secure. A recent blog by Jeff Peters of SurfWatch Labs recommended fighting social engineering with user awareness programs and using technology to limit exposure. Email coming into networks can be scanned for malicious attachments and links. Periodic security training is great, but ongoing education is also needed: How about a short, fun weekly or monthly newsletter with news of scams and tips on how to avoid them? Or a bulletin board where users can post suspected scams and get recognition for warning others?
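The attachment-scanning idea mentioned above can be sketched simply. In this illustrative Python snippet, the hash blocklist and extension list are hypothetical stand-ins for a real threat-intelligence feed; it flags attachments that carry a risky file extension or match a known-bad SHA-256 hash:

```python
import hashlib

# Hypothetical blocklist of SHA-256 hashes of known-malicious files;
# in practice this comes from a threat-intelligence feed. (This entry
# is the hash of an empty file, used purely as a placeholder.)
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

# File extensions commonly abused to deliver malware.
RISKY_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".jar", ".bat"}

def flag_attachment(filename: str, content: bytes) -> list:
    """Return a list of reasons an attachment looks suspicious (empty if none)."""
    reasons = []
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in RISKY_EXTENSIONS:
        reasons.append("risky extension " + ext)
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        reasons.append("matches known-malicious hash")
    return reasons
```

A gateway would quarantine or strip anything that returns a non-empty list, rather than relying on the recipient to judge the attachment.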
Despite the best efforts at promoting security, people will make mistakes. Among other things, scammers will capture or even guess passwords. Vast numbers of people still use birthdates, pets’ or children’s names, or other personal information as passwords. A new study covered in the Financial Times found that some nuclear plants are still using factory-set passwords such as “1234” for some equipment. For this reason, some security experts are beginning to advocate doing away with passwords altogether for critical systems and moving to multi-factor authentication. TechTarget reported that at the International Association of Privacy Professionals’ Privacy. Security. Risk. 2015 conference, keynote speaker Brian Krebs advocated stronger authentication schemes, saying, “From my perspective, an overreliance on static identifiers to authenticate people is probably the single biggest threat to consumer privacy and security.”
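Screening out the weakest choices, such as the birthdates and “1234”-style factory defaults described above, is one thing that can be automated at password-change time. A minimal sketch, in which the blocklist is a tiny illustrative stand-in for the large leaked-password corpora real systems check against:

```python
import re

# Tiny illustrative blocklist; real deployments check candidate passwords
# against large leaked-password corpora.
COMMON_PASSWORDS = {"1234", "password", "qwerty", "letmein"}

def is_weak(password: str) -> bool:
    """Flag passwords that are too short, factory-style, or date-like."""
    if len(password) < 8:
        return True
    if password.lower() in COMMON_PASSWORDS:
        return True
    # Looks like a birthdate, e.g. 19840321 or 03211984.
    if re.fullmatch(r"(19|20)\d{6}|\d{4}(19|20)\d{2}", password):
        return True
    return False
```

Rejecting these at the point of entry costs little and removes the guessable passwords that attackers try first.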
In the Federal Times article mentioned previously, Jeremy Grant, a senior executive at the National Institute of Standards and Technology, advocated doing away with passwords. He uses two-factor authentication on his phone: biometric identification (a thumbprint) plus derived credentials from a common access card or personal identity verification card, so that there is nothing to remember and nothing that can be stolen.
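Grant’s setup relies on biometrics and derived credentials, but the most widely deployed second factor today is the time-based one-time password (TOTP, RFC 6238) generated by authenticator apps: a code derived from a shared secret and the current time, so a captured password alone is not enough. A minimal sketch of the algorithm:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    # Number of `step`-second intervals since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset chosen
    # by the low nibble of the last MAC byte, then reduce mod 10^digits.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

The server runs the same computation and accepts a match within a small window of time steps, so the code expires within about a minute and is useless if captured later.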
No foolproof solutions
Speaking at the Privacy. Security. Risk. 2015 conference, retired RSA chairman Art Coviello said that, with cloud computing and other new technologies, “The attack surface has expanded so dramatically that it’s becoming unfathomable…The United States is living in the biggest and most vulnerable digital glass house on the planet.” With medical data scattered from the cloud to multiple points of care and to the personal devices of millions of healthcare workers, security failures are going to happen. You may not be able to fool all of the people all of the time, as Abraham Lincoln said, but cybercriminals can fool enough of the people enough of the time to eventually overcome virtually any defense. Unless you envision a perfectly consistent robotic healthcare workforce (oh, wait, robots could be hacked), you can’t count on your staff, users, or business associates to be 100% secure, 100% of the time.
Ultimately, the best you can do is educate people, consistently and comprehensively monitor for security incidents — based on thorough and up-to-date risk analysis — and have plans and teams ready to respond when human error leads to human and business peril.