
Community Blog


August 22, 2016  11:50 AM

Shadow-Hunting: Managing the Ghosts That Live Within



Posted by: TaylaHolman
rogue applications, shadow IT

Guest post by Mac McMillan, CEO of CynergisTek, Inc.

Shadow IT has become increasingly prevalent in today’s enterprise environments, and for the most part is driven by employees who are just trying to find a way to get something done with a tool they are comfortable using. It is made possible because most organizations’ networks or devices are not managed well enough to detect rogue software or devices when they’re added. Usually an organization’s first awareness occurs when the person using the rogue software or device needs technical support and asks for help.

Recent hacking activity is fueling a new desire to limit exposure as well as to engage in discussions around how to best handle shadow IT. To have that discussion, however, we must remember that it includes the wired, wireless and mobile device environments.

The first step in managing shadow IT is not to overreact. Most of the folks responsible for these rogue applications and devices are good employees just trying to do their job. That said, make sure you establish a policy around the introduction of software or systems to the enterprise and educate the workforce about it. Consider creating a process for employees to nominate programs or devices for use so that you can enable innovation with responsibility. Provide a safe environment where those new programs and devices can be deployed and accessed by users, preserving the integrity of the enterprise while new capabilities are vetted. Above all, create an environment where staff feel comfortable bringing new ideas or technologies to the table. After all, the idea they bring you is the one you don’t have to find.

The second step is to trust, but verify. While many will color within the lines once they understand what is expected and feel empowered to bring forward new things, others will for many different reasons not comply. For those, you’ll need to rely on controls and the network to alert you when something has been added that isn’t authorized or to block it from happening. Here are some tactics:

Port security. This falls in the oldie-but-goodie category. Basically, network devices can be configured to remember MAC addresses or to enforce a limit on the number of MAC addresses allowed on each port. Most modern network devices support some version of this, and even wireless devices often support some form of MAC address management. The biggest drawback is management: anytime systems move or are replaced, the port has to be reset or reconfigured.
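
As a concrete (if simplified) illustration, here is a minimal Python sketch of the “sticky MAC” behavior described above. The port name, the one-address limit and the violation action are assumptions for illustration, not any vendor’s actual implementation:

```python
# Sticky port security in miniature: each port learns up to max_macs
# addresses; frames from any other MAC trigger a violation.

class PortSecurity:
    def __init__(self, max_macs=1):
        self.max_macs = max_macs
        self.learned = {}  # port name -> set of learned MACs

    def frame_seen(self, port, mac):
        allowed = self.learned.setdefault(port, set())
        if mac in allowed:
            return "forward"
        if len(allowed) < self.max_macs:
            allowed.add(mac)  # sticky-learn the first MAC(s) seen
            return "forward"
        return "violation: disable port"  # possible rogue device

switch = PortSecurity(max_macs=1)
print(switch.frame_seen("Gi0/1", "aa:bb:cc:dd:ee:01"))  # forward (learned)
print(switch.frame_seen("Gi0/1", "aa:bb:cc:dd:ee:02"))  # violation
```

The management drawback shows up immediately in this model: moving a machine to a new port means clearing the learned entry by hand.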

NAC. Network access control (NAC) allows you to take port security to another level. It’s easier to manage a large network with NAC than with standard port security, since you’re managing based on policies rather than per-port configuration; however, it’s more expensive and can be very complex to implement. Basically, it allows you to define security requirements that must be met in order to gain access. These could be as simple as what port security provides, or more complex checks of patch levels and/or whether anti-virus is running and current. Defining these policies and managing them across a large network can be a huge undertaking.
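
A hedged sketch of that policy-driven idea, with made-up attribute names and thresholds, might look like the following. A real NAC product evaluates far richer posture data, but the admit-or-quarantine decision has this shape:

```python
# NAC-style posture check: admission is decided by policy rules
# evaluated against attributes the endpoint reports, not by per-port
# configuration. Attribute names and thresholds are assumptions.

from datetime import date

POLICY = {
    "min_patch_date": date(2016, 7, 1),
    "require_antivirus": True,
}

def admit(endpoint):
    if endpoint["last_patched"] < POLICY["min_patch_date"]:
        return "quarantine: missing patches"
    if POLICY["require_antivirus"] and not endpoint["av_running"]:
        return "quarantine: anti-virus not running"
    return "admit to production VLAN"

print(admit({"last_patched": date(2016, 8, 1), "av_running": True}))
print(admit({"last_patched": date(2016, 2, 1), "av_running": False}))
```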

802.1x. This is an authentication method. The simplest way to think of it is as a certificate installed on the endpoint. This allows the system to authenticate with an authentication server and shows that the system is trusted. Most organizations use this method mainly on wireless networks, but it can be rolled out over the wired infrastructure as well. The biggest challenges here are certificate rollout and management.
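
802.1X itself is negotiated between the endpoint, the switch or access point, and an authentication server (typically RADIUS), so a faithful example needs network gear. The trust model, though, is the same as requiring client certificates in TLS, which the Python sketch below illustrates; the certificate file names are assumptions:

```python
# Illustrative analogy only: like EAP-TLS under 802.1X, the server
# admits an endpoint only if it presents a certificate issued by a
# CA the authentication server trusts.

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("authserver-cert.pem", "authserver-key.pem")
ctx.load_verify_locations("corporate-ca.pem")  # CA that issued endpoint certs
ctx.verify_mode = ssl.CERT_REQUIRED            # reject endpoints without a cert

with socket.create_server(("0.0.0.0", 8443)) as srv:
    conn, addr = srv.accept()
    with ctx.wrap_socket(conn, server_side=True) as tls:
        # reaching this point means the endpoint proved certificate possession
        print("trusted endpoint:", tls.getpeercert()["subject"])
```

The rollout and management challenge mentioned above is exactly the part this sketch glosses over: issuing, renewing and revoking those endpoint certificates at scale.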

MDM. Mobile device management (MDM) focuses on managing mobile devices. Like NAC, it allows you to establish strong policies for each device that connects and then permits you to manage those devices. A device that disables a security feature covered by policy, such as encryption or a required password, or that has been jailbroken, will not be allowed to connect. This means that you won’t have to punch holes elsewhere in order to provide access to email or other applications, and it simplifies managing these devices through the use of policies.
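
In miniature, that compliance gate might look like the sketch below, with illustrative attribute names; commercial MDM platforms verify these properties through device APIs rather than trusting self-reported values:

```python
# Toy MDM compliance gate: a device whose reported state violates any
# policy-covered control is simply refused a connection.

REQUIRED = {"encrypted": True, "passcode_set": True, "jailbroken": False}

def may_connect(device):
    failures = [k for k, want in REQUIRED.items() if device.get(k) != want]
    return ("allow", []) if not failures else ("block", failures)

print(may_connect({"encrypted": True, "passcode_set": True, "jailbroken": False}))
print(may_connect({"encrypted": False, "passcode_set": True, "jailbroken": True}))
```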

VDI. Virtual desktop infrastructure (VDI) is the practice of running a desktop operating system within a virtual machine on a centralized server. The desktop becomes essentially a thin client: all of the controls reside on the server, out of the user’s reach, and downloading, installing or enabling other software and devices at the desktop is not permitted. Better still, it’s often not necessary. One of the big drivers pushing users to other devices is the lack of ubiquitous access to their desktop, and VDI allows you to extend that desktop directly to their tablet or phone. Using VDI not only provides flexibility in granting and restricting access to sensitive systems and data, but restricts rogue software and devices as well.

Network scanning. This can be accomplished either proactively or reactively through the use of various network scanning and monitoring technologies, some of which permit active management as well. Essentially, network scanners look for and find unauthorized devices connected to the network. The scanner can either disable them directly, or you can investigate and then decide what the appropriate course of action is. Network scanning performed reactively, which usually means manually, can be a huge time sink and delay critical decisions.
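
The core of the proactive approach is just a diff between what discovery finds and what the asset inventory says should be there. In the sketch below the discovery step is a stub; in practice it might be an ARP sweep or a switch CAM-table export, and the MAC values are made up:

```python
# Reactive rogue-device hunt: diff discovered devices against inventory.

AUTHORIZED = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def discover_macs():
    # placeholder for an ARP scan or network-management-system export
    return {"aa:bb:cc:00:00:01", "de:ad:be:ef:00:99"}

rogues = discover_macs() - AUTHORIZED
for mac in sorted(rogues):
    print("unauthorized device, investigate or disable port:", mac)
```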

Shadow IT offers opportunities, both positive and negative, but creating a strategy for managing it can help eliminate the bad and take advantage of the good. You’ll likely need a combination of the technologies and methods discussed above to be successful. Like anything else we do in IT or security, if we start by thinking through the problem, develop our strategy, define our policies, select our controls, implement, manage and finally audit what we’ve done, we’ll likely have a better chance of succeeding at making shadow IT an ally.

About the author: 

Mac McMillan, FHIMSS, is co-founder and CEO of CynergisTek, Inc., a top-ranked information security and privacy consulting firm focused on the healthcare IT industry. He brings nearly 40 years of experience in security and has worked in the healthcare industry since his retirement from the federal government. McMillan participates on many advisory boards, and is recognized as a thought leader in healthcare IT for his contributions to industry publications and events on compliance, security and privacy.

April 6, 2016  1:25 PM

The HIMSS16 takeaway: Health data interoperability or bust



Posted by: adelvecchio
HIE, HIMSS, Interoperability, Interoperability and health information exchange

Guest post by Matthew A. Michela, CEO, lifeIMAGE

The U.S. Department of Health and Human Services Secretary Sylvia Burwell, CMS Acting Administrator Andy Slavitt and national health IT coordinator Karen DeSalvo, M.D., were in lockstep with their themes at the 2016 Healthcare Information and Management Systems Society’s annual conference: Whatever — and whoever — is holding up health data interoperability will be made public. “Data blockers,” as Congress calls them, will be accountable not only to federal administrators, but also to the lawmakers who grant HHS its powers to promulgate regulations.

Medical imaging interoperability issues, when solved, will be a big part of moving U.S. healthcare forward, as health systems evolve from fee-for-service to value-based care. Furthermore, with mergers and acquisitions of healthcare organizations large and small occurring almost daily across the healthcare industry, the challenge of providing needed imaging data to clinicians and the patients relying on their expertise grows exponentially.

Our healthcare system has long been plagued by the challenge of operating in data silos, which can result in costly and unnecessary duplicate testing, additional costs to the consumer in the form of multiple copays and dangerous delays in needed care. Patients and payers alike have taken notice and the industry is in the midst of a sea change forcing organizations to make significant strides in embracing and facilitating data interoperability.

Where we are now
To achieve this, healthcare organizations are evolving into clinically integrated networks and, as such, are investing in population health infrastructure and health information exchanges to provide system-wide, timely access to information and patients’ medical records. They are also seeking out strategies for better engaging patients in care processes.

So what does this mean relative to medical images? If U.S. healthcare goes where HHS is steering health IT vendors, through efforts such as DeSalvo’s 10-year interoperability roadmap, the industry can achieve interoperability with vendor, provider and payer cooperation.

Primary care physicians are, ideally, at the center of each individual patient’s care — they know a patient’s history, have an ongoing relationship and make the call when an outside or specialist opinion is needed. In the first stage of interoperability, when one practitioner attempts to send imaging or video results to another — a primary care physician to a specialist, vice versa, or specialist to specialist — the first hurdle is getting data onto a health system’s network.

Initially, this often required the cumbersome exchange of CDs or films, with today’s continuity of care documents (CCDs) being the equivalent of faxes or PDFs attached to a patient’s medical record. The result is unsearchable, unintelligent data — dead weight on a health system’s network. Actionable data cannot simply be a digital replication of a proprietary image or a fuzzy photo of a fax. To meaningfully impact care delivery, imaging data must include normalizing information allowing clinicians to easily search for and retrieve it, when needed.
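
As a toy illustration of that difference, the sketch below indexes imaging records by normalized metadata so a clinician can search for and retrieve them on demand. The field names loosely echo DICOM concepts but are assumptions for illustration, not any standard:

```python
# Actionable vs. "dead weight" imaging data: normalized metadata makes
# a study searchable instead of being an opaque attachment.

studies = [
    {"patient_id": "MRN-1001", "modality": "CT", "body_part": "CHEST",
     "study_date": "2016-01-12", "uri": "pacs://archive/1001/ct-chest"},
    {"patient_id": "MRN-1001", "modality": "XR", "body_part": "WRIST",
     "study_date": "2015-11-03", "uri": "pacs://archive/1001/xr-wrist"},
]

def find(patient_id, modality=None):
    return [s for s in studies
            if s["patient_id"] == patient_id
            and (modality is None or s["modality"] == modality)]

# Retrievable on demand by normalized fields, not a fuzzy PDF of a fax.
print(find("MRN-1001", modality="CT"))
```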

Where we are going
Over the next few years, as payers and patients demand freer data flow to see the big picture of prior care as well as today’s appointment, the medical image and EHR data exchange process will become simpler and more automated.

Image-intensive specialists, like radiologists, cardiologists, oncologists and emergency medicine clinicians, can provide a requesting physician with usable data coupled with a patient identifier. Some organizations have taken this a step further by creating a convenient electronic workflow among a trusted directory of physicians and normalized patient identifiers.

By doing so, all imaging data and reports can flow freely through a back-end network. This largely describes where leading healthcare organizations are with image exchange today — some can do it with EHR data, too — but there’s much room for improvement. Although data exchange is automated and accelerated, it often still requires a human touch to pull needed data and initiate the exchange process.

But that’s not enough. In order to facilitate better, more efficient healthcare delivery, data exchange needs to take place on a network that can locate a patient’s information and make it immediately available to clinicians providing treatment.

When health IT achieves this highest state of automation, physicians will be collaborative without requiring physical interaction. They will be able to summon test results and radiology studies as well as access consult notes and other data contributing to care delivery. For clinicians and caregivers, patient information will be indexed and available in a form akin to a Web search. Those searches would return a relevant list of available patient data.

The rewards will come
The Healthcare Information and Management Systems Society (HIMSS) is already on board. It is emphasizing to members how shifting payment models and new quality measures have increased the focus on coordination of care within and across enterprises. Patients typically do not receive care from only one network in a given region — and their choice of provider cannot be limited because of non-interoperability — so in order for providers to have access to a complete medical record, health systems are devising more comprehensive information interoperability strategies.

A truly comprehensive patient record needs to also include access to a patient’s complete imaging history. Such data may be the deciding factor as to whether or not a patient needs a second opinion, additional testing or a procedure.

When every provider within a patient’s circle of care can access and view their medical images, in the context of the broader medical record, the U.S. health system will see improvements in quality, safety and care coordination. That is the job of healthcare providers and health IT vendors. HHS outlined its goals at HIMSS16; it’s up to those in the trenches to connect the data to provide the ultimate benefit to patients.


February 17, 2016  2:36 PM

St. Luke’s Episcopal Health System: Modern mobile healthcare technologies



Posted by: adelvecchio
mHealth, Mobile devices, mobile health, mobile health implementation

Guest post by Lee Johnson, director of global marketing, NetMotion Wireless

St. Luke’s Episcopal Health System is one of the nation’s largest health systems. With more than 90 hospitals, 24 critical access facilities, home health agencies and a number of other health-focused facilities in 18 states, it has long been considered a pioneer in both medical procedures and technology implementations. The first successful human heart transplant in the United States was performed in its Texas Heart Institute. It was also among the first providers to deploy mobile healthcare technologies on a broad scale to better serve patients.

Almost a decade ago, St. Luke’s leadership realized that requiring nurses and physicians to gather at kiosk-style stations to retrieve and send patient information created slowdowns that negatively affected patient care. So the IT team launched a wireless network to provide clinicians with quick access to data whenever and wherever they needed it. This first, innovative wireless technology implementation would later lead St. Luke’s to develop a large-scale wireless network that has become a benchmark for the healthcare industry.

Growing pains
As St. Luke’s grew, increasing its workforce and adding multiple new buildings, it became apparent that the initial wireless pilot needed to evolve into a more powerful multi-facility network. Wireless access points were installed throughout areas where instant access to information was important for patient care, such as the ER, central clinician stations and in hallways near patients’ rooms.

But hospital staffers often found that their connections to the network failed. “Clinicians using handheld devices had trouble maintaining their sessions while moving around the floors,” explained Gene Gretzer, senior analyst and wireless initiative project leader for St. Luke’s. “As doctors and nurses walked through areas where the wireless network was weak, such as long hallways, onto elevators, or through older areas in the hospital, they’d lose their network connection. This caused the legacy VPN [virtual private network] and applications to quit, requiring the clinicians to log in again, restart applications and re-enter any data that may not have been transmitted.” The situation frustrated busy doctors and nurses who felt that they were wasting time on devices that were supposed to be increasing their productivity.

St. Luke’s also had concerns about wireless security. Federal Health Insurance Portability and Accountability Act (HIPAA) data protection standards mandate that healthcare providers take steps to ensure that patient data cannot be compromised by network (including wireless) vulnerabilities or left unprotected on devices that can be lost or stolen.

To address its network performance and security issues, St. Luke’s IT team turned to NetMotion Mobility. Mobility provides reliable, seamless network connectivity for St. Luke’s staff and also acts as a second firewall, preventing unauthorized devices from accessing the network.

As St. Luke’s wireless network has spread from a single IP-segment virtual LAN (VLAN) to a multi-facility network, Mobility enables the staff and caregivers at St. Luke’s to move around their facilities and get their work done without worrying about dropped connections. If a user passes through an area with poor network coverage, applications maintain their state and reconnect automatically and quickly (often in less than a second), enabling the user to pick up right where he or she left off.
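
NetMotion’s actual mechanism is proprietary, but the general pattern — buffer the unsent work, keep session state, and retry until coverage returns — can be sketched in a few lines of Python. The stubbed send_to_server function and the retry parameters are assumptions for illustration:

```python
# Session-persistence pattern in miniature: preserve state and retry
# instead of dropping the session and forcing data re-entry.

import time

attempts = {"n": 0}

def send_to_server(record):
    # stub: pretend coverage drops for the first two tries, then returns
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("radio dead zone")

def reliable_send(record, retries=5, delay=0.5):
    for _ in range(retries):
        try:
            send_to_server(record)
            return True                # state preserved, nothing re-entered
        except ConnectionError:
            time.sleep(delay)          # wait out the dead zone, then retry
    return False                       # surface failure rather than lose data

print(reliable_send({"patient": "MRN-1001", "vitals": {"hr": 72}}))  # True
```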

Mobility allows clinicians to roam between different rooms, floors and buildings, seamlessly reconnecting their devices to the network and allowing them to have anywhere, anytime access to charting applications and patient records. Mobility’s management console gives the IT team the ability to centrally manage all of the devices their clinicians use. They receive real-time data on devices and users such as the applications being run, the amount of data transmitted, even the battery life of each device. And they can immediately quarantine a lost or stolen device from the network.

With a better understanding of Mobility’s impact, St. Luke’s IT team began looking for other hospital processes that could be improved. They determined that Mobility could keep the health system’s state-of-the-art mobile scanning and X-ray units better connected, increasing the upload speed of images and other data to the patient records database. “Clinicians are viewing neurologic studies and X-rays faster and visiting more patients during their rounds,” added Gretzer. “We have also reduced the time for a clinician to receive electronic X-rays from, in many instances, 30 to 45 minutes down to about one and a half minutes — allowing clinicians to diagnose issues and begin treatment faster.”

Today, St. Luke’s staff is accessing data via wireless devices in real-time, 24 hours a day. Physicians and nurses are using wireless laptops and tablet PCs to track and chart patient care. Clinicians dispense medicine after scanning barcodes on patient ID bracelets with wireless barcode scanners. Case managers access information on the fly as they verify patient records and coordinate services with insurance companies. And even the hospital’s nutritionists plan menus with patients and transmit those orders wirelessly to kitchen staff for later preparation.

The combination of mobile devices and reliable, secure wireless technology continues to offer a number of opportunities to improve the delivery of healthcare services. Thanks to Mobility, for St. Luke’s the promise of that technology has become a reality, improving operational efficiency and, most importantly, leading to better patient care.



December 16, 2015  1:44 PM

Protecting health IT data at rest



Posted by: adelvecchio
data encryption, Encryption, health data security, PHI

Guest post by Dr. Michael G. Mathews, president, COO and co-founder, CynergisTek, Inc.

In prior segments of this series, I touched on the fundamentals of encryption using symmetric (shared secret), asymmetric (public-key), and combinations of the two to get a hybrid approach to keeping data confidential. I also explained the concepts of data integrity (knowing a message has not been changed) and non-repudiation (verifying the sender is authentic), as well as ways to secure data in motion. In this final segment on encryption within the healthcare setting, I turn my focus to protecting health IT data at rest.

With as many breaches as there have been in recent years, it’s not uncommon for there to be an immediate cry to “encrypt everything” without knowing exactly what that means. As mentioned in my previous segment, the first step to knowing the right solution is understanding the location and type of data in question; email is different from data living in structured databases, and those types of data are different from standalone files containing sensitive data. Likewise, the steps used to protect a mobile device (smartphone, tablet, laptop, etc.) that roams onto various networks differ from those taken for a workstation that lives on the internal managed local area network behind the perimeter firewall.

In general, given the maturity and availability of full disk encryption options, it should be considered a best practice to deploy full disk encryption for any workstations or mobile devices that have a reasonable expectation of being exposed to sensitive data. This protects those devices against any sensitive files that get saved there, any cache or temporary files from connections that handle sensitive data, as well as covering locally-stored emails that might have personally identifiable information (PII) or protected health information (PHI) in them. In addition, this addresses the safe harbor requirement that pertains to unauthorized disclosure in the event of the theft or loss of a mobile device.

Database servers with PHI/PII in them present a significant challenge to health IT. It’s easy for people to say “encrypt it all,” but it’s not practical to do so because of performance, key management and access control issues. In many cases, encrypting certain data — usually those data elements that tie the data to an individual — within a relational database construct ensures the data is protected and still accessible to those that need it, without resulting in a significant hit to performance. In response to industry feedback and meaningful use requirements, electronic health record manufacturers have added roadmaps toward ensuring data integrity within the databases by using cryptography.
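
A hedged sketch of that field-level approach appears below: only the identifying column is encrypted, so queries on clinical values stay fast. It uses the third-party cryptography package’s Fernet recipe, and the schema and key handling are deliberately simplified assumptions; in real deployments, key management is the hard part:

```python
# Field-level encryption sketch: encrypt only the identifying column,
# leaving non-identifying columns queryable in the clear.

import sqlite3
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in practice: fetched from a key manager
f = Fernet(key)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (patient_name BLOB, test TEXT, value REAL)")
db.execute("INSERT INTO results VALUES (?, ?, ?)",
           (f.encrypt(b"Jane Doe"), "HbA1c", 5.9))

# The query runs on plaintext columns; only authorized code holding the
# key can recover the identity.
row = db.execute("SELECT * FROM results WHERE test = 'HbA1c'").fetchone()
print(f.decrypt(row[0]).decode(), row[1], row[2])   # Jane Doe HbA1c 5.9
```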

A major hurdle to protecting sensitive health IT data at rest is ensuring it stays where it should and is used as it should be. While data loss prevention tools are not encryption tools, they can be used to trigger encryption and are now generally available to help ensure data at rest is used appropriately and is encrypted when put in motion. Using a combination of pattern matching and metadata cataloging, these tools inspect data as it goes from at rest to in motion and evaluate whether that specific activity should be allowed and whether the data should be encrypted prior to going in motion. This can cover everything from simple moves of data to a local machine’s storage all the way to outbound emails containing data that might be sensitive.
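
In miniature, the trigger logic looks something like the sketch below. The Social Security number regex is the classic textbook pattern and stands in for the much richer rule sets and metadata catalogs real DLP products ship with:

```python
# Toy DLP check: pattern-match content as it moves from rest to motion
# and require encryption when it looks sensitive.

import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def on_data_leaving(content, destination):
    if SSN.search(content):
        return f"encrypt before sending to {destination}"
    return "allow"

print(on_data_leaving("patient SSN 123-45-6789 attached", "external email"))
print(on_data_leaving("lunch menu for Friday", "external email"))
```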

Encryption is one of many tools available to information security professionals to protect data both in motion and at rest. More often than not, though, “the right answer” is a combination of many of those tools, not just encryption. Finding the right combination of tools to help ensure the security of health IT data requires a strong vision of the overall information security program and a commitment by the organization to find a skilled and visionary chief information security officer.


December 15, 2015  4:15 PM

The 10 worst data breaches of 2015



Posted by: adelvecchio
Data breach, data breach security, health data breach, healthcare data, healthcare data breach

Guest post by Rick Kam, CIPP/US, president and co-founder, ID Experts

There’s no sugarcoating the fact that 2015 was a dizzying year for data breaches, and disastrous for many organizations and consumers. In the first half of the year alone, Gemalto NV found that 888 disclosed security incidents compromised nearly 246 million records worldwide.

There were certainly trends in data breaches this year, including the rising sophistication of hackers, the ever-increasing threat of massive state-sponsored attacks, and the continuing prevalence of large breaches in the healthcare industry. In fact, the average healthcare breach through mid-2015 was 200% larger than in the first half of 2014.

With those trends in mind, let’s take a look back at the 10 biggest and baddest breaches of 2015 — and then see what consumers and security professionals can do to make 2016 a safer and more secure year.

The five biggest breaches of 2015
The following incidents were the five biggest breaches of the year in the U.S., based on number of records compromised.

1. Anthem, 80 million
Health insurer Anthem Inc. revealed in February 2015 that hackers, likely from China, had accessed a database that included encrypted and unencrypted data on patients and employees. According to the Huffington Post, it was the fifth-largest breach of all time.

2. Ashley Madison, 37 million
A hacking group known as Impact Team stole private information on 37 million people who use the Ashley Madison website, which encourages users to cheat on their partners. The hackers are threatening to reveal customers’ personal data unless the website shuts down, which it has yet to do.

3. U.S. Office of Personnel Management, 21.5 million
The U.S. Office of Personnel Management suffered two unrelated breaches in 2015. The larger one affected more than 21 million current and past federal workers. Again, the breaches of the government agency are believed to have originated in China.

4. Experian, 15 million
Experian Information Solutions, Inc., the world’s largest consumer credit monitoring firm, suffered its second massive breach in 2015. The breach exposed the sensitive personal data of about 15 million T-Mobile customers who underwent credit checks by Experian. An earlier attack on an Experian subsidiary exposed the Social Security numbers of 200 million U.S. citizens.

5. Premera Blue Cross, 11 million
The records exposed in Premera’s breach may have been more sensitive than those leaked in the far larger Anthem breach, including Social Security numbers and financial information of subscribers and people who do business with the company.

The five baddest breaches of 2015
Now let’s take a look at the five baddest breaches of the year — an admittedly subjective category that highlights breaches that are especially damaging or disturbing because of factors such as who they targeted, how they were carried out, and their lasting ramifications.

1. LastPass, 7 million
Consumers should be rewarded for taking smart steps to protect their online security; instead, users of this leading password management company were punished for it. That’s the troubling aspect of this breach, which has further undermined consumer confidence and could lead to unsafe practices. It’s a big problem if consumers stop believing in their ability to achieve digital security and fail to take even basic precautions.

2. Planned Parenthood, 333
While “only” 333 employees were affected by the Planned Parenthood attack, the troubling aspect of this breach is that it was done not to achieve financial gain but to pursue ideological agendas and blackmail affected individuals.

3. Securus Technologies, thousands
Prison phone company Securus Technologies, Inc. had 70 million call records hacked, involving thousands of prisoners across 37 states. The ugliest part? Many of those recorded calls appear to have violated prisoners’ constitutional rights because they involved confidential conversations between prisoners and their attorneys.

4. IRS, 333,000
Hackers accessed extremely sensitive information through past tax returns, including Social Security data and financial details. The total cost to taxpayers in fraudulent claims was about $50 million before the IRS noticed the breach.

5. Harvard University, eight schools and offices
Harvard University joined a long list of other universities to suffer a data breach in 2015. Education is being hit hard, accounting for 6% of all data breaches — slightly more than the retail industry — in the first half of the year. Budgets are tight in the education sector, but breaches at the most esteemed U.S. universities are a reminder that security must be prioritized to protect students and employees.

What can we learn from the big and the bad?
Want even more bad news? These lists include only U.S. breaches. Two of the largest breaches of 2015 — 50 million records breached at a Turkish agency and 20 million at Russian dating site Topface — occurred outside the U.S.

Here are a few takeaways that all organizations — big and small — can put into practice now and in 2016:

  • Beware of all sources of attacks. The largest two breaches were state-sponsored attacks, but Gemalto found that type of attack accounted for just 2% of all the data breach incidents in the first half of 2015. The biggest culprit over those six months? Malicious outsiders, which accounted for 62% of total breaches and nearly half of all records taken.
  • Brace yourself, especially in healthcare and government. According to Gemalto, the healthcare and government sectors accounted for about two-thirds of all compromised data records in the first half of the year.
  • Encrypt. The data stolen from LastPass was heavily encrypted, a protection which may limit the damage done. At the very least, organizations should follow LastPass’ example and encrypt sensitive data (a minimal sketch of the idea follows this list).
  • Learn from mistakes. One breach is bad enough. If an organization suffers a second large attack, as did Experian, the damage to its reputation will grow exponentially.
  • Heed the warnings. According to the Seattle Times, Premera Blue Cross was warned three weeks before its data breach began that it lacked sufficient network security procedures. Ironically, the warning was issued following an audit by the U.S. Office of Personnel Management — which suffered an even larger breach. Premera argued that the vulnerabilities found in the audit may not have been exposed by the hackers. But the point remains: Take any warning seriously, and act as quickly as possible to upgrade your security measures.
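
On the LastPass point: part of what made the stolen data resistant to abuse was that keys were derived from master passwords with a slow key-derivation function and per-user salts. The sketch below shows the general PBKDF2 idea using Python’s standard library; the iteration count and parameters are illustrative assumptions, not LastPass’ actual settings:

```python
# Key derivation sketch: a slow KDF plus a per-user salt makes stolen
# ciphertext and password hashes expensive to brute-force.

import hashlib
import os

salt = os.urandom(16)                        # stored per user; not secret
master_password = b"correct horse battery staple"

key = hashlib.pbkdf2_hmac("sha256", master_password, salt, 100_000)
print(key.hex())   # use as an encryption key; never store the password itself
```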


December 8, 2015  11:51 AM

The human risk factor of a healthcare data breach



Posted by: adelvecchio
cybersecurity, Data breach, data breach security, healthcare data

RickKamGuest post by Rick Kam, CIPP/US, president and co-founder, ID Experts

In a recent report from the Ponemon Institute, 70% of the surveyed healthcare organizations and business associates identified employee negligence as a top threat to information security. An article published earlier this year in the Federal Times noted, “Every survey of IT professionals and assessment of cybersecurity posture shows at least 50% of breaches and leaks are directly attributable to user error or failure to practice proper cyber hygiene.”

Now, to anyone who’s been paying attention for the last decade or so, it will come as no surprise that people make mistakes that cause data breaches. To err is human, and that is not going to change. What has changed is the scope of damage resulting from those errors.

A decade ago, a lost laptop or improperly discarded paper records might expose hundreds or even thousands of people to a potential data breach. Today, with massive digitization of medical information, mobile data usage, and widespread system integration, everyday human errors can cause breaches that expose millions of people to potential harm. To cite one example, InfoWorld and CSO reported that the Anthem data breach, which involved 80 million records, was probably caused when thieves infiltrated Anthem’s system using a database administrator password captured through a phishing scheme.

Attack vectors point from people to technology
A recent blog by Napier University professor William Buchanan aptly lists the top three threats in computer security as “people, people, and people.” Buchanan’s post mentions leaving devices unattended, sharing passwords, or accidentally emailing information to the wrong people as typical security errors. Many of the breaches from cyberattacks are also traced back to users unwittingly giving outsiders access to networks.

Whether thieves get users to share personal information via phishing schemes, enter their credentials on a spoofed website, or download apps with embedded malware, tricking people is the easiest route to cybertheft. Yes, hackers can exploit system vulnerabilities once they’re inside a network, but user mistakes give them the foothold. Kevin Mitnick — a notorious hacker in the 1980s and early 1990s — famously told the BBC, “What I found to be true was that it’s easier to manipulate people rather than technology. Most of the time, organizations overlook that human element.”

Plugging the people gap
Healthcare organizations face challenges in plugging the human security gap. The biggest risk is a lack of awareness on the part of users. Even if an organization has good security processes and training, and employees faithfully follow security procedures at work, they are typically unaware that actions in their private lives can put their employer at risk. A chance comment on Facebook, reuse of the same password on personal and work accounts, or a secretly malicious app downloaded to a personal device that is also used at work can vault criminals right past an organization’s network security. If employees are bringing their own devices to work, their failure to do an operating system update with important security patches can put employers’ networks at risk.

The second biggest challenge is visibility: employers don’t know and can’t control what websites their employees, customers, and business partners visit, what links they click on in popup windows, or who they chat with online.

Assume that every user is exposed to multiple risks every day. According to a new report from Palo Alto Networks, more than 40% of all email attachments and nearly 50% of portable executables examined by Palo Alto’s WildFire software were found to be malicious. The report also found that the average time to “weaponize” world events — to create phishing or other schemes to capture passwords or deliver malware — is six hours. Just think, within a few hours of an earthquake in Chile or a tsunami in Japan, your well-meaning employees trying to donate to a relief fund can be spoofed into providing information that leads to a data breach.

Improving your odds
Humans can’t be error-proofed any more than technology, but there are things that can be done to help a workforce, customers, and partners keep an organization and their information secure. A recent blog by Jeff Peters of SurfWatch Labs recommended fighting social engineering with user awareness programs and using technology to limit exposure. Email coming into networks can be scanned for malicious attachments and links. Periodic security training is great, but ongoing education is also needed: How about a short, fun weekly or monthly newsletter with news of scams and tips on how to avoid them? Or a bulletin board where users can post suspected scams and get recognition for warning others?
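
To make the email-scanning suggestion concrete, here is a toy inbound-mail screen. The attachment blocklist and the suspicious-domain test are tiny assumptions standing in for the detonation sandboxes and reputation feeds that real mail gateways use:

```python
# Toy inbound-mail screen: flag risky attachment types and suspicious
# links before delivery.

import re

RISKY_EXT = {".exe", ".js", ".scr", ".docm"}
URL = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def screen(message):
    flags = [a for a in message["attachments"]
             if any(a.lower().endswith(e) for e in RISKY_EXT)]
    flags += [d for d in URL.findall(message["body"]) if d.endswith(".xyz")]
    return ("quarantine", flags) if flags else ("deliver", [])

print(screen({"attachments": ["invoice.docm"],
              "body": "Pay now at http://relief-fund.xyz/donate"}))
```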

Despite the best efforts at promoting security, people will make mistakes. Among other things, scammers will capture or even guess passwords. Vast numbers of people still use birthdates, pets or children’s names, or other personal information for passwords. A new study covered in the Financial Times found that some nuclear plants are still using factory-set passwords such as “1234” for some equipment. For this reason, some security experts are beginning to advocate doing away with passwords altogether for critical systems and moving to multi-factor authentication. TechTarget reported that at the International Association of Privacy Professionals Privacy. Security. Risk. 2015 conference, keynote speaker Brian Krebs advocated stronger authentication schemes, saying “From my perspective, an overreliance on static identifiers to authenticate people is probably the single biggest threat to consumer privacy and security.”

In the Federal Times article mentioned previously, Jeremy Grant, a senior executive at the National Institute of Standards and Technology, advocated doing away with passwords. He uses two-factor authentication on his phone — biometric identification (a thumbprint) and derived credentials from a common access card or personal identity verification card on his phone — so that there is nothing to remember and nothing that can be stolen.
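
The one-time codes generated by phone authenticator apps are one widely deployed form of that second factor. The sketch below implements the standard RFC 6238 TOTP algorithm with Python’s standard library; the Base32 shared secret is a made-up example, since real deployments provision it per user:

```python
# Minimal RFC 6238 TOTP: a time-based one-time code derived from a
# shared secret -- the "something you have" factor.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # compare with the user's authenticator app
```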

No foolproof solutions
Speaking at the Privacy. Security. Risk. 2015 conference, retired RSA chairman Art Coviello said that, with cloud computing and other new technologies, “The attack surface has expanded so dramatically that it’s becoming unfathomable…The United States is living in the biggest and most vulnerable digital glass house on the planet.” With medical data scattered from the cloud to multiple points of care and to the personal devices of millions of healthcare workers, security failures are going to happen. You may not be able to fool all of the people all of the time, as Abraham Lincoln said, but cybercriminals can fool enough of the people enough of the time to eventually overcome virtually any defense. Unless you envision a perfectly consistent robotic healthcare workforce (oh, wait, robots could be hacked), you can’t count on your staff, users, or your business associates to be 100% secure, 100% of the time.

Ultimately, the best you can do is educate people, consistently and comprehensively monitor for security incidents — based on thorough and up-to-date risk analysis — and have plans and teams ready to respond when human error leads to human and business peril.


November 5, 2015  2:17 PM

What’s stopping the wearables revolution?



Posted by: adelvecchio
Internet of Things, mHealth, mHealth devices, wearable devices, wearables

Guest post by Zach Watson, marketing operations analyst, TechnologyAdvice

Despite obscene amounts of hype, wearable technology has yet to truly turn mainstream. A Nielsen Company report from 2014 found that only about 15% of consumers use wearable technology, though roughly 70% were aware of such devices.

So what’s the problem? If wearables are truly a revolutionary force for health and wellness change, why are so few people buying in?

It’s safe to rule out a lack of innovation. Though the term “wearables” is often thought to refer to fitness trackers, the industry has already moved on to much more sophisticated technology.

For example, Google is currently developing contact lenses that can measure the wearer’s glucose level through the fluid in their eyes. And InteraXon, Inc. developed its Muse headband, which uses EEG technology to measure brain activity during meditation so users can calculate the effectiveness of their session and improve their concentration.

Both of these devices could be described as wearables, but they measure things more sophisticated than heart rate or number of daily steps.

This type of progress rightly inspires awe and excitement, but it’s the lack of practical application of that progress that is hindering wider adoption of (and more meaningful results from) wearables.

Minimal impact on populations with chronic conditions

“In the aggregate data being gathered by millions of personal tracking devices are patterns that may reveal what in the diet, exercise regime, and environment contribute to disease,” wrote the Washington Post.

And that’s true: the promise of wearable technology to pinpoint causes of illness is immense, both for personal and public health. Consider that a whopping 45% of U.S. adults report living with at least one chronic condition — maladies which require ongoing monitoring to manage — and the potential of wearable technology crystallizes even further.

However, the majority of wearable users don’t have chronic diseases. Nearly half of all wearable owners fall into the 18-to-34 years-old demographic. In contrast, having a chronic disease is statistically correlated with advanced age and lower education, as well as less access to the internet.

So not only does the population which needs wearables the most not have them, but these patients also have less ability to operate the devices via the internet should they ever acquire the hardware.

Now, getting wearables into the hands of the chronically ill isn’t the job of device manufacturers per se; policy makers and healthcare providers have a large opportunity to contribute to the dissemination and use of wearables as well.

To be fair, some wearables such as remote monitors are standard fare for the chronically ill, but much more could be done. And until the wearables industry begins focusing on solving these types of problems, the heaviest users will remain quantified self fanatics.

Sustained adoption and actual utility

For the moment, the question of how to use wearables for the greater good remains an academic one. A far more pressing problem — at least in the eyes of manufacturers — will be the user drop off rate.

A study by Endeavour Partners LLC found that after three to six months roughly 30% of wearable owners stop using their device, and the percentage of drop-offs keeps rising the longer owners have their devices.

This study shows that a significant portion of consumers don’t find much payoff from using wearable devices.

This raises real questions about the current utility of these devices in comparison to the hype in which they’re basking. Healthcare providers and scientists are still debating over how much of the data provided to them — through wearable devices or otherwise — is truly useful.

The same conundrum arises from patient access to patient portal software, where users can upload data at any time. For example, the number of steps a user takes per day isn’t groundbreaking material for a physician, and it looks like it may not be groundbreaking for many users either.

Developing products that become irreplaceable in the lives of each user must be the ultimate goal for each device manufacturer. Integrating devices into an existing ecosystem, like the ever-growing Internet of Things, is a good start.

That’s what Remo, a wearable that allows users to manipulate different systems in their home, does. Using gestures, a user can manipulate a range of appliances, from the television to lights and alarm clocks. If this was coupled with health tracking, it could be of huge value to consumers who have trouble moving throughout their homes.

As it stands, wearables are garnering some attention from the general population, but most people aren’t biting. That’s because the fundamental value of tracking your steps and heart rate isn’t compelling enough to pull in the casual consumer.

Now, if these devices offer more complex functions that monitor things such as stress and anxiety, then consumers are likely to take more notice. The market for wearables will continue to increase even if that doesn’t happen in the near future, but the sustained utility of advanced heart rate monitors will only appeal to select groups.

Author Bio
Zach Watson is marketing operations analyst at TechnologyAdvice. He covers marketing automation, healthcare IT, business intelligence, HR, and other emerging technology. Connect with him on LinkedIn.


October 27, 2015  11:42 AM

Danger in the cloud



Posted by: adelvecchio
Cloud, cloud security, cybersecurity, healthcare data

Guest post by Doug Pollack, CIPP/US, chief strategy officer, ID Experts

Chances are that your healthcare organization has already chosen to use cloud computing as part of its IT infrastructure, and with good reason: Cloud computing is a cost-effective way to grow IT capacity, and software services available through the cloud can make a workforce more productive. And your IT team has worked with your service providers to protect data in the cloud. All good, right? But here’s the rub: A new study from cloud security vendor Skyhigh Networks shows the average healthcare organization is using more than 10 times more cloud services than the IT organization knows about. Think about that: more than nine out of 10 services used in the course of business are unmonitored and unsecured. That amounts to one huge security hole, and cybercriminals are jumping in to exploit this new threat to healthcare information.

Foggy about the cloud
In a recent report from the Ponemon Institute, the Fifth Annual Benchmark Study on Privacy and Security of Healthcare Data, survey respondents identified cloud usage as a primary security concern for the healthcare industry. A third of respondents rated public cloud service use as a top security threat to their organizations. Employee negligence was listed as the top threat, at 70%, and cyberattacks came in second at 40%.

In fact, the cloud security threat is likely bigger than most organizations realize. According to MedCity News, the Skyhigh study found that the average healthcare organization uses 928 different cloud services: 60 that are known to IT, and 868 — about 93% — that are “shadow services” not known or tracked by the IT, infosec, privacy, or compliance functions. While the volume of untracked cloud computing is troubling, it is not surprising. Statistics from the study reveal how much of today’s everyday communication and collaboration happens online:

  • On average, an employee uses 28 distinct cloud services, including seven collaboration services, four content-sharing services, three social media services and four file-sharing services.
  • The average organization shares documents with 826 external domains, including business partners and email providers such as Gmail.
  • Almost 28% of users have uploaded sensitive data to a file-sharing service.
  • The average organization is connected to 1,586 business partners via the cloud. A significant number of these may also be partners of partners, and hence unknown and unaccounted for. It’s best to assume that every employee of every partner is also using multiple cloud services.

The bottom line is that you can’t protect data you can’t see, and you can’t see a lot of what’s in the cloud.

Crime lurks in the cloud
It’s interesting that the Ponemon study respondents ranked cloud computing behind employee negligence and cyberattacks among their security worries. The truth is that the three work hand-in-hand to put organizations at risk.

Virtually every security study this year has shown that cyberattacks are now the top cause of data breaches, and most are multi-stage attacks that begin with social engineering, proceed to gain network access with stolen passwords or malware, then exfiltrate sensitive information. As Dan Munro recently pointed out in Forbes, “The latest techniques for cyber theft at scale are less about breaching networks from the outside — and all about social engineering to capture privileged access from the inside. Consumer cloud services like LinkedIn, Snapchat, Zappos, Evernote… have all had significant data breaches.”

Cloud services expose employees to all kinds of social engineering. The Skyhigh report found each cloud user is tracked by an average of four analytics and advertising services, and cybercriminals are increasingly using these services to deliver “malvertising” that can lead users to spoofed sites and capture their passwords. Tracking also enables “watering hole” attacks where criminals impersonate users at a favorite site and trick other users into revealing information.

Employees may also download apps containing malware to their workstations or personal devices, giving criminals a foothold from which to attack. Even social media passwords can give criminals enough access to steal information. Skyhigh found an attack that used Twitter to exfiltrate data 140 characters at a time. While employees may not be outright negligent in these situations, most are certainly unaware their social media usage may be putting their employer’s data at risk.

Once criminals gain access to information in the cloud, stealing data is relatively easy. The Skyhigh report revealed that only 15% of cloud services supported multi-factor authentication and only around 9% encrypted data stored at rest. More than 57% of the sensitive data in the cloud is in Microsoft Office files. When breaches involving cloud data happen, not only do organizations face the normal risks, they also face potential regulatory penalties of having unsecured data. A CipherCloud data security report found that 64% of cloud security challenges stem from the areas of audit, compliance, and privacy regulations.

Safety tips for the cloud
Ironically, one of the motivations for adopting cloud computing has been to improve security. Lost devices have historically been a major cause of data breaches, and real-time access to data in the cloud eliminates the need to store large data sets on individual devices. Unfortunately, the threat balance has shifted toward cyberattacks. Cloud services provide an easy entrée for cybercriminals, and the genie is out of the bottle: Cloud services are not going away anytime soon. But there are steps an organization can take to help protect against cloud-based attacks. In Health Data Management, cloud security vendor Porticor Ltd. offered some tips for improving cloud security on the IT and compliance side:

  • Consider extending identity and access management solutions to the cloud.
  • Obtain business associate agreements from all vendors, including cloud vendors and service providers, and make sure the agreement clearly defines the associate’s compliance responsibilities.
  • Have the IT department occasionally perform penetration tests and request audits and certifications from cloud vendors. The Cloud Security Alliance offers multiple levels of security certifications for cloud-based vendors, and some of their certification levels include independent audits.

All of these steps will help improve security, but most of what happens in the cloud happens in shadow services that employees and partners use, which can’t be controlled or monitored. These risks can be lowered by granting users access to the minimum amount of information necessary to perform a given task. Staff and business partners should also be taught good security practices. But the siren call of the Web is strong, and since what people do in the cloud can’t be controlled, cloud-based risks have to be planned for in the same way as any other security incident or breach.

Regardless of where the data lives, if thorough data inventories and risk analyses have been done, an organization will know what protected health and personal information it holds and the risks of it being compromised. If a solid incident response plan is in place, an organization should be prepared for a cloud-based attack.

In the end, both risk and protection depend on people.


October 15, 2015  1:55 PM

Medical identity theft: Why healthcare data can be breached



Posted by: adelvecchio
Data breach, data breach security, Data security, health data security

Guest post by Rick Kam, CIPP/US, president and co-founder, ID Experts

Passengers on the London Underground are told to “mind the gap,” a warning to watch for the space between the train door and station platform. Healthcare organizations need to mind their own privacy and security gaps when it comes to protecting sensitive medical information.

According to the latest Gemalto NV Breach Level Index, the healthcare sector had the most data breaches in the first half of 2015, accounting for 21% of total incidents across all industries. Healthcare also had the largest number of records breached, at 84.4 million records, or 34%. The nature of these gaps has changed over the years — for instance, criminal attacks are now the leading cause of data breaches in healthcare, according to Ponemon Institute’s Fifth Annual Benchmark Study on Privacy & Security of Healthcare Data. Data breaches, particularly those caused by a criminal element, have caused medical identity theft to nearly double in five years.

The link between data breaches and medical identity theft
According to the Wall Street Journal, medical identity theft is on the rise because of the surge in electronic health records and healthcare data breaches. But it’s more than the digitization of health records. Medical data is everywhere, due to a plethora of devices, from tablet computers to medical implants and even Fitbits and Apple watches that are recording health data and transmitting it over the Internet.

As noted in Forbes, healthcare data breaches are also on the rise because financial services and retail sectors have developed better strategies for protecting their data. This includes the use of EMV cards that use a chip instead of a magnetic stripe. As a result, many hackers are turning to the more vulnerable healthcare industry.

In addition, medical information is simply more profitable on the black market. The Dark Web offers cybercriminals multiple global marketplaces in which to sell stolen personal information, including healthcare records. According to the FBI, healthcare records can fetch as much as $60 to $70 apiece, as opposed to about $5 for a credit card number.

All of this is converging to create a perfect storm for criminals seeking this data: it’s more available, it’s worth more, and healthcare organizations aren’t as good at protecting it because they haven’t had to be.

As Shantanu Agrawal, M.D., director of the Center for Program Integrity at the Centers for Medicare and Medicaid Services, told the Wall Street Journal, “Data breaches are increasing and becoming more common.”

Smart, strategic data protection
To protect patients against the harms of medical identity theft, the healthcare sector must step up its data protection measures. While there is no such thing as zero risk in today’s connected, digitized world, health plans, hospitals and other entities that hold medical information can mount a strategic defense against cybercriminals.

For instance, in an interview earlier this year, Dwayne Melancon, chief technology officer of Tripwire, recommended following the example of financial institutions that classify and segregate their data. “You…have to have good segregation of data,” he said, “where you make sure that only a select group of people can access sensitive data, that there are lots of controls around it.”

Melancon also cautioned healthcare organizations to spend their security dollars wisely. “A dollar spent on security doesn’t mean it’s worth spending,” he said. He added that security spending should be part of a risk framework, and not done to “just add window dressing.”

In other words, healthcare organizations must mind the gap.


October 8, 2015  12:42 PM

Security of healthcare data in motion



Posted by: adelvecchio
cybersecurity, data encryption, data in motion, data privacy and security, healthcare data

Guest post by Dr. Michael G. Mathews, president, COO and co-founder, CynergisTek, Inc.

In previous articles, I covered the fundamentals of encryption using symmetric (shared secret), asymmetric (public-key), and mixing the two to create a hybrid approach to keeping data confidential. I also covered the concepts of data integrity (knowing a message has not been changed) and non-repudiation (verifying the sender is authentic). This installment focuses on the security of healthcare data in motion. The final segment in this series will focus on the security of healthcare data at rest.

At the risk of sounding like a broken record (as it seems all things security start with this), it is critical to understand the application data flow for the data being protected. Knowing the type of data being moved and where it originates and is destined, as well as if there are intermediate stops/routings along the way, helps inform what type of protection makes the best sense for the data. For example, an application that is moving data from point A to point B within the internal network might simply be an exercise in proper network architecture design to segment the traffic as best as possible from those who don’t need access to it. Since network segmentation as a mitigating control falls outside the realm of encryption, I’ll reserve that topic for a future article.

Any data leaving the internal network and going beyond the perimeter firewall certainly deserves a critical eye from a data confidentiality perspective, including non-traditional health IT applications such as Voice over Internet Protocol (VoIP). In the case of VoIP, depending on how calls are routed, the data portion of the call might live on the internal network or it might leave the internal network to a hosted private branch exchange. In the latter case, any conversations that include protected health information would be exposed to the Internet — potentially creating an unauthorized disclosure — without mitigating controls in place. In general, where it’s possible to enable data confidentiality, there’s rarely a reason not to do so.

One of the prominent options available for protecting the confidentiality of healthcare data is transport layer security, or TLS, which, together with its predecessor secure sockets layer (SSL), is often referred to collectively as SSL. TLS takes a hybrid cryptography approach in that it uses asymmetric (public-key) cryptography to establish a secure initial communication channel, within which it then negotiates a session key (symmetric) for further communications.
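
From the application’s point of view, that entire hybrid handshake takes only a few lines of code. Here is a minimal sketch using Python’s standard ssl module; the host name is just an example:

```python
# TLS client sketch: the ssl module performs the public-key handshake
# and symmetric session-key negotiation; the application just reads
# and writes over the encrypted channel.

import socket
import ssl

ctx = ssl.create_default_context()          # verifies the server certificate

with socket.create_connection(("example.org", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
        print(tls.version(), tls.cipher())  # negotiated protocol and suite
        tls.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```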

The benefit of using SSL/TLS is that, for discussion’s sake, it works at the application layer. This means that by the time the traffic hits the network, it’s encrypted. One detriment is that unless the application in question was written to support SSL/TLS, it’s not something that can be added after the fact, though there are workarounds that use SSL tunneling to make non-SSL/TLS-aware applications work with SSL/TLS. In recent years SSL/TLS has become more ubiquitous in applications, making this route of protecting data much more accessible, though it hasn’t been without setbacks, Heartbleed being the most widespread and serious.

The other widespread route is IP security or IPsec. In contrast to SSL/TLS, IPsec works at the network layer and, as such, it can be used to secure the confidentiality of any application, including those that don’t have security or privacy as integral features. Readers will most likely associate IPsec with site-to-site virtual private network (VPN) connections and even some implementations of end user VPN connectivity. IPsec depends on what are called security associations to establish the rules of the connection and the rules must match on each side of the connection to be successfully negotiated. Like SSL/TLS, IPsec also uses a hybrid approach to cryptography with initial key exchange either using a shared secret or a protocol-based key exchange to generate session keys for the communication to be protected.

