Healthcare and Technology news

Anthem's Audit Refusal: Mixed Reaction


Privacy and security experts are offering mixed reviews of Anthem Inc.'s denial of a government auditor's request to perform vulnerability scans of the health insurer's IT systems in the wake of a hacker attack that affected 78.8 million individuals.

The Office of Personnel Management's Office of Inspector General, in a statement provided to Information Security Media Group, says Anthem - citing "corporate policy" - refused to allow the agency to perform "standard vulnerability scans and configuration compliance tests" this summer, as requested by the OIG. The health insurer also refused to allow the OIG to conduct those vulnerability tests in 2013 as part of an IT security audit that was performed by the agency.


"Anthem is in a no-win situation on this [most recent] request," says Dan Berger, CEO of security services firm Redspin. "It does appear Anthem has the contractual right to decline the request for an OIG vulnerability scan. But they might want to rethink that. Refusing now looks bad - both to their client OPM and to the public at large."

Security expert Mac McMillan, CEO of the consulting firm CynergisTek, notes: "Usually most companies want to cooperate with the government regulators because, quite frankly, it's in their best interest to do so. Most government contracts provide a provision for the government to conduct an audit if they deem it necessary."

But some other security experts are not surprised that Anthem refused the vulnerability tests.

"In fairness to Anthem, their position may be perfectly well-founded," says Bob Chaput, founder and CEO of Clearwater Compliance. "It's unclear what is precisely meant by vulnerability scans. Ask five people for a definition and receive eight different definitions. External and/or internal technical testing - expanding for the moment to include penetration testing as a way to identify a weakness - can be quite intrusive and disruptive to an organization's operations."

OIG Requests

OPM's OIG performs a variety of audits on health insurers that provide health plans to federal employees under the Federal Employee Health Benefits Program, an OIG spokeswoman tells Information Security Media Group. However, under the standard FEHBP contract that OPM has with insurers, insurers are not mandated to cooperate with IT security audits. Sometimes amendments are made to insurers' federal contracts to specifically require the full audits, the spokeswoman says. In fact, the OIG is now seeking such an amendment to Anthem's FEHBP contract.

OIG also notes in a statement: "We have conducted vulnerability scans and configuration compliance tests at numerous health insurance carriers without incident. We do not know why Anthem refuses to cooperate with the OIG."

A Common Practice?

David Kennedy, founder of security consulting firm TrustedSec, says it's "very common" for corporations to prohibit or limit external parties from performing vulnerability scans. "Most corporations have sanctioned tests that occur from third parties that perform the same type of testing and go even more in depth," he says. "A vulnerability scan is the most basic form of an assessment and wouldn't have prevented the Anthem breach from occurring. Most corporations will provide a summary of the assessment that was performed to provide to third parties to satisfy them for appropriate due diligence."

Anthem's recent refusal of the OIG audit requests might now appear to be a public relations blunder for the company. "I can see Anthem's side too, though," says Redspin's Berger. "A vulnerability scan is always going to find vulnerabilities. They may be concerned that any post-breach vulnerability report will be linked back to the recent breach. In reality, such scans are a 'point in time' assessment; it's unlikely that running a scan in the summer of 2015 would determine conclusively whether the recent breach could have been prevented."

In addition, if a security audit is not mandated by a contract, Chaput says it's probably not that unusual for private entities to refuse such requests from government agencies. "It depends on the nature of the relationship of the parties, the structure of that relationship, sensitivity of information involved, etc.," he says. "For example, is OPM a HIPAA covered entity and Anthem a HIPAA business associate in this relationship?"

Time for Change?

The audit hoopla might even signal a need for OPM to overhaul its contractual practices, Chaput argues.

"In fact, it's quite possible that OPM is in violation of the HIPAA Privacy and Security Rule 'organizational requirements,'" he says. "Did OPM update all BA agreements? Do the terms and conditions of whatever agreements exist meet the requirements set forth in these HIPAA Privacy and Security Rule 'organizational requirements' to receive satisfactory assurances that this PHI and other sensitive information would be safeguarded?"

The government should negotiate stronger security protections into its contracts with insurers, Berger suggests. That could include third-party vulnerability scans, whether conducted by the OIG or others.

But McMillan of CynergisTek says Anthem's refusal of the OIG's request could provoke even more scrutiny from other government regulators, or perhaps legislative proposals from Congress.

Anthem likely already faces an investigation by the HIPAA enforcement agency, the Department of Health and Human Services' Office for Civil Rights, which investigates health data breaches and has the power to issue settlements that include financial penalties.

"Whether it is appropriate or allowed under [Anthem's] current contract or not - refusing a test right after a breach of this magnitude is enough to make some people say there needs to be greater accountability," McMillan says.

Safeguarding Data

Ironically, Chaput says that by denying the OIG's vulnerability tests, Anthem could actually be taking extra precautions to protect PHI. "With over-the-top issues of government surveillance of U.S. citizens, Anthem might be thought of as having implemented a reasonable and appropriate administrative control - i.e. their 'corporate policy' to safeguard information with which it has been entrusted," Chaput says. "In the HIPAA Privacy Rule, there are standards and implementation specifications in which PHI, for example, is required to be disclosed to the Secretary of HHS. Since this technical testing could result in a disclosure of PHI, PII or other sensitive information, under what standard is OPM OIG invoking a right of potential disclosure?"

Kennedy adds that when he worked for ATM security vendor Diebold, "we never let anyone scan us. However, we would always have reputable third parties perform assessments on us on a regular basis and provide those upon request when an organization wanted to evaluate our security."


Top cybersecurity predictions of 2015 - ZDNet


As noted by Websense, healthcare data is valuable. Not only are companies such as Google, Samsung and Apple tapping into the industry, but the sector itself is becoming more reliant on electronic records and data analysis. As such, data stealing campaigns targeting hospitals and health institutions are likely to increase in the coming year.



Via Paulo Félix
Vicente Pastor's curator insight, December 6, 2014 10:26 AM

I am a bit skeptical about predictions in general. Still, it is always a good exercise to think about coming trends, although we do not need to wait for the "artificial" change of year, since threats are continuously evolving.


Health IT Security: What Can the Association for Computing Machinery Contribute?

A dazed awareness of security risks in health IT has bubbled up from the shop floor administrators and conformance directors (who have always worried about them) to C-suite offices and the general public, thanks to a series of oversized data breaches that recently peaked in the Anthem Health Insurance break-in. Now the US Senate Health Committee is taking up security, explicitly referring to Anthem. The inquiry is extremely broad, though, promising to address “electronic health records, hospital networks, insurance records, and network-connected medical devices.”

The challenge of defining a strategy has now been picked up by the US branch of the Association for Computing Machinery, the world's largest organization focused on computing. (It is also probably its oldest, having been founded in 1947, when computers used vacuum tubes.) We're an interesting bunch, including people who have helped health care sites secure data as well as researchers whose role is to consume data–often hard to get.

So over the next few weeks, half a dozen volunteers on the ACM US Public Policy Council will discuss what to suggest to the Senate. Some of us hope the task of producing a position statement will lead the ACM to form a longer-range committee that applies the organization's considerable expertise to health IT.

Some of the areas I have asked the USACM to look at include:

Cyber-espionage and identity theft
This issue has all the publicity at the moment–and that’s appropriate given how many people get hurt by all the data breaches, which are going way up. We haven’t even seen instances yet of malicious alteration or destruction of data, but we probably will.

Members of our committee believe there is nothing special about the security needs of the health care field or the technologies available to secure it. Like all fields, it needs fine-grained access controls, logs and audit trails, encryption, multi-factor authentication, and so forth (the short sketch after the list below illustrates the access-control and audit-trail piece). The field also has to stop doing stupid stuff like using Social Security numbers as identifiers. But certain aspects of health care make it particularly hard to secure:

  • The data is a platinum mine (far more valuable than your credit card information) for data thieves.
  • The data is also intensely sensitive. You can get a new credit card, but you can't change your MS diagnosis. The data can easily feed into discrimination by employers and insurers, or other attacks on the individual victims.
  • Too many people need the data, from clinicians and patients all the way through to public health and medical researchers. The variety of people who get access to the data also makes security more difficult. (See also anonymization below.)
  • Ease of use and timely access are urgent. When your vital signs drop and your life is at stake, you don’t want the nurse on duty to have to page somebody for access.
  • Institutions are still stuck on outmoded security systems. Passwords internally and firewalls externally remain important, but many breaches can bypass both.
  • The stewards/owners of health care data keep it forever, because the data is always relevant to treatment. Unlike other industries, clinicians don’t eventually aggregate and discard facts on individuals.
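As promised above, here is a short sketch of what fine-grained access control plus an audit trail can look like in practice: a role-based, field-level check that records every access attempt before it is allowed or denied. The roles, record fields, and policy are illustrative assumptions rather than a description of any real EHR system, and the code is a teaching sketch, not production security.

    # Sketch of role-based, field-level access control with an audit trail (illustrative only).
    from datetime import datetime, timezone

    # Hypothetical policy: which roles may read which parts of a patient record.
    ACCESS_POLICY = {
        "nurse": {"vitals", "medications"},
        "physician": {"vitals", "medications", "diagnoses", "lab_results"},
        "billing_clerk": {"insurance"},
    }

    audit_log = []  # in practice this would be write-once, tamper-evident storage

    def fetch_from_store(patient_id: str, field: str) -> str:
        # Stand-in for the real record store.
        return f"<{field} for patient {patient_id}>"

    def read_field(user_id: str, role: str, patient_id: str, field: str) -> str:
        """Allow the read only if the role's policy covers the field; log every attempt."""
        allowed = field in ACCESS_POLICY.get(role, set())
        audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user_id,
            "role": role,
            "patient": patient_id,
            "field": field,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"role '{role}' may not read '{field}'")
        return fetch_from_store(patient_id, field)

    # Example: a nurse may read vitals but would be refused lab results.
    print(read_field("u123", "nurse", "p456", "vitals"))

The point is not the particular Python but the pattern: every access decision is made against an explicit policy and leaves a record that can be reviewed later.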
Anonymization
Numerous breaches of public data, such as in Washington State, raise questions about the security of data that is supposedly anonymized. The HIPAA Safe Harbor, which health care providers and their business associates can use to avoid legal liability, is far too simplistic, being too strict for some situations and too lax for others.

Clearly, many institutions sharing data don't understand the risks and how to mitigate them. An enduring split has emerged between two camps of experts, each bringing considerable authority to the debate. Researchers in health care point to well-researched techniques for deidentifying data (see Anonymizing Health Data, a book I edited).

In the other corner stand many computer security experts–some of them within the ACM–who doubt that any kind of useful anonymization will stand up over the years against the increase in computer speeds and in the sophistication of data mining algorithms. That side of the debate leads nowhere, however. If the cynics were correct, even the US Census could not ethically release data.
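To make the debate concrete, here is a minimal sketch of the kind of transformation the Safe Harbor approach implies: strip direct identifiers, generalize ZIP codes, and reduce dates to years. The field names and rules are illustrative assumptions and fall well short of the actual rule (let alone the risk-based methods discussed in the book); the sketch simply shows how much, and how little, such a pass changes a record.

    # Illustrative, incomplete sketch of Safe-Harbor-style de-identification.
    # Field names and rules are assumptions; this is not the full HIPAA Safe Harbor list.
    DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "medical_record_number"}

    def deidentify(record: dict) -> dict:
        out = {}
        for key, value in record.items():
            if key in DIRECT_IDENTIFIERS:
                continue                      # drop direct identifiers entirely
            if key == "zip":
                out[key] = value[:3] + "00"   # generalize to a 3-digit ZIP prefix
            elif key.endswith("_date"):
                out[key] = value[:4]          # keep only the year
            else:
                out[key] = value              # everything else passes through untouched
        return out

    sample = {
        "name": "Jane Doe",
        "ssn": "123-45-6789",
        "zip": "98101",
        "birth_date": "1960-04-12",
        "diagnosis": "E11.9",
    }
    print(deidentify(sample))
    # {'zip': '98100', 'birth_date': '1960', 'diagnosis': 'E11.9'}

The skeptics' worry is precisely that last branch: the quasi-identifiers that survive (ZIP prefix, birth year, diagnosis) can still be combined with outside data sets, which is why the two camps read the same evidence so differently.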

Patient consent
Strong rules to protect patients were put in place decades ago after shocking abuses (see The Immortal Life of Henrietta Lacks). Now researchers are complaining that data on patients is too hard to get. In particular, combining data from different sites to get a decent-sized patient population is a nightmare both legally and technically.
Device security
No surprise–like every shiny new fad, the Internet of Things is highly insecure. And this extends to implanted devices, at least in theory. We need to evaluate the risks of medical devices, in the hospital or in the body, and decide what steps are reasonable to secure them.
Trusted identities in cyberspace
This federal initiative would create a system of certificates and verification so that individuals could verify who they are while participating in online activities. Health care is a key sector that could benefit from this.

Expertise exists in all these areas, and it’s time for the health care industry to take better advantage of it. I’ll be reporting progress as we go along. The Patient Privacy Rights summit next June will also cover these issues.



'Wiper' Malware: What You Need to Know


The FBI has reportedly issued an emergency "flash alert" to businesses, warning that it's recently seen a destructive "wiper" malware attack launched against a U.S. business.

Security experts say the FBI alert marks the first time that dangerous "wiper" malware has been used in an attack against a business in the U.S., and many say the warning appears to be tied to the Nov. 24 hack of Sony by a group calling itself the Guardians of Peace.

Large-scale wiper attacks are quite rare, because most malware attacks are driven by cybercrime, with criminals gunning not to delete data but rather to quietly steal it, for as long as possible, says Roel Schouwenberg, a security researcher at anti-virus firm Kaspersky Lab. "Simply wiping all data is a level of escalation from which there is no recovery."

Many Sony hack commentators have focused on the fact that previous wiper attacks have been attributed to North Korea, and that the FBI alert says that some components used in this attack were developed using Korean-language tools.

But Schouwenberg advocates skepticism, saying organizations and IT professionals should focus their energies on risk management. "We are much better off trying to understand the attack better, and maybe use this incident as an opportunity for businesses everywhere to basically re-evaluate their current security strategy, which probably isn't quite tailored to this type of scenario and say: 'Hey, this is where I can improve my posture,'" he says. "So we should be focusing on that technical aspect, rather than on the potential motivations of the attackers."

In this interview with Information Security Media Group, Schouwenberg details:

  • The relative ease with which wiper malware attacks can be crafted;
  • Steps businesses can take to improve their security defenses against wiper malware;
  • The importance of whitelisting applications - meaning that only approved applications are allowed to run on a PC, while all others are blocked (a brief sketch of the idea follows this list).
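As a rough illustration of that last item, here is a minimal sketch of application whitelisting: hash an executable and allow it to run only if the hash appears on a pre-approved list. The empty hash set and the example path are placeholders, and real deployments enforce this with OS-level tooling rather than a script; the sketch only shows the underlying check.

    # Minimal sketch of application whitelisting: a binary may run only if its
    # SHA-256 hash is on a pre-approved list. Hashes and paths are placeholders.
    import hashlib
    from pathlib import Path

    ALLOWED_HASHES = set()  # would hold the SHA-256 hashes of approved binaries

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def may_execute(path: Path) -> bool:
        """Return True only if the binary's hash is on the allowlist."""
        return sha256_of(path) in ALLOWED_HASHES

    # Example (illustrative path): anything not explicitly approved is blocked.
    # if not may_execute(Path("C:/apps/updater.exe")):
    #     block_and_alert()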


