Healthcare and Technology news

What is a HIPAA Limited Data Set?

Under HIPAA, a limited data set is protected health information (PHI) that excludes certain direct identifiers of an individual, or certain direct identifiers of relatives, employers, or household members of the individual. 

What is a Direct Identifier?

Under HIPAA, a direct identifier is information that relates specifically to an individual. HIPAA designates the following information as direct identifiers:

  • Names
  • Postal address information, other than town or city, state, and ZIP code
  • Telephone numbers
  • Fax numbers
  • Electronic mail addresses
  • Social Security numbers
  • Medical record numbers
  • Health-plan beneficiary numbers
  • Account numbers
  • Certificate and license numbers
  • Vehicle identifiers and serial numbers, including license plate numbers
  • Device identifiers and serial numbers
  • Web Universal Resource Locators (URLs)
  • Internet Protocol (IP) address numbers
  • Biometric identifiers (including fingerprints and voice prints)
  • Full-face photographic images and any comparable images

What is the Relationship Between Direct Identifiers and a Limited Data Set?

A “limited data set” is information from which the above direct identifiers have been removed. All of the above-listed identifiers must be removed in order for health information to be a limited data set.
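As a purely illustrative sketch (the field names below are hypothetical, and this is not a compliance tool), the stripping rule above can be expressed in a few lines of Python:

```python
# Hypothetical field names standing in for the 16 direct identifiers;
# a real record schema will differ, and this sketch is NOT a
# substitute for a compliance review.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "fax", "email", "ssn",
    "medical_record_number", "health_plan_beneficiary_number",
    "account_number", "certificate_number", "license_number",
    "vehicle_id", "device_serial", "url", "ip_address",
    "biometric_id", "full_face_photo",
}

def to_limited_data_set(record: dict) -> dict:
    """Drop every direct-identifier field; what remains (e.g. city,
    state, ZIP, dates, clinical data) forms the limited data set."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {"name": "Jane Doe", "ssn": "000-00-0000",
          "city": "Springfield", "state": "IL", "zip": "62701",
          "diagnosis": "J45.909"}
lds = to_limited_data_set(record)
```

Here `lds` retains the city, state, ZIP code, and diagnosis but no name or Social Security number; as the next section explains, that remaining data is still PHI.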

Is a Limited Data Set Still Considered Protected Health Information?

Yes. A limited data set is still protected health information or “PHI” under HIPAA (or electronic protected health information, if in electronic form).

For patient data to lose its status as PHI, that information must be de-identified. De-identified patient data is health information from a medical record that has been stripped of all identifiers—that is, all information that can be used to identify the patient from whose medical record the health information was derived—not just the direct identifiers listed above.

Therefore, since a limited data set is PHI, it is still subject to the use and disclosure requirements and restrictions of the HIPAA Privacy Rule.

What is the Significance of Information Comprising a Limited Data Set?

Disclosures of a “limited data set” are not subject to the HIPAA accounting requirements.


HIPAA accounting requirements mandate that a patient or research subject has the right to request a written record (an accounting) when a covered entity has made certain disclosures of that person’s protected health information (“PHI”). The accounting must include all covered disclosures in the six years prior to the date of the person’s request.


A covered entity may also disclose a limited data set (LDS) for public health purposes, including emergency preparedness activities. The covered entity must have a data use agreement in place in order to disclose the LDS.


IBM Announces Deal to Acquire Both Phytel and Explorys; Goal Is Data Transformation

Senior executives at the Armonk, N.Y.-based IBM announced at a press conference held on Monday afternoon, April 13, at the McCormick Place Convention Center in Chicago, during the course of the HIMSS Conference, that the company is acquiring both the Dallas-based Phytel and the Cleveland-based Explorys, a combination that the executives said holds great potential for leveraging data capabilities to transform healthcare.

Both Phytel, a leading population health management vendor, and Explorys, a healthcare intelligence cloud firm, will become part of the new Watson Health unit, about which IBM said, “IBM Watson Health is creating a more complete and personalized picture of health, powered by cognitive computing. Now individuals are empowered to understand more about their health, while doctors, researchers, and insurers can make better, faster, and more cost-effective decisions.”

In its announcement of the Phytel acquisition, the company noted that, “The acquisition once completed will bolster the company’s efforts to apply advanced analytics and cognitive computing to help primary care providers, large hospital systems and physician networks improve healthcare quality and effect healthier patient outcomes.”

And in its announcement of the Explorys acquisition, IBM noted that, “Since its spin-off from the Cleveland Clinic in 2009, Explorys has secured a robust healthcare database derived from numerous and diverse financial, operational and medical record systems comprising 315 billion longitudinal data points across the continuum of care. This powerful body of insight will help fuel IBM Watson Health Cloud, a new open platform that allows information to be securely de-identified, shared and combined with a dynamic and constantly growing aggregated view of clinical, health and social research data.”

Mike Rhodin, senior vice president, IBM Watson, said at Monday’s press conference, “Connecting the data and information is why we need to pull the information together into this [Watson Health]. So we’re extending what we’ve been doing with Watson into this. We’re bringing in great partners to help us fulfill the promise of an open platform to build solutions to leverage data in new ways. We actually believe that in the data are the answers to many of the diseases we struggle with today, the answers to the costs in healthcare,” he added. “It’s all in there, it’s all in silos. All this data needs to be able to be brought into a HIPAA-secured, cloud-enabled framework, for providers, payers, everyone. To get the answers, we look to the market, we look to world-class companies, the entrepreneurs who had the vision to begin to build this transformation.”


4 Things Health IT Leaders “Would Be” Thankful For

Like many of you, every year at Thanksgiving dinner, my family, friends and I gather around the table and say one thing that we are thankful for before we begin to dig into the delicious food in front of us. Some are thankful for the meal in front of them, some for the company around them, and others for the roofs over their heads.

In the world of health IT, appreciation comes a little differently, and right now, it’s hard to imagine that CIOs are grateful for much in their professional lives. Simply stated, CIOs are overworked and burnt out in an era when the pressure is on them now more than ever. In fact, statistics say that a CIO’s responsibilities have increased, in terms of both scope and complexity, by 25 percent to 50 percent since the passage of HITECH. As such, I can’t imagine there are too many things that CIOs are thankful for, as the burden seemingly increases by the day. So as we approach Thanksgiving 2014, here is a list of four things that healthcare IT leaders “would be” thankful for if they had them.

Some more meaningful use flexibility. Just recently, the Centers for Medicare & Medicaid Services (CMS) extended the deadline for hospitals to attest to meaningful use for the 2014 reporting year back one month, from Nov. 30 to Dec. 31. While this can be seen as another little bone the federal agency has thrown provider organizations, I see it as small potatoes in the big picture. Clearly, the industry wants—and needs—more substantial change. As of Nov. 1, only 840 hospitals had attested to meaningful use Stage 2 within the 2014 calendar timeframe, out of the 2,300-plus hospitals that had attested to Stage 1; and 11,478 physicians had attested to Stage 2 within the 2014 calendar timeframe. A shortened, 90-day reporting period rather than the current 365-day reporting period for 2015 would be something the industry would be very thankful for. Heck, some IT leaders have even suggested that it’s time to “declare MU a victory and move on.”

More money and more manpower. Technology adoption is expensive, and some healthcare organizations simply don’t have the resources. What’s more, there are not a ton of qualified IT professionals, as the pool of experts seems shallow. A Healthcare Information and Management Systems Society (HIMSS) survey from last year found that 31 percent of healthcare organizations had to place IT initiatives on hold due to staffing shortages, while 43 percent cited the lack of a qualified talent pool as a challenge to appropriately meeting their staffing needs. And the year before that, a College of Healthcare Information Management Executives (CHIME) CIO survey found that 67 percent of healthcare CIOs were reporting IT staff shortages. Consultant development programs such as this one could help solve the problem.

More clarity and guidance from the federal government. This is a general one, but it really applies to the plethora of federal mandates that are hitting the industry all at once. Earlier this month, HCI Editor-in-Chief wrote a great, in-depth blog highlighting the mass departures among top leadership at the Office of the National Coordinator for Health IT (ONC). Specifically, the decision to move the National Coordinator for Health IT, Karen DeSalvo, M.D., to the Ebola response team when the industry needs leadership and vision now, perhaps more than ever, was a highly questionable one. Will DeSalvo come back to her post at ONC when she is done helping out with Ebola? Couldn’t you argue that the Ebola crisis in the U.S. is already past us? CHIME and HIMSS were two industry organizations that expressed similar concerns about this move. In a joint letter to Health and Human Services (HHS) Secretary Sylvia Mathews Burwell, they wrote that, “If Dr. DeSalvo is going to remain as the Acting Assistant Secretary for Health with part-time duties in health IT, we emphasize the need to appoint new ONC leadership immediately that can lead the agency on the host of critical issues that must be addressed.”

No more ICD-10 delays. Just recently, the Coalition for ICD-10, a broad-based healthcare industry advocacy group, sent a letter to House and Senate leaders urging them not to delay the ICD-10 implementation date again. In the letter, they said, “nearly three quarters of the hospitals and health systems surveyed just before the current delay were confident in their ability to successfully implement ICD-10. Retraining personnel and reconfiguring systems multiple times in anticipation of the implementation of ICD-10 is unnecessarily driving up the cost of healthcare.” Many providers that you talk to actually challenge the notion that the switch to the new coding set carries any value. But at the very least, stick to the date!

While the above list might seem unrealistic right now, perhaps health IT leaders can take solace in the fact that we are feeling your pain. So for the time being, sit back and enjoy all the things in life that you really are thankful for. Happy Thanksgiving, everyone!


Top cybersecurity predictions of 2015 - ZDNet

As noted by Websense, healthcare data is valuable. Not only are companies such as Google, Samsung and Apple tapping into the industry, but the sector itself is becoming more reliant on electronic records and data analysis. As such, data stealing campaigns targeting hospitals and health institutions are likely to increase in the coming year.

Vicente Pastor's curator insight, December 6, 2014 10:26 AM

I am a bit skeptical about predictions in general. Anyway, it is always a good exercise to think about the coming trends, although we do not need to wait for the "artificial" change of year, since threats are continuously evolving.

AI must overcome data challenges to reach healthcare potential 

Dive Brief:

  • Rapid digitization of health information in EHRs and other repositories is creating new opportunities for AI in healthcare, but challenges in data accessibility, privacy and security persist, according to a new ONC report.
  • Frustration with legacy medical systems, the omnipresence of networked smart devices and consumer comfort with at-home services offered by Amazon and other tech vendors are driving interest in AI's potential.
  • Smartphone, social and environmental data can all be potential sources to fuel AI's use in healthcare. However, the report concludes such data must be high quality and reliable. Otherwise, AI's promise will not be realized in healthcare.

Dive Insight:

AI is a hot healthcare topic but still needs to be translated into reality, especially in an industry as complex as healthcare. 

During the second quarter of 2017, CB Insights counted 29 investment deals in the healthcare AI space — a record number — and predicted 2017 would set a six-year high.


Enthusiasm is expected to stay heated into 2018, with demand for tools that go beyond noting social determinants of health to using that data to inform patient care plans.


While investors will continue to fund wearables and biosensors, what grabs their attention are specific clinical use cases these technologies can support, Megan Zweig, director of research at Rock Health, told Healthcare Dive recently.


Tech giants including IBM Watson, Microsoft, Google and Apple are staking a claim in the space, too. Last month, Google launched DeepVariant, an open-source tool that uses AI to create a picture of a person’s genetic blueprint using sequencing data. The goal is to pinpoint specific genes or gene mutations that can help providers better manage disease states.


But challenges to widespread use of AI in health remain, as the ONC study shows. Among these are the acceptance of AI applications in clinical practice, difficulty leveraging divergent personal networked devices and AI solutions, access to quality training data on AI applications in health and gaps in data streams.


The report also points to a major obstacle to widespread AI use: while it stresses the importance of high-quality, reliable data, the industry currently has a data standards problem that needs to be ironed out.


Currently, different vendors and clinicians send unstructured data in medical records back and forth across EHR systems through continuity-of-care documents, which are format flexible. If the promise of AI relies on reliable data, standards will have to be well-defined to ensure the data are high quality.
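One way to picture why an agreed schema matters: with well-defined standards, a simple quality gate like the sketch below (all field names are hypothetical, chosen only for illustration) could be written once and reused across vendors; without them, every EHR interface needs its own version.

```python
# Hypothetical minimal schema for an AI-ready clinical observation.
# These field names are illustrative, not drawn from any real standard.
REQUIRED_FIELDS = {"patient_id", "observation_code",
                   "value", "unit", "timestamp"}

def is_ai_ready(record: dict) -> bool:
    """Toy quality gate: accept only records that carry every
    agreed-upon field and a non-null measured value."""
    return REQUIRED_FIELDS <= record.keys() and record["value"] is not None

good = {"patient_id": "p1", "observation_code": "8867-4",
        "value": 72, "unit": "beats/min",
        "timestamp": "2017-12-01T10:00:00Z"}
assert is_ai_ready(good)
assert not is_ai_ready({"patient_id": "p1"})  # missing fields fail the gate
```

The point of the sketch is only that a shared definition of "high quality" must exist before any such check can be standardized across systems.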


On the bright side, the industry seems aware that healthcare is close to a breaking point on interoperability. The growing Internet of Things and consumerism in healthcare naturally demand a more networked, connected industry approach.


In a town hall webcast on Wednesday with American Hospital Association President and CEO Rick Pollack, CMS Administrator Seema Verma said interoperability will be a topic of interest for the agency. She told listeners they will hear more from CMS in the future.


What is Big Data for Healthcare IT? | EHR Blog | AmericanEHR Partners

Big data is a term commonly used by the press and analysts, yet few people really understand what it means or how it might affect them. At its core, Big Data represents a very tangible pattern for IT workers and demands a plan of action. For those who understand it, the ability to create an actionable plan to use the knowledge tied up in the data can provide new opportunities and rewards.

Let’s first solidify our understanding of Big Data. Big Data is not about larger ones and zeros, nor is it a tangible measurement of the overall size of the data under your stewardship. Simply stated, one does not suddenly have “big data” when a database grows past a certain size. Big Data is a pattern in IT. The pattern captures the fact that many data collections containing information related to an enterprise’s primary business are now accessible and actionable for that enterprise. The data is often distributed and in a variety of formats, which makes it hard to curate or use; hence Big Data represents a problem as much as it does a situation. In many cases, just knowing that the data exists at all is a preliminary problem that many IT workers find hard to solve. The peripheral data is often available from governments, from sensor readouts, in the public domain, or simply made available through APIs into other organizations’ data. How do we know it is there, how can we get at it, and how can we get the interesting parts out are all first-class worries with respect to the Big Data problem.

To help illustrate the concepts involved in Big Data, we will use a hospital as an example. A hospital may need to plan for future capacity and needs to understand aging patterns from demographic data that is available from the national census organization of the country it operates in. It also knows that supplementary data is available: how many people search for disease-related terms on search engines, and the percentage of the population that smokes, does not live a healthy lifestyle, or participates in certain activities. This may have to be compared to current client lists and the ability to predict health outcomes for known patients of a specific hospital, augmented with the demographic data from the larger surrounding population.

The ability to plan for future capacity at a health institute may require that all of this data plus numerous other data repositories are searched for data to support or disprove the hypothesis that more people will require more healthcare from the hospital in ten years.

Another situation, juxtaposed to illustrate other aspects of Big Data, is that of a single patient arriving at the hospital with an unknown disease or infection. Hospital workers may benefit from knowing the patient’s background yet may be unaware of where that data is. Such data may reside in the patient’s social media accounts, such as FourSquare, a website that gamifies visits to businesses. The hospital IT workers in this scenario need to find a proverbial needle in a haystack. By searching across all known data sources, the IT workers might be able to scrape together a past history of the patient’s social media declarations, which might provide valuable information about the person’s alcohol drinking patterns (scraped from FourSquare visits to licensed establishments), exercise data (from a fitness-tracking site), and general lifestyle (scraped from Facebook, Twitter and other such sites). When this data is retrieved and combined with data from LinkedIn (data about the patient’s business life), a fairly accurate history can be established. By combining photos from Flickr and Facebook, doctors could actually see the physical changes in the way a patient looks over time.

The last example illustrates that the Big Data pattern is not always about using large amounts of data. Sometimes it involves finding the smaller atoms of data from large data collections and finding intersections with other data. Together, these two hospital examples show how Big Data patterns can provide benefits to an enterprise and help them carry out their primary objectives.

Gaining access to the data is one matter; just knowing the data is available, and how to get at it, is a primary problem. Knowing how the data relates to other data, and being able to tease knowledge out of each data repository, is a secondary problem that many organizations face.

Some of our staff members recently worked on a big data project for the United States Department of Energy related to geothermal prospecting. The Big Data problem there involved finding areas that might be promising in terms of being able to support a commercially viable geothermal energy plant, which must operate for ten or more years to provide a valid ROI for investors. Once the rough locations are listed, a huge amount of other data needs to be collected to help determine the viability of a location.

Some examples of the other questions that need to be answered with Big Data were:

  1. What is the permeability of the materials near the hot spot and what are the heat flow capabilities?
  2. How much water or other fluids are available on a year round basis to help collect thermal energy and turn it into kinetic energy?
  3. How close is the point of energy production to the energy consumption?
  4. Is the location accessible by current roads or other methods of transportation?
  5. How close is the location to transmission lines?
  6. Is the property currently under any moratoriums?
  7. Is the property parkland or other special use planning?
  8. Does the geothermal potential overlap with existing gas and oil claims or other mineral rights or leases?
  9. Etc…

All of this data is available, some of it in prime structured digital formats and some of it not even in digital format. An example of a non-digital format might be a drill casing, stored in a drawer in the basement of a university, that represents the underground materials near the heat dome. Studying its structure can provide clues about the rate of heat exchange through the material, and thus about the potential rate of thermal energy available to the primary exchange core.

In order to keep track of all the data that exists and how to get at it, many IT shops are starting to use graphs and graph database technologies to represent the data. The graph databases might not store the actual data itself, but they may store the knowledge of what protocols and credentials to use to connect to the data, what format the data is in, where the data is located and how much data is available. Additionally, the power of a graph database is that the database structure is very good at tracking the relationships between clusters of data in the form of relationships that capture how the data is related to other data. This is a very important piece of the puzzle.
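A minimal sketch of that idea, using plain Python dictionaries in place of a real graph database (every source name, protocol, and relationship below is illustrative): nodes store only the metadata needed to reach each data set, and edges store how the data sets relate.

```python
# Each node records *how to reach* a data source, not the data itself:
# protocol, format, and location. All entries here are illustrative.
nodes = {
    "census_demographics": {"protocol": "https", "format": "csv",
                            "location": "example.gov/census"},
    "hospital_ehr": {"protocol": "jdbc", "format": "sql",
                     "location": "ehr.internal/records"},
}

# Edges capture the relationships between data sets, which is the part
# graph databases are especially good at tracking.
edges = [("hospital_ehr", "census_demographics", "joins_on_zip_code")]

def related_sources(source: str):
    """Walk the edges (in both directions) to find the data sets
    related to a given source, and how they are related."""
    return [(dst, rel) for src, dst, rel in edges if src == source] + \
           [(src, rel) for src, dst, rel in edges if dst == source]
```

Asking `related_sources("hospital_ehr")` then surfaces the census data set and the fact that the two can be joined on ZIP code, without ever touching the underlying records; a production system would use a real graph store, but the registry-of-metadata shape is the same.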

The conclusion of this introductory post on Big Data is that Big Data already exists; it is not something that will be created. The new Big Data IT movement is about implementing systems to track and understand what data exists, how it can be retrieved, how it can be ingested and used, and how it relates (semantically) to other data. Every IT shop in the world has done this to some degree, from a “just use Google for everything” low-tech approach to a full-blown data registry/repository implemented to track all metadata about the data.

The real wins will be when systems can be built that can automatically find and use the data that is required for a specific endeavor in a real time manner. To be truly Big Data ready is going to require some planning and major architecture work in the next 3-5 years.


CMS’s Latest Move on ACOs: A Shift Towards Greater Realism Going Forward?

It’s still just hours after the news broke this afternoon—Monday, December 1—but as we and others across healthcare digest the latest developments out of the Centers for Medicare and Medicaid Services (CMS), some important points leap to mind. Indeed, two elements in particular hold major implications for the future of provider participation in Medicare’s accountable care organization (ACO) programs.

First of all, CMS is creating a new category of ACO, separate from the “regular” Medicare Shared Savings Program category and from the Pioneer ACO Program category. This new category has rules similar to those of the Pioneer ACO Program, but with a major set of differences. Known as “Track Three,” the new program, as a report in Kaiser Health News confirmed, “would allow ACOs to keep up to 75 percent of the money they save Medicare. But if they cost Medicare extra, they would be held responsible for 15 percent of the excess spending. Currently, ACOs cannot be held responsible for more than 10 percent.”

This new sub-program involves something akin to an extra carrot (a good share of “shared savings”) with an extra stick (a higher percentage of downside risk). In the coming weeks and months, it will be interesting to see how many current Pioneer ACO leaders shift into “Track Three.” And it will be even more interesting to see whether any regular-MSSP ACOs do so—or whether organizations or collaboratives not currently participating in either the regular-MSSP or Pioneer-model programs join Track Three because of those incentives.

(Providers should also welcome CMS's change to its attribution procedure, with no new patients being attributed to ACOs in the middle of a calendar year, as currently occurs.)

Meanwhile, very importantly, as the KHN report noted, “The new rule would give ACOs, both new and existing ones, an extra three years before they faced penalties, for a total of six years. Sean Cavanaugh, Medicare’s director,” the report noted, “said the change was one of many prompted by concerns raised by ACOs. ‘The notion that 36 months later you’re going to be at downside financial risk is pretty intimidating,’ he said in an interview.”

“However,” the report continued, “the extra time would come at a price: ACOs that after their first three years decide to avoid penalties for the next three could keep no more than 40 percent of the money they save Medicare, rather than the 50 percent maximum they can keep during their first three years.”

Both of these new measures are gambles on the part of senior CMS officials: at this point in time, participation in both current programs, the Pioneer program and the MSSP program, is faltering. But the calculated risk being taken by CMS officials could potentially pay off. Provider leaders have repeatedly told us at Healthcare Informatics that the threat of downside risk remains too daunting a hurdle for many patient care organizations, as their senior executives and clinician leaders consider whether to participate in any of the Medicare ACO programs.

Indeed, even after praising CMS officials for the “proposal to waive certain fee-for-service payment rules that now inhibit clinicians from using their best medical judgment as to the best time and place for care,” as well as the fact that “CMS appears willing to revisit the instability of the financial benchmarks and the inequity of the risk adjustment methodologies,” the Charlotte-based Premier health alliance’s senior vice president Blair Childs said in a statement, “We believe, however, that CMS needs to do much more to improve the one-sided risk model. In fact,” he said, “it proposes to reduce the already inadequate shared savings payments for ACOs extending their contract under Track 1 from 50 percent to 40 percent in year 4, stepping payment down an additional 10 percent each year to reach 20 percent in year 6. This will impede participation and inadequately recognizes the financial and transformational contributions made by participating providers.” The folks at Premier should know: some of their members are involved in both current ACO programs, and are leader organizations in the field.

Ultimately, only time will tell what comes of all this. But at least one thing is clear: CMS officials are apparently beginning to listen to provider concerns with regard to financial risk issues, and realize that the entire overall program could be in peril if changes aren’t made.

A next constructive step would be for senior CMS officials to consider taking another look at some of the core clinical outcomes measures in the ACO programs. But perhaps that will have to wait for another day.


'Wiper' Malware: What You Need to Know

The FBI has reportedly issued an emergency "flash alert" to businesses, warning that it's recently seen a destructive "wiper" malware attack launched against a U.S. business.

Security experts say the FBI alert marks the first time that dangerous "wiper" malware has been used in an attack against a business in the U.S., and many say the warning appears to be tied to the Nov. 24 hack of Sony by a group calling itself the Guardians of Peace.

Large-scale wiper attacks are quite rare, because most malware attacks are driven by cybercrime, with criminals gunning not to delete data, but rather to quietly steal it, and for as long as possible, says Roel Schouwenberg, a security researcher at anti-virus firm Kaspersky Lab. "Simply wiping all data is a level of escalation from which there is no recovery."

Many Sony hack commentators have focused on the fact that previous wiper attacks have been attributed to North Korea, and that the FBI alert says that some components used in this attack were developed using Korean-language tools.

But Schouwenberg advocates skepticism, saying organizations and IT professionals should focus their energies on risk management. "We are much better off trying to understand the attack better, and maybe use this incident as an opportunity for businesses everywhere to basically re-evaluate their current security strategy, which probably isn't quite tailored to this type of scenario and say: 'Hey, this is where I can improve my posture,'" he says. "So we should be focusing on that technical aspect, rather than on the potential motivations of the attackers."

In this interview with Information Security Media Group, Schouwenberg details:

  • The relative ease with which wiper malware attacks can be crafted;
  • Steps businesses can take to improve their security defenses against wiper malware;
  • The importance of whitelisting applications - meaning that only approved applications are allowed to run on a PC, while all others are blocked.
