Better Data Quality for Better Care: Part 1
Cleaning Up Healthcare’s Dirty Little Secret
Imagine you’re a patient going to see your primary care physician for your annual exam. You enter the facility and make your way to the front desk, where the registrar takes your insurance card and begins to look you up in the system. Unfortunately, when they type in your name, several medical records come up on the screen, all of them pertaining to you. To resolve the issue, the registrar can either comb through the different records to find the accurate one or create a new one entirely. Either option takes time, and the latter creates yet another duplicate record.
The reason for all the confusion is dirty data: patient information that is inaccurate, incomplete, inconsistent, obsolete, or corrupt. The problem is more common than you might think, and the consequences can be severe. Consider the high potential for mistakes when the above scenario plays out at scale. The wasted time and resources, lost revenues, claims denials, penalties, and unacceptable patient outcomes can be staggering. On rare occasions, patients have paid with their lives.
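For the technically inclined, here is a minimal sketch of one naive way a registration system might flag probable duplicates by comparing normalized names and dates of birth. It is written in Python with hypothetical, simplified record fields; real enterprise master patient index (EMPI) matching is considerably more sophisticated.

```python
from datetime import date

# Hypothetical, simplified patient records; a real system would pull these
# from an EMR or master patient index rather than a hard-coded list.
records = [
    {"id": "MRN-1001", "name": "Jane A. Smith",  "dob": date(1980, 4, 12)},
    {"id": "MRN-2375", "name": "SMITH, JANE",    "dob": date(1980, 4, 12)},
    {"id": "MRN-3102", "name": "John Q. Public", "dob": date(1975, 9, 3)},
]

def normalize(name: str) -> str:
    """Crude normalization: strip periods, reorder 'Last, First', lowercase,
    and drop middle names/initials so 'Jane A. Smith' matches 'SMITH, JANE'."""
    name = name.replace(".", "").lower()
    if "," in name:
        last, first = [part.strip() for part in name.split(",", 1)]
        name = f"{first} {last}"
    parts = name.split()
    return f"{parts[0]} {parts[-1]}" if len(parts) > 1 else name

def likely_duplicates(recs):
    """Group records by (normalized name, DOB); any group larger than one
    is a probable duplicate that a human should review and merge."""
    groups = {}
    for rec in recs:
        key = (normalize(rec["name"]), rec["dob"])
        groups.setdefault(key, []).append(rec["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

print(likely_duplicates(records))  # [['MRN-1001', 'MRN-2375']]
```

Even this toy example hints at why duplicates proliferate: small variations in formatting are enough to defeat an exact-match lookup, and every failed lookup tempts staff to create a brand-new record.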
Nevertheless, many healthcare organizations have yet to address the problem. In some cases, it’s almost as if the issue of dirty data and the unacceptable outcomes associated with it have been relegated to that foggy, neglected black hole of expenses often referred to as “the cost of doing business.” Just because a problem is common, however, doesn’t make it right, especially given the obligation healthcare organizations have to patients and the communities they serve. In the not-too-distant future, this particular cost of doing business may lead to being “out of business.”
In the simplest terms, patient data refers to a single patient’s medical information, including medications, medical history, vitals, illnesses, and so on. The data is generated by a vast number of sources, such as hospital EMRs, primary care practices, emergency rooms, pharmacies, payers, and countless additional care delivery locations. Such data is critical to making informed decisions about treatment. If the data isn’t clean, patient safety and quality of care can be severely compromised. The impact on patient outcomes and daily operations is huge, including lost revenues, non-compliance, financial penalties, and the unpredictable and sometimes enormous costs associated with malpractice lawsuits.
According to an article in the Journal of Technology Research, the cumulative financial impact of duplicate records can reach $40 million for a single healthcare organization once malpractice litigation and duplicate clean-up efforts are factored in. Moreover, determining pre-authorization and ensuring clean patient data upfront are critical to preventing claim denials: whether the mistake is a misspelled name or a more complex coding issue, a claim can be denied. According to Black Book Market Research, an estimated 33% of all denied claims result from inaccurate patient identification or information, costing the average hospital $1.5 million in 2017 and the U.S. healthcare system over $6 billion annually.
The quality of healthcare data is critical at every step along the patient care continuum. As providers and payers rely ever more heavily on shared data in the digital age of healthcare, especially with the recent passage of the 21st Century Cures Act, the demand for accurate and reliable clinical, administrative, and financial data has increased exponentially. In fact, incomplete patient data has now become a compliance issue. New federal rules under the Cures Act regarding patient data sharing and interoperability require providers and payers to ensure quality patient data that can easily be shared with patients through APIs. Failing to comply with these rules could result in hefty penalties in the future. Accordingly, healthcare organizations should be taking steps to bolster the quality of their data without delay, not only to comply with new federal requirements and avoid information blocking, but also to protect patient safety and their own bottom lines in the face of industry consolidation and the growing popularity of alternative care sites.
A key provision of the latest Cures Act final rule is the requirement that developers use Fast Healthcare Interoperability Resources (FHIR) as the technical standard underpinning the application programming interfaces (APIs) that healthcare applications use to exchange data with other applications and information systems.
While FHIR may not be a silver bullet for all the industry’s interoperability problems, it should lead to some pretty substantial improvements. For payers and other health organizations, FHIR offers several advantages over other healthcare data standards.
Because of the FHIR mandates, a variety of applications will become a part of the ecosystem. The member portal experience will also be enhanced because more information will be available to members. It’s a huge turning point in the history of healthcare in the U.S. and will be a fundamental part of doing business moving forward.
While other standards capture important health data, that data is not necessarily easy for applications to use. FHIR, in contrast, is a more modern technical standard that enables applications to plug directly into electronic health record systems or claims databases to obtain patient health data. Additionally, FHIR allows for the sharing of small, discrete, specific bits of data, as opposed to the volumes of information included in a Continuity of Care Document.
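To make that distinction concrete, here is a minimal sketch of what retrieving discrete data through a FHIR REST API might look like, written in Python with the requests library. The base URL, patient ID, and access token are placeholders rather than any specific vendor’s endpoint; a production integration would also handle authentication (typically OAuth 2.0 / SMART on FHIR), paging, and error cases.

```python
import requests

# Placeholder values; a real integration would use the payer's or EHR vendor's
# FHIR R4 endpoint and a properly obtained OAuth 2.0 access token.
FHIR_BASE = "https://fhir.example.org/r4"
PATIENT_ID = "12345"
HEADERS = {
    "Accept": "application/fhir+json",
    "Authorization": "Bearer <access-token>",
}

# Fetch one discrete resource: the patient's demographics.
patient = requests.get(f"{FHIR_BASE}/Patient/{PATIENT_ID}", headers=HEADERS).json()
print(patient.get("name"), patient.get("birthDate"))

# Fetch only the specific observations needed (here, vital signs) using FHIR
# search parameters, instead of pulling an entire Continuity of Care Document.
vitals = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "vital-signs"},
    headers=HEADERS,
).json()
for entry in vitals.get("entry", []):
    obs = entry["resource"]
    print(obs["code"].get("text"), obs.get("valueQuantity", {}).get("value"))
```

Because each request targets a single resource type and can be filtered with search parameters, an application pulls only the data elements it actually needs, which is exactly the kind of lightweight exchange the new rules are meant to enable.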
Although the deadline for compliance with the FHIR patient access rules is fast approaching, most payers are not prepared, according to a June report from Gartner. That’s because implementing these new standards presents several challenges to payers.
Stay tuned for part two of our blog, where we’ll delve into those challenges.