NCQA Data Tips and Tricks: Q&A with Rick Moore

As part of our commitment to improving population health through clean clinical data, Verinovum entered a strategic advisory partnership with Rick Moore, an award-winning, board-certified healthcare strategist with more than 25 years of experience in the federal government and non-profit sectors.

Verinovum’s Chief Strategy and Marketing Officer, Mike Noshay, sat down with former NCQA Chief Information Officer Rick Moore to discuss current challenges around collecting healthcare data and how payers can unlock the right data at the right time to improve performance measures.

Rick Moore: What have you experienced in getting good, clean, usable clinical data for use in performance measurement?

Mike Noshay: We look at this as a journey. When we engage with our customers and prospects, we talk a lot about the intent of the data. If you’re a payer that’s interested in improving your HEDIS® and Stars ratings, there’s a very specific set of content, defined by NCQA and CMS, that’s necessary to drive that outcome. The way we coach our clients is to begin with the end in mind, then plan backwards to not only the data specification you need, but how qualified and how high-quality that information needs to be.


Moore: It seems like the easy part is connecting and getting data; the harder part is getting data that’s useful. What do you think are the reasons for that? What causes those challenges in the field? For example, if having an EMR transfer my continuity of care document from one provider to the next seems to be good enough for care, why can’t that be good enough for measurement?

Noshay: I think there’s been an evolution in healthcare standards adoption. The way these capture mechanisms, EHRs, have been deployed, and the way the standards are implemented at the EHR level, has really been a snowflake: every institution has approached the problem in a slightly different way. And although the industry has largely standardized on Health Level Seven (HL7), the information on the back end is all over the place.

What we found fascinating about the way Meaningful Use was deployed is that it put a heavy amount of significance on the human-readable side of CCDs. That means when John, the consumer, asks for his health records from Dr. John Smith’s Epic system, he ends up with a printout or a PDF that is viable and useful. The challenge is that the machine-readable side on the back end can have a whole bunch of gaps in it. And the way these standards have been deployed en masse creates a lot of confusion about what is meant for human consumption versus what is meant for an API-driven or technology-based solution.
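
To make that human-readable versus machine-readable gap concrete, here is a minimal, illustrative sketch (not a standard tool or Verinovum’s code; the file name is a placeholder) that walks a C-CDA document and reports, per section, whether readable narrative and coded entries are both present:

```python
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}  # the standard CDA namespace

def audit_ccd_sections(ccd_path: str) -> list[dict]:
    """For each section, report whether human-readable narrative and
    machine-readable coded entries are present."""
    tree = ET.parse(ccd_path)
    report = []
    for section in tree.iter("{urn:hl7-org:v3}section"):
        title = section.findtext("hl7:title", default="(untitled)", namespaces=NS)
        has_narrative = section.find("hl7:text", NS) is not None
        coded_entries = len(section.findall("hl7:entry", NS))
        report.append({
            "section": title,
            "has_narrative": has_narrative,
            # narrative present but zero coded entries is exactly the gap
            # described above: fine for a human, useless for measurement
            "coded_entries": coded_entries,
        })
    return report

if __name__ == "__main__":
    # "my_ccd.xml" stands in for any exported CCD/C-CDA file.
    for row in audit_ccd_sections("my_ccd.xml"):
        print(row)
```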

Moore: What do you think plans should consider when embarking on this journey to make their clinical data digital?

Noshay: The reality is going to be more complicated than you think. The content is just messier, more complex, and more detailed than the claims data most payers are used to working with.

To put it in a bit more context, we’ve seen a lot of plans doing a really good job capturing admission, discharge, and transfer content, and the diagnoses therein, from EHRs or from a laboratory repository. That’s really rich clinical content that’s highly valuable. The challenge is that it doesn’t complete the encounter. And we see HEDIS®, Stars, and other value-based payment models continuing to shift their weight towards clinical relevancy.

The more you can put into a longitudinal record of a patient visit, the more context you’ll have to be proactive, to really stem the problem before it hits your bottom line during the quality measurement reporting period at the end of the year.

Moore: Makes a lot of sense. When you think about how to connect to get the data, I hear you saying HL7, which is the industry standard for this interoperability. What’s your take on why you start there, as opposed to, say, going right to the back end of the database and doing a direct connect?

Noshay: It’s an age-old question, and it’s something we’ve grappled with a lot. We’ve seen a lot of institutions do good work connecting into the back end of EHRs, doing scrapes of those databases. We’ve philosophically opted for the opposite end of the spectrum and rely heavily on the standards that exist in the marketplace, recognizing there are gaps in some of those standards, whether 2.x, 3.x, or 4.x. The reason is that as systems upgrade, or new updates are rolled out, that creates breakage. If I have a direct database protocol that must go in and query the innards of an EHR reporting system, when they do that upgrade it invariably is going to crack. There’s going to be something missing, some data left on the cutting room floor, something that I can no longer see that I really need. We’re going to be the stewards of accepting data when and where it exists. There will always be leaders and laggards in standards adoption. You don’t have to figure that out. We’re going to be your partner in crime, taking whatever is available and bringing all those different standards into something that is truly fit for purpose and useful to you as a receiver.

You can rest assured that it’s our responsibility to ensure that all the data received is of sufficient quality and fit for purpose for the outcomes and business objectives you’re trying to achieve.
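
To make that standards-first posture concrete, here is a hedged sketch (illustrative only, not Verinovum’s implementation; the record shape and field values are invented): sniff the inbound format and version from the message itself, then route it to a matching normalizer, rather than querying an EHR database directly and breaking on every upgrade.

```python
from dataclasses import dataclass

@dataclass
class NormalizedRecord:
    source_standard: str
    patient_id: str

def detect_standard(payload: str) -> str:
    """Crude format sniffing: HL7 v2.x messages start with an MSH
    segment; CCDs arrive as XML documents."""
    if payload.startswith("MSH|"):
        version = payload.split("\r")[0].split("|")[11]  # MSH-12 carries the version ID
        return f"hl7v2:{version}"
    if payload.lstrip().startswith("<"):
        return "ccda"
    return "unknown"

def normalize(payload: str) -> NormalizedRecord:
    """Route each payload by detected standard instead of scraping
    an EHR database schema that changes with every release."""
    standard = detect_standard(payload)
    if standard.startswith("hl7v2"):
        pid = next(seg for seg in payload.split("\r") if seg.startswith("PID|"))
        return NormalizedRecord(standard, pid.split("|")[3])  # PID-3: patient identifier
    raise NotImplementedError(f"no normalizer yet for {standard}")

# A tiny ADT^A01 message; all field values are made up for illustration.
msg = "MSH|^~\\&|LAB|HOSP|||202301010830||ADT^A01|123|P|2.5.1\rPID|1||MRN001||DOE^JOHN"
print(normalize(msg))  # NormalizedRecord(source_standard='hl7v2:2.5.1', patient_id='MRN001')
```

The point of the design is that a new HL7 version or a new feed becomes another branch in `normalize`, while the downstream `NormalizedRecord` stays stable for receivers.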

Moore: You’ve got it down to a science now. What’s special about the way you do it, and what are the results? What do you start with, where do you end up, and what makes Verinovum’s process so special?

Noshay: When we work with clients, we hit pause and say, “Before I take any data, what are the goals and objectives of your organization? What type of data? How is it going to be applied? When is it going to be applied?” We’ll canvass case management teams, HEDIS® and Stars reporting teams, risk adjustment teams, and other analytical obligations across the organization, and build a full repository of the type of information that’s going to drive your bottom line. Creating that cross-section allows us to drill down to the lowest common denominator and have the greatest impact today.

We look at clinical data in three different buckets. First, data that comes in the door clean: roughly 40% of information is wholly usable when it hits our front door. That means there’s enough context to support the business objective in front of you. The next 40% is broken, but we go through the process of completing it. That means, through business logic rules or algorithms that we’ve already pre-tuned, the data runs through our system, which fills in gaps and brings it to structural and semantic norms, raising the quality of that information to a level where it’s still aligned to those business objectives. The last 20% is what we call a learning environment. We know that we will never have 100% of the information. The goal is to keep taking as much of that 20% as possible and moving it back into the beginning of the chain.
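
As a rough illustration of those three buckets (a hedged sketch, not Verinovum’s actual pipeline; the required fields and the single repair rule are invented for the example):

```python
from enum import Enum

class Bucket(Enum):
    CLEAN = "usable as received"          # ~40% in the framing above
    REPAIRABLE = "completed via rules"    # ~40%
    LEARNING = "diverted for new rules"   # ~20%

# Hypothetical minimum context a downstream quality measure might need.
REQUIRED = {"patient_id", "encounter_date", "code"}

def repair(record: dict) -> dict:
    """One invented business-logic rule: map a 'dos' (date of service)
    alias onto the canonical field name."""
    fixed = dict(record)
    if "encounter_date" not in fixed and "dos" in fixed:
        fixed["encounter_date"] = fixed.pop("dos")
    return fixed

def triage(record: dict) -> tuple[Bucket, dict]:
    if REQUIRED <= record.keys():          # clean on arrival
        return Bucket.CLEAN, record
    repaired = repair(record)
    if REQUIRED <= repaired.keys():        # gap filled by a pre-tuned rule
        return Bucket.REPAIRABLE, repaired
    return Bucket.LEARNING, record         # feeds future rule development

print(triage({"patient_id": "MRN001", "dos": "2023-01-01", "code": "E11.9"}))
```

In this framing, the learning bucket is not discarded: each record that lands there is raw material for the next pre-tuned rule, which is what moves content back toward the front of the chain.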

Moore: What’s your word of caution to others trying to solve healthcare data issues?

Noshay: We wish anybody in the industry good fortune as they try to solve these nitty-gritty data problems. Over the last decade we’ve seen several AI vendors, whether IBM Watson Health, Google Health, or Amazon, attempt to tackle this problem. By and large, their underlying failure point was not the algorithms they built; they built amazing, sophisticated workflows that could stem major cost problems in the industry and dramatically improve the lives of patients. But they all fell prey to a single problem: the quality of the data feeding their algorithms wasn’t appropriately stress-tested. We caution the industry: if you’re going to build unique tools, make sure your foundations are solid first. And your foundation in any of these workflows is your data.

Moore: How much do you think you’ve invested in getting to where you are, and what would be your recommendation to others trying to do the same?

Noshay: What we find fascinating is that most of the capital in the industry is typically deployed to build a fancy new visualization layer or a workflow tool. Those are really important to solving clinical outcomes, but there’s often little invested in the data foundations that will drive those tools. And that’s where we come in when people ask us what’s different: we do one thing and one thing only. We deliver clean, curated, enriched clinical data. We’re not an AI tool, another BI tool, or another case management platform. We’ve invested all our time and energy into solving what we believe is the foundational issue in healthcare: the quality of the information driving healthcare’s sophisticated tools.

HEDIS® is a registered trademark of the National Committee for Quality Assurance (NCQA).