CMS’ New Measure Testing Rule Provisions at Risk of Undermining Qualified Clinical Data Registries
Monday, January 6, 2020
Posted by: Kasia Januszewski
Published January 7, 2020 on LinkedIn.com
Starting this year, new provisions in the Centers for Medicare & Medicaid Services' (CMS) Quality Payment Program (QPP) final rule require all qualified clinical data registry (QCDR) measures to be fully specified and tested before inclusion in the Merit-based Incentive Payment System (MIPS).
The full measure development process is a rigorous, evidence-based one, which the PCPI has refined and standardized over the past twenty years. Although measure testing is the last step in the development process, it is critical: it ensures that a measure works, measures what it was intended to measure, and is fit for its planned use. This phase of development establishes the reliability and validity of the measures and requires significant resources in time, data, and money.
Measure testing generally requires at least a year's worth of data collection, comparable to the data collection necessary for reporting on measures currently in use within a performance program such as the Quality Payment Program.
As written, these provisions will add significant cost and delay to the development of innovative clinical quality measures for QCDRs. This further pressures specialty societies and other registry sponsors already stressed by frequently changing rules and unrealistically short compliance deadlines. We believe sensible modifications should be adopted that address CMS' legitimate measure quality and harmonization concerns while encouraging QCDR sponsors to expand, rather than curtail, their efforts to improve care for more patients.
We appreciate the necessity of maintaining rigorous requirements for the QCDR program. We also fully support adherence to standard, tested practices – these are the key to our success as measure developers. Tested, methodologically sound measures create a valid and even playing field for performance measurement. However, we are also aware of the unintended consequences of changing program requirements too quickly.
In this blog post, we outline the intent and requirements of the new rules; describe the cost, resource, and program-effectiveness challenges they present; and offer practical suggestions for measure testing that would strengthen, rather than undermine, this important patient care monitoring and improvement program.
Intent and requirements – QCDRs were developed to encourage creation of clinical quality measures designed for specialties and conditions not well covered by broader quality measures adopted by CMS through its standard MIPS measure approval process. The intent was to support better patient care and greater clinician accountability by allowing more specialties and systems to participate in MIPS.
While the QCDR program has been largely successful and has greatly expanded the reach and benefits of MIPS, it has also led to a proliferation of measures. CMS views many of these as duplicative and/or potentially invalid and has adopted a range of processes to address these concerns, including efforts to harmonize and share QCDR measures among users. The new provisions extend these efforts by requiring that measures be fully specified and field-tested before inclusion in MIPS, rather than allowing conditional approval and annual re-evaluation as in the past.
Costs and challenges – Testing is perhaps the most rigorous phase of measure development and generally falls into two categories: measure validity, which assesses the clinical and societal need for a measure and its ability to measure its intended variables; and measure reliability, which assesses a measure's fitness for use in the field and the extent to which it can deliver consistent, usable, and complete data. Both are necessary to ensure the integrity of individual measures and of the QCDR program generally.
Measure development is costly in terms of money, time and most especially expert resources. Indeed, the need for specialized expertise profoundly shapes the nature of QCDR sponsors. Such expertise necessarily resides in clinical specialty societies and systems – and the more focused they are the smaller they tend to be, with correspondingly fewer resources.
Measure testing typically costs tens of thousands of dollars and takes 12 to 24 months to complete, which can significantly strain an organization's resources. Moreover, testing can significantly stress individual clinician members who volunteer their own time and office resources to specify and field-test measures. The cost and difficulty of field testing are further increased by the need to set up new office processes to collect and submit test data, or to pay EHR vendors or others to do it.
Until now, these investments could be partially offset by provisional approval of new measures, allowing society and system members to benefit from reporting them while testing reliability. Under the new rules, this field testing must be completed before a measure can be reported in MIPS – a process that can take up to a year or more depending on the measure, with the outcome measures CMS increasingly favors often taking longer to reliably document.
The added time and expense of completely field-testing measures before using them leaves many sponsors no choice but to pare back measure development activities. For some that means focusing on developing the simplest, easiest to document measures with the quickest payoff rather than more complex measures that might better support activities that produce broader or longer-term clinical benefits. Others contemplate abandoning QCDRs altogether.
As such, requiring complete testing of measures before allowing their use in MIPS undermines the goal of QCDRs – which is, after all, encouraging measure innovation to improve care for an ever-broader range of patients with unique and complex needs.
Strengthening QCDRs through flexible measure testing – When money rides on the outcome, we tend to think of new rules in a binary win-lose framework. Instead, by focusing on the shared interests of stakeholders, it is almost always possible to arrive at a solution that works for all.
In this case the shared interest is better care for more patients, which is the whole point of QCDRs. We recognize the need for QCDRs to adhere to existing requirements and recommend both an interim and a long-term approach to measure testing implementation. In the interim, we believe a suitable approach is to test measures using de-identified data from measure implementation in the QCDR, aggregated at the clinician level. However, this type of testing requires that the measure be implemented broadly enough to allow analysis on a sample size large enough to provide statistically significant results.
The downside of this approach is that the analysis requires measures to remain stable from one year to the next, since changes to a measure would require further implementation and additional testing. It also puts newer measures at a disadvantage and is likely to stifle innovation. Long-term, we believe that QCDRs should engage in standard practices for measure testing, which require an extended timeframe, consistent data collection, and planning and commitment from the registry.
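To make the interim approach concrete, the sketch below illustrates one common style of reliability analysis on de-identified, clinician-level registry data: a signal-to-noise estimate for a proportion-type measure. This is a minimal method-of-moments illustration under invented example counts, not CMS' prescribed methodology; the function name and data are our own assumptions, and a registry would in practice follow established testing approaches (for example, beta-binomial reliability models) with real implementation data.

```python
# Illustrative sketch only: signal-to-noise reliability for a
# proportion-type quality measure, computed from de-identified
# per-clinician (numerator, denominator) counts. The data and the
# method-of-moments approach here are simplified assumptions.

def signal_to_noise_reliability(counts):
    """counts: list of (numerator, denominator) pairs, one per clinician.

    Returns per-clinician reliability =
        between-clinician variance /
        (between-clinician variance + within-clinician sampling variance).
    """
    rates = [num / den for num, den in counts]
    n = len(rates)
    mean_rate = sum(rates) / n
    # Within-clinician sampling variance of each observed rate
    within = [p * (1 - p) / den for p, (_, den) in zip(rates, counts)]
    # Observed variance across clinicians minus average sampling noise
    # approximates true between-clinician ("signal") variance
    observed_var = sum((p - mean_rate) ** 2 for p in rates) / (n - 1)
    between = max(observed_var - sum(within) / n, 0.0)
    return [between / (between + w) if between + w > 0 else 0.0
            for w in within]

# Hypothetical example: five clinicians' (numerator, denominator) counts
example = [(8, 10), (5, 10), (9, 10), (2, 10), (50, 100)]
reliability = signal_to_noise_reliability(example)
```

Note how the clinician with 100 cases receives a higher reliability score than a clinician with the same performance rate but only 10 cases, which is exactly why the interim approach depends on measures being implemented broadly enough to accumulate adequate sample sizes.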
Such changes will require a closer partnership with CMS. As a convener of clinical care improvement stakeholders and a leader in measure development science, PCPI actively works toward this partnership through regular meetings with CMS, including upcoming sessions to discover and discuss shared solutions to these complex issues. Our goal is a system in which regulators benefit from clinician and developer insights in designing more-effective program rules, while developers better understand how they can help CMS better serve the broadest range of patients.
Tested, methodologically sound measures create a valid and even playing field for measuring performance, and we believe this should be the gold standard. A more collaborative approach will make it possible.