A reliable risk-adjusted sepsis outcome measure could complement current national process metrics by identifying outlier hospitals and catalyzing additional improvements in care. However, it is unclear whether integrating clinical data into risk adjustment models identifies similar high- and low-performing hospitals compared with administrative data alone, which are simpler to acquire and analyze.
We ranked 200 US hospitals by their Centers for Disease Control and Prevention Adult Sepsis Event (ASE) mortality rates and assessed how rankings changed after applying (1) an administrative risk adjustment model incorporating demographics, comorbidities, and codes for severe illness and (2) an integrated clinical and administrative model replacing severity-of-illness codes with laboratory results, vasopressors, and mechanical ventilation. We assessed agreement between hospitals' risk-adjusted ASE mortality rates when ranked into quartiles using weighted kappa statistics (κ).
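The quartile-agreement analysis can be sketched in code. This is a minimal illustration, not the study's actual analytic pipeline: it assumes each model's risk-adjusted mortality rates arrive as arrays of length 200 (one value per hospital), ranks hospitals into quartiles, and computes a linearly weighted Cohen's kappa. The function names `to_quartiles` and `weighted_kappa` are illustrative.

```python
import numpy as np

def to_quartiles(rates, n_cat=4):
    """Rank hospitals into quartiles (0 = lowest rate) by mortality rate.

    Ties are broken arbitrarily by sort order; with 200 hospitals this
    yields 50 hospitals per quartile.
    """
    rates = np.asarray(rates)
    ranks = np.argsort(np.argsort(rates))   # 0..n-1 rank of each hospital
    return ranks * n_cat // len(rates)

def weighted_kappa(r1, r2, n_cat=4):
    """Linearly weighted Cohen's kappa for two ordinal ratings (0..n_cat-1)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Observed cross-classification matrix, as proportions
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= len(r1)
    # Expected matrix under independence: outer product of the marginals
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Linear disagreement weights: |i - j| scaled to [0, 1]
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) / (n_cat - 1)
    # kappa = 1 - (weighted observed disagreement / weighted expected disagreement)
    return 1 - (w * obs).sum() / (w * exp).sum()

# Example with simulated risk-adjusted rates for 200 hospitals
rng = np.random.default_rng(0)
admin_rates = rng.uniform(0.05, 0.35, size=200)          # administrative model
clinical_rates = admin_rates + rng.normal(0, 0.05, 200)  # integrated model (correlated)

q_admin = to_quartiles(admin_rates)
q_clinical = to_quartiles(clinical_rates)
kappa = weighted_kappa(q_admin, q_clinical)
```

Identical quartile assignments give κ = 1, agreement no better than chance gives κ ≈ 0, and values near 0.5 reflect the moderate agreement reported here.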
The cohort included 4 009 631 hospitalizations, of which 245 808 met ASE criteria. Risk adjustment had a large effect on rankings: 22/50 hospitals (44%) in the worst quartile using crude mortality rates shifted into better quartiles after administrative risk adjustment, and an additional 21/50 hospitals (42%) in the worst quartile using administrative risk adjustment shifted to better quartiles after incorporating clinical data. Conversely, 14/50 hospitals (28%) in the best quartile using administrative risk adjustment shifted to worse quartiles with clinical data. Overall agreement between hospital quartile rankings when risk-adjusted using administrative vs clinical data was moderate (κ = 0.55).
Incorporating clinical data into risk adjustment substantially changes rankings of hospitals' sepsis mortality rates compared with using administrative data alone. Comprehensive risk adjustment using both administrative and clinical data is necessary before comparing hospitals by sepsis mortality rates.