Surveillance of non-ventilator-associated hospital-acquired pneumonia (NV-HAP) is complicated by subjectivity and variability in diagnosing pneumonia. We compared a fully automatable surveillance definition using routine electronic health record data to manual determinations of NV-HAP according to surveillance criteria and clinical diagnoses.
We retrospectively applied an electronic surveillance definition for NV-HAP to all adults admitted to Veterans Affairs (VA) hospitals from January 1, 2015, to November 30, 2020. We randomly selected 250 hospitalizations meeting NV-HAP surveillance criteria for independent review by 2 clinicians and calculated the percentage of hospitalizations with (1) clinical deterioration, (2) CDC National Healthcare Safety Network (CDC-NHSN) criteria, (3) NV-HAP according to a reviewer, (4) NV-HAP according to a treating clinician, (5) a pneumonia diagnosis in the discharge summary, and (6) discharge diagnosis codes for HAP. We assessed interrater reliability by calculating simple percent agreement and Cohen's κ.
Among 3.1 million hospitalizations, 14,023 met NV-HAP electronic surveillance criteria. Among reviewed cases, 98% had a confirmed clinical deterioration; 67% met CDC-NHSN criteria; 71% had NV-HAP according to a reviewer; 60% had NV-HAP according to a treating clinician; 49% had a discharge summary diagnosis of pneumonia; and 82% met at least 1 NV-HAP definition according to at least 1 reviewer. Only 8% had diagnosis codes for HAP. Interrater agreement was 75% (κ = 0.50) for CDC-NHSN criteria and 78% (κ = 0.55) for reviewer diagnosis of NV-HAP.
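To make the reliability statistics concrete: Cohen's κ adjusts observed percent agreement for the agreement expected by chance from each rater's marginal rates, κ = (p_o − p_e)/(1 − p_e). The sketch below computes κ from a 2×2 agreement table whose counts are purely illustrative (chosen so that percent agreement is 75% and κ is 0.50, the pattern reported for the CDC-NHSN criteria); they are not the study's actual contingency data.

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table, where table[i][j] is
    the count of cases rater 1 classified as i and rater 2 classified as j."""
    n = sum(sum(row) for row in table)
    # observed agreement: fraction of cases on the diagonal
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # chance agreement: product of the two raters' marginal rates, summed over categories
    p_e = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical table: rows = rater 1 (NV-HAP yes/no), columns = rater 2.
table = [[75, 25],
         [25, 75]]
print(cohens_kappa(table))  # 0.5 (with 75% simple agreement)
```

Note that identical percent agreement can yield very different κ values depending on the marginal rates, which is why the abstract reports both figures.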
Electronic NV-HAP surveillance criteria correlated moderately with existing manual surveillance criteria. Reviewer variability was high for all manual assessments. Electronic surveillance using clinical data may therefore allow more consistent and efficient surveillance, with accuracy similar to that of manual assessments or diagnosis codes.