Little is known about how disease risk score (DRS) development should proceed under different pharmacoepidemiologic follow-up strategies. In an analysis of dabigatran vs. warfarin and risk of major bleeding, we compared the results of DRS adjustment when models were developed under "intention-to-treat" (ITT) and "as-treated" (AT) approaches.
We assessed DRS model discrimination, calibration, and ability to induce prognostic balance via a "dry run" analysis. AT treatment effect estimates stratified on each DRS were compared with each other and with a propensity score (PS)-stratified reference estimate. Bootstrap resampling of the historical cohort at 10 to 90 percent of its original size was performed to assess the impact of sample size on DRS estimation.
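The resampling step above can be sketched in code. The following is a minimal illustration, not the study's actual analysis: it uses simulated data, a plain logistic model as a stand-in for the DRS, and a C-statistic "optimism" (apparent minus full-cohort discrimination) as the overfitting measure; all variable names and parameter values are hypothetical.

```python
# Hypothetical sketch: resample a "historical" cohort at fractions of its
# size and measure how model overfitting (optimism in the C-statistic)
# shrinks as the development sample grows. Simulated data throughout.
import numpy as np

rng = np.random.default_rng(0)

# Simulated historical cohort: 5 covariates, binary outcome (e.g., major bleed).
N, P = 5000, 5
X = rng.normal(size=(N, P))
beta_true = np.array([0.5, -0.3, 0.2, 0.0, 0.0])
p = 1.0 / (1.0 + np.exp(-(X @ beta_true - 2.0)))
y = rng.binomial(1, p)

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson logistic regression (stand-in for a DRS model)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(Xb.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-np.clip(Xb @ b, -30, 30)))
        W = mu * (1.0 - mu)
        H = Xb.T @ (Xb * W[:, None]) + 1e-8 * np.eye(Xb.shape[1])
        b += np.linalg.solve(H, Xb.T @ (y - mu))
    return b

def c_statistic(score, y):
    """Concordance (AUC) via the rank-sum (Mann-Whitney) formula."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

# Resample at 10%, 50%, and 90% of cohort size; average optimism over draws.
optimism = {}
for frac in (0.1, 0.5, 0.9):
    gaps = []
    for _ in range(20):
        idx = rng.choice(N, size=int(frac * N), replace=True)
        b = fit_logistic(X[idx], y[idx])
        score_in = np.column_stack([np.ones(len(idx)), X[idx]]) @ b
        score_all = np.column_stack([np.ones(N), X]) @ b
        gaps.append(c_statistic(score_in, y[idx]) - c_statistic(score_all, y))
    optimism[frac] = float(np.mean(gaps))
    print(f"fraction={frac:.1f}  mean optimism={optimism[frac]:+.4f}")
```

In this toy setup, optimism is largest at the smallest resampling fraction and declines as the development sample grows, mirroring the abstract's point that smaller samples aggravate DRS overfitting.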
Historically derived DRS models fit under AT showed greater decrements in discrimination and calibration than those fit under ITT when applied to the concurrent study population. Prognostic balance was approximately equal across DRS models (-6 percent to -7 percent "pseudo-bias" on the hazard ratio scale). Hazard ratios were between 0.76 and 0.78 with all methods of DRS adjustment, while the PS-stratified hazard ratio was 0.83. In resampling, AT DRS models showed more overfitting and worse prognostic balance across sample sizes, and yielded hazard ratios further from the reference estimate than did ITT DRS models.
In a study of anticoagulant safety, DRSs developed under an AT principle showed signs of overfitting and reduced confounding control. More research is needed to determine whether developing DRSs under ITT is a viable solution to overfitting in other settings.