BACKGROUND: Population-level data spanning different countries describing oral and parenteral treatment in pregnant women with inflammatory bowel disease (IBD) are scarce. We studied treatment with sulfasalazine/5-aminosalicylates, corticosteroids, thiopurines/immunomodulators, and tumor necrosis factor (TNF)-inhibitors in the United States (Optum Clinformatics Data Mart and the Medicaid Analytics Extract [MAX]) and in the Swedish national health registers.
METHODS: We identified 2975 pregnant women in Optum (2004-2013), 3219 women in MAX (2001-2013), and 1713 women in Sweden (2006-2015) with a recorded diagnosis of IBD. We assessed patterns of use for each drug class according to filled prescriptions, assessing frequency of treatment continuation in those that were treated in the prepregnancy period.
RESULTS: The proportion of women with Crohn's disease and ulcerative colitis on any treatment during pregnancy was 56.1% and 56.3% in Optum, 47.5% and 49.3% in MAX, and 61.3% and 64.7% in Sweden, respectively, and remained stable over time. Sulfasalazine/5-aminosalicylates was the most commonly used treatment in Crohn's disease, ranging from 25.1% in MAX to 31.8% in Optum, and in ulcerative colitis, ranging from 34.9% in MAX to 53.6% in Sweden. From 2006 to 2012, TNF-inhibitor use increased from 5.0% to 15.5% in Optum, from 3.6% to 8.5% in MAX, and from 0.7% to 8.3% in Sweden. Continuing TNF-inhibitor treatment throughout pregnancy was more common in Optum (55.8%) and in MAX (43.0%) than in Sweden (11.8%).
CONCLUSIONS: In this population-based study from 2 countries, the proportion of women with IBD treatment in pregnancy remained relatively constant. TNF-inhibitor use increased substantially in both countries.
Background: The bias implications of outcome misclassification arising from imperfect capture of mortality in claims-based studies are not well understood. Methods and Results: We identified 2 cohorts of patients: (1) type 2 diabetes mellitus (n=8.6 million), and (2) heart failure (n=3.1 million), from Medicare claims (2012-2016). Within the 2 cohorts, mortality was identified from claims using the following approaches: (1) all-place all-cause mortality, (2) in-hospital all-cause mortality, (3) all-place cardiovascular mortality (based on diagnosis codes for a major cardiovascular event within 30 days of the death date), or (4) in-hospital cardiovascular mortality, and compared against National Death Index-identified mortality. Sensitivity and specificity observed in the 2 cohorts were used to conduct Monte Carlo simulations of treatment effect estimation under differential and nondifferential misclassification scenarios. From the National Death Index, 1 544 805 deaths (549 996 [35.6%] cardiovascular deaths) in the type 2 diabetes mellitus cohort and 1 175 202 deaths (523 430 [44.5%] cardiovascular deaths) in the heart failure cohort were included. Sensitivity was 99.997% and 99.207% for the all-place all-cause mortality approach, whereas it was 27.71% and 33.71% for the in-hospital all-cause mortality approach in the type 2 diabetes mellitus and heart failure cohorts, respectively, with perfect positive predictive values. For all-place cardiovascular mortality, sensitivity was 52.01% in the type 2 diabetes mellitus cohort and 53.83% in the heart failure cohort, with positive predictive values of 49.98% and 54.45%, respectively. Simulations suggested a possibility for substantial bias in treatment effects. Conclusions: Approaches to identify mortality from claims had variable performance compared with the National Death Index.
Investigators should anticipate the potential for bias from outcome misclassification when using administrative claims to capture mortality.
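The correction logic behind such quantitative bias analyses can be sketched in a few lines. This is a hypothetical illustration, not the study's simulation code: the counts, sensitivity, and specificity below are made up, and nondifferential misclassification is assumed.

```python
# Hypothetical sketch: back-correcting observed event counts for outcome
# misclassification with assumed sensitivity (se) and specificity (sp).
# All counts and operating characteristics below are illustrative.

def corrected_cases(observed, n, se, sp):
    """Recover the true case count from an observed (misclassified) count."""
    return (observed - (1 - sp) * n) / (se + sp - 1)

n_exp, obs_exp = 10_000, 300      # exposed: person count, observed deaths
n_unexp, obs_unexp = 10_000, 200  # unexposed

se, sp = 0.52, 0.999  # e.g., imperfect claims capture of cardiovascular death

true_exp = corrected_cases(obs_exp, n_exp, se, sp)
true_unexp = corrected_cases(obs_unexp, n_unexp, se, sp)

rr_observed = (obs_exp / n_exp) / (obs_unexp / n_unexp)
rr_corrected = (true_exp / n_exp) / (true_unexp / n_unexp)
print(f"observed RR {rr_observed:.2f}, corrected RR {rr_corrected:.2f}")
```

With equal denominators and nondifferential error, the false positives subtract out of both arms, so the corrected risk ratio moves away from the null relative to the observed one.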
To add to the limited existing evidence on clinical outcomes and healthcare use in sickle cell disease (SCD) among beneficiaries of the US Medicaid program, we conducted a cohort study using nationwide Medicaid claims data (2000-2013). Patients were included based on an HbSS SCD diagnosis and followed until Medicaid disenrollment, death, bone marrow transplant, or end of data availability to assess vaso-occlusive crises (VOC), emergency room (ER) visits, hospitalizations, outpatient visits, and blood transfusions. Annualized event rates (with 95% confidence intervals [CI]) were reported. The impact of VOCs on the risk of mortality was analyzed using a multivariable Cox model with VOC modeled as time-varying and updated annually. In a total of 44,033 SCD patients included, with a mean (SD) age of 15.7 (13.6) years, the VOC rate (95% CI) was 3.71 (3.70-3.72) per person-year, with the highest rate among patients 19-35 years who had ≥ 5 VOCs at baseline (13.20 [13.15-13.26]). Event rates (95% CI) per person per year for other outcomes were 2.97 (2.97-2.98) ER visits, 2.39 (2.38-2.40) hospitalizations, 5.80 (5.79-5.81) outpatient visits, and 0.91 (0.90-0.91) blood transfusions. A higher VOC burden in the preceding year was associated with an increased risk of mortality, with a hazard ratio (95% CI) of 1.26 (1.14-1.40) for 2-4 VOCs vs. < 2 and 1.57 (1.41-1.74) for ≥ 5 VOCs vs. < 2. In conclusion, we documented a substantial burden of SCD in US Medicaid enrollees, especially during early adulthood, and noted that the ongoing burden of VOC is associated with mortality in these patients.
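The very narrow confidence intervals reported here follow directly from the large person-time denominators. A log-scale Poisson approximation, with purely illustrative totals rather than the study's data, reproduces the pattern:

```python
import math

# Hypothetical totals, chosen only to be of the study's order of magnitude.
events, person_years = 163_000, 44_000

rate = events / person_years
se_log = 1 / math.sqrt(events)          # SE of log(rate) for a Poisson count
lo = rate * math.exp(-1.96 * se_log)
hi = rate * math.exp(1.96 * se_log)
print(f"{rate:.2f} ({lo:.2f}-{hi:.2f}) per person-year")
```

With event counts in the hundreds of thousands, the standard error of the log rate is tiny, so the interval spans only a few hundredths of an event per person-year.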
BACKGROUND: Despite its widespread use, only sparse information is available on the safety of gabapentin during pregnancy. We sought to evaluate the association between gabapentin exposure during pregnancy and the risk of adverse neonatal and maternal outcomes.
METHODS AND FINDINGS: Using the United States Medicaid Analytic eXtract (MAX) dataset, we conducted a population-based study of 1,753,865 Medicaid-eligible pregnancies between January 2000 and December 2013. We examined the risk of major congenital malformations and cardiac defects associated with gabapentin exposure during the first trimester (T1), and the risk of preeclampsia (PE), preterm birth (PTB), small for gestational age (SGA), and neonatal intensive care unit admission (NICUa) associated with gabapentin exposure early, late, or both early and late in pregnancy. Gabapentin-unexposed pregnancies served as the reference. We estimated relative risks (RRs) and 95% confidence intervals (CIs) using fine stratification on the propensity score (PS) to control for over 70 confounders (e.g., maternal age, race/ethnicity, indications for gabapentin, other pain conditions, hypertension, diabetes, use of opioids, and specific morphine equivalents). We identified 4,642 pregnancies exposed in T1 (mean age = 28 years; 69% white), 3,745 exposed in early pregnancy only (28 years; 67% white), 556 exposed in late pregnancy only (27 years; 60% white), and 1,275 exposed in both early and late pregnancy (29 years; 75% white). The reference group consisted of 1,744,447 unexposed pregnancies (24 years; 40% white). The adjusted RR for major malformations was 1.07 (95% CI 0.94-1.21, p = 0.33) and for cardiac defects 1.12 (0.89-1.40, p = 0.35). Requiring ≥2 gabapentin dispensings moved the RR to 1.40 (1.03-1.90, p = 0.03) for cardiac defects. 
There was a higher risk of preterm birth among women exposed to gabapentin late (RR 1.28 [1.08-1.52], p < 0.01) or both early and late in pregnancy (RR 1.22 [1.09-1.36], p < 0.001); of SGA among women exposed early (RR 1.17 [1.02-1.33], p = 0.02), late (RR 1.39 [1.01-1.91], p = 0.05), or both early and late in pregnancy (RR 1.32 [1.08-1.60], p < 0.01); and of NICU admission among women exposed both early and late in pregnancy (RR 1.35 [1.20-1.52], p < 0.001). There was no higher risk of preeclampsia among women exposed to gabapentin after adjustment. Study limitations include the potential for residual confounding and exposure misclassification.
CONCLUSIONS: In this large population-based study, we did not find evidence for an association between gabapentin exposure during early pregnancy and major malformations overall, although there was some evidence of a higher risk of cardiac malformations. Maternal use of gabapentin, particularly late in pregnancy, was associated with a higher risk of PTB, SGA, and NICUa.
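The adjustment method named above, fine stratification on the propensity score with weighting, can be sketched on simulated data. Everything here is a hypothetical stand-in: two covariates in place of the study's 70+ confounders, invented effect sizes, and a null exposure-outcome effect so the weighted risk ratio should land near 1.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
df = pd.DataFrame({
    "age": rng.normal(28, 6, n),
    "opioid_use": rng.integers(0, 2, n),
})
# Exposure depends on covariates; outcome depends on covariates only (null effect).
p_exp = 1 / (1 + np.exp(-(-3 + 0.02 * df["age"] + 1.0 * df["opioid_use"])))
df["exposed"] = rng.random(n) < p_exp
df["outcome"] = (rng.random(n) < 0.02 + 0.04 * df["opioid_use"]).astype(int)

# 1) Fit the propensity score (PS) on everyone.
ps = LogisticRegression(max_iter=1000).fit(df[["age", "opioid_use"]], df["exposed"])
df["ps"] = ps.predict_proba(df[["age", "opioid_use"]])[:, 1]

# 2) Form 50 fine strata from PS percentiles among the exposed.
cuts = np.quantile(df.loc[df["exposed"], "ps"], np.linspace(0, 1, 51))
df["stratum"] = np.clip(np.searchsorted(cuts, df["ps"]) - 1, 0, 49)

# 3) Weight the unexposed to the exposed PS distribution (ATT-style weights);
#    exposed keep weight 1; unexposed in strata lacking either group get 0.
tab = df.groupby("stratum")["exposed"].agg(n_exp="sum", n="count")
tab["n_unexp"] = tab["n"] - tab["n_exp"]
tab = tab[(tab["n_exp"] > 0) & (tab["n_unexp"] > 0)]
w_unexp = (tab["n_exp"] / tab["n_exp"].sum()) / (tab["n_unexp"] / tab["n_unexp"].sum())
df["w"] = np.where(df["exposed"], 1.0, df["stratum"].map(w_unexp).fillna(0.0))

# 4) Weighted risk ratio.
exp_, unexp = df[df["exposed"]], df[~df["exposed"]]
rr = exp_["outcome"].mean() / np.average(unexp["outcome"], weights=unexp["w"])
print(f"PS fine-stratification weighted RR = {rr:.2f}")
```

Because the simulated exposure has no true effect, the crude risk ratio is confounded upward while the fine-stratification weighted estimate should sit near the null.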
OBJECTIVE: Certain antihyperglycemic therapies modify cardiovascular and kidney outcomes among patients with type 2 diabetes, but early uptake in practice appears restricted to particular demographics. We examine the association of Medicaid expansion with use of and expenditures related to antihyperglycemic therapies among Medicaid beneficiaries.
RESEARCH DESIGN AND METHODS: We employed a difference-in-difference design to analyze the association of Medicaid expansion with prescription of noninsulin antihyperglycemic therapies. We used 2012-2017 national and state Medicaid data to compare prescription claims and costs between states that did (n = 25) and did not (n = 26) expand Medicaid by January 2014.
RESULTS: Following Medicaid expansion in 2014, average noninsulin antihyperglycemic therapy prescriptions per 1,000 enrollees per state increased by 4.2%/quarter in expansion states and 1.6%/quarter in nonexpansion states. For sodium-glucose cotransporter 2 inhibitors (SGLT2i) and glucagon-like peptide 1 receptor agonists (GLP-1RA), quarterly growth rates per 1,000 enrollees were 125.3% and 20.7% in expansion states and 87.6% and 16.0% in nonexpansion states, respectively. Expansion states thus had faster growth in SGLT2i and GLP-1RA use than nonexpansion states. Difference-in-difference estimates for the change in prescription volume after Medicaid expansion between expansion and nonexpansion states were 1.68 (95% CI 1.09-2.26; P < 0.001) for all noninsulin therapies, 0.125 (-0.003 to 0.25; P = 0.056) for SGLT2i, and 0.12 (0.055-0.18; P < 0.001) for GLP-1RA.
CONCLUSIONS: Use of noninsulin antihyperglycemic therapies, including SGLT2i and GLP-1RA, increased among low-income adults in both Medicaid expansion and nonexpansion states, with a significantly greater increase in overall use and in GLP-1RA use in expansion states. Future evaluation of the population-level health impact of expanded access to these therapies is needed.
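For readers unfamiliar with the design, the difference-in-difference estimate reduces, in the simplest two-group two-period case, to the interaction coefficient of a small regression. The cell means below are invented; the study's actual models also included state and time effects.

```python
import numpy as np

# Made-up mean prescriptions per 1,000 enrollees in four cells.
y = np.array([40.0, 55.0,   # expansion states: pre, post
              38.0, 45.0])  # nonexpansion states: pre, post
treated = np.array([1, 1, 0, 0])
post = np.array([0, 1, 0, 1])

# y = b0 + b1*treated + b2*post + b3*(treated*post); b3 is the DiD estimate.
X = np.column_stack([np.ones(4), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"DiD estimate: {beta[3]:.1f}")  # (55-40) - (45-38) = 8.0
```

With four cells and four parameters the fit is exact, so the interaction term equals the familiar "difference of differences" arithmetic.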
INTRODUCTION: With increasing rates of opioid overdoses in the US, a surveillance tool to identify high-risk patients may help facilitate early intervention.
OBJECTIVE: To develop an algorithm to predict overdose using routinely collected healthcare databases.
METHODS: Within a US commercial claims database (2011-2015), patients with ≥1 opioid prescription were identified. Patients were randomly allocated into the training (50%), validation (25%), or test set (25%). For each month of follow-up, pooled logistic regression was used to predict the odds of incident overdose in the next month based on patient history from the preceding 3-6 months (time-updated), using elastic net for variable selection. As secondary analyses, we explored whether using simpler models (few predictors, baseline only) or different analytic methods (random forest, traditional regression) influenced performance.
RESULTS: We identified 5,293,880 individuals prescribed opioids; 2,682 patients (0.05%) had an overdose during follow-up (mean: 17.1 months). On average, patients who overdosed were younger and had more diagnoses and prescriptions. The elastic net model achieved good performance (c-statistic 0.887, 95% CI 0.872-0.902; sensitivity 80.2, specificity 80.1, PPV 0.21, NPV 99.9 at the optimal cutpoint). It outperformed simpler models based on few predictors (c-statistic 0.825, 95% CI 0.808-0.843) and baseline predictors only (c-statistic 0.806, 95% CI 0.787-0.26). Different analytic techniques did not substantially influence performance. In the final algorithm based on elastic net, the strongest predictors were age 18-25 years (OR: 2.21), prior suicide attempt (OR: 3.68), and opioid dependence (OR: 3.14).
CONCLUSIONS: We demonstrate that sophisticated algorithms using healthcare databases can be predictive of overdose, creating opportunities for active monitoring and early intervention.
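A minimal sketch of the core modeling step, logistic regression with an elastic-net (mixed L1/L2) penalty, on simulated data with a rare outcome. Predictor meanings, effect sizes, and tuning values here are all hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n, p = 20_000, 20
X = rng.normal(size=(n, p))
# Only 3 of 20 predictors carry signal (stand-ins for, e.g., young age,
# prior suicide attempt, opioid dependence); the outcome is rare.
logit = -5.5 + 0.9 * X[:, 0] + 1.3 * X[:, 1] + 1.1 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Elastic net = mixed L1/L2 penalty; saga is the sklearn solver supporting it.
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=0.1, max_iter=2000)
model.fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample c-statistic: {auc:.3f}")
```

The L1 component shrinks uninformative coefficients toward zero, which is what makes the penalty useful for variable selection among many candidate claims-based predictors.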
Background: The differential impact of various demographic characteristics and comorbid conditions on development of heart failure (HF) with preserved (pEF) and reduced ejection fraction (rEF) is not well studied among the elderly.
Methods: Using Medicare claims data linked to electronic health records, we conducted an observational cohort study of individuals ≥65 years of age without HF. A Cox proportional hazards model accounting for competing risk of HFrEF and HFpEF incidence was constructed. A gradient-boosted model (GBM) assessed the relative influence (RI) of each predictor in the development of HFrEF and HFpEF.
Results: Among 138,388 included individuals, 9701 developed HF (incidence rate = 20.9 per 1000 person-years). Males were more likely to develop HFrEF than HFpEF (HR = 2.07, 95% CI: 1.81-2.37 vs. 1.11, 95% CI: 1.02-1.20, P for heterogeneity <0.01). Atrial fibrillation and pulmonary hypertension had stronger associations with the risk of HFpEF (HR = 2.02, 95% CI: 1.80-2.26 and 1.66, 95% CI: 1.23-2.22) while cardiomyopathy and myocardial infarction were more strongly associated with HFrEF (HR = 4.37, 95% CI: 3.21-5.97 and 1.94, 95% CI: 1.23-3.07). Age was the strongest predictor across all HF subtypes with RI from GBM >35%. Atrial fibrillation was the most influential comorbidity for the development of HFpEF (RI = 8.4%) while cardiomyopathy was the most influential comorbidity for the development of HFrEF (RI = 20.7%).
Conclusion: These findings of heterogeneous relationships between several important risk factors and heart failure types underline the potential differences in the etiology of HFpEF and HFrEF.
Drug discovery for disease-modifying therapies for Alzheimer's disease and related dementias (ADRD) based on the traditional paradigm of experimental animal models has been disappointing. We describe the rationale and design of the Drug Repurposing for Effective Alzheimer's Medicines (DREAM) study, an innovative multidisciplinary alternative to traditional drug discovery. First, we use a systems biology perspective in the "hypothesis generation" phase to identify metabolic abnormalities that may either precede or interact with the accumulation of ADRD neuropathology, accelerating the expression of clinical symptoms of the disease. Second, in the "hypothesis refinement" phase we propose use of large patient cohorts to test whether drugs approved for other indications that also target metabolic drivers of ADRD pathogenesis might alter the trajectory of the disease. We emphasize key challenges in population-based pharmacoepidemiologic studies aimed at quantifying the association between medication use and ADRD onset and outline robust causal inference principles to safeguard against common pitfalls. Candidate ADRD treatments emerging from this approach will hold promise as plausible disease-modifying therapies for evaluation in randomized controlled trials.
OBJECTIVE: To compare the risk of incident diabetes mellitus (DM) in patients with rheumatoid arthritis (RA) treated with biologic or targeted synthetic disease-modifying antirheumatic drugs.
METHODS: A new-user observational cohort study was conducted using data from a US commercial (Truven MarketScan, 2005-2016) claims database and a public insurance (Medicare, 2010-2014) claims database. Patients with RA who did not have DM were selected into one of eight exposure groups (abatacept, infliximab, adalimumab, golimumab, certolizumab, etanercept, tocilizumab, or tofacitinib) and observed for the outcome of incident DM, defined as a combination of a diagnosis code and initiation of a hypoglycemic treatment. A stabilized inverse probability-weighted Cox proportional hazards model was used to account for 56 confounding variables and estimate hazard ratios (HRs) and 95% confidence intervals (CIs). All analyses were conducted separately in two databases, and estimates were combined using inverse variance meta-analysis.
RESULTS: Among a total of 50 505 patients with RA from Truven and 17 251 patients with RA from Medicare, incidence rates (95% CI) for DM were 6.8 (6.1-7.6) and 6.6 (5.4-7.9) per 1000 person-years, respectively. After confounding adjustment, the pooled HRs (95% CI) indicated a significantly higher risk of DM among adalimumab (2.00 [1.11-3.03]) and infliximab initiators (2.34 [1.38-3.98]) compared with abatacept initiators. The pooled HR (95% CI) for the etanercept versus abatacept comparison was elevated but not statistically significant (1.65 [0.91-2.98]). The effect estimates for certolizumab, golimumab, tocilizumab, and tofacitinib, compared with abatacept, were highly imprecise because of a limited sample size.
CONCLUSION: Initiation of abatacept was associated with a lower risk of incident DM in patients with RA compared with infliximab or adalimumab.
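The pooling step described in the methods, inverse variance meta-analysis of database-specific hazard ratios, can be written compactly. The two (HR, 95% CI) inputs below are illustrative, not the study's estimates:

```python
import math

def pool(estimates):
    """Fixed-effect inverse-variance pooling of (HR, lower, upper) tuples,
    done on the log scale where the estimates are approximately normal."""
    num = den = 0.0
    for hr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the CI width
        w = 1 / se ** 2
        num += w * math.log(hr)
        den += w
    pooled, se_pooled = num / den, math.sqrt(1 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Hypothetical database-specific estimates (HR, 95% CI).
print(pool([(1.9, 1.1, 3.3), (2.2, 1.0, 4.8)]))
```

The pooled estimate is pulled toward the more precise input, and its confidence interval is narrower than either component's.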
BACKGROUND: To compare the risk of incident heart failure (HF) between initiators of hydrophilic and lipophilic statins.
METHODS: Using claims data for commercial health insurance program enrollees in the USA (2005-2014), we identified new initiators of hydrophilic or lipophilic statins. Follow-up for the primary outcome of incident HF began after a lag period of 1 year after statin initiation. The outcome was defined as 1 inpatient or 2 outpatient diagnosis codes for HF and the use of loop diuretics. Propensity scores (PS) were used to account for confounding. Hazard ratios (HR) for incident HF were computed separately for low- and high-intensity statin users and then pooled to provide dose-adjusted effect estimates.
RESULTS: A total of 7,820,204 patients met all our inclusion criteria for statin initiation (hydrophilic and lipophilic statins). Mean age was 58 years, 40% had hypertension, and 23% had diabetes mellitus. After PS matching, there were 691,584 patients in the low-intensity statin group and 807,370 patients in the high-intensity statin group. After a median follow-up of 725 days (IQR 500-1,153), there were 8,389 cases of incident HF (incidence rate 4.5/1,000 person-years, 95% confidence interval [CI] 4.4-4.6). The unadjusted HR for the risk of HF was 0.77 (95% CI 0.76-0.79), and the pooled adjusted HR for incident HF after PS matching was 0.94 (95% CI 0.90-0.98), for hydrophilic versus lipophilic statins. The HR for incident HF for hydrophilic versus lipophilic statins was 1.06 (95% CI 1.00-1.12) in the low-intensity statin group and 0.82 (95% CI 0.78-0.87) in the high-intensity statin group. In subgroup analyses, a similar trend persisted in those younger and older than 65 years and when comparing rosuvastatin with atorvastatin.
CONCLUSION: In this observational cohort study, hydrophilic statins were associated with a modestly lower risk of incident HF compared with lipophilic statins. Future research replicating these findings in different populations is recommended.
Regulators wish to understand whether real-world evidence can be used to support secondary indications of biologics. Using the secondary indication of adalimumab for ulcerative colitis (UC) as an example, we aimed to replicate the ULTRA-2 randomized controlled trial finding on the effectiveness of adalimumab in patients with UC using real-world data analyses. Adalimumab, a TNF-alpha inhibitor initially approved for Crohn's disease, was approved for moderate to severe UC in 2012. The ULTRA-2 trial had shown improved remission versus placebo in patients with UC. Using claims data (2006-2012), we conducted a cohort study of patients with UC who initiated adalimumab and compared them with (i) nonusers and (ii) new users of infliximab using propensity score matching. The coprimary end points were corticosteroid (CS) discontinuation within 8 weeks and within 1 year of treatment. We computed hazard ratios (HRs) and 95% confidence intervals (CIs). We identified 398 matched pairs of adalimumab users vs. nonusers and 326 pairs of adalimumab vs. infliximab users. Adalimumab users were 28% more likely to achieve CS discontinuation compared with nonusers over 1 year (HR = 1.28; 95% CI 0.94-1.73). However, unlike in ULTRA-2, this effect was not observed in the first 8 weeks (HR = 0.79; 95% CI 0.65-0.97). Compared with infliximab, adalimumab initiators showed no incremental benefit over 1 year (HR = 1.08; 95% CI 0.80-1.04), but showed a 22% reduction (HR = 0.78; 95% CI 0.64-0.95) during the first 8 weeks of treatment. In summary, our results highlight opportunities and some limitations of database analyses to identify treatment effects for secondary indications.
The anticoagulant response to warfarin, a narrow therapeutic index drug, increases with age, which may make older patients susceptible to adverse outcomes resulting from small differences in bioavailability between generic and brand products. Using US Medicare claims linked to electronic medical records from two large hospitals in Boston, we designed a cohort study of patients ≥ 65 years old. Patients were followed for a composite effectiveness outcome of ischemic stroke or venous thromboembolism, a composite safety outcome including major hemorrhage, and 1-year all-cause mortality. After propensity score fine-stratification and weighting to account for > 90 confounders, hazard ratios (95% confidence intervals) comparing brand versus generic warfarin initiators for the effectiveness, safety, and all-cause mortality outcomes were 0.97 (0.65-1.46), 0.94 (0.65-1.35), and 0.84 (0.62-1.13), respectively. Results from subgroup analyses of patients with atrial fibrillation, CHA2DS2-VASc score ≥ 3, and HAS-BLED score ≥ 3 were consistent with the primary analysis.
Importance: Accurate risk stratification of patients with heart failure (HF) is critical to deploy targeted interventions aimed at improving patients' quality of life and outcomes.
Objectives: To compare machine learning approaches with traditional logistic regression in predicting key outcomes in patients with HF and evaluate the added value of augmenting claims-based predictive models with electronic medical record (EMR)-derived information.
Design, Setting, and Participants: A prognostic study with a 1-year follow-up period was conducted including 9502 Medicare-enrolled patients with HF from 2 health care provider networks in Boston, Massachusetts ("providers" includes physicians, clinicians, other health care professionals, and their institutions that comprise the networks). The study was performed from January 1, 2007, to December 31, 2014; data were analyzed from January 1 to December 31, 2018.
Main Outcomes and Measures: All-cause mortality, HF hospitalization, top cost decile, and home days loss greater than 25% were modeled using logistic regression, least absolute shrinkage and selection operator (LASSO) regression, classification and regression trees, random forests, and gradient-boosted modeling (GBM). All models were trained using data from network 1 and tested in network 2. After selecting the most efficient modeling approach based on discrimination, Brier score, and calibration, area under precision-recall curves (AUPRCs) and net benefit estimates from decision curves were calculated to focus on the differences between claims-only and claims + EMR predictors.
Results: A total of 9502 patients with HF with a mean (SD) age of 78 (8) years were included: 6113 from network 1 (training set) and 3389 from network 2 (testing set). Gradient-boosted modeling consistently provided the highest discrimination, lowest Brier scores, and good calibration across all 4 outcomes; however, logistic regression had generally similar performance (C statistics for logistic regression based on claims-only predictors: mortality, 0.724; 95% CI, 0.705-0.744; HF hospitalization, 0.707; 95% CI, 0.676-0.737; high cost, 0.734; 95% CI, 0.703-0.764; and home days loss claims only, 0.781; 95% CI, 0.764-0.798; C statistics for GBM: mortality, 0.727; 95% CI, 0.708-0.747; HF hospitalization, 0.745; 95% CI, 0.718-0.772; high cost, 0.733; 95% CI, 0.703-0.763; and home days loss, 0.790; 95% CI, 0.773-0.807). Higher AUPRCs were obtained for claims + EMR vs claims-only GBMs predicting mortality (0.484 vs 0.423), HF hospitalization (0.413 vs 0.403), and home time loss (0.575 vs 0.521) but not cost (0.249 vs 0.252). The net benefit for claims + EMR vs claims-only GBMs was higher at various threshold probabilities for mortality and home time loss outcomes but similar for the other 2 outcomes.
Conclusions and Relevance: Machine learning methods offered only limited improvement over traditional logistic regression in predicting key HF outcomes. Inclusion of additional predictors from EMRs to claims-based models appeared to improve prediction for some, but not all, outcomes.
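The net benefit quantity behind the decision curves mentioned above has a simple closed form: the true-positive fraction minus the false-positive fraction weighted by the odds of the threshold probability. A sketch with simulated, well-calibrated predictions (all values hypothetical):

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating patients with predicted risk >= threshold."""
    treat = y_prob >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))  # treated patients who have the event
    fp = np.sum(treat & (y_true == 0))  # treated patients who do not
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(1)
y_prob = rng.random(10_000)
y_true = (rng.random(10_000) < y_prob).astype(int)  # perfectly calibrated model

for t in (0.1, 0.2, 0.3):
    print(f"threshold {t:.1f}: net benefit {net_benefit(y_true, y_prob, t):.3f}")
```

Comparing these curves between a claims-only and a claims + EMR model across a range of thresholds is exactly the decision-curve comparison the study reports.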
PURPOSE: As more biosimilars become available in the United States, postapproval noninterventional studies describing biosimilar switching and comparing effectiveness and/or safety between switchers and nonswitchers will play a key role in generating real-world evidence to inform clinical practices and policy decisions. Ensuring sound methodology is critical for making valid inferences from these studies.
METHODS: The Biologics and Biosimilars Collective Intelligence Consortium (BBCIC) convened a workgroup consisting of academic researchers, industry scientists, and practicing clinicians to establish best practice recommendations for the conduct of noninterventional studies of biosimilar and reference biologic switching. The workgroup members participated in eight teleconferences between August 2017 and February 2018 to discuss specific topics and build consensus.
RESULTS: This report provides workgroup recommendations covering five main considerations relating to noninterventional studies describing reference biologic to biosimilar switching and comparing reference biologics with biosimilars for safety and effectiveness in the presence of switching at treatment initiation and during follow-up: (a) selecting appropriate data sources from a range of available options including insurance claims, electronic health records, and registries; (b) study designs; (c) outcomes of interest including health care utilization and clinical endpoints; (d) analytic approaches including propensity scores, disease risk scores, and instrumental variables; and (e) special considerations including avoiding designs that ignore history of biologic use, avoiding immortal time bias and exposure misclassification, and accounting for postindex switching.
CONCLUSION: Recommendations provided in this report provide a framework that may be helpful in designing and critically evaluating postapproval noninterventional studies involving reference biologic to biosimilar switching.
BACKGROUND: Biologic agents may pose a potential risk for exacerbations of pulmonary comorbidities in rheumatoid arthritis (RA) patients.
METHODS: Using U.S. Medicare and Truven MarketScan databases, we identified three cohorts of RA patients with interstitial lung disease (ILD), chronic obstructive pulmonary disease (COPD), or asthma who initiated abatacept or a TNF inhibitor (TNFi). The primary outcome was a composite exacerbation of the individual pulmonary comorbidity, based on inpatient or emergency department (ED) visits due to exacerbation of the given pulmonary comorbidity. To adjust for >60 baseline confounders, we used propensity-score fine stratification (PSS) and weighting. A negative binomial regression model estimated a cohort-specific incidence rate ratio (IRR) and 95% confidence interval (CI) of the primary outcome per database, comparing abatacept versus TNFi. Database-specific IRRs were combined using a random-effects meta-analysis.
RESULTS: We identified 3,295 ILD, 7,161 COPD, and 5,613 asthma patients with RA who initiated either abatacept or a TNFi. The incidence rate (IR) of composite exacerbation was high in all three pulmonary cohorts but highest in the COPD cohort (3.59-11.80 per 100 person-years in ILD, 20.68-34.97 in COPD, and 4.66-13.78 in asthma). After PSS and weighting, the combined IRR (95% CI) for abatacept versus TNFi initiators was 0.44 (0.18-1.09) for ILD exacerbation, 0.91 (0.80-1.03) for COPD exacerbation, and 0.81 (0.54-1.22) for asthma exacerbation.
CONCLUSION: Among patients with RA and pulmonary comorbidities, exacerbations requiring inpatient or ED visits occurred frequently after initiating abatacept or a TNFi. Overall, we found no significant difference in the risk of ILD, COPD, or asthma exacerbation between abatacept and TNFi initiators, but the precision of our estimates was limited.
OBJECTIVE: To compare the risk of serious infections between the use of tumor necrosis factor inhibitors (TNFi) plus methotrexate (MTX) versus triple therapy among rheumatoid arthritis (RA) patients in a real-world setting.
METHODS: Using claims data from Truven MarketScan (2003-2014), we conducted a cohort study to compare RA patients receiving MTX who added a TNFi (TNFi plus MTX group) versus MTX plus hydroxychloroquine and sulfasalazine (triple therapy group). The primary outcome was any serious infection (i.e., a composite end point of hospitalized bacterial and opportunistic infections or herpes zoster). Secondary outcomes were individual components of the composite end point. To adjust for baseline confounding, we used propensity score (PS)-based fine stratification and weighting. A weighted Cox proportional hazards model estimated the hazard ratio (HR) and 95% confidence interval (95% CI) of the outcomes.
RESULTS: After PS stratification (PSS) and weighting, we included a total of 45,208 TNFi plus MTX initiators and 1,387 triple therapy initiators. Mean age was 53 years and 70% were female. The incidence rate of any serious infection per 100 person-years was 2.46 in the TNFi plus MTX group and 2.03 in the triple therapy group. The PSS-weighted HR for any serious infection comparing TNFi plus MTX versus triple therapy was 1.23 (95% CI 0.87-1.74). For the secondary outcomes, the PSS-weighted HR was 1.41 (95% CI 0.85-2.34) for bacterial infection and 0.80 (95% CI 0.55-1.18) for herpes zoster.
CONCLUSION: In this real-world cohort of RA patients, we noted no substantially different risk of any serious infection, bacterial infection, or herpes zoster after initiating TNFi plus MTX versus triple therapy, although CIs were wide.
Estimating hazard ratios (HR) presents challenges for propensity score (PS)-based analyses of cohorts with differential depletion of susceptibles. When the treatment effect is not null, cohorts that were balanced at baseline tend to become unbalanced on baseline characteristics over time as "susceptible" individuals drop out of the population at risk differentially across treatment groups due to having outcome events. This imbalance in baseline covariates causes marginal (population-averaged) HRs to diverge from conditional (covariate-adjusted) HRs over time and systematically move toward the null. Methods that condition on a baseline PS yield HR estimates that fall between the marginal and conditional HRs when these diverge. Unconditional methods that match on the PS or weight by a function of the PS can estimate the marginal HR consistently but are prone to misinterpretation when the marginal HR diverges toward the null. Here, we present results from a series of simulations to help analysts gain insight into these issues. We propose a novel approach that uses time-dependent PSs to consistently estimate conditional HRs, regardless of whether susceptibles have been depleted differentially. Simulations show that adjustment for time-dependent PSs can adjust for covariate imbalances over time that are caused by depletion of susceptibles. Updating the PS is unnecessary when outcome incidence is so low that depletion of susceptibles is negligible. But if incidence is high, and covariates and treatment affect risk, then covariate imbalances arise as susceptibles are depleted, and PS-based methods can consistently estimate the conditional HR only if the PS is periodically updated.
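The drift described above is easy to reproduce in a small simulation: hold the conditional (frailty-specific) HR fixed at 0.5, start with frailty balanced across arms, and track the interval-specific marginal HR as susceptibles are depleted faster in the untreated arm. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
frail = rng.random(n) < 0.5      # binary frailty, balanced at baseline
treated = rng.random(n) < 0.5    # assigned independently of frailty

base = np.where(frail, 0.10, 0.01)           # per-interval event probability
hazard = base * np.where(treated, 0.5, 1.0)  # conditional HR fixed at 0.5

at_risk = np.ones(n, dtype=bool)
hrs = []
for t in range(15):
    events = at_risk & (rng.random(n) < hazard)
    h_tr = events[treated & at_risk].mean()   # realized hazard, treated
    h_un = events[~treated & at_risk].mean()  # realized hazard, untreated
    hrs.append(h_tr / h_un)
    at_risk &= ~events  # depletion: those with events leave the risk set

print(f"marginal HR, interval 1:  {hrs[0]:.2f}")   # ~0.50
print(f"marginal HR, interval 15: {hrs[-1]:.2f}")  # drifts toward the null
```

Because frail untreated patients have events soonest, the surviving untreated risk set becomes progressively less frail than the surviving treated risk set, and the marginal interval-specific HR rises toward 1 even though nothing about the conditional effect changes.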