PURPOSE: The Food and Drug Administration's Sentinel System developed parameterized, reusable analytic programs for evaluation of medical product safety. Research on outpatient antibiotic exposures and Clostridium difficile infection (CDI) using non-user reference groups led us to expect a higher rate of CDI among outpatient clindamycin users vs penicillin users. We evaluated the ability of the Cohort Identification and Descriptive Analysis and Propensity Score Matching tools to identify a higher rate of CDI among clindamycin users.
METHODS: We matched new users of outpatient dispensings of oral clindamycin or penicillin from 13 Data Partners 1:1 on propensity score and followed them for up to 60 days for development of CDI. We used Cox proportional hazards regression stratified by Data Partner and matched pair to compare CDI incidence.
RESULTS: Propensity score models at 3 Data Partners had convergence warnings and a limited range of predicted values. We excluded these Data Partners despite adequate covariate balance after matching. From the 10 Data Partners where these models converged without warnings, we identified 807 919 new clindamycin users and 8 815 441 new penicillin users eligible for the analysis. The stratified analysis of 807 769 matched pairs included 840 events among clindamycin users and 290 among penicillin users (hazard ratio 2.90, 95% confidence interval 2.53, 3.31).
CONCLUSIONS: This evaluation produced an expected result and identified several potential enhancements to the Propensity Score Matching tool. This study has important limitations. CDI risk may have been related to factors other than the inherent properties of the drugs, such as duration of use or subsequent exposures.
A retrospective cohort study, supplemented with a nested case-control study, was performed using two administrative databases from commercial health plans in the United States to compare the incidence of pancreatic and thyroid cancer among users of exenatide versus other antidiabetic drugs (OADs). Patients with type 2 diabetes who initiated exenatide or OADs between 1 June 2005 and 30 June 2015 were included. Pancreatic and thyroid cancers were identified using chart-validated algorithms in the cohort study. Cases in the nested case-control study were chart-confirmed pancreatic or thyroid cancers, and controls were sampled using risk-set sampling. The time-fixed analyses comparing 33 629 exenatide initiators with 49 317 propensity-score-matched OAD initiators yielded hazard ratios of 0.76 (95% confidence interval [CI] 0.47-1.21) for pancreatic cancer and 1.46 (95% CI 0.98-2.19) for thyroid cancer. Results in the time-dependent analyses by cumulative duration or dose were similar. Nested case-control analyses yielded rate ratios of 0.48 (95% CI 0.25-0.91) for pancreatic cancer and 0.87 (95% CI 0.59-1.29) for thyroid cancer. This observational study suggested exenatide use was not associated with an increased risk of pancreatic or thyroid cancer.
Use of disease risk score (DRS)-based confounding adjustment when estimating treatment effects on multiple outcomes is not well studied. We designed an empirical cohort study to compare dabigatran initiators and warfarin initiators with respect to risks of ischemic stroke and major bleeding in 12 sequential monitoring periods (90 days each), using data from the Truven Marketscan database (Truven Health Analytics, Ann Arbor, Michigan). We implemented 2 approaches to combining DRS for multiple outcomes: 1) 1:1 matching on prognostic propensity scores (PPS), created using DRS for bleeding and stroke as independent variables in a propensity score (PS) model; and 2) simultaneous 1:1 matching on DRS for bleeding and stroke using Mahalanobis distance (M-distance). We compared the performance of both with that of traditional PS matching. M-distance matching appeared to produce more stable results in the early marketing period than both PPS and traditional PS matching; hazard ratios from unadjusted analysis, traditional PS matching, PPS matching, and M-distance matching after 4 periods were 0.72 (95% confidence interval (CI): 0.51, 1.03), 0.61 (95% CI: 0.31, 1.09), 0.55 (95% CI: 0.33, 0.91), and 0.78 (95% CI: 0.45, 1.34), respectively, for stroke and 0.65 (95% CI: 0.53, 0.80), 0.78 (95% CI: 0.60, 1.01), 0.75 (95% CI: 0.59, 0.96), and 0.78 (95% CI: 0.64, 0.95), respectively, for bleeding. In later periods, estimates were similar for traditional PS matching and M-distance matching but suggested potential residual confounding with PPS matching. These results suggest that M-distance matching may be a valid approach for extending DRS-based confounding adjustment to multiple outcomes of interest.
PURPOSE: To explore generalized boosted modeling (GBM) as a method for identifying subgroups with greater benefit or harm with dabigatran versus warfarin for treatment of atrial fibrillation.
METHODS: We identified new initiators of warfarin or dabigatran with nonvalvular atrial fibrillation in 2 healthcare claims databases (2009-2013) and used GBM within 1 data source (development cohort) to explore subgroups where their effect on thromboembolism and major bleeding may differ. Identified subgroups were evaluated in the second data source (validation cohort) with stabilized-inverse-probability-of-treatment weights to adjust for confounding.
RESULTS: Development and validation cohorts included 13 624 (28% dabigatran) and 62 596 (29% dabigatran) initiators, respectively. In development data, the strongest exposure interactions were prior thromboembolism and renal disease. In validation data, reduction in thromboembolism with dabigatran was greater for patients with versus without a history of thromboembolism by 2.8 (95% CI, -0.5 to 5.4) events per 100 patient-years. Major bleeding was reduced by 1.6/100 patient-years for dabigatran compared to warfarin initiators, without evidence of variation by renal disease.
CONCLUSIONS: We explored use of GBM to identify potential subgroups with different treatment effect. Dabigatran's superiority to warfarin at prevention of thromboembolism may be greater in secondary than primary prevention. In practice, secondary prevention patients are more often treated with warfarin.
Small changes in bioavailability of narrow therapeutic index (NTI) drugs can alter clinical outcomes, raising concern over generic NTI substitution. We surveyed pharmacists to identify their perceptions of generic NTI drugs, their frequency of performing generic NTI substitution, and predictors of this behavior. Of 710 respondents (33% response rate), 87% perceived generic NTI drugs as effective as their brand-name versions and 94% as safe. Whereas 82% almost always performed generic NTI substitution for initial prescriptions, only 60% did for refills. Pharmacists in non-chain settings (odds ratio (OR) = 2.37; 95% confidence interval (CI) = 1.40-4.02), in practice longer (per year OR = 1.04; 95% CI = 1.02-1.06), in states with affirmative patient consent laws (OR = 1.88; 95% CI = 1.06-3.32), and in states with NTI-specific substitution requirements (OR = 1.95; 95% CI = 1.16-3.26) were more likely not to substitute initial prescriptions. Education of non-chain and veteran pharmacists and elimination of affirmative patient consent and NTI-specific substitution requirements could increase generic NTI substitution.
Active surveillance for unknown or unsuspected adverse drug effects may be carried out by applying epidemiological techniques to large administrative databases. Self-controlled designs, like the symmetry design, have the advantage over conventional designs of adjusting for confounders that are stable over time. The aim of this paper was to describe the output of a comprehensive open-ended symmetry analysis of a large dataset. All drug dispensings and all secondary care contacts in Denmark during the period 1995-2012 for persons born before 1950 were analyzed by a symmetry design. We analyzed all drug-drug sequences and all drug-disease sequences occurring during the study period. The identified associations were ranked according to the number of outcomes that potentially could be attributed to the exposure. In the main analysis, 29,891,212 incident drug therapies and 21,300,000 incident diagnoses were included. Of the 186,758 associations tested in the main analysis, 43,575 (23.3%) showed a meaningful effect size. For the top 200 drug-drug associations, 47% represented unknown associations, 24% represented known adverse drug reactions, and 30% were explained by mutual indication or reverse causation. For the top 200 drug-disease associations, the proportions were 31%, 15%, and 55%, respectively. Screening by symmetry analysis can be a useful starting point for systematic pharmacovigilance activities if coupled with a systematic post-hoc review of signals.
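The core statistic of a symmetry analysis, the crude sequence ratio for one drug-drug pair, can be sketched as follows. The function and toy data are hypothetical, and a real screen would additionally correct for prescribing trends using a null-effect sequence ratio, which is omitted here.

```python
# Hypothetical sketch: crude sequence ratio for one drug-drug pair in a
# symmetry analysis. A real screen would also correct for prescribing
# trends with a null-effect sequence ratio; that step is omitted here.

def crude_sequence_ratio(first_dates):
    """first_dates: one (first_date_A, first_date_B) tuple per person
    who started both drugs during the study period; ties are discarded."""
    a_first = sum(1 for a, b in first_dates if a < b)
    b_first = sum(1 for a, b in first_dates if b < a)
    return a_first / b_first

# Toy data: 30 persons started A before B, 15 the reverse -> ratio 2.0,
# i.e., B is initiated after A more often than symmetry would predict,
# flagging a possible adverse effect of A treated with B.
dates = [(1, 2)] * 30 + [(2, 1)] * 15
print(crude_sequence_ratio(dates))  # 2.0
```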
BACKGROUND: Shortages of chronic medications are an increasingly common problem, yet little is known about their impact on drug utilization and clinical outcomes. We evaluated the population-level impact of the metoprolol extended release shortage that occurred in the United States in 2009 to 2010.
METHODS AND RESULTS: We conducted a population-based, time series analysis of 38 914 patients (mean age, 60 years; 69% men) discharged after hospitalization for myocardial infarction (MI) between January 2006 and November 2012 in a large commercial insurance database. The shortage period was defined as February 2009 to June 2010. Data before September 2008 were defined as the preshortage period and data after June 2010 as the postshortage period. Outcomes were the proportion of patients who filled any long- or short-acting β-blocker within 30 days of discharge, adherence to β-blockers within the first year of therapy among patients who initiated β-blockers, and rates of 1-year rehospitalization for MI or unstable angina. Post-MI statin utilization and adherence were evaluated as control outcomes. During the preshortage period, 70% of patients filled a β-blocker, mean monthly adherence was 76%, and the average monthly rate of rehospitalization was 6.5 events per 100 person-years, as compared with β-blocker use of 62%, average adherence of 70%, and a rehospitalization rate of 5.6 events per 100 person-years during the shortage. After accounting for the baseline (preshortage) trends, the shortage was associated with significant monthly reductions in postdischarge β-blocker use (-0.57% of patients [95% CI, -0.90 to -0.24] per month) and an immediate decrease in adherence (-4.58% days covered [95% CI, -6.12 to -3.04]). No negative impact on rates of rehospitalization, post-MI statin utilization, or statin adherence was observed. β-Blocker utilization began to increase after the resolution of the shortage.
CONCLUSIONS: The nationwide metoprolol extended release shortage in the United States was associated with fewer patients receiving any long- or short-acting β-blocker post-MI and lower adherence to β-blocker therapy for those who did receive it, but did not appear to appreciably affect clinical outcomes at the population level.
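The time series analysis described above is typically implemented as segmented (interrupted time series) regression with level-change and trend-change terms. Below is a minimal sketch under that assumption, fitted by ordinary least squares on synthetic monthly data; it is not the study's data or code.

```python
# Segmented (interrupted time series) regression sketch with level-change
# and trend-change terms, fitted by ordinary least squares on synthetic
# monthly data. Illustrative model form only; not the study's code.

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    v = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):                       # forward elimination
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv], v[p], v[piv] = A[piv], A[p], v[piv], v[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for q in range(p, k):
                A[r][q] -= f * A[p][q]
            v[r] -= f * v[p]
    b = [0.0] * k
    for p in range(k - 1, -1, -1):           # back substitution
        b[p] = (v[p] - sum(A[p][q] * b[q] for q in range(p + 1, k))) / A[p][p]
    return b

def segmented_fit(y, t0):
    """y: monthly outcome series; t0: first month of the shortage. Returns
    [baseline level, baseline trend, level change, trend change]."""
    X = [[1.0, float(t), float(t >= t0), float(t - t0) if t >= t0 else 0.0]
         for t in range(len(y))]
    return ols(X, y)

# Synthetic series: level 70, slope 0.1/month, then a -5 level drop and a
# -0.5/month trend change starting at month 10.
y = [70 + 0.1 * t + (-5.0 - 0.5 * (t - 10) if t >= 10 else 0.0)
     for t in range(24)]
print([round(b, 3) for b in segmented_fit(y, 10)])  # [70.0, 0.1, -5.0, -0.5]
```

The level-change coefficient corresponds to an immediate shift (like the adherence drop reported above) and the trend-change coefficient to a per-month change (like the monthly reduction in β-blocker use).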
BACKGROUND: High-deductible health plans (HDHP) are associated with high levels of patient cost-sharing and are increasingly used in the United States as a means of reducing healthcare utilization and spending. Our objective was to determine whether HDHP enrollment is associated with a change in adherence to evidence-based medications to treat cardiovascular risk factors and whether such changes vary based on race/ethnicity or socioeconomic status.
METHODS AND RESULTS: We conducted a retrospective cohort study using an interrupted time series with concurrent control group design among beneficiaries of Aetna, a national commercial insurer. We included 14 866 patients who filled prescriptions for medications to treat hypertension, high cholesterol, or diabetes mellitus between 2009 and 2014 and who switched from a traditional plan into an HDHP and 14 866 controls who did not switch to an HDHP matched based on calendar time, medication class, race/ethnicity, socioeconomic status, and propensity score. We were specifically interested in evaluating 4 prespecified subgroups based on race/ethnicity (white versus nonwhite) and socioeconomic status (higher versus lower). The main outcome was medication adherence as measured by proportion of days covered. The overall cohort had an average age of 53 years, and 44% were women. Baseline adherence was the lowest in the nonwhite patient group. Switching to an HDHP was associated with a decrease in the level of adherence of 5 percentage points across all 4 subgroups (change in level, -5.0%; 95% CI, -5.9% to -4.0%; P<0.0001).
CONCLUSIONS: HDHP enrollment was associated with a reduction in adherence to medications to treat cardiovascular risk factors. The magnitude of this effect did not vary based on race/ethnicity or socioeconomic status. Because racial/ethnic minorities have lower rates of medication adherence, future studies should evaluate whether HDHP-associated changes in adherence have greater clinical consequences for these patients.
Medication synchronization programs based in pharmacies simplify the refill process by enabling patients to pick up all of their medications on a single visit. This can be especially important for improving medication adherence in patients with complex chronic diseases. We evaluated the impact of two synchronization programs on adherence, cardiovascular events, and resource use among Medicare beneficiaries treated between 2011 and 2014 for two or more chronic conditions, at least one of which was hypertension, hyperlipidemia, or diabetes. Among nearly 23,000 patients matched by propensity score, the mean proportion of days covered (a measure of medication adherence) for the control group of patients without a synchronization program was 0.84, compared to 0.87 for synchronized patients, a gain of 3 percentage points. Adherence improvement in synchronized versus control patients was three times greater in patients with low baseline adherence, compared to those with higher baseline adherence. Rates of hospitalization and emergency department visits and rates of outpatient visits were 9 percent and 3 percent lower, respectively, in the synchronized group compared to the control group, while cardiovascular event rates were similar. Synchronization programs were associated with improved adherence for patients with cardiovascular disease, especially those with low baseline adherence.
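The proportion of days covered used as the adherence measure above can be computed from fill records roughly as follows. The data layout and 100-day window are hypothetical; real implementations also handle fills before the window, drug switching, and inpatient stays.

```python
# Hypothetical sketch: proportion of days covered (PDC) from pharmacy
# fill records, counting overlapping fills only once. Real implementations
# also handle pre-window fills, switching, and inpatient stays.

def pdc(fills, window_start, window_end):
    """fills: list of (fill_day, days_supply), days as integer offsets
    from the start of observation; returns covered fraction of the window."""
    covered = set()
    for day, supply in fills:
        for d in range(day, day + supply):
            if window_start <= d < window_end:
                covered.add(d)
    return len(covered) / (window_end - window_start)

# Three 30-day fills over a 100-day window: a 5-day overlap between the
# first two fills and a 15-day gap before the third.
fills = [(0, 30), (25, 30), (70, 30)]
print(pdc(fills, 0, 100))  # 0.85
```

Using a day set (rather than summing days supplied) is what prevents the overlapping fill from inflating the estimate, mirroring how early synchronized refills should not count twice.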
Adults with repaired coarctation of the aorta (CoA) suffer reduced long-term survival compared with the general population, in part due to coronary artery disease (CAD). There is conflicting evidence as to whether or not CoA is an independent risk factor for CAD. The primary aim was to determine if CoA is independently associated with premature myocardial infarction (MI) in the contemporary era. The secondary aim was to determine if CoA is independently associated with early coronary intervention. In a cross-sectional study using the National Inpatient Sample database from 2005 to 2014, we compared the age at MI and the age at coronary intervention (coronary artery bypass grafting or percutaneous coronary intervention, in the absence of MI diagnosis) in patients with and without CoA using weighted linear regression. Among 5,472,416 observations with a primary diagnosis of MI, 174 had a diagnosis of CoA. Patients with CoA had MI 7.2 years younger than those without CoA, after adjusting for potential confounders (95% CI -11.3, -3.1, p = 0.001). Among 3,631,718 patients without a diagnosis of MI who underwent coronary artery bypass grafting or percutaneous coronary intervention, 279 had a diagnosis of CoA. Patients with CoA who underwent coronary intervention were 15.6 years younger than those without CoA, after adjusting for potential confounders (95% CI -18.3, -12.9, p < 0.001). In conclusion, patients with CoA have MI at a slightly younger age and undergo coronary intervention at a significantly younger age than those without CoA in the contemporary era. Our findings support continued close surveillance for and treatment of modifiable risk factors for CAD.
PURPOSE: The primary objective of this study was to characterize variation in patterns of opioid prescribing within primary care settings at first visits for pain, and to describe variation by condition, geography, and patient characteristics.
METHODS: 2014 healthcare utilization data from Optum's Clinformatics™ DataMart were used to evaluate individuals 18 years or older with an initial presentation to primary care for 1 of 10 common pain conditions. The main outcomes assessed were (1) the proportion of first visits for pain associated with an opioid prescription fill and (2) the proportion of opioid prescriptions with >7 days' supply.
RESULTS: We identified 205 560 individuals who met inclusion criteria; 9.1% of all visits were associated with an opioid fill, ranging from 4.1% (headache) to 28.2% (dental pain). Approximately half (46%) of all opioid prescriptions were for more than a 7-day supply, and 10% of prescriptions were for ≥30 days. We observed a 4-fold variation in rates of opioid initiation by state, with the highest rates of prescribing in Alabama (16.6%) and the lowest rates in New York (3.7%).
CONCLUSIONS: In 2014, nearly half of all patients filling opioid prescriptions received more than 7 days' supply of opioids in an initial prescription. Policies limiting initial supplies have the potential to substantially impact opioid prescribing in the primary care setting.
PURPOSE: The US Food and Drug Administration's Sentinel system developed tools for sequential surveillance.
METHODS: In patients with non-valvular atrial fibrillation, we sequentially compared outcomes for new users of rivaroxaban versus warfarin, employing propensity score matching and Cox regression. A total of 36 173 rivaroxaban and 79 520 warfarin initiators were variable-ratio matched within 2 monitoring periods.
RESULTS: Statistically significant signals were observed for ischemic stroke (IS) (first period) and intracranial hemorrhage (ICH) (second period) favoring rivaroxaban, and gastrointestinal bleeding (GIB) (second period) favoring warfarin. In follow-up analyses using primary position diagnoses from inpatient encounters for increased definition specificity, the hazard ratios (HR) for rivaroxaban vs warfarin new users were 0.61 (0.47, 0.79) for IS, 1.47 (1.29, 1.67) for GIB, and 0.71 (0.50, 1.01) for ICH. For GIB, the HR varied by age: <66 HR = 0.88 (0.60, 1.30) and 66+ HR = 1.49 (1.30, 1.71).
CONCLUSIONS: This study demonstrates the capability of Sentinel to conduct prospective safety monitoring and raises no new concerns about rivaroxaban safety.
Postapproval drug safety studies often use propensity scores (PSs) to adjust for a large number of baseline confounders. These studies may involve examining whether treatment safety varies across subgroups. There are many ways a PS could be used to adjust for confounding in subgroup analyses. These methods have trade-offs that are not well understood. We conducted a plasmode simulation to compare relative performance of 5 methods involving PS matching for subgroup analysis, including methods frequently used in applied literature whose performance has not been previously directly compared. These methods varied as to whether the overall PS, subgroup-specific PS, or no rematching was used in subgroup analysis as well as whether subgroups were fully nested within the main analytical cohort. The evaluated PS subgroup matching methods performed similarly in terms of balance, bias, and precision in 12 simulated scenarios varying size of the cohort, prevalence of exposure and outcome, strength of relationships between baseline covariates and exposure, the true effect within subgroups, and the degree of confounding within subgroups. Each had strengths and limitations with respect to other performance metrics that could inform choice of method.
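Two of the strategies compared above, restricting the original matched pairs to those fully nested in the subgroup (no rematching) versus rematching within the subgroup, can be sketched generically. All names, data structures, and the greedy caliper rule here are assumptions, not the simulation's code.

```python
# Hypothetical sketch contrasting two subgroup strategies for PS-matched
# cohorts: (a) keep the original matched pairs and restrict to pairs whose
# members are BOTH in the subgroup (no rematching) vs. (b) rematch within
# the subgroup on the propensity score. Names and caliper are assumptions.

def restrict_pairs(pairs, in_subgroup):
    """(a) No rematching: keep pairs fully nested in the subgroup."""
    return [(t, c) for t, c in pairs if in_subgroup[t] and in_subgroup[c]]

def rematch_subgroup(ps, treated, comparators, in_subgroup, caliper=0.05):
    """(b) Greedy 1:1 nearest-neighbor rematch within the subgroup."""
    pool = [c for c in comparators if in_subgroup[c]]
    used, pairs = set(), []
    for t in (t for t in treated if in_subgroup[t]):
        cand = [(abs(ps[c] - ps[t]), c) for c in pool
                if c not in used and abs(ps[c] - ps[t]) <= caliper]
        if cand:
            _, c = min(cand)
            used.add(c)
            pairs.append((t, c))
    return pairs

ps = {"t1": 0.20, "t2": 0.60, "c1": 0.21, "c2": 0.62, "c3": 0.80}
sub = {"t1": True, "t2": True, "c1": True, "c2": True, "c3": False}
print(restrict_pairs([("t1", "c1"), ("t2", "c3")], sub))  # [('t1', 'c1')]
print(rematch_subgroup(ps, ["t1", "t2"], ["c1", "c2", "c3"], sub))
# [('t1', 'c1'), ('t2', 'c2')]
```

In the toy data, "t2" was originally matched to an out-of-subgroup comparator, so restriction drops it while rematching recovers it with a new in-subgroup partner; this is the kind of trade-off (sample retention versus rebalancing) the simulation evaluates.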
PURPOSE: The Heart Protection Study 2-Treatment of HDL to Reduce the Incidence of Vascular Events (HPS2-THRIVE) trial found higher incidence rates of adverse reactions, including bleeding, in patients receiving the combination of extended-release niacin and laropiprant versus placebo. It is not known whether these adverse events are attributable to laropiprant, not approved in the USA, or to extended-release niacin. We compared rates of major gastrointestinal bleeding and intracranial hemorrhage among initiators of extended-release niacin and initiators of fenofibrate.
METHODS: We used Mini-Sentinel (now Sentinel) to conduct an observational, new user cohort analysis. We included data from 5 Data Partners covering the period from January 1, 2007 to August 31, 2013. Individuals who initiated extended-release niacin were propensity score-matched to individuals who initiated fenofibrate. Within the matched cohorts, we used Cox proportional hazards models to compare rates of hospitalization for major gastrointestinal bleeding events and intracranial hemorrhage assessed using validated claims-based algorithms.
RESULTS: A total of 234 242 eligible extended-release niacin initiators were identified, of whom 210 389 (90%) were 1:1 propensity score-matched to eligible fenofibrate initiators. In propensity score-matched analyses, no differences were observed between exposure groups in rates of major gastrointestinal bleeding (hazard ratio [HR], 0.98; 95% confidence interval [CI], 0.82 to 1.18) or intracranial hemorrhage (HR, 1.21; 95% CI, 0.66 to 2.22). Results were similar in pre-specified sensitivity and subgroup analyses.
CONCLUSIONS: We did not observe evidence for an association between extended-release niacin versus fenofibrate and rates of major gastrointestinal bleeding or intracranial hemorrhage.
In the presence of heterogeneity of treatment effect (HTE), the average treatment effect from a randomized controlled trial (RCT) may not be applicable to different patients, such as those in observational settings. Our objective was to develop a novel approach that uses individual-level simulation to expand RCT results to target patient populations in the presence of HTE. For this purpose, we compared the results of the Randomized Evaluation of Long-Term Anticoagulation Therapy (RE-LY) trial and two observational studies that compared benefits and risks of dabigatran to warfarin in patients with atrial fibrillation. We developed a simulation model that replicates the rates of ischemic stroke and major bleeding observed in RE-LY using published outcome risk models and participants' baseline characteristics. We used our validated simulation model to predict what the results of the RCT would have been had it been conducted in populations similar to those in the observational studies.
BACKGROUND: Adults with repaired coarctation of the aorta (CoA) have reduced long-term survival compared with the general population. This study aimed to determine whether CoA is independently associated with premature ischemic and hemorrhagic stroke in the contemporary era.
METHODS AND RESULTS: This was a cross-sectional study utilizing the National Inpatient Sample database from 2005 to 2014. We hypothesized that patients with CoA are hospitalized with ischemic and hemorrhagic stroke at a younger age compared with the general population. To test this hypothesis, we compared the age at stroke in patients with and without a diagnosis of CoA using simple and multivariable weighted linear regression. Among 4 894 582 stroke discharges, 207 had a diagnosis of CoA. Patients with CoA had strokes at a significantly younger age compared with patients without CoA: 18.9 years younger for all-cause stroke (P<0.001), 15.9 years younger for ischemic stroke (P<0.001), and 28.5 years younger for hemorrhagic stroke (P<0.001), after adjusting for potential confounders. There was no significant difference in the proportion of ischemic strokes between those with and without CoA (79.2% versus 83.0%; P=0.50). However, CoA patients had a higher proportion of subarachnoid hemorrhage (11.8% versus 4.8%; P=0.039) than those without CoA. Among patients who had a hemorrhagic stroke, the prevalence of unruptured intracranial aneurysms was higher in patients with CoA compared with those without CoA (23.3% versus 2.5%; P=0.002).
CONCLUSIONS: Patients with CoA have both ischemic and hemorrhagic strokes at significantly younger ages compared with the general population.