PURPOSE: To systematically examine trends and applications of the disease risk score (DRS) as a confounder summary method.
METHODS: We completed a systematic search of MEDLINE and Web of Science® to identify all English language articles that applied DRS methods. We tabulated the number of publications by year and type (empirical application, methodological contribution, or review paper) and summarized methods used in empirical applications overall and by publication year (<2000, ≥2000).
RESULTS: Of 714 unique articles identified, 97 examined DRS methods and 86 were empirical applications. We observed a bimodal distribution in the number of publications over time, with a peak in 1979-1980 and a resurgence since 2000. The majority of applications with methodological detail derived DRS using logistic regression (47%), used DRS as a categorical variable in regression (93%), and applied DRS in a non-experimental cohort (47%) or case-control (42%) study. Few studies examined effect modification by outcome risk (23%).
CONCLUSION: Use of DRS methods has increased yet remains low. Comparative effectiveness research may benefit from more DRS applications, particularly to examine effect modification by outcome risk. Standardized terminology may facilitate identification, application, and comprehension of DRS methods. More research is needed to support the application of DRS methods, particularly in case-control studies.
OBJECTIVE: The incidence of hospital-acquired Clostridium difficile infection (CDI) has increased rapidly over the past decade; patients undergoing major surgery, including coronary artery bypass grafting (CABG), are at particular risk. Intravenous vancomycin exposure has been identified as an independent risk factor for CDI, although this finding remains controversial. It is not known whether vancomycin administered for surgical site infection prophylaxis increases the risk of CDI.
METHODS: Using data from the Premier Perspective Comparative Database, we assembled a cohort of 69,807 patients undergoing CABG surgery between 2004 and 2010 who received either a cephalosporin alone (65.1%) or a cephalosporin plus vancomycin (34.9%) on the day of surgery. Patients were observed for CDI until discharge from the index hospitalization. In these groups, we evaluated the comparative rate of postoperative CDI with Cox models; confounding was addressed using propensity scores.
RESULTS: In all, 77 (0.32%) of the 24,393 patients receiving a cephalosporin plus vancomycin and 179 (0.39%) of the 45,414 patients receiving a cephalosporin alone had postoperative CDI (unadjusted hazard ratio [HR], 0.73; 95% confidence interval [CI], 0.56-0.95). After adjusting for confounding variables with either propensity score matching or stratification, there was no meaningful association between adjuvant vancomycin exposure and postoperative CDI (HR, 0.85; 95% CI, 0.61-1.19; and HR, 0.85; 95% CI, 0.63-1.15, respectively). Results of multiple sensitivity analyses were similar to the main findings.
CONCLUSIONS: After adjustment for patient and surgical characteristics, a short course of prophylactic vancomycin was not associated with an increased risk of CDI among patients undergoing CABG surgery.
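The propensity score adjustment described in the methods above can be illustrated with a minimal sketch of greedy 1:1 nearest-neighbor matching on precomputed scores. The function name, caliper value, and data layout are illustrative assumptions, not details reported by the study:

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbor propensity-score matching with a caliper.

    treated_ps and control_ps map subject id -> propensity score.
    Returns a list of (treated_id, control_id) matched pairs.
    """
    pairs = []
    available = dict(control_ps)
    # Matching treated subjects in order of score is a common, simple heuristic.
    for t_id, t_ps in sorted(treated_ps.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Closest available control by absolute score distance.
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # match without replacement
    return pairs
```

Matched pairs would then feed a stratified or Cox analysis of the outcome; unmatched treated subjects (no control within the caliper) are dropped, which trades sample size for comparability.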
BACKGROUND: Self-controlled analysis methods implicitly adjust for time-invariant confounding within individuals. A person's prognosis often varies over time and affects both therapy choice and subsequent health outcomes. Current approaches may not be able to fully address this within-person confounding. We evaluated the potential impact of time-varying prognosis in self-controlled studies of treatment effects and the extent to which alternative adjustment strategies could mitigate these biases.
METHODS: We used Medicare data linked to prescription drug data from a pharmaceutical assistance program to conduct case-crossover studies of the relationship between intermittent use of five classes of preventive medications (statins, oral hypoglycemics, antihypertensives, osteoporosis, and glaucoma medications) and death, relationships that are strongly biased because of healthy-user and sick-stopper effects. We used the case-case time-control design to adjust for confounding from exposure trends related to prognosis. Each class of medications was evaluated separately, with the remaining four used as reference drugs to estimate prognosis-related exposure trends.
RESULTS: The case-crossover odds ratios were 0.39, 0.38, 0.40, 0.39, and 0.45 for statin, antihypertensive, glaucoma, hypoglycemic, and osteoporosis drugs, respectively. After adjusting for the estimated noncausal prognosis-related trends in drug exposure among all eligible cases, odds ratios were clustered closer to null (0.99, 0.95, 1.02, 0.99, and 1.16, respectively).
CONCLUSIONS: Consideration of the sociology of medication use leading to health outcomes is essential in designing and analyzing self-controlled studies of treatment effects. Although the case-case time-control design was able to reduce bias from prognosis-related exposure trends in our examples, the difficulty in identifying appropriate reference exposures could be prohibitive.
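The adjustment described above amounts to dividing the case-crossover odds ratio by the noncausal, prognosis-related exposure-trend odds ratio estimated from the reference drugs. A minimal sketch of that arithmetic (the function name and inputs are illustrative):

```python
def case_time_control_or(case_crossover_or, reference_trend_or):
    """Adjust a case-crossover odds ratio for prognosis-related exposure
    trends by dividing out the trend OR estimated from reference drugs
    (the ratio-of-odds-ratios logic of case-time-control designs)."""
    return case_crossover_or / reference_trend_or

# Illustrative: a case-crossover OR of 0.39 with an estimated noncausal
# trend OR of the same magnitude is adjusted toward the null.
adjusted = case_time_control_or(0.39, 0.39)
```

This mirrors the pattern in the results above, where odds ratios near 0.4 moved close to 1.0 after the trend adjustment.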
BACKGROUND: A distributed research network (DRN) of electronic health care databases, in which data reside behind the firewall of each data partner, can support a wide range of comparative effectiveness research (CER) activities. An essential component of a fully functional DRN is the capability to perform robust statistical analyses to produce valid, actionable evidence without compromising patient privacy, data security, or proprietary interests.
OBJECTIVES AND METHODS: We describe the strengths and limitations of different confounding adjustment approaches that can be considered in observational CER studies conducted within DRNs, and the theoretical and practical issues to consider when selecting among them in various study settings.
RESULTS: Several methods can be used to adjust for multiple confounders simultaneously, either as individual covariates or as confounder summary scores (eg, propensity scores and disease risk scores), including: (1) centralized analysis of patient-level data, (2) case-centered logistic regression of risk set data, (3) stratified or matched analysis of aggregated data, (4) distributed regression analysis, and (5) meta-analysis of site-specific effect estimates. These methods require that different granularities of information be shared across sites and afford investigators different levels of analytic flexibility.
CONCLUSIONS: DRNs are growing in use, yet sharing highly detailed patient-level information within them is not always feasible. Methods that incorporate confounder summary scores allow investigators to adjust for a large number of confounding factors without the need to transfer potentially identifiable information across sites. They have the potential to let investigators perform many analyses traditionally conducted through a centralized dataset with detailed patient-level information.
PURPOSE: When using claims data, dichotomous covariates (C) are often assumed to be absent unless a claim for the condition is observed. When available historical data differ among subjects, investigators must choose between using all available historical data versus data from a fixed window to assess C. Our purpose was to compare estimation under these two approaches.
METHODS: We simulated cohorts of 20,000 subjects with dichotomous variables representing exposure (E), outcome (D), and a single time-invariant C, as well as varying availability of historical data. C was operationally defined under each paradigm and used to estimate the adjusted risk ratio of E on D via Mantel-Haenszel methods.
RESULTS: In the base case scenario, less bias and lower mean square error were observed using all available information compared with a fixed window; differences were magnified at higher modeled confounder strength. Upon introduction of an unmeasured covariate (F), the all-available approach remained less biased in most circumstances and rendered estimates that better approximated those that were adjusted for the true (modeled) value of C in all instances.
CONCLUSIONS: In most instances considered, operationally defining time-invariant dichotomous C based on all available historical data, rather than on data observed over a commonly shared fixed historical window, results in less biased estimates.
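The Mantel-Haenszel pooled risk ratio used in these simulations can be sketched as follows, assuming 2x2 tables stratified on the operationally defined confounder C (the tuple layout is an assumption of this sketch):

```python
def mantel_haenszel_rr(strata):
    """Mantel-Haenszel pooled risk ratio across strata.

    Each stratum is a 2x2 table (a, b, c, d):
      a = exposed with outcome,    b = exposed without outcome,
      c = unexposed with outcome,  d = unexposed without outcome.
    RR_MH = sum(a_i * n0_i / T_i) / sum(c_i * n1_i / T_i),
    where n1 = a + b (exposed), n0 = c + d (unexposed), T = stratum total.
    """
    num = sum(a * (c + d) / (a + b + c + d) for a, b, c, d in strata)
    den = sum(c * (a + b) / (a + b + c + d) for a, b, c, d in strata)
    return num / den
```

With the confounder stratified away, the pooled estimate recovers the common within-stratum risk ratio; bias in the simulation study arises when C is misclassified by the fixed-window definition, not from the estimator itself.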
OBJECTIVES: To ascertain predictors of initiation of brand-name versus generic narrow therapeutic index (NTI) drugs.
DESIGN: Retrospective cohort study.
SETTING: Data from CVS Caremark were linked to Medicare claims and to U.S. census data.
PARTICIPANTS: Individuals aged 65 and older who initiated an NTI drug in 2006 and 2007 (N = 36,832).
MEASUREMENTS: Demographic, health service utilization, and geographic predictors of whether participants initiated a generic or brand-name version of their NTI drug were identified using logistic regression.
RESULTS: Overall, 30,014 (81.5%) participants started on a generic version of their NTI drug. The most commonly initiated NTI drugs were warfarin (n = 17,790; 48%), levothyroxine (n = 10,779; 29%), and digoxin (n = 6,414; 17%). Older age (odds ratio (OR) = 1.12, 95% confidence interval (CI) = 1.02-1.22 comparing aged ≥ 85 with 65-74), higher burden of comorbidity (OR = 1.05, 95% CI = 1.04-1.07 for each 1-point increase in comorbidity score), and prior use of any generic drug (OR = 1.55, 95% CI = 1.29-1.87) were positively associated with generic drug initiation. Independent of other predictors, residing in the census block group with the highest generic use was positively associated with greater odds of generic NTI drug initiation (OR = 1.24, 95% CI = 1.14-1.35 compared with the lowest quintile).
CONCLUSION: Demographic, health service utilization, and geographic characteristics are important determinants of whether individuals initiate treatment with a brand-name or generic NTI drug. These factors may contribute to disparities in care and highlight potential targets for educational campaigns.
PURPOSE: Studies evaluating the association between statins and colorectal cancer (CRC) have used various methods to address bias and have reported mixed findings. We sought to assess the association in a large cohort of residents in Emilia-Romagna, Italy, using multiple methods to address different sources of confounding. We also sought to explore potential effect measure modification by sex.
METHODS: We conducted a retrospective cohort study using the 2003-2010 healthcare database of Emilia-Romagna, Italy. We identified all initiators of statins; initiators of glaucoma medications served as the comparison group to account for confounding by healthy user bias. We followed patients longitudinally to identify CRC cases in hospital discharge data. We used multivariable Cox regression analyses to adjust for confounding by CRC risk factors and we conducted a sensitivity analysis using propensity score matching.
RESULTS: After multivariable adjustment, initiators of statins had a lower incidence rate of CRC as compared to initiators of glaucoma drugs [hazard ratio (HR) 0.79; 95% CI 0.69-0.90]. In sex-stratified analyses we observed a protective effect in men (HR 0.77; 95% CI 0.67-0.88) but not in women (HR 0.96; 95% CI 0.82-1.1). Results were similar in propensity score analyses.
CONCLUSIONS: After adjusting for observed risk factors, statin initiation versus glaucoma drug initiation was associated with a reduced risk of CRC in men but not in women. While this study is subject to many limitations, it corroborates a previous study that found sex differences in the association between statins and CRC.
OBJECTIVES: Clinicians and payers require rapid comparative effectiveness (CE) evidence generation to inform decisions for new drugs. We empirically assessed treatment dynamics of newly marketed drugs and their implications for conducting CE research.
METHODS: We used claims data to evaluate five drug-outcome pairs: 1) raloxifene (vs. alendronate) and fracture; 2) risedronate (vs. alendronate) and fracture; 3) simvastatin plus ezetimibe fixed-dose combination (simvastatin + ezetimibe) (vs. simvastatin alone) and cardiovascular events; 4) rofecoxib (vs. nonselective nonsteroidal anti-inflammatory drugs [ns-NSAIDs]) and myocardial infarction; and 5) rofecoxib (vs. ns-NSAIDS) and gastrointestinal bleed. We examined utilization dynamics in the early marketing period, including evolving utilization patterns, outcome risk among those treated with new versus established drugs, and prior treatment patterns that may indicate treatment resistance or intolerance. We addressed these challenges by replicating active CE monitoring with sequential matched cohort analysis.
RESULTS: Patients initiating new drugs were more likely to have used other drugs for the same indication in the past, but the majority of patients in all new drug cohorts were treatment naive (82.0% overall). Patients initiating rofecoxib had higher predicted baseline risk of gastrointestinal bleed than did patients initiating ns-NSAIDs. Patients initiating risedronate and alendronate had similar predicted baseline risks of fracture, while those initiating raloxifene and simvastatin + ezetimibe had lower risks of outcomes of interest relative to their comparators. Prospective monitoring yielded results consistent with expectation for each example.
CONCLUSIONS: Many challenges to assessing the CE of new drugs are borne out in empirical data. Attention to these challenges can yield valid CE results.
OBJECTIVE: To develop and validate a maternal comorbidity index to predict severe maternal morbidity, defined as the occurrence of acute maternal end-organ injury, or mortality.
METHODS: Data were derived from the Medicaid Analytic eXtract for the years 2000-2007. The primary outcome was defined as the occurrence of maternal end-organ injury or death during the delivery hospitalization through 30 days postpartum. The data set was randomly divided into a two-thirds development cohort and a one-third validation cohort. Using the development cohort, a logistic regression model predicting the primary outcome was created using a stepwise selection algorithm that included 24 candidate comorbid conditions and maternal age. Each of the conditions included in the final model was assigned a weight based on its beta coefficient, and these were used to calculate a maternal comorbidity index.
RESULTS: The cohort included 854,823 completed pregnancies, of which 9,901 (1.2%) were complicated by the primary study outcome. The derived score included 20 maternal conditions and maternal age. For each point increase in the score, the odds ratio for the primary outcome was 1.37 (95% confidence interval [CI] 1.35-1.39). The c-statistic for this model was 0.657 (95% CI 0.647-0.666). The derived score performed significantly better than available comorbidity indices in predicting maternal morbidity and mortality.
CONCLUSION: This new maternal comorbidity index provides a simple measure for summarizing the burden of maternal illness for use in the conduct of epidemiologic, health services, and comparative effectiveness research.
LEVEL OF EVIDENCE: II.
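The weighting scheme described above (each condition in the final model assigned a weight derived from its beta coefficient, then summed into a patient-level index) can be sketched as follows. The condition names, coefficients, and scaling below are hypothetical placeholders, not the published weights:

```python
# Hypothetical beta coefficients from a fitted logistic regression model.
# These names and values are illustrative only.
BETAS = {
    "chronic_hypertension": 0.45,
    "preexisting_diabetes": 0.30,
    "pulmonary_hypertension": 1.50,
}

def comorbidity_score(conditions, scale=2.0):
    """Sum integer weights (scaled, rounded betas) over a patient's
    recorded conditions to produce a single comorbidity index."""
    return sum(round(BETAS[c] * scale) for c in conditions if c in BETAS)
```

Collapsing model coefficients into small integer weights sacrifices a little discrimination for a score that is easy to compute and report, which is the usual design trade-off for comorbidity indices of this kind.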
OBJECTIVE: To evaluate whether smoking status is associated with the efficacy of antiplatelet treatment in the prevention of cardiovascular events.
DESIGN: Systematic review, meta-analysis, and indirect comparisons.
DATA SOURCES: Medline (1966 to present) and Embase (1974 to present), with supplementary searches in databases of abstracts from major cardiology conferences, the Cumulative Index to Nursing and Allied Health (CINAHL) and the CAB Abstracts databases, and Google Scholar.
STUDY SELECTION: Randomized trials of clopidogrel, prasugrel, or ticagrelor that examined clinical outcomes among subgroups of smokers and nonsmokers.
DATA EXTRACTION: Two authors independently extracted all data, including information on the patient populations included in the trials, treatment types and doses, definitions of clinical outcomes and duration of follow-up, definitions of smoking subgroups and number of patients in each group, and effect estimates and 95% confidence intervals for each smoking status subgroup.
RESULTS: Of nine eligible randomized trials, one investigated clopidogrel compared with aspirin, four investigated clopidogrel plus aspirin compared with aspirin alone, and one investigated double dose compared with standard dose clopidogrel; these trials included 74,489 patients, of whom 21,717 (29%) were smokers. Among smokers, patients randomized to clopidogrel experienced a 25% reduction in the primary composite clinical outcome of cardiovascular death, myocardial infarction, and stroke compared with patients in the control groups (relative risk 0.75, 95% confidence interval 0.67 to 0.83). In nonsmokers, however, clopidogrel produced just an 8% reduction in the composite outcome (0.92, 0.87 to 0.98). Two studies investigated prasugrel plus aspirin compared with clopidogrel plus aspirin, and one study investigated ticagrelor plus aspirin compared with clopidogrel plus aspirin. In smokers, the relative risk was 0.71 (0.61 to 0.82) for prasugrel compared with clopidogrel and 0.83 (0.68 to 1.00) for ticagrelor compared with clopidogrel. Corresponding relative risks were 0.92 (0.83 to 1.01) and 0.89 (0.79 to 1.00) among nonsmokers.
CONCLUSIONS: In randomized clinical trials of antiplatelet drugs, the reported clinical benefit of clopidogrel in reducing cardiovascular death, myocardial infarction, and stroke was seen primarily in smokers, with little benefit in nonsmokers.
OBJECTIVE: To examine the relation between the type of stress ulcer prophylaxis administered and the risk of postoperative pneumonia in patients undergoing coronary artery bypass grafting.
DESIGN: Retrospective cohort study.
SETTING: Premier Research Database.
PARTICIPANTS: 21,214 patients undergoing coronary artery bypass graft surgery between 2004 and 2010; 9830 (46.3%) started proton pump inhibitors and 11,384 (53.7%) started H2 receptor antagonists in the immediate postoperative period.
MAIN OUTCOME MEASURE: Occurrence of postoperative pneumonia, assessed using appropriate diagnostic codes.
RESULTS: Overall, 492 (5.0%) of the 9830 patients receiving a proton pump inhibitor and 487 (4.3%) of the 11,384 patients receiving an H2 receptor antagonist developed postoperative pneumonia during the index hospital admission. After propensity score adjustment, an elevated risk of pneumonia associated with treatment with proton pump inhibitors compared with H2 receptor antagonists remained (relative risk 1.19, 95% confidence interval 1.03 to 1.38). In the instrumental variable analysis, use of a proton pump inhibitor (compared with an H2 receptor antagonist) was associated with an increased risk of pneumonia of 8.2 (95% confidence interval 0.5 to 15.9) cases per 1000 patients.
CONCLUSIONS: Patients treated with proton pump inhibitors for stress ulcer had a small increase in the risk of postoperative pneumonia compared with patients treated with H2 receptor antagonists; this risk remained after confounding was accounted for using multiple analytic approaches.
BACKGROUND: Controversy exists regarding the optimal preventative therapy for venous thromboembolism (VTE) after coronary artery bypass graft (CABG) surgery. We sought to compare the effectiveness and safety of the most commonly used regimens.
METHODS AND RESULTS: We assembled a cohort of 92 699 patients who underwent CABG between 2004 and 2008, using the Premier database. Patients were categorized by method of VTE prevention initiated within 48 hours of surgery, including no preventative therapy (n=55 400), mechanical preventative therapy (n=21 162), subcutaneous unfractionated or low-molecular-weight heparin (n=10 718), subcutaneous fondaparinux (n=88), and concurrent mechanical-chemical therapy (n=5331). The incidence of VTE and major bleeding events within 6 weeks of CABG were compared, using multivariable and propensity score adjustment. The overall incidence of VTE for the entire cohort was 0.74%, and the incidence of major bleeding was 1.43%. VTE and bleeding events occurred with similar incidence in each of the patient categories (VTE: 0.70%, 0.79%, 0.81%, 1.14%, and 0.73%; major bleeding: 1.36%, 1.45%, 1.69%, 3.41%, 1.50%; no prevention, mechanical prevention, subcutaneous heparin, subcutaneous fondaparinux, concurrent mechanical-chemical prevention, respectively). Compared with receiving no prevention, the use of mechanical prevention or subcutaneous heparin did not significantly reduce the risk of VTE or change the risk of major bleeding (P=NS).
CONCLUSION: Venous thromboembolism occurs infrequently after CABG. Compared with the use of no prevention, the administration of chemical or mechanical preventative therapies to CABG patients does not appreciably lower the risk of VTE. These data provide support for the common practice of administering no VTE preventative therapy after CABG, used for nearly 60% of patients within this cohort.
BACKGROUND: Active medical-product-safety surveillance systems are being developed to monitor many products and outcomes simultaneously in routinely collected longitudinal electronic healthcare data. These systems will rely on algorithms to generate alerts about potential safety concerns.
METHODS: We compared the performance of 5 classes of algorithms in simulated data using a sequential matched-cohort framework, and applied the results to 2 electronic healthcare databases to replicate monitoring of cerivastatin-induced rhabdomyolysis. We generated 600,000 simulated scenarios with varying expected event frequency in the unexposed, alerting threshold, and outcome risk in the exposed, and compared the alerting algorithms in each scenario type using an event-based performance metric.
RESULTS: We observed substantial variation in algorithm performance across the groups of scenarios. Relative performance varied by the event frequency and by user-defined preferences for sensitivity versus specificity. Type I error-based statistical testing procedures achieved higher event-based performance than other approaches in scenarios with few events, whereas statistical process control and disproportionality measures performed relatively better with frequent events. In the empirical data, we observed 6 cases of rhabdomyolysis among 4294 person-years of follow-up, with all events occurring among cerivastatin-treated patients. All selected algorithms generated alerts before the drug was withdrawn from the market.
CONCLUSIONS: For active medical-product-safety monitoring in a sequential matched cohort framework, no single algorithm performed best in all scenarios. Alerting algorithm selection should be tailored to particular features of a product-outcome pair, including the expected event frequencies and trade-offs between false-positive and false-negative alerting.
We developed a semi-automated active monitoring system that uses sequential matched-cohort analyses to assess drug safety across a distributed network of longitudinal electronic health-care data. In a retrospective analysis, we show that the system would have identified cerivastatin-induced rhabdomyolysis. In this study, we evaluated whether the system would generate alerts for three drug-outcome pairs: rosuvastatin and rhabdomyolysis (known null association), rosuvastatin and diabetes mellitus, and telithromycin and hepatotoxicity (two examples for which alerting would be questionable). Over >5 years of monitoring, rate differences (RDs) in comparisons of rosuvastatin with atorvastatin were -0.1 cases of rhabdomyolysis per 1,000 person-years (95% confidence interval (CI): -0.4, 0.1) and -2.2 diabetes cases per 1,000 person-years (95% CI: -6.0, 1.6). The RD for hepatotoxicity comparing telithromycin with azithromycin was 0.3 cases per 1,000 person-years (95% CI: -0.5, 1.0). In a setting in which false positivity is a major concern, the system did not generate alerts for the three drug-outcome pairs.
BACKGROUND: Dabigatran, an oral thrombin inhibitor, and rivaroxaban and apixaban, oral factor Xa inhibitors, have been found to be safe and effective in reducing stroke risk in patients with atrial fibrillation. We sought to compare the efficacy and safety of the 3 new agents based on data from their published warfarin-controlled randomized trials, using the method of adjusted indirect comparisons.
METHODS AND RESULTS: We included findings from 44 535 patients enrolled in 3 trials of the efficacy of dabigatran (Randomized Evaluation of Long-Term Anticoagulation Therapy [RELY]), apixaban (Apixaban for Reduction in Stroke and Other Thromboembolic Events in Atrial Fibrillation [ARISTOTLE]), and rivaroxaban (Rivaroxaban Once Daily Oral Direct Factor Xa Inhibition Compared With Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation [ROCKET-AF]), each compared with warfarin. The primary efficacy end point was stroke or systemic embolism; the safety end point we studied was major hemorrhage. To address a lack of comparability between trial populations caused by the restriction of ROCKET-AF to high-risk patients, we conducted a subgroup analysis in patients with a CHADS2 score ≥3. We found no statistically significant efficacy differences among the 3 drugs, although apixaban and dabigatran were numerically superior to rivaroxaban. Apixaban produced significantly fewer major hemorrhages than dabigatran and rivaroxaban.
CONCLUSIONS: An indirect comparison of new anticoagulants based on existing trial data indicates that in patients with a CHADS2 score ≥3, dabigatran 150 mg, apixaban 5 mg, and rivaroxaban 20 mg resulted in statistically similar rates of stroke and systemic embolism, but apixaban had a lower risk of major hemorrhage compared with dabigatran and rivaroxaban. Until head-to-head trials or large-scale observational studies that reflect routine use of these agents are available, such adjusted indirect comparisons based on trial data are one tool to guide initial therapeutic choices.
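An adjusted indirect comparison of this kind rests on a simple identity: the log hazard ratios of each drug versus the common warfarin comparator subtract, and their variances add (the Bucher method). A minimal sketch, with standard errors recovered from 95% confidence intervals; all numeric inputs below are illustrative, not the trial estimates:

```python
import math

def bucher_indirect(hr_a_vs_c, ci_a, hr_b_vs_c, ci_b):
    """Adjusted indirect comparison of A vs B through common comparator C.

    hr_*_vs_c : hazard ratio of each drug versus the common comparator.
    ci_*      : (lower, upper) bounds of the corresponding 95% CI.
    Returns (indirect HR of A vs B, 95% CI as a tuple).
    """
    # Log HRs subtract; variances (squared SEs) add.
    log_hr = math.log(hr_a_vs_c) - math.log(hr_b_vs_c)
    se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * 1.96)
    se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * 1.96)
    se = math.sqrt(se_a ** 2 + se_b ** 2)
    return (
        math.exp(log_hr),
        (math.exp(log_hr - 1.96 * se), math.exp(log_hr + 1.96 * se)),
    )
```

Because the variances add, indirect comparisons are always less precise than a head-to-head trial of the same size, which is why the abstract frames them as a provisional tool pending direct evidence.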