BACKGROUND: Controversy exists regarding the optimal preventative therapy for venous thromboembolism (VTE) after coronary artery bypass graft (CABG) surgery. We sought to compare the effectiveness and safety of the most commonly used regimens.
METHODS AND RESULTS: We assembled a cohort of 92 699 patients who underwent CABG between 2004 and 2008, using the Premier database. Patients were categorized by method of VTE prevention initiated within 48 hours of surgery, including no preventative therapy (n=55 400), mechanical preventative therapy (n=21 162), subcutaneous unfractionated or low-molecular-weight heparin (n=10 718), subcutaneous fondaparinux (n=88), and concurrent mechanical-chemical therapy (n=5331). The incidences of VTE and major bleeding events within 6 weeks of CABG were compared, using multivariable and propensity score adjustment. The overall incidence of VTE for the entire cohort was 0.74%, and the incidence of major bleeding was 1.43%. VTE and bleeding events occurred with similar incidence in each of the patient categories (VTE: 0.70%, 0.79%, 0.81%, 1.14%, and 0.73%; major bleeding: 1.36%, 1.45%, 1.69%, 3.41%, and 1.50%; no prevention, mechanical prevention, subcutaneous heparin, subcutaneous fondaparinux, and concurrent mechanical-chemical prevention, respectively). Compared with receiving no prevention, the use of mechanical prevention or subcutaneous heparin did not significantly reduce the risk of VTE or change the risk of major bleeding (P=NS).
CONCLUSION: Venous thromboembolism occurs infrequently after CABG. Compared with the use of no prevention, the administration of chemical or mechanical preventative therapies to CABG patients does not appreciably lower the risk of VTE. These data provide support for the common practice of administering no VTE preventative therapy after CABG, used for nearly 60% of patients within this cohort.
BACKGROUND: Active medical-product-safety surveillance systems are being developed to monitor many products and outcomes simultaneously in routinely collected longitudinal electronic healthcare data. These systems will rely on algorithms to generate alerts about potential safety concerns.
METHODS: We compared the performance of 5 classes of algorithms in simulated data using a sequential matched-cohort framework, and applied the results to 2 electronic healthcare databases to replicate monitoring of cerivastatin-induced rhabdomyolysis. We generated 600,000 simulated scenarios with varying expected event frequency in the unexposed, alerting threshold, and outcome risk in the exposed, and compared the alerting algorithms in each scenario type using an event-based performance metric.
RESULTS: We observed substantial variation in algorithm performance across the groups of scenarios. Relative performance varied by the event frequency and by user-defined preferences for sensitivity versus specificity. Type I error-based statistical testing procedures achieved higher event-based performance than other approaches in scenarios with few events, whereas statistical process control and disproportionality measures performed relatively better with frequent events. In the empirical data, we observed 6 cases of rhabdomyolysis among 4294 person-years of follow-up, with all events occurring among cerivastatin-treated patients. All selected algorithms generated alerts before the drug was withdrawn from the market.
CONCLUSIONS: For active medical-product-safety monitoring in a sequential matched cohort framework, no single algorithm performed best in all scenarios. Alerting algorithm selection should be tailored to particular features of a product-outcome pair, including the expected event frequencies and trade-offs between false-positive and false-negative alerting.
We developed a semi-automated active monitoring system that uses sequential matched-cohort analyses to assess drug safety across a distributed network of longitudinal electronic healthcare data. In a retrospective analysis, we show that the system would have identified cerivastatin-induced rhabdomyolysis. In this study, we evaluated whether the system would generate alerts for three drug-outcome pairs: rosuvastatin and rhabdomyolysis (known null association), rosuvastatin and diabetes mellitus, and telithromycin and hepatotoxicity (two examples for which alerting would be questionable). Over >5 years of monitoring, rate differences (RDs) in comparisons of rosuvastatin with atorvastatin were -0.1 cases of rhabdomyolysis per 1,000 person-years (95% confidence interval (CI): -0.4, 0.1) and -2.2 diabetes cases per 1,000 person-years (95% CI: -6.0, 1.6). The RD for hepatotoxicity comparing telithromycin with azithromycin was 0.3 cases per 1,000 person-years (95% CI: -0.5, 1.0). In a setting in which false positivity is a major concern, the system did not generate alerts for the three drug-outcome pairs.
BACKGROUND: Dabigatran, an oral thrombin inhibitor, and rivaroxaban and apixaban, oral factor Xa inhibitors, have been found to be safe and effective in reducing stroke risk in patients with atrial fibrillation. We sought to compare the efficacy and safety of the 3 new agents based on data from their published warfarin-controlled randomized trials, using the method of adjusted indirect comparisons.
METHODS AND RESULTS: We included findings from 44 535 patients enrolled in 3 trials of the efficacy of dabigatran (Randomized Evaluation of Long-Term Anticoagulation Therapy [RELY]), apixaban (Apixaban for Reduction in Stroke and Other Thromboembolic Events in Atrial Fibrillation [ARISTOTLE]), and rivaroxaban (Rivaroxaban Once Daily Oral Direct Factor Xa Inhibition Compared With Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation [ROCKET-AF]), each compared with warfarin. The primary efficacy end point was stroke or systemic embolism; the safety end point we studied was major hemorrhage. To address a lack of comparability between trial populations caused by the restriction of ROCKET-AF to high-risk patients, we conducted a subgroup analysis in patients with a CHADS(2) score ≥3. We found no statistically significant efficacy differences among the 3 drugs, although apixaban and dabigatran were numerically superior to rivaroxaban. Apixaban produced significantly fewer major hemorrhages than dabigatran and rivaroxaban.
CONCLUSIONS: An indirect comparison of new anticoagulants based on existing trial data indicates that in patients with a CHADS(2) score ≥3 dabigatran 150 mg, apixaban 5 mg, and rivaroxaban 20 mg resulted in statistically similar rates of stroke and systemic embolism, but apixaban had a lower risk of major hemorrhage compared with dabigatran and rivaroxaban. Until head-to-head trials or large-scale observational studies that reflect routine use of these agents are available, such adjusted indirect comparisons based on trial data are one tool to guide initial therapeutic choices.
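The adjusted indirect comparison used above (the Bucher method) combines each drug's warfarin-controlled estimate on the log scale, summing variances recovered from the published confidence intervals. A minimal sketch follows; the hazard ratios and intervals in the usage line are hypothetical placeholders, not the published trial estimates:

```python
import math

def indirect_comparison(hr_a, ci_a, hr_b, ci_b, z=1.959964):
    """Bucher adjusted indirect comparison of A vs B via a common comparator.

    hr_a, hr_b: hazard ratios of A vs comparator and B vs comparator.
    ci_a, ci_b: (lower, upper) 95% confidence intervals for those ratios.
    Returns the indirect A-vs-B hazard ratio and its 95% CI.
    """
    log_a, log_b = math.log(hr_a), math.log(hr_b)
    # standard errors of the log hazard ratios, recovered from the 95% CIs
    se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * z)
    se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * z)
    log_ind = log_a - log_b                      # indirect log hazard ratio
    se_ind = math.sqrt(se_a**2 + se_b**2)        # variances add on the log scale
    return (math.exp(log_ind),
            (math.exp(log_ind - z * se_ind), math.exp(log_ind + z * se_ind)))

# hypothetical inputs, for illustration only
hr, (lo, hi) = indirect_comparison(0.80, (0.66, 0.96), 0.88, (0.75, 1.03))
```

Because the variances from both trials add, indirect comparisons are considerably less precise than a head-to-head trial of the same size would be.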
Active medical product monitoring systems, such as the Sentinel System, will utilize electronic healthcare data captured during routine health care. Safety signals that arise from these data may be spurious because of chance or bias, particularly confounding bias, given the observational nature of the data. Applying appropriate monitoring designs can filter out many false-positive and false-negative associations from the outset. Designs can be classified by whether they produce estimates based on between-person or within-person comparisons. In deciding which approach is more suitable for a given monitoring scenario, stakeholders must consider the characteristics of the monitored product, characteristics of the health outcome of interest (HOI), and characteristics of the potential link between these. Specifically, three factors drive design decisions: (i) strength of within-person and between-person confounding; (ii) whether circumstances exist that may predispose to misclassification of exposure or misclassification of the timing of the HOI; and (iii) whether the exposure of interest is predominantly transient or sustained. Additional design considerations include whether to focus on new users, the availability of appropriate active comparators, the presence of an exposure time trend, and the measure of association of interest. When the key assumptions of self-controlled designs are fulfilled (i.e., lack of within-person, time-varying confounding; abrupt HOI onset; and transient exposure), within-person comparisons are preferred because they inherently avoid confounding by fixed factors. The cohort approach generally is preferred in other situations and particularly when timing of exposure or outcome is uncertain because cohort approaches are less vulnerable to biases resulting from misclassification.
BACKGROUND: Several efforts are under way to develop and test methods for prospective drug safety monitoring using large, electronic claims databases. Prospective monitoring systems must incorporate signalling algorithms and techniques to mitigate confounding in order to minimize false positive and false negative signals due to chance and bias.
OBJECTIVE: The aim of the study was to describe a prototypical targeted active safety monitoring system and apply the framework to three empirical examples.
METHODS: We performed sequential, targeted safety monitoring in three known drug/adverse event (AE) pairs: (i) paroxetine/upper gastrointestinal (UGI) bleed; (ii) lisinopril/angioedema; (iii) ciprofloxacin/Achilles tendon rupture (ATR). Data on new users of the drugs of interest were extracted from the HealthCore Integrated Research Database. New users were matched by propensity score to new users of comparator drugs in each example. Analyses were conducted sequentially to emulate prospective monitoring. Two signalling rules, a maximum sequential probability ratio test and an effect estimate-based approach, were applied to sequential, matched cohorts to identify signals within the system.
RESULTS: Signals were identified for all three examples: paroxetine/UGI bleed in the seventh monitoring cycle, within 2 calendar years of sequential data; lisinopril/angioedema in the second cycle, within the first monitoring year; ciprofloxacin/ATR in the tenth cycle, within the fifth year.
CONCLUSION: In this proof of concept, our targeted, active monitoring system provides an alternative to systems currently in the literature. Our system employs a sequential, propensity score-matched framework and signalling rules for prospective drug safety monitoring and identified signals for all three adverse drug reactions evaluated.
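The maximum sequential probability ratio test mentioned above has a simple closed form in its Poisson variant; a minimal sketch, with the critical alerting threshold left as a user-supplied value (in practice obtained from simulation or published tables to control overall type I error across repeated looks at the data):

```python
import math

def maxsprt_llr(observed, expected):
    """Poisson maxSPRT log-likelihood ratio.

    observed: exposed events seen so far; expected: events expected under
    the null. The statistic is 0 unless observed exceeds expected.
    """
    if observed <= expected:
        return 0.0
    return observed * math.log(observed / expected) - (observed - expected)

def alert(observed, expected, critical_value):
    # critical_value is chosen so the chance of any false alert over the
    # whole monitoring period stays below the desired alpha
    return maxsprt_llr(observed, expected) >= critical_value
```

This is the Poisson formulation shown for illustration; a matched-cohort design like the one described would more naturally use the binomial analogue, which replaces the Poisson likelihood with a binomial one over matched sets.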
BACKGROUND: Prospective medical product monitoring is intended to alert stakeholders about whether and when safety problems are identifiable in longitudinal electronic healthcare data. Little attention has been given to how to compare methods in this setting.
PURPOSE: To explore aspects of prospective monitoring that should be considered when comparing method performance and to develop a metric that explicitly accounts for these considerations.
METHODS: We reviewed existing metrics and propose an event-based approach that classifies exposed outcomes according to whether a prior alert was generated.
RESULTS: In comparing performance of methods for prospective monitoring, three factors must be considered: (1) accuracy in alerting; (2) timeliness of alerting; and (3) the trade-offs between the costs of false negative and false positive alerting. Traditional scenario-based measures of accuracy, such as sensitivity and specificity, which classify only at the end of monitoring, fail to capture timeliness of alerting and impose fixed trade-offs between false positive and false negative consequences. We provide an expression that summarizes event-based sensitivity (the proportion of exposed events that occur after alerting among all exposed events in scenarios with true safety issues) and event-based specificity (the proportion of exposed events that occur in the absence of alerting among all exposed events in scenarios with no true safety issues) by taking an average weighted by relative costs of false positive and false negative alerting.
CONCLUSIONS: The proposed approach explicitly accounts for accuracy in alerting, timeliness in alerting, and the trade-offs between the costs of false negative and false positive alerting.
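The event-based summary described above can be sketched as a cost-weighted average of event-based sensitivity and specificity. The weighting scheme below is one plausible reading of the abstract's description, not the paper's exact expression:

```python
def event_based_sensitivity(events_after_alert, total_exposed_events):
    # scenarios with a true safety issue: share of exposed events that
    # occur only after an alert has been generated (timely alerting)
    return events_after_alert / total_exposed_events

def event_based_specificity(events_without_alert, total_exposed_events):
    # null scenarios: share of exposed events that occur with no alert
    # ever generated (absence of false alarms)
    return events_without_alert / total_exposed_events

def weighted_performance(ebse, ebsp, cost_fn, cost_fp):
    # average weighted by the relative costs of false-negative and
    # false-positive alerting supplied by the stakeholder
    return (cost_fn * ebse + cost_fp * ebsp) / (cost_fn + cost_fp)
```

With equal costs the metric reduces to a simple average; a stakeholder who fears missed safety issues more than false alarms would set `cost_fn > cost_fp`, pulling the summary toward event-based sensitivity.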
BACKGROUND: The usefulness of propensity scores and regression models for balancing potential confounders at treatment initiation may be limited for newly introduced therapies with evolving use patterns.
OBJECTIVES: To consider settings in which the disease risk score has theoretical advantages as a balancing score in comparative effectiveness research because of stability of disease risk and the availability of ample historical data on outcomes in people treated before introduction of the new therapy.
METHODS: We review the indications for and balancing properties of disease risk scores in the setting of evolving therapies and discuss alternative approaches for estimation. We illustrate development of a disease risk score in the context of the introduction of atorvastatin and the use of high-dose statin therapy beginning in 1997, based on data from 5668 older survivors of myocardial infarction who filled a statin prescription within 30 days after discharge from 1995 until 2004. Theoretical considerations suggested development of a disease risk score among nonusers of atorvastatin and high-dose statins during the period 1995-1997.
RESULTS: Observed risk of events increased from 11% to 35% across quintiles of the disease risk score, which had a C-statistic of 0.71. The score allowed control of many potential confounders even during early follow-up with few study endpoints.
CONCLUSIONS: Balancing on a disease risk score offers an attractive alternative to a propensity score in some settings such as newly marketed drugs and provides an important axis for evaluation of potential effect modification. Joint consideration of propensity and disease risk scores may be valuable.
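A disease risk score of the kind described can be sketched as an outcome model fit among historical patients treated before the new therapy was introduced, then applied to score all patients. The covariates and coefficients below are hypothetical, chosen only to illustrate the mechanics:

```python
import math

def disease_risk_score(covariates, coefs, intercept):
    """Logistic disease risk score: predicted outcome probability.

    In practice the coefficients would be estimated among historical
    non-users of the new therapy (e.g. 1995-1997 statin initiators),
    so the score reflects baseline disease risk, not treatment choice.
    """
    linear_predictor = intercept + sum(c * x for c, x in zip(coefs, covariates))
    return 1 / (1 + math.exp(-linear_predictor))

# hypothetical coefficients for two covariates: prior MI (0/1) and age
coefs, intercept = [0.8, 0.05], -4.0

# illustrative patients as (prior_mi, age) tuples
patients = [(1, 70), (0, 55), (1, 80), (0, 62)]
scores = [disease_risk_score(p, coefs, intercept) for p in patients]
```

Patients would then be stratified (e.g. into quintiles of the score, as in the abstract) or matched on the score, so that comparisons of the new therapy with older therapies are made within strata of similar baseline disease risk.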