Kuehne F, Jahn B, Conrads-Frank A, Bundo M, Arvandi M, Endel F, Popper N, Endel G, Urach C, Gyimesi M, et al. Guidance for a causal comparative effectiveness analysis emulating a target trial based on big real world evidence: when to start statin treatment. J Comp Eff Res. 2019;8(12):1013-1025.
Abstract: The aim of this project is to describe a causal (counterfactual) approach for analyzing when to start statin treatment to prevent cardiovascular disease using real-world evidence. We use directed acyclic graphs to operationalize and visualize the causal research question, accounting for selection bias and for potential time-independent and time-dependent confounding. We provide a study protocol following the 'target trial' approach and describe the data structure needed for the causal assessment. The study protocol can, in general, be applied to real-world data; however, the structure and quality of the database are essential for the validity of the results, and database-specific potential for bias must be explicitly considered.
Young JG, Vatsa R, Murray EJ, Hernán MA.
Interval-cohort designs and bias in the estimation of per-protocol effects: a simulation study. Trials. 2019;20(1):552.
Abstract: BACKGROUND: Randomized trials are considered the gold standard for making inferences about the causal effects of treatments. However, when protocol deviations occur, the baseline randomization of the trial is no longer sufficient to ensure unbiased estimation of the per-protocol effect: post-randomization, time-varying confounders must be sufficiently measured and adjusted for in the analysis. Given the historical emphasis on intention-to-treat effects in randomized trials, measurement of post-randomization confounders is typically infrequent. This may induce bias in estimates of the per-protocol effect, even using methods such as inverse probability weighting, which appropriately account for time-varying confounders affected by past treatment.
METHODS/DESIGN: In order to concretely illustrate the potential magnitude of bias due to infrequent measurement of time-varying covariates, we simulated data from a very large trial with a survival outcome and time-varying confounding affected by past treatment. We generated the data such that the true underlying per-protocol effect is null and under varying degrees of confounding (strong, moderate, weak). In the simulated data, we estimated per-protocol survival curves and associated contrasts using inverse probability weighting under monthly measurement of the time-varying covariates (which constituted complete measurement in our simulation), yearly measurement, as well as 3- and 6-month intervals.
RESULTS: Using inverse probability weighting, we were able to recover the true null under the complete measurement scenario no matter the strength of confounding. Under yearly measurement intervals, the estimate of the per-protocol effect diverged from the null; inverse probability weighted estimates of the per-protocol 5-year risk ratio based on yearly measurement were 1.19, 1.12, and 1.03 under strong, moderate, and weak confounding, respectively. Bias decreased as the measurement interval was shortened. Under all scenarios, inverse probability weighted estimators were considerably less biased than a naive estimator that ignored time-varying confounding completely.
CONCLUSIONS: Bias that arises from interval measurement designs highlights the need for planning in the design of randomized trials for collection of time-varying covariate data. This may come from more frequent in-person measurement or external sources (e.g., electronic medical record data). Such planning will provide improved estimates of the per-protocol effect through the use of methods that appropriately adjust for time-varying confounders.
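The simulation design described in this abstract can be caricatured in a few lines of NumPy. The sketch below is hypothetical (not the authors' code, data-generating mechanism, or parameter values): it builds a null-effect longitudinal dataset with a time-varying confounder affected by past treatment, then compares a naive risk ratio, an inverse probability weighted risk ratio using fully measured covariates, and a weighted estimate that reuses a stale baseline covariate to mimic interval measurement.

```python
import numpy as np

rng = np.random.default_rng(7)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(n=200_000, t_max=4):
    """Null-effect longitudinal data with time-varying confounding.

    Hypothetical mechanism: U is an unmeasured frailty; the measured
    covariate L_t proxies U and is affected by past treatment; treatment
    A_t depends on current L_t; the outcome Y depends on U only, so the
    true per-protocol effect of treatment on Y is null.
    """
    u = rng.normal(0.0, 1.0, n)
    L = np.empty((n, t_max))
    A = np.empty((n, t_max))
    for t in range(t_max):
        prev_a = A[:, t - 1] if t > 0 else np.zeros(n)
        L[:, t] = 0.8 * prev_a + u + rng.normal(0.0, 1.0, n)  # confounder affected by past treatment
        A[:, t] = rng.binomial(1, expit(L[:, t] - 0.5))       # treatment depends on current confounder
    y = rng.binomial(1, expit(u - 1.0))                       # outcome depends on U only (null effect)
    return L, A, y

L, A, y = simulate()
always = A.all(axis=1)   # followed the "always treat" protocol
never = ~A.any(axis=1)   # followed the "never treat" protocol

def weighted_rr(w):
    """Hajek-style weighted risk ratio between the two protocol arms."""
    r1 = np.average(y[always], weights=w[always])
    r0 = np.average(y[never], weights=w[never])
    return r1 / r0

# Naive contrast: ignores time-varying confounding entirely.
naive_rr = weighted_rr(np.ones(len(y)))

# "Complete measurement": IP weights from the true per-period propensities.
p = expit(L - 0.5)
w_full = 1.0 / np.where(A == 1, p, 1 - p).prod(axis=1)
ipw_rr = weighted_rr(w_full)

# Caricature of interval measurement: only the baseline covariate is
# observed, so the same stale propensity is reused at every time point.
p0 = expit(L[:, [0]] - 0.5)
w_stale = 1.0 / np.where(A == 1, p0, 1 - p0).prod(axis=1)
stale_rr = weighted_rr(w_stale)

print(f"naive RR {naive_rr:.2f}, stale-covariate IPW RR {stale_rr:.2f}, "
      f"full IPW RR {ipw_rr:.2f}")
```

With the correct per-period weights the risk ratio lands near the true null, while the naive contrast is biased away from it and the stale-covariate weights leave residual bias in between, which is the qualitative pattern the abstract reports.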
Caniglia EC, Zash R, Swanson SA, Wirth KE, Diseko M, Mayondi G, Lockman S, Mmalane M, Makhema J, Dryden-Peterson S, et al. Methodological Challenges When Studying Distance to Care as an Exposure in Health Research. Am J Epidemiol. 2019;188(9):1674-1681.
Abstract: Distance to care is a common exposure and proposed instrumental variable in health research, but it is vulnerable to violations of fundamental identifiability conditions for causal inference. We used data collected from the Botswana Birth Outcomes Surveillance study between 2014 and 2016 to outline 4 challenges and potential biases when using distance to care as an exposure and as a proposed instrument: selection bias, unmeasured confounding, lack of sufficiently well-defined interventions, and measurement error. We describe how these issues can arise, and we propose sensitivity analyses for estimating the degree of bias.