Publications

In Press
Ding P, Miratrix L. Likelihood-based inference of binary experimental data without modeling. Scandinavian Journal of Statistics. In Press.
Ding P, Feller A, Miratrix L. Decomposing Treatment Effect Variation. Journal of the American Statistical Association. In Press. Abstract
Understanding and characterizing treatment effect variation in randomized experiments has become essential for going beyond the “black box” of the average treatment effect. Nonetheless, traditional statistical approaches often ignore or assume away such variation. In the context of randomized experiments, this paper proposes a framework for decomposing overall treatment effect variation into a systematic component explained by observed covariates and a remaining idiosyncratic component. Our framework is fully randomization-based, with estimates of treatment effect variation that are entirely justified by the randomization itself. Our framework can also account for noncompliance, which is an important practical complication. We make several contributions. First, we show that randomization-based estimates of systematic variation are very similar in form to estimates from fully-interacted linear regression and two stage least squares. Second, we use these estimators to develop an omnibus test for systematic treatment effect variation, both with and without noncompliance. Third, we propose an R2-like measure of treatment effect variation explained by covariates and, when applicable, noncompliance. Finally, we assess these methods via simulation studies and apply them to the Head Start Impact Study, a large-scale randomized experiment.
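For intuition, a minimal sketch in R of the fully interacted regression analogue mentioned above (an illustration only, not the authors' randomization-based estimators; the simulated data frame dat and covariates x1 and x2 are hypothetical):

    # Simulated data for illustration only.
    set.seed(1)
    dat <- data.frame(x1 = rnorm(200), x2 = rnorm(200), z = rbinom(200, 1, 0.5))
    dat$y <- 1 + 0.5 * dat$z + 0.3 * dat$z * dat$x1 + rnorm(200)

    # Fully interacted regression; the implied unit-level effects estimate the
    # systematic (covariate-explained) component of treatment effect variation.
    fit   <- lm(y ~ z * (x1 + x2), data = dat)
    tau_i <- predict(fit, transform(dat, z = 1)) - predict(fit, transform(dat, z = 0))
    var_systematic <- var(tau_i)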
2018
Miratrix L, Wager S, Zubizarreta J. Shape-constrained partial identification of a population mean under unknown probabilities of sample selection. Biometrika. 2018;105(1):103-114.
Miratrix L, Sekhon JS, Theodoridis AG, Campos LF. Worth Weighting? How to Think About and Use Weights in Survey Experiments. Political Analysis. 2018;26(3):275-291. Abstract

Winner of the 2019 Warren Miller Prize

The popularity of online surveys has increased the prominence of using weights that capture units' probabilities of inclusion for claims of representativeness. Yet, much uncertainty remains regarding how these weights should be employed in the analysis of survey experiments: Should they be used or ignored? If they are used, which estimators are preferred? We offer practical advice, rooted in the Neyman-Rubin model, for researchers producing and working with survey experimental data. We examine simple, efficient estimators for analyzing these data, and give formulae for their biases and variances. We provide simulations that examine these estimators as well as real examples from experiments administered online through YouGov. We find that for examining the existence of population treatment effects using high-quality, broadly representative samples recruited by top online survey firms, sample quantities, which do not rely on weights, are often sufficient. We found that Sample Average Treatment Effect (SATE) estimates did not appear to differ substantially from their weighted counterparts, and they avoided the substantial loss of statistical power that accompanies weighting. When precise estimates of Population Average Treatment Effects (PATE) are essential, we analytically show post-stratifying on survey weights and/or covariates highly correlated with the outcome to be a conservative choice. While we show these substantial gains in simulations, we find limited evidence of them in practice.
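As a rough sketch of the estimators discussed here (not the paper's exact formulae), the unweighted SATE estimate and a PATE estimate that post-stratifies on binned survey weights might look as follows in R, assuming a data frame with outcome y, binary treatment z, and survey weight w:

    # Unweighted difference in means (an estimate of the SATE).
    sate_hat <- function(dat) mean(dat$y[dat$z == 1]) - mean(dat$y[dat$z == 0])

    # Post-stratify on binned survey weights: estimate the effect within each weight
    # stratum, then combine strata using their shares of the total survey weight.
    pate_poststrat <- function(dat, n_bins = 5) {
      breaks <- quantile(dat$w, probs = seq(0, 1, length.out = n_bins + 1))
      strata <- split(dat, cut(dat$w, breaks, include.lowest = TRUE))
      effs   <- sapply(strata, function(s) mean(s$y[s$z == 1]) - mean(s$y[s$z == 0]))
      shares <- sapply(strata, function(s) sum(s$w)) / sum(dat$w)
      sum(effs * shares)
    }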

Miratrix L, Furey J, Feller A, Grindal T, Page LC. Bounding, an Accessible Method for Estimating Principal Causal Effects, Examined and Explained. Journal of Research on Educational Effectiveness. 2018;11(1):133-162. Abstract
Estimating treatment effects for subgroups defined by posttreatment behavior (i.e., estimating causal effects in a principal stratification framework) can be technically challenging and heavily reliant on strong assumptions. We investigate an alternative path: using bounds to identify ranges of possible effects that are consistent with the data. This simple approach relies on fewer assumptions and yet can result in policy-relevant findings. As we show, even moderately predictive covariates can be used to substantially tighten bounds in a straightforward manner. Via simulation, we demonstrate which types of covariates are maximally beneficial. We conclude with an analysis of a multisite experimental study of Early College High Schools. When examining the program's impact on students completing the ninth grade “on-track” for college, we find little impact for ECHS students who would otherwise attend a high-quality high school, but substantial effects for those who would not. This suggests a potential benefit in expanding these programs in areas primarily served by lower quality schools.
2017
Keele L, Miratrix L. Randomization Inference for Outcomes with Clumping at Zero. The American Statistician. 2017. Abstract
In randomized experiments, randomization forms the “reasoned basis for inference.” While randomization inference is well developed for continuous and binary outcomes, there has been comparatively little work for outcomes with nonnegative support and clumping at zero. Typically outcomes of this type have been modeled using parametric models that impose strong distributional assumptions. This article proposes new randomization inference procedures for nonnegative outcomes with clumping at zero. Instead of making distributional assumptions, we propose various assumptions about the nature of response to treatment. Our methods form a set of nonparametric methods for outcomes that are often described as zero-inflated. These methods are illustrated using two randomized trials where job training interventions were designed to increase earnings of participants.
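For readers unfamiliar with randomization inference, a generic Fisher randomization test of the sharp null of no effect looks roughly as follows in R (the paper's procedures instead encode specific assumptions about the response to treatment, which this sketch does not reproduce):

    # y: non-negative outcomes with many zeros; z: 0/1 treatment assignment.
    frt_pvalue <- function(y, z, n_perm = 5000,
                           stat = function(y, z) mean(y[z == 1]) - mean(y[z == 0])) {
      obs  <- stat(y, z)
      null <- replicate(n_perm, stat(y, sample(z)))  # re-draw the assignment vector
      mean(abs(null) >= abs(obs))                    # two-sided randomization p-value
    }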
Forastiere L, Mealli F, Miratrix L. Posterior Predictive p-Values with Fisher Randomization Tests in Noncompliance Settings: Test Statistics vs Discrepancy Measures. Bayesian Analysis. 2017. Abstract
In randomized experiments with noncompliance, one might wish to focus on compliers rather than on the overall sample. In this vein, Rubin (1998) argued that testing for the complier average causal effect and averaging permutation-based p-values over the posterior distribution of the compliance types could increase power as compared to general intent-to-treat tests. The general scheme is a repeated two-step process: impute missing compliance types and conduct a permutation test with the completed data. In this paper, we explore this idea further, comparing the use of discrepancy measures (which depend on unknown but imputed parameters) to classical test statistics and contrasting different approaches for imputing the unknown compliance types. We also examine consequences of model misspecification in the imputation step, and discuss to what extent this additional modeling undercuts the advantage of permutation tests being model independent. We find that, especially for discrepancy measures, modeling choices can impact both power and validity. In particular, imputing missing compliance types under the null can radically reduce power, but not doing so can jeopardize validity. Fortunately, using covariates predictive of compliance type in the imputation can mitigate these results. We also compare this overall approach to Bayesian model-based tests, that is, tests that are directly derived from posterior credible intervals, under both correct and incorrect model specification.
Feller A, Mealli F, Miratrix L. Principal Score Methods: Assumptions, Extensions, and Practical Considerations. Journal of Educational and Behavioral Statistics. 2017;42(6):726-758. Abstract
Researchers addressing posttreatment complications in randomized trials often turn to principal stratification to define relevant assumptions and quantities of interest. One approach for the subsequent estimation of causal effects in this framework is to use methods based on the “principal score,” the conditional probability of belonging to a certain principal stratum given covariates. These methods typically assume that stratum membership is as good as randomly assigned, given these covariates. We clarify the key assumption in this context, known as principal ignorability, and argue that versions of this assumption are quite strong in practice. We describe these concepts in terms of both one- and two-sided noncompliance and propose a novel approach for researchers to “mix and match” principal ignorability assumptions with alternative assumptions, such as the exclusion restriction. Finally, we apply these ideas to randomized evaluations of a job training program and an early childhood education program. Overall, applied researchers should acknowledge that principal score methods, while useful tools, rely on assumptions that are typically hard to justify in practice.
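As one concrete illustration of a principal score estimator (a sketch under one-sided noncompliance and principal ignorability, not the authors' recommended analysis; the data frame columns z, d, y, and x are hypothetical):

    ps_complier_effect <- function(dat) {
      # Principal score P(complier | x), fit where compliance is observed (z == 1).
      ps_fit <- glm(d ~ x, family = binomial, data = dat[dat$z == 1, ])
      dat$ps <- predict(ps_fit, newdata = dat, type = "response")
      # Treated compliers are observed directly.
      y1 <- mean(dat$y[dat$z == 1 & dat$d == 1])
      # Control compliers are latent; weight control units by the principal score
      # (this step is exactly where principal ignorability does the work).
      ctrl <- dat[dat$z == 0, ]
      y0 <- weighted.mean(ctrl$y, w = ctrl$ps)
      y1 - y0
    }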
Lee J, Forastiere L, Miratrix L, Pillai NS. More Powerful Multiple Testing in Randomized Experiments with Non-Compliance. Statistica Sinica. 2017;27(3):1319-1345. Abstract

Two common concerns raised in analyses of randomized experiments are (i) appropriately handling issues of non-compliance, and (ii) appropriately adjusting for multiple tests (e.g., on multiple outcomes or subgroups). Although simple intention-to-treat (ITT) and Bonferroni methods are valid in terms of type I error, they can each lead to a substantial loss of power; when employing both simultaneously, the total loss may be severe. Alternatives exist to address each concern. Here we propose an analysis method for experiments involving both features that merges posterior predictive p-values for complier causal effects with randomization-based multiple comparisons adjustments; the results are valid familywise tests that are doubly advantageous: more powerful than both those based on standard ITT statistics and those using traditional multiple comparison adjustments. The operating characteristics and advantages of our method are demonstrated through a series of simulated experiments and an analysis of the United States Job Training Partnership Act (JTPA) Study, where our methods lead to different conclusions regarding the significance of estimated JTPA effects.
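For the flavor of a randomization-based familywise adjustment, here is a generic single-step max-statistic construction in R (a Westfall-Young-style illustration, not the complier-focused posterior predictive procedure proposed in the paper):

    # Y: an n x K matrix of outcomes; z: 0/1 assignment vector.
    maxstat_adjusted_p <- function(Y, z, n_perm = 5000) {
      stat <- function(z) abs(colMeans(Y[z == 1, , drop = FALSE]) -
                              colMeans(Y[z == 0, , drop = FALSE]))
      obs      <- stat(z)
      max_null <- replicate(n_perm, max(stat(sample(z))))  # null distribution of the max statistic
      sapply(obs, function(s) mean(max_null >= s))         # single-step adjusted p-values
    }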

Ding P, Li X, Miratrix L. Bridging Finite and Super Population Causal Inference. Journal of Causal Inference. 2017;5(2). Abstract

There are two general views in causal analysis of experimental data: the super population view that the units are an independent sample from some hypothetical infinite population, and the finite population view that the potential outcomes of the experimental units are fixed and the randomness comes solely from the physical randomization of the treatment assignment. These two views differ conceptually and mathematically, resulting in different sampling variances of the usual difference-in-means estimator of the average causal effect. Practically, however, these two views result in identical variance estimators. By recalling a variance decomposition and exploiting a completeness-type argument, we establish a connection between these two views in completely randomized experiments. This alternative formulation could serve as a template for bridging finite and super population causal inference in other scenarios.
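In standard notation (ours, not necessarily the paper's), the two views give different variances for the difference-in-means estimator in a completely randomized experiment with $n_1$ treated and $n_0$ control units out of $n$, yet both lead to the usual Neyman variance estimator:

\[
\mathrm{Var}_{\mathrm{fin}}(\hat\tau) = \frac{S_1^2}{n_1} + \frac{S_0^2}{n_0} - \frac{S_\tau^2}{n},
\qquad
\mathrm{Var}_{\mathrm{sup}}(\hat\tau) = \frac{\sigma_1^2}{n_1} + \frac{\sigma_0^2}{n_0},
\qquad
\widehat{\mathrm{Var}} = \frac{s_1^2}{n_1} + \frac{s_0^2}{n_0},
\]

where $S_1^2$, $S_0^2$, and $S_\tau^2$ are finite-population variances of the potential outcomes and of the unit-level effects, $\sigma_1^2$ and $\sigma_0^2$ are their super-population counterparts, and $s_1^2$, $s_0^2$ are the usual sample variances; dropping the unidentifiable $S_\tau^2/n$ term is what makes the shared estimator conservative under the finite-population view.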
2016
Hennessy J, Dasgupta T, Miratrix L, Pattanayak C, Sarkar P. A Conditional Randomization Test to Account for Covariate Imbalance in Randomized Experiments. Journal of Causal Inference. 2016;4(1):61-80. Abstract
We consider the conditional randomization test as a way to account for covariate imbalance in randomized experiments. The test accounts for covariate imbalance by comparing the observed test statistic to the null distribution of the test statistic conditional on the observed covariate imbalance. We prove that the conditional randomization test has the correct significance level and introduce original notation to describe covariate balance more formally. Through simulation, we verify that conditional randomization tests behave like more traditional forms of covariate adjustment but have the added benefit of having the correct conditional significance level. Finally, we apply the approach to a randomized product marketing experiment where covariate information was collected after randomization.
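A minimal sketch of the idea in R (an illustration under simplifying assumptions, namely a single covariate and a tolerance-based notion of "similar imbalance," rather than the paper's exact construction):

    crt_pvalue <- function(y, z, x, n_perm = 10000, tol = 0.05) {
      imbalance <- function(z) mean(x[z == 1]) - mean(x[z == 0])
      stat      <- function(z) mean(y[z == 1]) - mean(y[z == 0])
      obs_imb  <- imbalance(z)
      obs_stat <- stat(z)
      null <- replicate(n_perm, {
        zp <- sample(z)
        if (abs(imbalance(zp) - obs_imb) <= tol) stat(zp) else NA
      })
      null <- null[!is.na(null)]        # keep only re-randomizations with similar imbalance
      mean(abs(null) >= abs(obs_stat))  # p-value conditional on the observed imbalance
    }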

Feller A, Grindal T, Miratrix L, Page L. Compared to What? Variations in the Impacts of Early Childhood Education by Alternative Care-Type Settings. Annals of Applied Statistics. 2016;10(3):1245-1285. Abstract

Early childhood education research often compares a group of children who receive the intervention of interest to a group of children who receive care in a range of different care settings. In this paper, we estimate differential impacts of an early childhood intervention by alternative care setting, using data from the Head Start Impact Study, a large-scale randomized evaluation. To do so, we utilize a Bayesian principal stratification framework to estimate separate impacts for two types of Compliers: those children who would otherwise be in other center-based care when assigned to control and those who would otherwise be in home-based care. We find strong, positive short-term effects of Head Start on receptive vocabulary for those Compliers who would otherwise be in home-based care. By contrast, we find no meaningful impact of Head Start on vocabulary for those Compliers who would otherwise be in other center-based care. Our findings suggest that alternative care type is a potentially important source of variation in early childhood education interventions.

Miratrix L, Ackerman R. Conducting sparse feature selection on arbitrarily long phrases in text corpora with a focus on interpretability. Statistical Analysis and Data Mining. 2016;9(6):435-460. Abstract
We propose a general framework for topic-specific summarization of large text corpora, and illustrate how it can be used for analysis in two quite different contexts: an Occupational Safety and Health Administration (OSHA) database of fatality and catastrophe reports (to facilitate surveillance for patterns in circumstances leading to injury or death), and legal decisions on workers' compensation claims (to explore relevant case law). Our summarization framework, built on sparse classification methods, is a compromise between simple word frequency-based methods currently in wide use, and more heavyweight, model-intensive methods such as latent Dirichlet allocation (LDA). For a particular topic of interest (e.g., mental health disability, or carbon monoxide exposure), we regress a labeling of documents onto the high-dimensional counts of all the other words and phrases in the documents. The resulting small set of phrases found as predictive is then harvested as the summary. Using a branch-and-bound approach, this method can incorporate phrases of arbitrary length, which allows for potentially rich summarization. We discuss how focus on the purpose of the summaries can inform choices of tuning parameters and model constraints. We evaluate this tool by comparing the computational time and summary statistics of the resulting word lists to three other methods in the literature. We also present a new R package, textreg. Overall, we argue that sparse methods have much to offer in text analysis and that this is a branch of research that should be considered further in this context.
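The core regression step can be sketched with the glmnet package rather than the textreg interface (an illustration of the idea only; dtm, a sparse document-by-phrase count matrix, and label, a 0/1 topic indicator, are assumed inputs):

    library(glmnet)

    summarize_topic <- function(dtm, label, n_phrases = 20) {
      fit   <- cv.glmnet(dtm, label, family = "binomial", alpha = 1)  # L1-penalized logistic regression
      betas <- coef(fit, s = "lambda.1se")[-1, 1]                     # drop the intercept
      keep  <- sort(betas[betas > 0], decreasing = TRUE)              # positively predictive phrases
      names(head(keep, n_phrases))                                    # the harvested summary
    }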
2015
Ding P, Miratrix L. To Adjust or Not to Adjust? Sensitivity Analysis of M-Bias and Butterfly-Bias. Journal of Causal Inference. 2015.
Ding P, Feller A, Miratrix L. Randomization Inference for Treatment Effect Variation. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2015;78(3):655-671. Abstract
Applied researchers are increasingly interested in whether and how treatment effects vary in randomized evaluations, especially variation that is not explained by observed covariates. We propose a model-free approach for testing for the presence of such unexplained variation. To use this randomization-based approach, we must address the fact that the average treatment effect, which is generally the object of interest in randomized experiments, actually acts as a nuisance parameter in this setting. We explore potential solutions and advocate for a method that guarantees valid tests in finite samples despite this nuisance. We also show how this method readily extends to testing for heterogeneity beyond a given model, which can be useful for assessing the sufficiency of a given scientific theory. We finally apply our method to the National Head Start impact study, which is a large-scale randomized evaluation of a Federal preschool programme, finding that there is indeed significant unexplained treatment effect variation.
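One simple, conservative variant of this idea can be sketched in R: run a Fisher randomization test of the sharp constant-effect null for each value of the nuisance average effect on a grid, using a statistic sensitive to effect variation, and report the largest p-value. This is only an illustration of how the nuisance parameter can be handled, not the exact procedure advocated in the paper, and the sketch is valid only when the grid covers the plausible values of the average effect.

    frt_variation_pvalue <- function(y, z, taus, n_perm = 2000) {
      # Statistic sensitive to treatment effect variation: difference in spreads.
      stat <- function(y, z) abs(sd(y[z == 1]) - sd(y[z == 0]))
      p_at_tau <- function(tau) {
        y0   <- y - tau * z                    # impute Y(0) under the constant-effect null
        obs  <- stat(y, z)
        null <- replicate(n_perm, { zp <- sample(z); stat(y0 + tau * zp, zp) })
        mean(null >= obs)
      }
      max(sapply(taus, p_at_tau))              # maximize over the grid of nuisance values
    }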

Page L, Feller A, Grindal T, Miratrix L, Somers MA. Principal stratification: A tool for understanding variation in program effects across endogenous subgroups. American Journal of Evaluation. 2015;36(4):1-18. Abstract
Increasingly, researchers are interested in questions regarding treatment-effect variation across partially or fully latent subgroups defined not by pretreatment characteristics but by postrandomization actions. One promising approach to address such questions is principal stratification. Under this framework, a researcher defines endogenous subgroups, or principal strata, based on post-randomization behaviors under both the observed and the counterfactual experimental conditions. These principal strata give structure to such research questions and provide a framework for determining estimation strategies to obtain desired effect estimates. This article provides a nontechnical primer to principal stratification. We review selected applications to highlight the breadth of substantive questions and methodological issues that this method can inform. We then discuss its relationship to instrumental variables analysis to address binary noncompliance in an experimental context and highlight how the framework can be generalized to handle more complex posttreatment patterns. We emphasize the counterfactual logic fundamental to principal stratification and the key assumptions that render analytic challenges more tractable. We briefly discuss technical aspects of estimation  procedures, providing a short guide for interested readers.
2014
Jia J, Miratrix L, Yu B, Gawalt B, El Ghaoui L, Barnesmoore L, Clavier S. Concise Comparative Summaries (CCS) of Large Text Corpora with a Human Experiment. Annals of Applied Statistics. 2014;8(1):499-529. Abstract

In this paper we propose a general framework for topic-specific summarization of large text corpora and illustrate how it can be used for the analysis of news databases. Our framework, concise comparative summarization (CCS), is built on sparse classification methods. CCS is a lightweight and flexible tool that offers a compromise between simple word frequency based methods currently in wide use and more heavyweight, model-intensive methods such as latent Dirichlet allocation (LDA). We argue that sparse methods have much to offer for text analysis and hope CCS opens the door for a new branch of research in this important field.

For a particular topic of interest (e.g., China or energy), CCS automatically labels documents as being either on- or off-topic (usually via keyword search), and then uses sparse classification methods to predict these labels with the high-dimensional counts of all the other words and phrases in the documents. The resulting small set of phrases found as predictive is then harvested as the summary. To validate our tool, using news articles from the New York Times international section, we designed and conducted a human survey to compare the different summarizers with human understanding. We demonstrate our approach with two case studies, a media analysis of the framing of “Egypt” in the New York Times throughout the Arab Spring and an informal comparison of the New York Times’ and Wall Street Journal’s coverage of “energy.” Overall, we find that the Lasso with L2 normalization can be effectively and usefully used to summarize large corpora, regardless of document size.
2013
Miratrix LW, Sekhon JS, Yu B. Adjusting treatment effect estimates by post-stratification in randomized experiments. Journal of the Royal Statistical Society Series B. 2013;75:369–396.
Reinecke K, Yeh T, Miratrix L, Mardiko R, Zhao Y, Liu J, Gajos KZ. Predicting users' first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM; 2013:2049–2058.
