Research

Working Paper
Ashesh Rambachan and Neil Shephard. Working Paper. “Econometric analysis of potential outcomes time series: instruments, shocks, linearity and the causal response function.”
Bojinov & Shephard (2019) defined potential outcome time series to nonparametrically measure dynamic causal effects in time series experiments. Four innovations are developed in this paper: "instrumental paths", treatments which are "shocks", "linear potential outcomes" and the "causal response function." Potential outcome time series are then used to provide a nonparametric causal interpretation of impulse response functions, generalized impulse response functions, local projections and LP-IV.
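The abstract ties potential outcome time series to local projections. As a rough, self-contained illustration of what a horizon-h local projection estimates (a toy sketch I am supplying, not taken from the paper), the code below simulates an invented shock series `w` and outcome `y`, then regresses `y[t+h]` on `w[t]` at each horizon; the data-generating process and all names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H = 500, 8

# Toy data: w_t is an observed "shock"; y_t responds with geometrically decaying dynamics.
w = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 1.0 * w[t] + 0.5 * rng.normal()

# Local projection at horizon h: regress y_{t+h} on w_t (plus an intercept).
# The slope coefficient estimates the horizon-h impulse response.
irf = []
for h in range(H + 1):
    X = np.column_stack([np.ones(T - h), w[: T - h]])
    beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    irf.append(beta[1])

print(np.round(irf, 2))  # roughly 1.0, 0.5, 0.25, ... for this toy process
```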
Ashesh Rambachan, Jon Kleinberg, Jens Ludwig, and Sendhil Mullainathan. Working Paper. “An Economic Approach to Regulating Algorithms.”
There is growing concern about "algorithmic bias" - that predictive algorithms used in decision-making might bake in or exacerbate discrimination in society. When will these "biases" arise? What should be done about them? We argue that such questions are naturally answered using the tools of welfare economics: a social welfare function for the policymaker, a private objective function for the algorithm designer and a model of their information sets and interaction. We build such a model that allows the training data to exhibit a wide range of "biases." Prevailing wisdom is that biased data change how the algorithm is trained and whether an algorithm should be used at all. In contrast, we find two striking irrelevance results. First, when the social planner builds the algorithm, her equity preference has no effect on the training procedure. So long as the data, however biased, contain signal, they will be used and the algorithm built on top will be the same. Any characteristic that is predictive of the outcome of interest, including group membership, will be used. Second, we study how the social planner regulates private (possibly discriminatory) actors building algorithms. Optimal regulation depends crucially on the disclosure regime. Absent disclosure, algorithms are regulated much like human decision-makers: disparate impact and disparate treatment rules dictate what is allowed. In contrast, under stringent disclosure of all underlying algorithmic inputs (data, training procedure and decision rule), once again we find an irrelevance result: private actors can use any predictive characteristic. Additionally, now algorithms strictly reduce the extent of discrimination against protected groups relative to a world in which humans make all the decisions. As these results run counter to prevailing wisdom on algorithmic bias, at a minimum, they provide a baseline set of assumptions that must be altered to generate different conclusions.
Ashesh Rambachan and Jonathan Roth. Working Paper. “An Honest Approach to Parallel Trends.”
Standard approaches for causal inference in difference-in-differences and event-study designs are valid only under the assumption of parallel trends. Researchers are typically unsure whether the parallel trends assumption holds, and therefore gauge its plausibility by testing for pre-treatment differences in trends ("pre-trends") between the treated and untreated groups. This paper proposes robust inference methods that do not require that the parallel trends assumption holds exactly. Instead, we impose restrictions on the set of possible violations of parallel trends that formalize the logic motivating pre-trends testing: namely, that the pre-trends are informative about what would have happened under the counterfactual. Under a wide class of restrictions on the possible differences in trends, the parameter of interest is set-identified and inference on the treatment effect of interest is equivalent to testing a set of moment inequalities with linear nuisance parameters. We derive computationally tractable confidence sets that are uniformly valid ("honest") so long as the difference in trends satisfies the imposed restrictions. Our proposed confidence sets are consistent, and have optimal local asymptotic power for many parameter configurations. We also introduce fixed length confidence intervals, which can offer finite-sample improvements for a subset of the cases we consider. We recommend that researchers conduct sensitivity analyses to show what conclusions can be drawn under various restrictions on the set of possible differences in trends. We conduct a simulation study and illustrate our recommended approach with applications to two recently published papers.
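As a stylized illustration of the set-identification idea described above (a simplification I am supplying, not the paper's procedure, ignoring sampling uncertainty and the moment-inequality machinery), suppose the post-period violation of parallel trends is assumed to be no larger in absolute value than M times the largest pre-period coefficient. The implied bounds on a single post-period effect can then be traced out as M varies; the event-study coefficients below are made up.

```python
import numpy as np

# Made-up event-study coefficients: three pre-period estimates and one post-period estimate.
beta_pre = np.array([0.02, -0.01, 0.03])
beta_post = 0.25

def identified_set(beta_post, beta_pre, M):
    """Bounds on the post-period effect when the violation of parallel trends
    is at most M times the largest pre-period coefficient in absolute value."""
    bound = M * np.max(np.abs(beta_pre))
    return beta_post - bound, beta_post + bound

# Sensitivity analysis: how do the bounds move as larger violations are allowed?
for M in [0.0, 0.5, 1.0, 2.0, 5.0]:
    lo, hi = identified_set(beta_post, beta_pre, M)
    print(f"M = {M:3.1f}: effect in [{lo:+.3f}, {hi:+.3f}]")
```

In this stylized version the lower bound stays positive until M exceeds beta_post divided by the largest pre-period coefficient (about 8.3 here), which is the spirit of the recommended sensitivity analysis.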
Iavor Bojinov, Ashesh Rambachan, and Neil Shephard. Working Paper. “Panel Experiments and Dynamic Causal Effects: A Finite Population Perspective.”
In panel experiments, we randomly expose multiple units to different treatments and measure their subsequent outcomes, sequentially repeating the procedure numerous times. Using the potential outcomes framework, we define finite population dynamic causal effects that capture the relative effectiveness of alternative treatment paths. For the leading example, known as the lag-p dynamic causal effects, we provide a nonparametric estimator that is unbiased over the randomization distribution. We then derive the finite population limiting distribution of our estimators as either the sample size or the duration of the experiment increases. Our approach provides a new technique for deriving finite population central limit theorems that exploits the underlying martingale property of unbiased estimators. We further describe two methods for conducting inference on dynamic causal effects: a conservative test for weak null hypotheses of zero average causal effects using the limiting distribution and an exact randomization-based test for sharp null hypotheses. We also derive the finite population limiting distribution of commonly used linear fixed effects estimators, showing that these estimators perform poorly in the presence of dynamic causal effects. We conclude with a simulation study and an empirical application where we reanalyze a lab experiment on cooperation.
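To give a feel for the randomization-based logic in the abstract, here is a minimal sketch (my own simplification, not the paper's estimator) for the contemporaneous, lag-0 effect in a toy panel experiment with independent Bernoulli(p) assignment: a Horvitz-Thompson-style estimate that is unbiased over the randomization distribution in this design, plus an exact randomization test of the sharp null of no effect at any lag. The design, parameters, and names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, p = 50, 20, 0.5

# Toy panel experiment: every unit-period is treated independently with probability p.
W = rng.binomial(1, p, size=(N, T))
tau = 1.0                                    # true contemporaneous (lag-0) effect
Y = tau * W + 0.3 * rng.normal(size=(N, T))

def lag0_estimate(Y, W, p):
    """Horvitz-Thompson-style estimate of the average lag-0 effect under
    independent Bernoulli(p) assignment."""
    return np.mean(Y * (W / p - (1 - W) / (1 - p)))

obs = lag0_estimate(Y, W, p)

# Exact randomization test of the sharp null of no effect at any lag: under that
# null the observed outcomes are unchanged along any treatment path, so we re-draw
# the assignment and recompute the estimate with Y held fixed.
draws = np.array([lag0_estimate(Y, rng.binomial(1, p, size=(N, T)), p) for _ in range(2000)])
p_value = np.mean(np.abs(draws) >= np.abs(obs))
print(f"estimate = {obs:.3f}, randomization p-value = {p_value:.3f}")
```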
2020
Ashesh Rambachan, Jon Kleinberg, Jens Ludwig, and Sendhil Mullainathan. 5/2020. “An Economic Perspective on Algorithmic Fairness.” AEA Papers and Proceedings, 110, Pp. 91-95.
There are widespread concerns that the growing use of machine learning algorithms in important decisions may reproduce and reinforce existing discrimination against legally protected groups. Most of the attention to date on issues of "algorithmic bias" or "algorithmic fairness" has come from computer scientists and machine learning researchers. We argue that concerns about algorithmic fairness are at least as much about questions of how discrimination manifests itself in data, decision-making under uncertainty, and optimal regulation. To fully answer these questions, an economic framework is necessary—and as a result, economists have much to contribute.
Ashesh Rambachan and Jonathan Roth. 2020. “Bias In, Bias Out? Evaluating the Folk Wisdom.” 1st Symposium on Foundations of Responsible Computing (FORC 2020), 156, Pp. 6:1-6:15.
We evaluate the folk wisdom that algorithmic decision rules trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a particular action, and so "biased" training data arise due to discriminatory selection into the training data. In our baseline model, the more biased the decision-maker is against a group, the more the algorithmic decision rule favors that group. We refer to this phenomenon as bias reversal. We then clarify the conditions that give rise to bias reversal. Whether a prediction algorithm reverses or inherits bias depends critically on how the decision-maker affects the training data as well as the label used in training. We illustrate our main theoretical results in a simulation study applied to the New York City Stop, Question and Frisk dataset.
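The selection mechanism described in the abstract can be seen in a toy simulation (mine, not the paper's): labels are generated only for cases a biased screener releases, and because the screener applies a stricter cutoff to one group, that group's released cases look systematically safer in the resulting training data. All cutoffs and distributions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Latent risk depends on a signal plus noise; g is group membership.
g = rng.binomial(1, 0.5, size=n)
x = rng.normal(size=n)
risk = 1.0 / (1.0 + np.exp(-(x + rng.normal(size=n))))
y = rng.binomial(1, risk)                    # outcome, e.g. failure if released

# Biased screener: releases a case (so its label enters the training data)
# only if risk looks low, with a stricter cutoff applied to group 1.
cutoff = np.where(g == 1, 0.45, 0.60)
released = risk < cutoff

# The "training data" consist of released cases only.
for grp in (0, 1):
    sel = released & (g == grp)
    print(f"group {grp}: failure rate among released cases = {y[sel].mean():.3f}")
# Group 1 was screened more harshly, so its released cases look safer in the
# training data; a rule fit to these labels can end up favoring group 1.
```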
2018
Jens Ludwig, Jon Kleinberg, Sendhil Mullainathan, and Ashesh Rambachan. 5/2018. “Algorithmic Fairness.” AEA Papers and Proceedings, 108, Pp. 22-27.
Concerns that algorithms may discriminate against certain groups have led to numerous efforts to 'blind' the algorithm to race. We argue that this intuitive perspective is misleading and may do harm. Our primary result is exceedingly simple, yet often overlooked. A preference for fairness should not change the choice of estimator. Equity preferences can change how the estimated prediction function is used (e.g., different thresholds for different groups) but the function itself should not change. We show in an empirical example for college admissions that the inclusion of variables such as race can increase both equity and efficiency.
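The point that equity preferences change how a fixed prediction function is used, rather than the function itself, can be illustrated with a small toy example (not the paper's college-admissions data): one score is computed for everyone, and the two decision rules below differ only in their thresholds.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Toy admissions data: one prediction function ("score") for everyone; the two
# groups happen to have different score distributions.
g = rng.binomial(1, 0.3, size=n)
score = rng.normal(loc=np.where(g == 1, -0.3, 0.0), scale=1.0)

# Efficiency-only rule: a single threshold admitting the top 20% overall.
common_cut = np.quantile(score, 0.80)
admit_common = score >= common_cut

# Equity-adjusted rule: the *same* prediction function, but group-specific
# thresholds that admit the top 20% within each group.
cuts = {grp: np.quantile(score[g == grp], 0.80) for grp in (0, 1)}
admit_by_group = score >= np.where(g == 1, cuts[1], cuts[0])

for name, admit in [("common threshold", admit_common), ("group thresholds", admit_by_group)]:
    print(f"{name}: group-1 share of admits = {g[admit].mean():.3f}")
```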