Selected Papers

Working Paper
Syrgkanis, Vasilis, Elie Tamer, and Juba Ziani. Working Paper. “Inference on Auctions with Weak Assumptions on Information”. Abstract

Given a sample of bids from independent auctions, this paper examines the question of inference on auction objects (such as valuation distributions and welfare measures) under weak assumptions on information. We leverage the recent contributions of Bergemann and Morris [2013] in the robust mechanism design literature, which exploit the link between Bayesian Correlated Equilibria and Bayesian Nash Equilibria in incomplete information games, to construct an econometric framework that is computationally feasible and robust to assumptions about information. The key characteristic of our framework is that checking whether a particular valuation distribution belongs to the identified set is as simple as determining whether a linear program (LP) is feasible. A similar LP can be used to learn about various welfare measures and policy counterfactuals. For inference, and to summarize statistical uncertainty, we propose novel finite sample methods that use tail inequalities to construct confidence sets on identified sets. Monte Carlo experiments show adequate finite sample properties. We illustrate our approach by applying our methods to a data set from search ad auctions and to data from OCS auctions.
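The LP feasibility check described in the abstract can be sketched in a few lines. Here is a minimal, hypothetical illustration: the constraint matrix `A_eq` and vector `b_eq` stand in for the equilibrium restrictions implied by a candidate valuation distribution, which the paper derives from the Bayesian Correlated Equilibrium conditions; the numbers below are toy placeholders, not the paper's actual constraints.

```python
import numpy as np
from scipy.optimize import linprog

def in_identified_set(A_eq, b_eq):
    """Check membership in the identified set by LP feasibility:
    does a probability vector x >= 0 over (value, bid) outcomes exist
    with sum(x) == 1 and the model's restrictions A_eq x = b_eq?"""
    n = A_eq.shape[1]
    # Append the adding-up constraint sum(x) = 1.
    A = np.vstack([A_eq, np.ones((1, n))])
    b = np.append(b_eq, 1.0)
    # Zero objective: we only care whether the constraints are feasible.
    res = linprog(c=np.zeros(n), A_eq=A, b_eq=b,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0  # status 0 means a feasible point was found

# Toy example: two linear restrictions on a 3-point distribution.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([0.2, 0.8])
print(in_identified_set(A, b))  # True: the constraints are consistent
```

Checking a different candidate distribution just means rebuilding `A_eq, b_eq` and re-running the same feasibility test, which is what makes tracing out the identified set computationally cheap.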

Khan, Shakeeb, Maria Ponomareva, and Elie Tamer. Working Paper. “Identification of Dynamic Binary Response Models”.
Ciliberto, Federico, Charles Murry, and Elie Tamer. Forthcoming. “Market Structure and Competition in Airline Markets.” Journal of Political Economy, 2021.
Chen, Xiaohong, Elie Tamer, and Alexander Torgovitsky. 2015. “Sensitivity Analysis in Semiparametric Likelihood Models”. Abstract

We provide methods for inference on a finite dimensional parameter of interest, θ ∈ ℝ^d, in a semiparametric probability model when an infinite dimensional nuisance parameter, g, is present. We depart from the semiparametric literature in that we do not require that the pair (θ, g) is point identified, and so we construct confidence regions for θ that are robust to non-point identification. This allows practitioners to examine the sensitivity of their estimates of θ to the specification of g in a likelihood setup. To construct these confidence regions for θ, we invert a profiled sieve likelihood ratio (LR) statistic. We derive the asymptotic null distribution of this profiled sieve LR, which is nonstandard when θ is not point identified (but is χ² distributed under point identification). We show that a simple weighted bootstrap procedure consistently estimates this complicated distribution's quantiles. Monte Carlo studies of a semiparametric dynamic binary response panel data model indicate that our weighted bootstrap procedure performs adequately in finite samples. We provide three empirical illustrations where we compare our results to the ones obtained using standard (less robust) methods.
Keywords: Sensitivity Analysis, Semiparametric Models, Partial Identification, Irregular Functionals, Sieve Likelihood Ratio, Weighted Bootstrap
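The weighted bootstrap idea mentioned in the abstract can be sketched generically: reweight the sample with i.i.d. mean-one weights and recompute the statistic to approximate its distribution's quantiles. This is only a stylized illustration with a simple centered mean as the statistic; the paper's actual object is the profiled sieve LR, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_bootstrap_quantile(data, stat, level=0.95, B=1000):
    """Approximate the level-quantile of a statistic's sampling
    distribution by redrawing i.i.d. exponential (mean-one) weights,
    a standard weighted-bootstrap scheme. `stat(data, w)` evaluates
    the statistic under weights w."""
    n = len(data)
    draws = []
    for _ in range(B):
        w = rng.exponential(1.0, size=n)
        w /= w.mean()              # normalize weights to mean one
        draws.append(stat(data, w))
    return np.quantile(draws, level)

# Example statistic: a root-n centered, reweighted sample mean.
x = rng.normal(0.0, 1.0, size=200)
def centered_mean(d, w):
    return np.sqrt(len(d)) * (np.average(d, weights=w) - d.mean())

q95 = weighted_bootstrap_quantile(x, centered_mean)
print(round(q95, 2))  # close to the N(0,1) 95% quantile for this toy case
```

In the paper's setting the same loop would wrap the profiled sieve LR computation, so the bootstrap quantile serves as the critical value for inverting the LR into a confidence region.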


Kline, Brendan, and Elie Tamer. Working Paper. “Identification of Treatment Effects with Selective Participation in a Randomized Trial”. Abstract

Randomized controlled trials (RCTs) are routinely used in medicine and are becoming more popular in economics. Data from RCTs are used to learn about treatment effects of interest. This paper studies what one can learn about the average treatment response (ATR) and average treatment effect (ATE) from RCT data under various assumptions and compares that to using observational data. We find that data from an RCT need not point identify the ATR or ATE because of selection into an RCT, as subjects are not randomly assigned from the population of interest to participate in the RCT. This problem relating to external validity is the primary problem we study. So, assuming internal validity of the RCT, we study the identified features of these treatment effects under a variety of weak assumptions such as: mean independence of response from participation, an instrumental variable assumption, or that there is a linear effect of participation on response. In particular we provide assumptions sufficient to point identify the ATR or the ATE from RCT data and also shed light on when the sign of the ATE can be identified. We then characterize assumptions under which RCT data provide more information than observational data.
Keywords: randomized controlled trials, experiments, treatment effect, identification

Kline, Brendan, and Elie Tamer. Working Paper. “The Empirical content of Models with Social Interactions”. Abstract

Empirical models with social interactions or peer effects allow the outcome of an individual to depend on the outcomes, choices, treatments, and/or characteristics of the other individuals in the group. We document the subtle relationship between the data and the objects of interest in models with interactions in small groups, and show that some econometric assumptions that are direct extensions from models of individualistic treatment response implicitly entail strong behavioral assumptions. We point out two such econometric assumptions: EITR, or empirical individual treatment response, and EGTR, or empirical group treatment response. In some cases EITR and/or EGTR are inconsistent with a class of plausible economic models for the interaction under consideration; in other cases these econometric assumptions imply significant assumptions on behavior that are not necessarily implied by economic theory. We illustrate this using relevant examples of interaction in immunization and disease, and in educational achievement. We conclude that it is important for applications in this class of models with small group interactions to recognize the restrictions some assumptions impose on behavior.


The linear-in-means model is often used in applied work to empirically study the role of social interactions and peer effects. We document the subtle relationship between the parameters of the linear-in-means model and the parameters relevant for policy analysis, and study the interpretations of the model under two different scenarios. First, we show that, without further assumptions on the model, the direct analogs of standard policy-relevant parameters are either undefined or are complicated functions not only of the parameters of the linear-in-means model but also of the parameters of the distribution of the unobservables. This complicates the interpretation of the results. Second, and as in the literature on simultaneous equations, we show that it is possible to interpret the parameters of the linear-in-means model under additional assumptions on the social interaction, mainly that this interaction is the result of a particular economic game. The assumptions the game is built on, however, rule out economically relevant models. We illustrate this using examples of social interactions in educational achievement. We conclude that care should be taken when estimating, and especially when interpreting, coefficients from linear-in-means models.
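To fix ideas, the simultaneity at the heart of the linear-in-means model can be made concrete with a small simulation. This is a generic textbook parameterization (the names `alpha`, `beta`, `gamma` and the toy group below are illustrative, not from the paper): each outcome depends on the average outcome of the individual's peers, so the observed outcomes solve a linear system rather than a single-equation regression.

```python
import numpy as np

def simulate_linear_in_means(G, x, alpha, beta, gamma, eps):
    """Simulate equilibrium outcomes from the linear-in-means model
        y = alpha + beta * G y + gamma * x + eps,
    where G is a row-normalized peer matrix (G[i, j] = 1/#peers of i).
    The reduced form is y = (I - beta G)^{-1}(alpha + gamma x + eps),
    which is well defined when |beta| < 1 for row-stochastic G."""
    n = len(x)
    return np.linalg.solve(np.eye(n) - beta * G, alpha + gamma * x + eps)

# A group of three where everyone is everyone else's peer.
G = (np.ones((3, 3)) - np.eye(3)) / 2
rng = np.random.default_rng(1)
x = rng.normal(size=3)
y = simulate_linear_in_means(G, x, alpha=1.0, beta=0.5, gamma=2.0,
                             eps=rng.normal(scale=0.1, size=3))
print(y)
```

The reduced form makes the abstract's point visible: a change in one individual's `x` moves everyone's outcome through `(I - beta*G)^{-1}`, so the coefficients alone do not directly answer policy questions without assumptions on how the equilibrium is generated.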

Tamer, Elie. 2010. “Partial Identification in Econometrics.” Annual Review of Economics 2 (1): 167-195. Abstract

Identification in econometric models maps prior assumptions and the data to information about a parameter of interest. The partial identification approach to inference recognizes that this process should not result in a binary answer that consists of whether the parameter is point identified. Rather, given the data, the partial identification approach characterizes the informational content of various assumptions by providing a menu of estimates, each based on different sets of assumptions, some of which are plausible and some of which are not. Of course, more assumptions beget more information, so stronger conclusions can be made at the expense of more assumptions. The partial identification approach advocates a more fluid view of identification and hence provides the empirical researcher with methods to help study the spectrum of information that we can harness about a parameter of interest using a menu of assumptions. This approach links conclusions drawn from various empirical models to sets of assumptions made in a transparent way. It allows researchers to examine the informational content of their assumptions and their impacts on the inferences made. Naturally, with finite sample sizes, this approach leads to statistical complications, as one needs to deal with characterizing sampling uncertainty in models that do not point identify a parameter. Therefore, new methods for inference are developed. These methods construct confidence sets for partially identified parameters, and confidence regions for sets of parameters, or identifiable sets.


Journal Article
Kline, Brendan, and Elie Tamer. Forthcoming. “Identification of Treatment Effects with Selective Participation in a Randomized Trial.” Econometrics Journal.
Chen, Xiaohong, Timothy Christensen, and Elie Tamer. Forthcoming. “Monte Carlo Confidence Sets for Identified Sets.” Econometrica. Abstract

In complicated/nonlinear parametric models, it is generally hard to know whether the model parameters are point identified. We provide computationally attractive procedures to construct confidence sets (CSs) for identified sets of full parameters and of subvectors in models defined through a likelihood or a vector of moment equalities or inequalities. These CSs are based on level sets of optimal sample criterion functions (such as likelihood or optimally-weighted or continuously-updated GMM criteria). The level sets are constructed using cutoffs that are computed via Monte Carlo (MC) simulations directly from the quasi-posterior distributions of the criteria. We establish new Bernstein-von Mises (or Bayesian Wilks) type theorems for the quasi-posterior distributions of the quasi-likelihood ratio (QLR) and profile QLR in partially-identified regular models and some non-regular models. These results imply that our MC CSs have exact asymptotic frequentist coverage for identified sets of full parameters and of subvectors in partially-identified regular models, and have valid but potentially conservative coverage in models with reduced-form parameters on the boundary. Our MC CSs for identified sets of subvectors are shown to have exact asymptotic coverage in models with singularities. We also provide results on uniform validity of our CSs over classes of DGPs that include point and partially identified models. We demonstrate good finite-sample coverage properties of our procedures in two simulation experiments. Finally, our procedures are applied to two non-trivial empirical examples: an airline entry game and a model of trade flows.
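The construction described above (level sets of a criterion with cutoffs taken from Monte Carlo draws of the quasi-posterior QLR) can be sketched in one dimension. This is a deliberately simplified illustration with a point-identified normal likelihood and a basic random-walk sampler; the grid, proposal scale, and model are all hypothetical choices, not the paper's procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_confidence_set(loglik, grid, level=0.95, draws=5000, scale=0.5):
    """One-dimensional sketch of a Monte Carlo confidence set:
    1. sample from the (flat-prior) quasi-posterior exp(loglik) by
       random-walk Metropolis-Hastings;
    2. take the level-quantile of QLR = 2*(max loglik - loglik(draw))
       over the chain as the cutoff;
    3. return the grid points whose QLR falls below that cutoff."""
    ll_grid = np.array([loglik(t) for t in grid])
    ll_max = ll_grid.max()
    theta, ll_cur = grid[np.argmax(ll_grid)], ll_grid.max()
    chain_ll = []
    for _ in range(draws):
        prop = theta + rng.normal(scale=scale)
        ll_prop = loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll_cur:  # MH accept step
            theta, ll_cur = prop, ll_prop
        chain_ll.append(ll_cur)
    cutoff = np.quantile(2.0 * (ll_max - np.array(chain_ll)), level)
    return grid[2.0 * (ll_max - ll_grid) <= cutoff]

# Example: N(theta, 1) likelihood with 100 observations.
data = rng.normal(0.3, 1.0, size=100)
ll = lambda t: -0.5 * np.sum((data - t) ** 2)
cs = mc_confidence_set(ll, np.linspace(-1, 1, 401))
print(cs.min(), cs.max())  # an interval around the sample mean
```

The appeal is that the cutoff comes directly from posterior simulation of the criterion, so no analytic asymptotic distribution needs to be derived case by case.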

Paula, Aureo De, Seth Richards-Shubik, and Elie Tamer. Forthcoming. “Identifying Preferences in Networks with Bounded Degree.” Econometrica.
Kline, Brendan, and Elie Tamer. 2016. “Bayesian Inference in a Class of Partially Identified Models (Winner of Best Paper in QE for 2016).” Quantitative Economics 7 (2): 329-366. Publisher's Version Abstract

We develop a Bayesian approach to inference in a class of partially identified econometric models. Models in this class have a point identified parameter μ (e.g., characteristics of the distribution of the data) and a partially identified parameter of interest θ (e.g., parameters of the model); further, if μ is known, then the identified set for θ is known. Many instances of this class are commonly used in empirical work. Our approach maps, via the mapping between μ and θ, and without the specification of a prior for θ, the posterior for the point identified parameter μ to posterior probability statements about the identified set for θ, which is the quantity about which the data are informative. Thus, among other examples, we can report the posterior probability that a particular parameter value (or a set of parameter values, or a function of the parameter) is in the identified set. The paper develops general results on large sample approximations to these posterior probabilities, which illustrate how the posterior probabilities over the identified set are revised by the data. The paper establishes conditions under which the credible sets for the identified set are also valid frequentist confidence sets, providing a connection between Bayesian and frequentist inference in partially identified models (including for functions of the partially identified parameter). The approach is computationally attractive even in high-dimensional models: it avoids an exhaustive search over the parameter space (or “guess and verify”), partly by using existing MCMC methods to simulate draws from the posterior for μ. The paper also considers issues related to specification testing and estimation of misspecified models. We illustrate our approach via a set of Monte Carlo experiments and an empirical application to a binary entry game involving airlines.
JEL codes: C10, C11. Keywords: partial identification, identified set, criterion function, posterior, Bayesian inference
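The core mapping in the abstract is easy to illustrate in a stylized interval-identified case: if the identified set for θ is an interval whose endpoints are functions of the point-identified μ, then posterior draws of μ translate directly into posterior probabilities about the identified set. The sampler below is a stand-in (Gaussian pseudo-draws with hypothetical numbers), not the paper's MCMC construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_prob_in_identified_set(draws_mu, t):
    """Given posterior draws of the point-identified mu = (lo, hi),
    where the identified set for theta is the interval [lo, hi],
    return the posterior probability that the value t lies in the
    identified set. A stylized interval-identified example."""
    lo, hi = draws_mu[:, 0], draws_mu[:, 1]
    return np.mean((lo <= t) & (t <= hi))

# Stand-in for an MCMC sampler for mu: draws centered at (0.2, 0.8).
draws = np.column_stack([rng.normal(0.2, 0.05, 10000),
                         rng.normal(0.8, 0.05, 10000)])
print(posterior_prob_in_identified_set(draws, 0.5))   # ~1.0
print(posterior_prob_in_identified_set(draws, -0.5))  # ~0.0
```

Note that no prior on θ is ever specified: all probability statements are induced by the posterior for μ, which is exactly the feature the abstract emphasizes.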

Khan, Shakeeb, Maria Ponomareva, and Elie Tamer. 2016. “Identification of Panel Data Models with Endogenous Censoring.” Journal of Econometrics 194 (1): 57-75. Publisher's Version Abstract

We study inference on parameters in censored panel data models, where the censoring can depend on both observable and unobservable variables in arbitrary ways. Under some general conditions, we characterize the information the model and data contain about the parameters of interest by deriving the identified sets: every parameter that belongs to these sets is observationally equivalent to the true parameter, the one that generated the data. We consider two separate sets of assumptions (two models): the first uses stationarity of the unobserved disturbance terms; the second is a nonstationary model with a conditional independence restriction. Based on the characterizations of the identified sets, we provide a valid inference procedure that is shown to yield correct confidence sets based on inverting stochastic dominance tests. We also show how our results extend to empirically interesting dynamic versions of the model with both lagged observed outcomes and lagged indicators, and to models with factor loads. In addition, and for both models, we provide sufficient conditions for point identification in terms of support conditions. The paper then examines the sizes of the identified sets, and a Monte Carlo exercise shows reasonable small sample performance of our procedures.


Chen, Xiaohong, Maria Ponomareva, and Elie Tamer. 2014. “Likelihood Inference in Some Finite Mixture Models.” Journal of Econometrics 182 (1): 87-99. Publisher's Version Abstract

Parametric mixture models are commonly used in applied work, especially in empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the question of inference on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated due to (1) lack of point identification and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where there is lack of point identification. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.
JEL classification: C. Keywords: Finite mixtures; Parametric bootstrap; Profiled likelihood ratio statistic.
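A parametric-bootstrap fix of the kind the abstract describes can be sketched for a two-component normal mixture. To keep the sketch short, the component means are treated as known and only the mixing probability is profiled over a grid; these simplifications (and all the numbers below) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def plr_mixing_prob(data, p, mu0=0.0, mu1=2.0,
                    grid=np.linspace(0, 1, 101)):
    """PLR statistic for the mixing probability p in the mixture
    p*N(mu1,1) + (1-p)*N(mu0,1), maximizing over a grid of p values.
    The shared normal constant cancels inside the likelihood ratio."""
    def ll(q):
        dens = q * np.exp(-0.5 * (data - mu1) ** 2) \
             + (1 - q) * np.exp(-0.5 * (data - mu0) ** 2)
        return np.sum(np.log(dens))
    return 2.0 * (max(ll(q) for q in grid) - ll(p))

def parametric_bootstrap_cutoff(p_hat, n, level=0.95, B=200,
                                mu0=0.0, mu1=2.0):
    """Parametric bootstrap: resample from the fitted mixture and take
    the level-quantile of the PLR evaluated at the fitted p_hat."""
    stats = []
    for _ in range(B):
        z = rng.uniform(size=n) < p_hat
        boot = np.where(z, rng.normal(mu1, 1.0, n),
                        rng.normal(mu0, 1.0, n))
        stats.append(plr_mixing_prob(boot, p_hat, mu0, mu1))
    return np.quantile(stats, level)

# Simulated data with true mixing probability 0.3.
data = np.where(rng.uniform(size=300) < 0.3,
                rng.normal(2.0, 1.0, 300), rng.normal(0.0, 1.0, 300))
cutoff = parametric_bootstrap_cutoff(p_hat=0.3, n=300)
print(plr_mixing_prob(data, 0.3) <= cutoff)
```

Resampling from the fitted model is what lets the bootstrap track the nonstandard, boundary-dependent behavior of the PLR that pointwise asymptotic approximations miss.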