Some Papers, etc

Working Paper
Syrgkanis, Vasilis, Elie Tamer, and Juba Ziani. Working Paper. “Inference on Auctions with Weak Assumptions on Information.”
Given a sample of bids from independent auctions, this paper examines the question of inference on auction fundamentals (e.g. valuation distributions, welfare measures) under weak assumptions on information structure. The question is important as it allows us to learn about the valuation distribution in a robust way, i.e., without assuming that a particular information structure holds across observations. We leverage recent contributions in the robust mechanism design literature that exploit the link between Bayesian Correlated Equilibria and Bayesian Nash Equilibria in incomplete information games to construct an econometric framework for learning about auction fundamentals using observed data on bids. We showcase our construction of identified sets in private value and common value auctions. Our approach for constructing these sets inherits the computational simplicity of solving for correlated equilibria: checking whether a particular valuation distribution belongs to the identified set is as simple as determining whether a linear program is feasible. A similar linear program can be used to construct the identified set on various welfare measures and counterfactual objects. For inference and to summarize statistical uncertainty, we propose novel finite sample methods using tail inequalities that are used to construct confidence regions on sets. We also highlight methods based on the Bayesian bootstrap and subsampling. A set of Monte Carlo experiments shows adequate finite sample properties of our inference procedures. We also illustrate our methods using data from OCS auctions.
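The linear-programming feasibility check described in this abstract can be illustrated with a deliberately tiny discrete sketch. Everything concrete below is a hypothetical stand-in, not the paper's specification: a single bidder, a two-point valuation and bid support, made-up win probabilities, and obedience-style constraints collapsed to one player.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy setup (not the paper's model): one bidder, two possible
# valuations and two possible bids, with assumed win probabilities.
vals = np.array([1.0, 2.0])    # candidate valuation support
bids = np.array([0.5, 1.0])    # observed bid support
p_bid = np.array([0.6, 0.4])   # observed bid distribution
win = np.array([0.5, 1.0])     # assumed probability of winning at each bid

def payoff(i, j):
    """Expected payoff when the valuation is vals[i] and the bid is bids[j]."""
    return win[j] * (vals[i] - bids[j])

nv, nb = len(vals), len(bids)
n = nv * nb  # decision variable: joint distribution q[v, b], flattened

# Equality constraints: the bid marginal of q must match the data.
A_eq = np.zeros((nb, n))
for j in range(nb):
    A_eq[j, [i * nb + j for i in range(nv)]] = 1.0

# Obedience-style constraints: conditional on a recommended bid b_j,
# no deviation b_k yields a higher expected payoff.
A_ub, b_ub = [], []
for j in range(nb):
    for k in range(nb):
        if k != j:
            row = np.zeros(n)
            for i in range(nv):
                row[i * nb + j] = payoff(i, k) - payoff(i, j)
            A_ub.append(row)
            b_ub.append(0.0)

# Feasibility check: any q >= 0 meeting the constraints means the
# candidate valuation distribution cannot be rejected.
res = linprog(np.zeros(n), A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=A_eq, b_eq=p_bid, bounds=[(0, 1)] * n)
in_identified_set = (res.status == 0)
```

In this toy instance an obedient joint distribution consistent with the bid marginals exists, so the candidate survives; at scale, the same feasibility question would be posed for each candidate distribution on a grid.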
Chen, Xiaohong, Timothy Christensen, and Elie Tamer. Working Paper. “Monte Carlo Confidence Sets for Identified Sets.”

In complicated/nonlinear parametric models, it is generally hard to know whether the model parameters are point identified. We provide computationally attractive procedures to construct confidence sets (CSs) for identified sets of full parameters and of subvectors in models defined through a likelihood or a vector of moment equalities or inequalities. These CSs are based on level sets of optimal sample criterion functions (such as likelihood or optimally-weighted or continuously-updated GMM criterions). The level sets are constructed using cutoffs that are computed via Monte Carlo (MC) simulations directly from the quasi-posterior distributions of the criterions. We establish new Bernstein-von Mises (or Bayesian Wilks) type theorems for the quasi-posterior distributions of the quasi-likelihood ratio (QLR) and profile QLR in partially-identified regular models and some non-regular models. These results imply that our MC CSs have exact asymptotic frequentist coverage for identified sets of full parameters and of subvectors in partially-identified regular models, and have valid but potentially conservative coverage in models with reduced-form parameters on the boundary. Our MC CSs for identified sets of subvectors are shown to have exact asymptotic coverage in models with singularities. We also provide results on uniform validity of our CSs over classes of DGPs that include point and partially identified models. We demonstrate good finite-sample coverage properties of our procedures in two simulation experiments. Finally, our procedures are applied to two non-trivial empirical examples: an airline entry game and a model of trade flows.

Ciliberto, Federico, Charles Murry, and Elie Tamer. Working Paper. “Market Structure and Competition in Airline Markets”.
Kline, Brendan, and Elie Tamer. Forthcoming. “Econometric Analysis of Models with Social Interactions.” In The Econometric Analysis of Network Data, edited by B. Graham and A. De Paula.
Chen, Xiaohong, Elie Tamer, and Alexander Torgovitsky. 2015. “Sensitivity Analysis in Semiparametric Likelihood Models.”

We provide methods for inference on a finite dimensional parameter of interest, θ ∈ ℝ^d, in a semiparametric probability model when an infinite dimensional nuisance parameter, g, is present. We depart from the semiparametric literature in that we do not require that the pair (θ, g) is point identified and so we construct confidence regions for θ that are robust to non-point identification. This allows practitioners to examine the sensitivity of their estimates of θ to specification of g in a likelihood setup. To construct these confidence regions for θ, we invert a profiled sieve likelihood ratio (LR) statistic. We derive the asymptotic null distribution of this profiled sieve LR, which is nonstandard when θ is not point identified (but is χ²-distributed under point identification). We show that a simple weighted bootstrap procedure consistently estimates this complicated distribution’s quantiles. Monte Carlo studies of a semiparametric dynamic binary response panel data model indicate that our weighted bootstrap procedure performs adequately in finite samples. We provide three empirical illustrations where we compare our results to the ones obtained using standard (less robust) methods.
Keywords: Sensitivity Analysis, Semiparametric Models, Partial Identification, Irregular Functionals, Sieve Likelihood Ratio, Weighted Bootstrap

Kline, Brendan, and Elie Tamer. 2014. “Using Observational vs Randomized Controlled Trial Data to Learn about Treatment Effects.”

Randomized controlled trials (RCTs) are routinely used in medicine and are becoming more popular in economics. Data from RCTs are used to learn about treatment effects of interest. This paper studies what one can learn about the average treatment response (ATR) and average treatment effect (ATE) from RCT data under various assumptions and compares that to using observational data. We find that data from an RCT need not point identify the ATR or ATE because of selection into an RCT, as subjects are not randomly assigned from the population of interest to participate in the RCT. This problem relating to external validity is the primary problem we study. So, assuming internal validity of the RCT, we study the identified features of these treatment effects under a variety of weak assumptions such as: mean independence of response from participation, an instrumental variable assumption, or that there is a linear effect of participation on response. In particular we provide assumptions sufficient to point identify the ATR or the ATE from RCT data and also shed light on when the sign of the ATE can be identified. We then characterize assumptions under which RCT data provide more information than observational data.
Keywords: randomized controlled trials, experiments, treatment effect, identification

Miscellaneous
Kline, Brendan, and Elie Tamer. Working Paper. “The Empirical Content of Models with Social Interactions.”

Empirical models with social interactions or peer effects allow the outcome of an individual to depend on the outcomes, choices, treatments, and/or characteristics of the other individuals in the group. We document the subtle relationship between the data and the objects of interest in models with interactions in small groups, and show that some econometric assumptions, which are direct extensions from models of individualistic treatment response, implicitly entail strong behavioral assumptions. We point out two such econometric assumptions, EITR, or empirical individual treatment response, and EGTR, or empirical group treatment response. In some cases EITR and/or EGTR are inconsistent with a class of plausible economic models for the interaction under consideration; in other cases these econometric assumptions imply significant assumptions on behavior that are not necessarily implied by economic theory. We illustrate this using relevant examples of interaction in immunization and disease, and in educational achievement. We conclude that it is important for applications in this class of models with small group interactions to recognize the restrictions some assumptions impose on behavior.

Tamer, Elie. Forthcoming. “ET Interview with Chuck Manski”.
Kline, Brendan, and Elie Tamer. 2012. “Some Interpretation of the Linear-In-Means Model of Social Interactions.”

The linear-in-means model is often used in applied work to empirically study the role of social interactions and peer effects. We document the subtle relationship between the parameters of the linear-in-means model and the parameters relevant for policy analysis, and study the interpretations of the model under two different scenarios. First, we show that without further assumptions on the model the direct analogs of standard policy relevant parameters are either undefined or are complicated functions not only of the parameters of the linear-in-means model but also of the parameters of the distribution of the unobservables. This complicates the interpretation of the results. Second, and as in the literature on simultaneous equations, we show that it is possible to interpret the parameters of the linear-in-means model under additional assumptions on the social interaction, mainly that this interaction is the result of a particular economic game. The assumptions on which the game is built rule out economically relevant models. We illustrate this using examples of social interactions in educational achievement. We conclude that care should be taken when estimating and especially when interpreting coefficients from linear-in-means models.

Tamer, Elie. 2010. “Partial Identification in Econometrics.” Annual Review of Economics 2 (1): 167-195.

Journal Article
Kline, Brendan, and Elie Tamer. Forthcoming. “Identification of Treatment Effects with Selective Participation in a Randomized Trial.” Econometrics Journal.
Paula, Aureo De, Seth Richards-Shubik, and Elie Tamer. Forthcoming. “Identifying Preferences in Networks with Bounded Degree.” Econometrica.
Kline, Brendan, and Elie Tamer. 2016. “Bayesian Inference in a Class of Partially Identified Models” (winner of Best Paper in QE for 2016). Quantitative Economics 7 (2): 329-366.

We develop a Bayesian approach to inference in a class of partially identified econometric models. Models in this class have a point identified parameter μ (e.g., characteristics of the distribution of the data) and a partially identified parameter of interest θ (e.g., parameters of the model); further, if μ is known then the identified set for θ is known. Many instances of this class are commonly used in empirical work. Our approach maps, via the mapping between μ and θ, and without the specification of a prior for θ, the posterior for the point identified parameter μ to posterior probability statements about the identified set for θ, which is the quantity about which the data are informative. Thus, among other examples, we can report the posterior probability that a particular parameter value (or a set of parameter values, or a function of the parameter) is in the identified set. The paper develops general results on large sample approximations to these posterior probabilities, which illustrate how the posterior probabilities over the identified set are revised by the data. The paper establishes conditions under which the credible sets for the identified set also are valid frequentist confidence sets, providing a connection between Bayesian and frequentist inference in partially identified models (including for functions of the partially identified parameter). The approach is computationally attractive even in high-dimensional models: the approach avoids an exhaustive search over the parameter space (or “guess and verify”), partly by using existing MCMC methods to simulate draws from the posterior for μ. The paper also considers issues related to specification testing and estimation of misspecified models. We illustrate our approach via a set of Monte Carlo experiments and an empirical application to a binary entry game involving airlines. JEL codes: C10, C11. Keywords: partial identification, identified set, criterion function, posterior, Bayesian inference.
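A minimal sketch of these mechanics, using a textbook interval-data example rather than the paper's entry game: with interval-censored outcomes the mean θ is partially identified, μ = (E[Y_L], E[Y_U]) is point identified, and posterior draws of μ (here via the Bayesian bootstrap, one possible posterior simulator) map into posterior probabilities that a candidate θ lies in the identified set [E[Y_L], E[Y_U]]. All data and numbers below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated interval-censored data: we observe bounds (yl, yu) on an
# outcome, so the mean theta is only known to lie in [E[yl], E[yu]].
n = 500
yl = rng.normal(0.0, 1.0, size=n)
yu = yl + rng.uniform(0.5, 1.5, size=n)

def posterior_prob_in_set(theta, draws=2000):
    """Posterior probability that theta lies in the identified set.

    Bayesian-bootstrap draws of mu = (E[yl], E[yu]) via Dirichlet weights;
    the mapping mu -> identified set is just the interval [lo, hi].
    """
    w = rng.dirichlet(np.ones(n), size=draws)  # (draws, n) weight vectors
    lo, hi = w @ yl, w @ yu                    # posterior draws of mu
    return np.mean((lo <= theta) & (theta <= hi))

p_inside = posterior_prob_in_set(0.5)   # a value well inside the set
p_outside = posterior_prob_in_set(5.0)  # a value far outside the set
```

No prior is placed on θ itself: all posterior uncertainty flows through the point identified μ, mirroring the structure described in the abstract.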

Khan, Shakeeb, Maria Ponomareva, and Elie Tamer. 2016. “Identification of Panel Data Models with Endogenous Censoring.” Journal of Econometrics 194 (1): 57-75.

We study inference on parameters in censored panel data models, where the censoring can depend on both observable and unobservable variables in arbitrary ways. Under some general conditions, we characterize the information the model and data contain about the parameters of interest by deriving the identified sets: every parameter that belongs to these sets is observationally equivalent to the true parameter, the one that generated the data. We consider two separate sets of assumptions (two models): the first uses stationarity of the unobserved disturbance terms; the second is a nonstationary model with a conditional independence restriction. Based on the characterizations of the identified sets, we provide a valid inference procedure that is shown to yield correct confidence sets based on inverting stochastic dominance tests. We also show how our results extend to empirically interesting dynamic versions of the model with both lagged observed outcomes and lagged indicators, as well as to models with factor loads. In addition, for both models, we provide sufficient conditions for point identification in terms of support conditions. The paper then examines the sizes of the identified sets, and a Monte Carlo exercise shows reasonable small sample performance of our procedures.

Chen, Xiaohong, Maria Ponomareva, and Elie Tamer. 2014. “Likelihood Inference in Some Finite Mixture Models.” Journal of Econometrics 182 (1): 87-99.

Parametric mixture models are commonly used in applied work, especially in empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the question of inference on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.
JEL classification: C. Keywords: Finite mixtures; Parametric bootstrap; Profiled likelihood ratio statistic.
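The parametric-bootstrap fix can be sketched on a two-component normal mixture. This is an illustrative skeleton only, not the paper's procedure: the component means, the coarse grid used to profile out the mixing probability, and the tiny number of bootstrap replications are all stand-ins, and a real application would refit the null model before simulating.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

def negloglik(means, x, pi):
    """Negative log likelihood of a two-component normal mixture with
    known unit variances, mixing probability pi, and unknown means."""
    mu1, mu2 = means
    dens = pi * stats.norm.pdf(x, mu1) + (1 - pi) * stats.norm.pdf(x, mu2)
    return -np.sum(np.log(dens + 1e-300))

def plr(x, pi0):
    """Profiled likelihood ratio statistic for H0: mixing prob = pi0,
    profiling out the means and minimizing over a coarse grid for pi."""
    prof = lambda pi: optimize.minimize(negloglik, [0.0, 1.0],
                                        args=(x, pi)).fun
    grid = np.linspace(0.05, 0.95, 19)  # grid includes pi0 = 0.5
    return 2.0 * (prof(pi0) - min(prof(pi) for pi in grid))

def simulate(n, pi, mu1, mu2):
    """Draw a sample of size n from the mixture."""
    z = rng.random(n) < pi
    return np.where(z, rng.normal(mu1, 1.0, n), rng.normal(mu2, 1.0, n))

# Observed statistic, then a (very short) parametric bootstrap under the
# null, simulating at assumed component means rather than a null fit.
x = simulate(300, 0.5, 0.0, 2.0)
stat = plr(x, 0.5)
boot = [plr(simulate(300, 0.5, 0.0, 2.0), 0.5) for _ in range(20)]
crit = np.quantile(boot, 0.95)
reject = stat > crit
```

Comparing `stat` to the bootstrap quantile `crit`, rather than to a fixed chi-squared cutoff, is what sidesteps the non-uniform pointwise asymptotics the abstract describes.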

Komarova, Tatiana, Thomas Severini, and Elie Tamer. 2012. “Quantile Uncorrelation and Instrumental Regressions.” Journal of Econometric Methods 1 (1): 2-14.

We introduce a notion of median uncorrelation that is a natural extension of mean (linear) uncorrelation. A scalar random variable Y is median uncorrelated with a k-dimensional random vector X if and only if the slope from an LAD regression of Y on X is zero. Using this simple definition, we characterize properties of median uncorrelated random variables, and introduce a notion of multivariate median uncorrelation. We provide measures of median uncorrelation that are similar to the linear correlation coefficient and the coefficient of determination. We also extend this median uncorrelation to other loss functions. As two stage least squares exploits mean uncorrelation between an instrument vector and the error to derive consistent estimators for parameters in linear regressions with endogenous regressors, the main result of this paper shows how a median uncorrelation assumption between an instrument vector and the error can similarly be used to derive consistent estimators in these linear models with endogenous regressors. We also show how median uncorrelation can be used in linear panel models with quantile restrictions and in linear models with measurement errors.
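The defining equivalence in this abstract, that Y is median uncorrelated with X exactly when the LAD slope of Y on X is zero, can be checked numerically. The linear-programming formulation of LAD below is a standard textbook device, not code from the paper, and the data are simulated.

```python
import numpy as np
from scipy.optimize import linprog

def lad_slope(y, x):
    """Slope from an LAD (least absolute deviations) regression of y on
    scalar x with intercept, via the standard LP formulation:
    minimize sum(u) subject to u_i >= |y_i - a - b*x_i|."""
    n = len(y)
    # Variables: [a, b, u_1..u_n]; minimize the total absolute residual.
    c = np.concatenate([[0.0, 0.0], np.ones(n)])
    A_ub = np.zeros((2 * n, n + 2))
    b_ub = np.zeros(2 * n)
    for i in range(n):
        A_ub[2 * i, :2] = [-1.0, -x[i]]    #  y_i - a - b*x_i <= u_i
        A_ub[2 * i, 2 + i] = -1.0
        b_ub[2 * i] = -y[i]
        A_ub[2 * i + 1, :2] = [1.0, x[i]]  # -(y_i - a - b*x_i) <= u_i
        A_ub[2 * i + 1, 2 + i] = -1.0
        b_ub[2 * i + 1] = y[i]
    bounds = [(None, None)] * 2 + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[1]

rng = np.random.default_rng(2)
x = rng.normal(size=400)
e = rng.normal(size=400)         # median-zero noise, independent of x
slope_null = lad_slope(e, x)     # near 0: e is median uncorrelated with x
slope_dep = lad_slope(x + e, x)  # near 1: clear median correlation
```

The independent-noise case drives the LAD slope toward zero while the dependent case does not, matching the characterization stated above.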

Kline, Brendan, and Elie Tamer. 2012. “Bounds for Best Response Functions in Binary Games.” Journal of Econometrics 166 (1): 92-105.

This paper studies the identification of best response functions in binary
games without making strong parametric assumptions about the payoffs. The best response function gives the utility maximizing response to a decision of the other players. This is analogous to the response function in the treatment-response literature, taking the decision of the other players as the treatment, except that the best response function has additional structure implied by the associated utility maximization problem. Further, the relationship between the data and the best response function is not the same as the relationship in the treatment-response literature between the data and the response function. We focus especially on the case of a complete information entry game with two firms. We also discuss the case of an entry game with many firms, non-entry games, and incomplete information. Our analysis of the entry game is based on the observation of realized entry decisions, which we then link to the best response functions under various assumptions concerning the level of rationality of the firms (including the assumption of Nash equilibrium play), the symmetry of the payoffs between firms, and whether mixed strategies are admitted.