# Some Papers, etc

In complicated/nonlinear parametric models, it is generally hard to know whether the model parameters are point identified. We provide computationally attractive procedures to construct confidence sets (CSs) for identified sets of full parameters and of subvectors in models defined through a likelihood or a vector of moment equalities or inequalities. These CSs are based on level sets of optimal sample criterion functions (such as likelihood or optimally-weighted or continuously-updated GMM criteria). The level sets are constructed using cutoffs that are computed via Monte Carlo (MC) simulations directly from the quasi-posterior distributions of the criteria. We establish new Bernstein-von Mises (or Bayesian Wilks) type theorems for the quasi-posterior distributions of the quasi-likelihood ratio (QLR) and profile QLR in partially-identified regular models and some non-regular models. These results imply that our MC CSs have exact asymptotic frequentist coverage for identified sets of full parameters and of subvectors in partially-identified regular models, and have valid but potentially conservative coverage in models with reduced-form parameters on the boundary. Our MC CSs for identified sets of subvectors are shown to have exact asymptotic coverage in models with singularities. We also provide results on uniform validity of our CSs over classes of DGPs that include point and partially identified models. We demonstrate good finite-sample coverage properties of our procedures in two simulation experiments. Finally, our procedures are applied to two non-trivial empirical examples: an airline entry game and a model of trade flows.
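The core of the procedure can be sketched in a few lines: draw parameters from the quasi-posterior by MCMC, compute the QLR at each draw, and use its 95% quantile as the level-set cutoff. The point-identified Gaussian model below is a deliberately simple stand-in, not one of the paper's partially-identified examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy point-identified model: X ~ N(theta, 1).
n = 200
x = rng.normal(loc=1.0, scale=1.0, size=n)

def loglik(theta):
    return -0.5 * np.sum((x - theta) ** 2)

theta_hat = x.mean()                     # MLE in this toy model

def qlr(theta):
    """Quasi-likelihood ratio statistic at theta."""
    return 2.0 * (loglik(theta_hat) - loglik(theta))

# Draw from the quasi-posterior (flat prior) by random-walk Metropolis.
draws = []
theta = theta_hat
for _ in range(5000):
    prop = theta + 0.2 * rng.normal()
    if np.log(rng.uniform()) < loglik(prop) - loglik(theta):
        theta = prop
    draws.append(theta)
qlr_draws = np.array([qlr(t) for t in draws[500:]])   # drop burn-in

# The MC cutoff is the 95% quantile of the posterior QLR draws; the
# confidence set is the corresponding level set of the criterion.
xi = np.quantile(qlr_draws, 0.95)
grid = np.linspace(theta_hat - 1.0, theta_hat + 1.0, 2001)
cs = grid[np.array([qlr(t) for t in grid]) <= xi]
print(f"cutoff = {xi:.3f}, CS = [{cs.min():.3f}, {cs.max():.3f}]")
```

In this regular toy model the posterior QLR is approximately χ²(1), so the cutoff should land near 3.84 and the level set near the usual Wald interval.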

We provide methods for inference on a finite dimensional parameter of interest, θ ∈ Θ ⊂ ℝ^d, in a semiparametric probability model when an infinite dimensional nuisance parameter, g, is present. We depart from the semiparametric literature in that we do not require that the pair (θ, g) is point identified and so we construct confidence regions for θ that are robust to non-point identification. This allows practitioners to examine the sensitivity of their estimates of θ to the specification of g in a likelihood setup. To construct these confidence regions for θ, we invert a profiled sieve likelihood ratio (LR) statistic. We derive the asymptotic null distribution of this profiled sieve LR, which is nonstandard when θ is not point identified (but is χ² distributed under point identification). We show that a simple weighted bootstrap procedure consistently estimates this complicated distribution's quantiles. Monte Carlo studies of a semiparametric dynamic binary response panel data model indicate that our weighted bootstrap procedure performs adequately in finite samples. We provide three empirical illustrations where we compare our results to the ones obtained using standard (less robust) methods.

Keywords: Sensitivity Analysis, Semiparametric Models, Partial Identification, Irregular Functionals, Sieve Likelihood Ratio, Weighted Bootstrap
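A minimal sketch of the weighted bootstrap idea, with a scalar parametric nuisance standing in for the infinite dimensional g (so this illustrates the mechanics, not the paper's sieve implementation): multiply each observation's likelihood contribution by an iid positive mean-one weight, re-profile, and read off quantiles of the recomputed LR.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: scalar parameter theta, scalar "nuisance" mu restricted
# to |mu| <= 0.5 (so theta is set- rather than point-identified), and
# data X ~ N(theta + mu, 1). All values are illustrative.
n = 300
x = rng.normal(loc=0.3, scale=1.0, size=n)
theta_grid = np.linspace(-2.0, 2.0, 201)
mu_grid = np.linspace(-0.5, 0.5, 101)

def profiled_loglik(w):
    """Weighted log-likelihood profiled over mu, for every theta."""
    sw, swx, swx2 = w.sum(), (w * x).sum(), (w * x * x).sum()
    shift = theta_grid[:, None] + mu_grid[None, :]   # theta + mu
    ll = -0.5 * (swx2 - 2.0 * shift * swx + shift**2 * sw)
    return ll.max(axis=1)                            # profile out mu

# Sample profiled LR for the null value theta0 = 0.
ones = np.ones(n)
ll0 = profiled_loglik(ones)
i0 = np.argmin(np.abs(theta_grid - 0.0))
stat = 2.0 * (ll0.max() - ll0[i0])
i_hat = np.argmax(ll0)                               # sample optimum

# Weighted bootstrap: iid Exponential(1) weights (positive, mean one);
# recompute the profiled LR, recentered at the sample optimum.
boot = []
for _ in range(500):
    w = rng.exponential(1.0, size=n)
    llw = profiled_loglik(w)
    boot.append(2.0 * (llw.max() - llw[i_hat]))
crit = np.quantile(boot, 0.95)
print(f"profiled LR = {stat:.3f}, bootstrap 95% critical value = {crit:.3f}")
```

The null here lies in the interior of the identified set, so both the statistic and the bootstrap draws are close to zero; the same recipe applies when the null is near the boundary, where the nonstandard limit matters.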

Randomized controlled trials (RCTs) are routinely used in medicine and are becoming more popular in economics. Data from RCTs are used to learn about treatment effects of interest. This paper studies what one can learn about the average treatment response (ATR) and average treatment effect (ATE) from RCT data under various assumptions and compares that to using observational data. We find that data from an RCT need not point identify the ATR or ATE because of selection into an RCT, as subjects are not randomly assigned from the population of interest to participate in the RCT. This external validity problem is the primary one we study. So, assuming internal validity of the RCT, we study the identified features of these treatment effects under a variety of weak assumptions, such as mean independence of response from participation, an instrumental variable assumption, or a linear effect of participation on response. In particular, we provide assumptions sufficient to point identify the ATR or the ATE from RCT data and also shed light on when the sign of the ATE can be identified. We then characterize assumptions under which RCT data provide more information than observational data.

Keywords: randomized controlled trials, experiments, treatment effect, identification
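The external-validity problem has a simple worst-case arithmetic when the outcome is bounded: the RCT pins down the treated-outcome mean for participants only, and the non-participants' mean can be anywhere in the logical range. The numbers below are made up purely for illustration.

```python
# Worst-case (Manski-style) bounds on the population ATR when the RCT
# identifies the treated-outcome mean only for participants (S = 1).
p_participate = 0.10          # P(S = 1): share of population in the RCT
mean_rct = 0.62               # E[Y(1) | S = 1], identified by the RCT
y_lo, y_hi = 0.0, 1.0         # known logical bounds on the outcome

# E[Y(1)] = P(S=1)*E[Y(1)|S=1] + P(S=0)*E[Y(1)|S=0], and the last term
# is unrestricted within [y_lo, y_hi] without further assumptions.
atr_lo = p_participate * mean_rct + (1 - p_participate) * y_lo
atr_hi = p_participate * mean_rct + (1 - p_participate) * y_hi
print(f"ATR in [{atr_lo:.3f}, {atr_hi:.3f}]")
```

With a small participation share the bounds are wide, which is exactly why the additional assumptions in the abstract (mean independence, an instrument, linearity) have identifying power.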

Empirical models with social interactions or peer effects allow the outcome of an individual to depend on the outcomes, choices, treatments, and/or characteristics of the other individuals in the group. We document the subtle relationship between the data and the objects of interest in models with interactions in small groups, and show that some econometric assumptions that are direct extensions from models of individualistic treatment response implicitly entail strong behavioral assumptions. We point out two such econometric assumptions: EITR, or empirical individual treatment response, and EGTR, or empirical group treatment response. In some cases EITR and/or EGTR are inconsistent with a class of plausible economic models for the interaction under consideration; in other cases these econometric assumptions imply significant restrictions on behavior that are not necessarily implied by economic theory. We illustrate this using relevant examples of interaction in immunization and disease, and in educational achievement. We conclude that it is important for applications in this class of models with small group interactions to recognize the restrictions some assumptions impose on behavior.

The linear-in-means model is often used in applied work to empirically study the role of social interactions and peer effects. We document the subtle relationship between the parameters of the linear-in-means model and the parameters relevant for policy analysis, and study the interpretations of the model under two different scenarios. First, we show that without further assumptions on the model, the direct analogs of standard policy relevant parameters are either undefined or are complicated functions not only of the parameters of the linear-in-means model but also of the parameters of the distribution of the unobservables. This complicates the interpretation of the results. Second, and as in the literature on simultaneous equations, we show that it is possible to interpret the parameters of the linear-in-means model under additional assumptions on the social interaction, mainly that this interaction is the result of a particular *economic game*. The assumptions on which this game is built, however, rule out economically relevant models. We illustrate this using examples of social interactions in educational achievement. We conclude that care should be taken when estimating, and especially when interpreting, coefficients from linear-in-means models.
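The simultaneity at the heart of the linear-in-means model is easy to see in a small simulation: each member's outcome depends on the mean of the others' outcomes, so outcomes must be solved for jointly. A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear-in-means outcome in one group of size m:
#   y_i = alpha + beta * mean(y_{-i}) + gamma * x_i + eps_i,
# solved simultaneously as y = (I - beta*W)^{-1} (alpha + gamma*x + eps),
# where row i of W averages the other group members' outcomes.
m, alpha, beta, gamma = 5, 1.0, 0.4, 2.0
W = (np.ones((m, m)) - np.eye(m)) / (m - 1)
x = rng.normal(size=m)
eps = rng.normal(scale=0.1, size=m)
y = np.linalg.solve(np.eye(m) - beta * W, alpha + gamma * x + eps)

# Check: each outcome satisfies its best-response equation exactly.
resid = y - (alpha + beta * (W @ y) + gamma * x + eps)
print(np.max(np.abs(resid)))  # numerically zero
```

The reduced form mixes beta with the whole structure of the group, which is one way to see why policy counterfactuals are not read directly off the estimated coefficients.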

Identification in econometric models maps prior assumptions and the data to information about a parameter of interest. The partial identification approach to inference recognizes that this process should not be reduced to a binary answer about whether the parameter is point identified. Rather, given the data, the partial identification approach characterizes the informational content of various assumptions by providing a menu of estimates, each based on a different set of assumptions, some more plausible than others. Of course, more assumptions beget more information, so stronger conclusions can be made at the expense of more assumptions. The partial identification approach advocates a more fluid view of identification and hence provides the empirical researcher with methods to help study the spectrum of information that can be harnessed about a parameter of interest using a menu of assumptions. This approach links conclusions drawn from various empirical models to the sets of assumptions made in a transparent way. It allows researchers to examine the informational content of their assumptions and the impact on the inferences made. Naturally, with finite sample sizes, this approach leads to statistical complications, as one needs to characterize sampling uncertainty in models that do not point identify a parameter. Therefore, new methods for inference have been developed. These methods construct confidence sets for partially identified parameters and confidence regions for sets of parameters, or identifiable sets.

We develop a Bayesian approach to inference in a class of partially identified econometric models. Models in this class have a point identified parameter μ (e.g., characteristics of the distribution of the data) and a partially identified parameter of interest θ (e.g., parameters of the model); further, if μ is known then the identified set for θ is known. Many instances of this class are commonly used in empirical work. Our approach maps, via the mapping between μ and θ, and without the specification of a prior for θ, the posterior for the point identified parameter μ to posterior probability statements about the identified set for θ, which is the quantity about which the data are informative. Thus, among other examples, we can report the posterior probability that a particular parameter value (or a set of parameter values, or a function of the parameter) is in the identified set. The paper develops general results on large sample approximations to these posterior probabilities, which illustrate how the posterior probabilities over the identified set are revised by the data. The paper establishes conditions under which the credible sets for the identified set are also valid frequentist confidence sets, providing a connection between Bayesian and frequentist inference in partially identified models (including for functions of the partially identified parameter). The approach is computationally attractive even in high-dimensional models: it avoids an exhaustive search over the parameter space (or "guess and verify"), partly by using existing MCMC methods to simulate draws from the posterior for μ. The paper also considers issues related to specification testing and estimation of misspecified models. We illustrate our approach via a set of Monte Carlo experiments and an empirical application to a binary entry game involving airlines.

JEL codes: C10, C11. Keywords: partial identification, identified set, criterion function, posterior, Bayesian inference
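The mapping from the posterior for μ to probability statements about the identified set can be illustrated with a textbook missing-data example (not one of the paper's applications); here bootstrap draws stand in for posterior draws of μ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy missing-data model: Y in [0,1] is observed with probability p.
# mu = (p, E[Y | observed]) is point identified, and the identified set
# for theta = E[Y] is the interval [p*m, p*m + (1 - p)].
n = 500
observed = rng.uniform(size=n) < 0.8
y = np.where(observed, rng.uniform(size=n), np.nan)

# Draws of mu (bootstrap here, MCMC posterior draws in the paper) each
# map to an identified set; record whether theta0 lies inside it.
theta0 = 0.5
hits = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    p_b = observed[idx].mean()
    m_b = np.nanmean(y[idx])
    lo, hi = p_b * m_b, p_b * m_b + (1.0 - p_b)
    hits.append(lo <= theta0 <= hi)

print("approx. posterior probability that theta0 is in the identified set:",
      np.mean(hits))
```

No prior is placed on θ itself: the randomness comes entirely from the draws of the point identified μ, exactly as in the approach described above.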

We study inference on parameters in censored panel data models, where the censoring can depend on both observable and unobservable variables in arbitrary ways. Under some general conditions, we characterize the information the model and data contain about the parameters of interest by deriving the identified sets: every parameter that belongs to these sets is observationally equivalent to the true parameter, the one that generated the data. We consider two separate sets of assumptions (two models): the first imposes stationarity on the unobserved disturbance terms; the second is a nonstationary model with a conditional independence restriction. Based on the characterizations of the identified sets, we provide a valid inference procedure that is shown to yield correct confidence sets by inverting stochastic dominance tests. We also show how our results extend to empirically interesting dynamic versions of the model with both lagged observed outcomes and lagged indicators, and to models with factor loads. In addition, for both models, we provide sufficient conditions for point identification in terms of support conditions. The paper then examines the sizes of the identified sets, and a Monte Carlo exercise shows reasonable small sample performance of our procedures.

Parametric mixture models are commonly used in applied work, especially in empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines inference on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification and (2) parameters (for example, mixing probabilities) whose true values may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.

JEL classification C. Keywords: Finite mixtures; Parametric bootstrap; Profiled likelihood ratio statistic.
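A stripped-down version of the parametric bootstrap fix: with known component densities and the mixing probability as the only parameter (much simpler than the paper's setting with nuisance parameters), simulate from the fitted null model and recompute the LR statistic to calibrate its critical value.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-component mixture pi*N(0,1) + (1-pi)*N(2,1) with known component
# densities; only the mixing probability pi is unknown. We test
# H0: pi = 0.5 and calibrate the LR critical value by parametric bootstrap.
pi_grid = np.linspace(0.0, 1.0, 201)

def lr_stat(z, pi0):
    phi0 = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    phi2 = np.exp(-0.5 * (z - 2.0) ** 2) / np.sqrt(2 * np.pi)
    ll = np.log(np.outer(pi_grid, phi0)
                + np.outer(1.0 - pi_grid, phi2)).sum(axis=1)
    ll0 = np.log(pi0 * phi0 + (1.0 - pi0) * phi2).sum()
    return 2.0 * (ll.max() - ll0)

def simulate(pi, n):
    comp = rng.uniform(size=n) < pi
    return np.where(comp, rng.normal(size=n), 2.0 + rng.normal(size=n))

n, pi0 = 400, 0.5
z = simulate(pi0, n)                     # data generated under the null
stat = lr_stat(z, pi0)

# Parametric bootstrap: simulate from the null model and recompute the
# LR statistic; its 95% quantile is the bootstrap critical value.
boot = [lr_stat(simulate(pi0, n), pi0) for _ in range(300)]
crit = np.quantile(boot, 0.95)
print(f"LR = {stat:.3f}, bootstrap 95% critical value = {crit:.3f}")
```

This design is regular (pi interior, components well separated), so the bootstrap critical value lands near the χ²(1) quantile 3.84; the bootstrap becomes genuinely useful near the singular regions, where the pointwise asymptotics break down.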

We introduce a notion of median uncorrelation that is a natural extension of mean (linear) uncorrelation. A scalar random variable Y is median uncorrelated with a k-dimensional random vector X if and only if the slope from an LAD regression of Y on X is zero. Using this simple definition, we characterize properties of median uncorrelated random variables and introduce a notion of multivariate median uncorrelation. We provide measures of median uncorrelation that are similar to the linear correlation coefficient and the coefficient of determination. We also extend this median uncorrelation to other loss functions. Just as two-stage least squares exploits mean uncorrelation between an instrument vector and the error to derive consistent estimators for parameters in linear regressions with endogenous regressors, the main result of this paper shows how a median uncorrelation assumption between an instrument vector and the error can similarly be used to derive consistent estimators in these linear models with endogenous regressors. We also show how median uncorrelation can be used in linear panel models with quantile restrictions and in linear models with measurement errors.
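The main result can be illustrated with a small sketch: an instrument z independent of the error (hence median-uncorrelated with it) identifies the slope in a linear model with an endogenous regressor as the value of b that makes the LAD slope of the residual on z zero. The coordinate-descent LAD solver below is a quick illustrative implementation, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(5)

# y = b*x + u with an endogenous regressor x (x depends on u) and an
# instrument z independent of u, hence median-uncorrelated with it.
n, b_true = 500, 1.5
z = rng.normal(size=n)
u = rng.standard_t(3, size=n)      # heavy-tailed error
x = z + u                          # endogeneity: x is correlated with u
y = b_true * x + u

def weighted_median(values, weights):
    order = np.argsort(values)
    v, wt = values[order], weights[order]
    cum = np.cumsum(wt)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def lad_slope(e, w, iters=25):
    """Slope from an LAD regression of e on w, by coordinate descent."""
    a, s = np.median(e), 0.0
    for _ in range(iters):
        s = weighted_median((e - a) / w, np.abs(w))
        a = np.median(e - s * w)
    return s

# Pick b so that the residual y - b*x is median-uncorrelated with z,
# i.e. the LAD slope of the residual on the instrument is zero.
b_grid = np.linspace(0.5, 2.5, 41)
slopes = np.array([lad_slope(y - b * x, z) for b in b_grid])
b_hat = b_grid[np.argmin(np.abs(slopes))]
print(f"IV-LAD estimate: {b_hat:.3f} (truth {b_true})")
```

This mirrors how two-stage least squares uses a mean uncorrelation moment, with the LAD slope replacing the linear covariance.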

This paper studies the identification of best response functions in binary games without making strong parametric assumptions about the payoffs. The best response function gives the utility maximizing response to a decision of the other players. This is analogous to the response function in the treatment-response literature, taking the decision of the other players as the treatment, except that the best response function has additional structure implied by the associated utility maximization problem. Further, the relationship between the data and the best response function is not the same as the relationship in the treatment-response literature between the data and the response function. We focus especially on the case of a complete information entry game with two firms. We also discuss the case of an entry game with many firms, non-entry games, and incomplete information. Our analysis of the entry game is based on the observation of realized entry decisions, which we then link to the best response functions under various assumptions concerning the level of rationality of the firms (including the assumption of Nash equilibrium play), the symmetry of the payoffs between firms, and whether mixed strategies are admitted.
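For the two-firm complete information entry game, best responses and pure-strategy equilibria can be enumerated directly; with competitive effects (delta < 0) the game below has two equilibria, which is precisely the multiplicity that complicates the link between observed entry and best response functions. Parameter values are illustrative only.

```python
from itertools import product

# Two-firm complete-information entry game: firm i's payoff from entry
# is alpha_i + delta*y_j + eps_i (zero from staying out); delta < 0
# means the rival's entry lowers profits. Values are made up.
alpha = [0.5, 0.3]
eps = [0.2, 0.4]
delta = -1.0

def best_response(i, y_other):
    """Enter (1) iff entry is profitable given the rival's action."""
    return int(alpha[i] + delta * y_other + eps[i] >= 0)

# Enumerate pure-strategy Nash equilibria.
equilibria = [(y1, y2)
              for y1, y2 in product([0, 1], repeat=2)
              if y1 == best_response(0, y2) and y2 == best_response(1, y1)]
print(equilibria)   # two equilibria: exactly one firm enters
```

Because either firm can be the sole entrant in equilibrium, realized entry decisions alone do not pin down the best response functions without further assumptions, such as equilibrium selection or payoff symmetry.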