In complicated/nonlinear parametric models, it is generally hard to know whether the model parameters are point identified. We provide computationally attractive procedures to construct confidence sets (CSs) for identified sets of full parameters and of subvectors in models defined through a likelihood or a vector of moment equalities or inequalities. These CSs are based on level sets of optimal sample criterion functions (such as likelihood or optimally-weighted or continuously-updated GMM criterion functions). The level sets are constructed using cutoffs that are computed via Monte Carlo (MC) simulations directly from the quasi-posterior distributions of the criterion functions. We establish new Bernstein-von Mises (or Bayesian Wilks) type theorems for the quasi-posterior distributions of the quasi-likelihood ratio (QLR) and profile QLR in partially-identified regular models and some non-regular models. These results imply that our MC CSs have exact asymptotic frequentist coverage for identified sets of full parameters and of subvectors in partially-identified regular models, and have valid but potentially conservative coverage in models with reduced-form parameters on the boundary. Our MC CSs for identified sets of subvectors are shown to have exact asymptotic coverage in models with singularities. We also provide results on uniform validity of our CSs over classes of DGPs that include point and partially identified models. We demonstrate good finite-sample coverage properties of our procedures in two simulation experiments. Finally, our procedures are applied to two non-trivial empirical examples: an airline entry game and a model of trade flows.
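The criterion-based construction can be illustrated in a stylized one-parameter GMM example. The sketch below is ours, not the paper's implementation: a random-walk Metropolis chain stands in for whatever sampler one actually uses, the prior is implicitly flat, and all function names are hypothetical. The steps are: draw from the quasi-posterior of the criterion, take the simulated 95% quantile of the QLR as the cutoff, and report the criterion level set as the CS.

```python
import numpy as np

rng = np.random.default_rng(0)

def criterion(theta, data):
    """Toy optimally-weighted GMM criterion for the single moment E[Y] - theta = 0."""
    g = np.mean(data) - theta
    return data.shape[0] * g**2 / np.var(data)

data = rng.normal(loc=1.0, size=500)

def quasi_posterior(data, n_draws=5000, step=0.2):
    """Random-walk Metropolis draws targeting exp(-criterion/2) (flat prior)."""
    theta = np.mean(data)
    q = criterion(theta, data)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        prop = theta + step * rng.normal()
        q_prop = criterion(prop, data)
        if rng.uniform() < np.exp(-0.5 * (q_prop - q)):
            theta, q = prop, q_prop
        draws[i] = theta
    return draws

draws = quasi_posterior(data)
q_min = min(criterion(t, data) for t in draws)          # approximate criterion minimum
qlr = np.array([criterion(t, data) - q_min for t in draws])
cutoff = np.quantile(qlr, 0.95)                          # MC cutoff from the quasi-posterior QLR

# 95% CS: the level set of the criterion at the simulated cutoff.
grid = np.linspace(0.0, 2.0, 401)
mask = np.array([criterion(t, data) - q_min <= cutoff for t in grid])
cs = grid[mask]
print(cs.min(), cs.max())
```

In this point-identified toy the cutoff is close to the chi-square(1) 95% quantile; the point of the MC construction is that the same recipe applies when the identified set is not a singleton.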

CCT.pdf
We develop a Bayesian approach to inference in a class of partially identified econometric models. Models in this class have a point identified parameter μ (e.g., characteristics of the distribution of the data) and a partially identified parameter of interest θ (e.g., parameters of the model); further, if μ is known then the identified set for θ is known. Many instances of this class are commonly used in empirical work. Our approach maps, via the mapping between μ and θ, and without the specification of a prior for θ, the posterior for the point identified parameter μ to posterior probability statements about the identified set for θ, which is the quantity about which the data are informative. Thus, among other examples, we can report the posterior probability that a particular parameter value (or a set of parameter values, or a function of the parameter) is in the identified set. The paper develops general results on large sample approximations to these posterior probabilities, which illustrate how the posterior probabilities over the identified set are revised by the data. The paper establishes conditions under which the credible sets for the identified set are also valid frequentist confidence sets, providing a connection between Bayesian and frequentist inference in partially identified models (including for functions of the partially identified parameter). The approach is computationally attractive even in high-dimensional models: it avoids an exhaustive search over the parameter space (or “guess and verify”), partly by using existing MCMC methods to simulate draws from the posterior for μ. The paper also considers issues related to specification testing and estimation of misspecified models. We illustrate our approach via a set of Monte Carlo experiments and an empirical application to a binary entry game involving airlines. JEL codes: C10, C11. Keywords: partial identification, identified set, criterion function, posterior, Bayesian inference
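In the simplest member of this class, interval-observed outcome data, the construction reduces to a few lines. The sketch below is illustrative only: the Bayesian bootstrap stands in for whatever posterior for μ one actually uses, and the names are ours. Draws of μ = (E[Y_L], E[Y_U]) are mapped to identified sets [μ₁, μ₂], and posterior probabilities of set membership are simple Monte Carlo frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy interval-data model: theta = E[Y], with Y observed only as [YL, YU],
# so mu = (E[YL], E[YU]) is point identified and the identified set is [mu1, mu2].
n = 300
yl = rng.normal(0.0, 1.0, n)
yu = yl + rng.uniform(0.5, 1.5, n)

# Posterior draws for mu via the Bayesian bootstrap (Dirichlet reweighting).
def mu_draws(n_draws=4000):
    w = rng.dirichlet(np.ones(n), size=n_draws)   # (n_draws, n) weight matrix
    return w @ yl, w @ yu                          # draws of (mu1, mu2)

mu1, mu2 = mu_draws()

def post_prob_in_set(theta0):
    """Posterior probability that theta0 lies in the identified set [mu1, mu2]."""
    return np.mean((mu1 <= theta0) & (theta0 <= mu2))

print(post_prob_in_set(0.5))   # a point well inside the identified set: probability near 1
print(post_prob_in_set(5.0))   # a point far outside: probability near 0
```

No prior for θ appears anywhere: only the posterior for the point identified μ is simulated, and statements about θ are statements about the random set [μ₁, μ₂].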

kt-bayesian.pdf
We study inference on parameters in censored panel data models, where the censoring can depend on both observable and unobservable variables in arbitrary ways. Under some general conditions, we characterize the information the model and data contain about the parameters of interest by deriving the identified sets: every parameter that belongs to these sets is observationally equivalent to the true parameter, the one that generated the data. We consider two separate sets of assumptions (two models): the first imposes stationarity on the unobserved disturbance terms; the second is a nonstationary model with a conditional independence restriction. Based on the characterizations of the identified sets, we provide an inference procedure that is shown to yield valid confidence sets by inverting stochastic dominance tests. We also show how our results extend to empirically interesting dynamic versions of the model with both lagged observed outcomes and lagged indicators, as well as to models with factor loadings. In addition, for both models, we provide sufficient conditions for point identification in terms of support conditions. The paper then examines the sizes of the identified sets, and a Monte Carlo exercise shows reasonable small-sample performance of our procedures.
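The test-inversion logic behind these confidence sets can be sketched in a drastically simplified setting: no censoring, independent draws across two periods, and a two-sample Kolmogorov-Smirnov test of equality of distributions in place of the paper's stochastic dominance tests. A candidate coefficient b enters the confidence set when the implied period-1 and period-2 residuals cannot be distinguished in distribution, which is what stationarity requires at the true parameter.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

# Two-period toy model y_t = x_t * beta + u_t with stationary errors and
# regressor distributions that differ across periods (censoring is dropped here).
n, beta = 500, 1.0
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 3.0, n)
u1 = rng.normal(0.0, 1.0, n)
u2 = rng.normal(0.0, 1.0, n)
y1 = x1 * beta + u1
y2 = x2 * beta + u2

def in_cs(b, level=0.01):
    """Keep b in the CS if the two residual samples look alike in distribution."""
    r1, r2 = y1 - x1 * b, y2 - x2 * b
    return ks_2samp(r1, r2).pvalue >= level

grid = np.linspace(0.0, 2.0, 81)
cs = [b for b in grid if in_cs(b)]
print(min(cs), max(cs))
```

Wrong values of b inflate the residual variances by different amounts in the two periods, so the test rejects them and the surviving grid points form a confidence set around the truth.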

paper-final2015.pdf

Parametric mixture models are commonly used in applied work, especially in empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines inference on these proportions (the mixing probabilities) in a simple mixture model with nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification and (2) parameters (for example, the mixing probabilities) whose true values may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularity where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inference and projection methods.
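The parametric bootstrap fix can be sketched in a deliberately simplified mixture: two known normal components with only the mixing probability p unknown, and data generated at the boundary p = 1, exactly the situation where the usual chi-square limit for the (P)LR fails. All names below are ours, and the example is a sketch rather than the paper's procedure: the LR statistic is recomputed on samples drawn from the fitted model, and its bootstrap quantile replaces the pointwise asymptotic critical value.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(2)

# Two known components, unknown mixing probability p (possibly on the boundary).
def loglik(p, x):
    return np.sum(np.log(p * norm.pdf(x, 0, 1) + (1 - p) * norm.pdf(x, 2, 1)))

def mle(x):
    res = minimize_scalar(lambda p: -loglik(p, x), bounds=(0.0, 1.0), method="bounded")
    return res.x

def lr_stat(x, p0):
    # Clip tiny negative values caused by optimizer tolerance at the boundary.
    return max(0.0, 2 * (loglik(mle(x), x) - loglik(p0, x)))

def draw(p, size):
    z = rng.uniform(size=size) < p
    return np.where(z, rng.normal(0, 1, size), rng.normal(2, 1, size))

# Data generated at the boundary p = 1, where chi-square asymptotics fail.
x = draw(1.0, 200)
p_hat = mle(x)
t_obs = lr_stat(x, p0=1.0)

# Parametric bootstrap: resample from the *fitted* model and recompute the LR.
boot = np.array([lr_stat(draw(p_hat, 200), p0=1.0) for _ in range(200)])
crit = np.quantile(boot, 0.95)
print(p_hat, t_obs, crit, t_obs > crit)
```

Because the bootstrap distribution is generated from the fitted model, it automatically reflects the boundary and singularity effects that distort the fixed chi-square approximation.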

JEL classification C. Keywords: Finite mixtures; Parametric bootstrap; Profiled likelihood ratio statistic.

We introduce a notion of median uncorrelation that is a natural extension of mean (linear) uncorrelation. A scalar random variable Y is median uncorrelated with a k-dimensional random vector X if and only if the slope from an LAD regression of Y on X is zero. Using this simple definition, we characterize properties of median uncorrelated random variables and introduce a notion of multivariate median uncorrelation. We provide measures of median uncorrelation that are analogous to the linear correlation coefficient and the coefficient of determination, and we extend median uncorrelation to other loss functions. Just as two-stage least squares exploits mean uncorrelation between an instrument vector and the error to derive consistent estimators for parameters in linear regressions with endogenous regressors, the main result of this paper shows how a median uncorrelation assumption between an instrument vector and the error can similarly be used to derive consistent estimators in these linear models with endogenous regressors. We also show how median uncorrelation can be used in linear panel models with quantile restrictions and in linear models with measurement errors.
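The defining equivalence, that Y is median uncorrelated with X exactly when the LAD slope of Y on X is zero, is easy to check numerically. A minimal sketch (the function names are ours, and a generic Nelder-Mead minimizer of the absolute-error loss stands in for a proper LAD solver):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def lad_slope(y, x):
    """Slope from an LAD (median) regression of y on an intercept and x."""
    def loss(b):
        return np.mean(np.abs(y - b[0] - b[1] * x))
    return minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead").x[1]

n = 2000
x = rng.normal(size=n)
e = rng.standard_t(df=3, size=n)       # median-zero error, independent of x

y_uncorr = 1.0 + e                     # Y median uncorrelated with X: LAD slope near 0
y_corr = 1.0 + 0.8 * x + e             # not median uncorrelated: LAD slope near 0.8

print(lad_slope(y_uncorr, x))
print(lad_slope(y_corr, x))
```

Note the analogy with mean uncorrelation: replacing the absolute loss in `loss` with a squared loss recovers the OLS slope, which is zero exactly under mean uncorrelation.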

kst.pdf