Research

Mislearning from Censored Data: The Gambler’s Fallacy in Optimal-Stopping Problems, December 2018.
[download pdf]  [online appendix]  [arXiv]

Abstract

I study endogenous learning dynamics for people expecting systematic reversals from random sequences — the “gambler’s fallacy.” Biased agents face an optimal-stopping problem, such as managers conducting sequential interviews. They are uncertain about the underlying distribution (e.g. talent distribution in the labor pool) and must learn its parameters from previous agents’ histories. Agents stop when early draws are deemed “good enough,” so predecessors’ histories contain negative streaks but not positive streaks. Since biased learners understate the likelihood of consecutive below-average draws, histories induce pessimistic beliefs about the distribution’s mean. When early agents decrease their acceptance thresholds due to pessimism, later learners will become more surprised by the lack of positive reversals in their predecessors’ histories, leading to even more pessimistic inferences and even lower acceptance thresholds — a positive-feedback loop. Agents who are additionally uncertain about the distribution’s variance believe in fictitious variation (exaggerated variance) to an extent depending on the severity of data censoring. When payoffs are convex in the draws (e.g. managers can hire previously rejected interviewees), variance uncertainty provides another channel of positive feedback between past and future thresholds.
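
For intuition about the censoring mechanism, here is a minimal simulation sketch. It is not the paper's model: the linear reversal rule (after a draw \(x\), a biased learner predicts the next draw near \(\mu-\alpha(x-\mu)\)) and all parameter values below are illustrative assumptions. Histories are censored at an acceptance threshold, and the mean is then fit by maximum likelihood under the biased model; the biased fit should land below the truth, echoing the pessimism result above.

```python
# Illustrative sketch only -- not the paper's model. Agents draw i.i.d.
# offers from N(mu, 1) and stop at the first draw >= THRESHOLD, so each
# recorded history ends in its single above-threshold draw. A gambler's-
# fallacy learner expects reversals: after seeing x, the next draw is
# predicted at mu - ALPHA * (x - mu). ALPHA and THRESHOLD are arbitrary.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
MU_TRUE, ALPHA, THRESHOLD = 0.0, 0.5, 1.0

def draw_history():
    """One agent's record: draws up to and including the first >= THRESHOLD."""
    h = []
    while True:
        x = rng.normal(MU_TRUE, 1.0)
        h.append(x)
        if x >= THRESHOLD:
            return h

histories = [draw_history() for _ in range(2000)]

def neg_loglik(mu, alpha):
    """Negative Gaussian log-likelihood for a learner expecting reversals."""
    ll = 0.0
    for h in histories:
        pred = mu                         # unconditional mean for the first draw
        for x in h:
            ll -= 0.5 * (x - pred) ** 2   # N(pred, 1) kernel, constants dropped
            pred = mu - alpha * (x - mu)  # fallacy: expect reversal toward mu
    return -ll

correct_fit = minimize_scalar(lambda m: neg_loglik(m, 0.0), bounds=(-3, 3), method="bounded").x
biased_fit = minimize_scalar(lambda m: neg_loglik(m, ALPHA), bounds=(-3, 3), method="bounded").x
print(f"true mean {MU_TRUE:.2f} | correct-model fit {correct_fit:.2f} | biased fit {biased_fit:.2f}")
```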

 

Network Structure and Naive Sequential Learning (with Krishna Dasaratha).
Revision requested at Theoretical Economics, August 2018.
[download pdf]  [slides]  [arXiv]  [pre-registration]

Abstract

We study a sequential learning model featuring naive agents on a network. Agents wrongly believe their predecessors act solely on private information, so they neglect redundancies among observed actions. We provide a simple linear formula expressing agents’ actions in terms of network paths and use this formula to completely characterize the set of networks guaranteeing eventual correct learning. This characterization shows that on almost all networks, disproportionately influential early agents can cause herding on incorrect actions. Going beyond existing social-learning results, we compute the probability of such mislearning exactly. This lets us compare likelihoods of incorrect herding, and hence expected welfare losses, across network structures. The probability of mislearning increases when link densities are higher and when networks are more integrated. In partially segregated networks, divergent early signals can lead to persistent disagreement between groups. We conduct an experiment and find that the accuracy gain from social learning is twice as large on sparser networks, which is consistent with naive inference but inconsistent with the rational-learning model.
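
A toy numerical sketch of the redundancy-neglect mechanism (mine, not the paper's linear formula or experimental design): naive agents simply average their own signal with every observed predecessor action, as if each were an independent signal, on random observation networks of two densities. Over-counting of early signals is worse on the dense network, in line with the link-density comparative static; all parameters are arbitrary.

```python
# Toy sketch: Gaussian state and signals, naive averaging, random networks.
# Not the paper's model or experiment; parameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
N_AGENTS, TRIALS = 40, 3000

def last_agent_mse(link_prob):
    """Mean squared error of the final agent's action on a random network."""
    err = 0.0
    for _ in range(TRIALS):
        theta = rng.normal()                      # unknown state
        signals = theta + rng.normal(size=N_AGENTS)
        actions = np.empty(N_AGENTS)
        for i in range(N_AGENTS):
            obs = [actions[j] for j in range(i) if rng.random() < link_prob]
            # naive inference: treat each observed action as a fresh signal
            actions[i] = np.mean([signals[i], *obs])
        err += (actions[-1] - theta) ** 2
    return err / TRIALS

print(f"sparse (p = 0.1): MSE = {last_agent_mse(0.1):.3f}")
print(f"dense  (p = 0.9): MSE = {last_agent_mse(0.9):.3f}")
```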

 

Player-Compatible Equilibrium (with Drew Fudenberg), December 2018.
[download pdf]  [arXiv]

Abstract

Player-Compatible Equilibrium (PCE) imposes cross-player restrictions on the magnitudes of the players' “trembles” onto different strategies. These restrictions capture the idea that trembles correspond to deliberate experiments by agents who are unsure of the prevailing distribution of play. PCE selects intuitive equilibria in a number of examples where trembling-hand perfect equilibrium (Selten, 1975) and proper equilibrium (Myerson, 1978) have no bite. We show that rational learning and some near-optimal heuristics imply our compatibility restrictions in a steady-state setting.

 

Learning and Type Compatibility in Signaling Games (with Drew Fudenberg).
Econometrica, July 2018.
[download pdf]  [online appendix]  [publisher's DOI]  [arXiv]

Abstract

Which equilibria will arise in signaling games depends on how the receiver interprets deviations from the path of play. We develop a micro-foundation for these off-path beliefs, and an associated equilibrium refinement, in a model where equilibrium arises through non-equilibrium learning by populations of patient and long-lived senders and receivers. In our model, young senders are uncertain about the prevailing distribution of play, so they rationally send out-of-equilibrium signals as experiments to learn about the behavior of the population of receivers. Differences in the payoff functions of the types of senders generate different incentives for these experiments. Using the Gittins index (Gittins, 1979), we characterize which sender types use each signal more often, leading to a constraint on the receiver’s off-path beliefs based on “type compatibility” and hence a learning-based equilibrium selection.
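
Since the Gittins index is the key tool here, a self-contained sketch of the standard index computation may be useful. This is the textbook calibration against a safe arm, for a Bernoulli arm with a Beta posterior, and is not the paper's signaling model; the discount factor, horizon truncation, and prior parameters are all illustrative assumptions.

```python
# Standard Gittins-index computation for a Bernoulli arm with a Beta(a, b)
# posterior, by bisecting on the payoff lam of a safe arm (the retirement
# formulation). The horizon is truncated at `depth` pulls; beta**depth is
# tiny, so the truncation error is negligible. Illustrative only.
from functools import lru_cache

def gittins_index(a, b, beta=0.9, depth=60, tol=1e-4):
    def value(lam):
        @lru_cache(maxsize=None)
        def V(a_, b_, d):
            retire = lam / (1 - beta)       # switch to the safe arm forever
            if d == 0:
                return retire
            p = a_ / (a_ + b_)              # posterior success probability
            experiment = (p * (1 + beta * V(a_ + 1, b_, d - 1))
                          + (1 - p) * beta * V(a_, b_ + 1, d - 1))
            return max(retire, experiment)
        return V(a, b, depth)

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if value(lam) > lam / (1 - beta) + 1e-9:
            lo = lam                        # experimenting still beats retiring
        else:
            hi = lam
    return (lo + hi) / 2

print(f"Beta(1, 1): {gittins_index(1, 1):.3f}")  # roughly 0.70 at beta = 0.9
print(f"Beta(1, 3): {gittins_index(1, 3):.3f}")  # more pessimistic prior, lower index
```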

 

Learning and Equilibrium Refinements in Signaling Games (with Drew Fudenberg), September 2017.
[download pdf]  [arXiv]

Abstract

We propose two new signaling-game refinements that are microfounded in a model of patient Bayesian learning. Agents are born into player roles and play the signaling game against a random opponent each period. Inexperienced agents know their opponents’ payoff functions but not the prevailing distribution of opponents’ play. One refinement corresponds to an upper bound on the set of possible learning outcomes while the other provides a lower bound. Both refinements are closely related to divine equilibrium (Banks and Sobel, 1987).

 

Bayesian Posteriors for Arbitrarily Rare Events (with Drew Fudenberg and Lorens Imhof).
Proceedings of the National Academy of Sciences, May 2017.
[download pdf]  [publisher's DOI]  [arXiv]

Abstract

We study how much data a Bayesian observer needs to correctly infer the relative likelihoods of two events when both events are arbitrarily rare. Each period, either a blue die or a red die is tossed. The two dice land on side \(1\) with unknown probabilities \(p_1\) and \(q_1\), which can be arbitrarily low. Given a data-generating process where \(p_1\ge c q_1\), we are interested in how much data is required to guarantee that with high probability the observer's Bayesian posterior mean for \(p_1\) exceeds \((1-\delta)c\) times that for \(q_1\). If the prior densities for the two dice are positive on the interior of the parameter space and behave like power functions at the boundary, then for every \(\epsilon>0,\) there exists a finite \(N\) so that the observer obtains such an inference after \(n\) periods with probability at least \(1-\epsilon\) whenever \(np_1\ge N\). The condition on \(n\) and \(p_1\) is the best possible. The result can fail if one of the prior densities converges to zero exponentially fast at the boundary.
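
A quick Monte Carlo sketch of the theorem's content, with illustrative parameters: both priors are Beta(1, 1), whose densities are positive and behave like power functions at the boundary, and we track how often the posterior-mean comparison succeeds as \(np_1\) grows. The specific values of \(c\), \(\delta\), and \(q_1\) below are arbitrary choices, with \(p_1 = cq_1\) taken at the boundary of the hypothesis.

```python
# Monte Carlo sketch with illustrative parameters. Beta(1, 1) priors give
# posterior means (1 + successes) / (2 + n). The success rate of the
# inference should climb toward 1 as n * p1 grows, as the theorem requires.
import numpy as np

rng = np.random.default_rng(2)
C, DELTA, Q1 = 2.0, 0.1, 1e-4
P1 = C * Q1  # boundary case of p1 >= c * q1

def inference_succeeds(n):
    kp = rng.binomial(n, P1)      # blue-die side-1 counts in n tosses
    kq = rng.binomial(n, Q1)      # red-die side-1 counts in n tosses
    post_p = (1 + kp) / (2 + n)   # Beta(1, 1) posterior means
    post_q = (1 + kq) / (2 + n)
    return post_p >= (1 - DELTA) * C * post_q

for n in [10_000, 1_000_000, 100_000_000]:  # n * p1 = 2, 200, 20000
    rate = np.mean([inference_succeeds(n) for _ in range(2000)])
    print(f"n = {n:>11,} (n*p1 = {n * P1:>7,.0f}): success rate = {rate:.3f}")
```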

 

Differentially Private and Incentive Compatible Recommendation System for the Adoption of Network Goods (with Xiaosheng Mu).
Proceedings of the Fifteenth ACM Conference on Economics and Computation (EC’14), June 2014. 
[download pdf]  [slides]  [publisher's DOI]

Abstract

We study the problem of designing a recommendation system for network goods under the constraint of differential privacy. Agents living on a graph face the introduction of a new good and undergo two stages of adoption. The first stage consists of private, random adoptions. In the second stage, remaining non-adopters decide whether to adopt with the help of a recommendation system \(\mathcal{A}\). The good has network complementarity, making it socially desirable for \(\mathcal{A}\) to reveal the adoption status of neighboring agents. The designer’s problem, however, is to find the socially optimal \(\mathcal{A}\) that preserves privacy. We derive feasibility conditions for this problem and characterize the optimal solution.
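
For a flavor of the privacy constraint (this is a generic differential-privacy building block, not the mechanism characterized in the paper): releasing the number of a non-adopter's neighbors who adopted, with Laplace noise calibrated to the count's sensitivity, satisfies \(\epsilon\)-differential privacy. The function and parameter values below are hypothetical.

```python
# Generic epsilon-DP Laplace release of a neighbor-adoption count; not the
# paper's optimal mechanism. Changing one agent's adoption status moves the
# count by at most 1, so sensitivity is 1 and noise scale 1/epsilon suffices.
import numpy as np

rng = np.random.default_rng(3)

def dp_adopter_count(adopted_neighbors: int, epsilon: float) -> float:
    """Noisy count of adopting neighbors satisfying epsilon-differential privacy."""
    return adopted_neighbors + rng.laplace(scale=1.0 / epsilon)

# A non-adopter with 3 adopting neighbors, under loose vs. tight privacy budgets:
print(dp_adopter_count(3, epsilon=1.0))  # typically lands near 3
print(dp_adopter_count(3, epsilon=0.1))  # much noisier
```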