Research

Job Market Paper

Scaling Auctions as Insurance: A Case Study in Infrastructure Procurement, with Valentin Bolotnyy
Download Working Paper

Abstract

The U.S. government spends about $165B per year on highways and bridges, or about 1% of GDP. Much of it is spent through "scaling" procurement auctions, in which private construction firms submit unit price bids for each piece of material required to complete a project. The winner is determined by the lowest total cost—given government estimates of the amount of each material needed—but, critically, the winning firm is paid based on the realized quantities used. This creates an incentive for firms to skew their bids—bidding high when they believe the government is underestimating an item's quantity and vice versa—and raises concerns about rent extraction among policymakers. For risk-averse bidders, however, scaling auctions provide a distinctive way to generate surplus: they enable firms to limit their risk exposure by placing lower unit bids on items with greater uncertainty. To assess this effect empirically, we develop a structural model of scaling auctions with risk-averse bidders. Using data on bridge maintenance projects undertaken by the Massachusetts Department of Transportation (MassDOT), we present evidence that bidding behavior is consistent with optimal skewing under risk aversion. We then estimate bidders' risk aversion, the risk in each auction, and the distribution of bidders' private costs. Finally, we simulate equilibrium item-level bids under counterfactual settings to estimate the fraction of MassDOT spending that is due to risk and to evaluate alternative mechanisms under consideration by MassDOT. We find that scaling auctions provide substantial savings to MassDOT relative to lump-sum auctions and suggest several policies that might improve on the status quo.
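For intuition, here is a minimal numerical sketch of the skewing incentive (all quantities, costs, and bids are made up for illustration; they are not taken from the paper or the MassDOT data):

    import numpy as np

    # Hypothetical two-item project; all numbers are illustrative.
    est_qty   = np.array([100.0, 50.0])   # government's estimated quantities
    real_qty  = np.array([130.0, 30.0])   # quantities actually used (item 1 overruns)
    unit_cost = np.array([10.0, 20.0])    # bidder's private unit costs

    bids = {
        "balanced": np.array([11.0, 22.0]),  # uniform 10% markup on each item
        "skewed":   np.array([13.0, 18.0]),  # markup shifted onto the overrun item
    }

    for name, b in bids.items():
        score   = b @ est_qty    # total at estimated quantities: determines the winner
        payment = b @ real_qty   # total at realized quantities: what the firm is paid
        profit  = payment - unit_cost @ real_qty
        print(f"{name:9s} score={score:7.0f} payment={payment:7.0f} profit={profit:6.0f}")

    # Both bids have the same score (2200), so they rank identically in the auction,
    # but the skewed bid is paid more ex post (2230 vs. 2090) when item 1 overruns.

Because realized quantities are uncertain at bidding time, the same logic run in reverse is the insurance channel: placing lower unit bids on the most uncertain items reduces the variance of the eventual payment, which is valuable to a risk-averse bidder.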

 

Publications

Pretrial Negotiations with Optimism, with Muhamet Yildiz
RAND Journal of Economics, Vol. 50, No. 2 (Summer 2019), pp. 359–390
Download Paper     Interactive Model Demo    Online Appendix

Abstract

We develop a tractable and versatile model of pretrial negotiation in which the negotiating parties are optimistic about the judge's decision and anticipate the possible arrival of public information about the case prior to the trial date. The parties settle immediately upon the arrival of information, but they may also agree to settle before an arrival. We derive the settlement dynamics prior to an arrival and show that, depending on the level of optimism, negotiations result in immediate agreement, a weak deadline effect (settlement at a particular date before the deadline), a strong deadline effect (settlement at the deadline), or impasse. We show that the distribution of settlement times has a U-shaped frequency and a convexly increasing hazard rate with a sharp increase at the deadline, replicating stylized facts about such negotiations.

Implementing the Wisdom of Waze, with Avinatan Hassidim and Michal Feldman
International Joint Conference on Artificial Intelligence (IJCAI), Vol. 24, Buenos Aires, Argentina, 2015
Download Conference Paper     Download Long Version

Abstract

We study a setting of non-atomic routing in a network of m parallel links with asymmetric information. While a central entity (such as a GPS navigation system), a mediator hereafter, knows the cost functions associated with the links, they are unknown to the individual agents controlling the flow. The mediator gives incentive-compatible recommendations to agents, trying to minimize the total travel time. Can the mediator do better than when agents minimize their travel time selfishly, without coercing agents to follow its recommendations? We study the mediation ratio: the ratio between the total travel time of the mediated equilibrium obtained from an incentive-compatible mediation protocol and that of the social optimum. We find that mediation protocols can reduce the efficiency loss compared to the full-revelation alternative and compared to the unmediated Nash equilibrium. In particular, in the case of two links with affine cost functions, the mediation ratio is at most 8/7, and remains strictly smaller than the price of anarchy of 4/3 for any fixed m. Yet it approaches the price of anarchy as m grows. For general (monotone) cost functions, the mediation ratio is at most m, a significant improvement over the unbounded price of anarchy.
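As a point of reference for the 4/3 benchmark above, the classic two-link (Pigou-style) example can be checked numerically. This is only the standard price-of-anarchy illustration, not the mediation protocol studied in the paper:

    import numpy as np

    # Pigou-style network: a unit mass of traffic over two parallel links,
    # link 1 with cost c1(x) = x and link 2 with constant cost c2(x) = 1.
    def total_cost(x1):
        # x1 = fraction of traffic on link 1; returns total travel time
        return x1 * x1 + (1.0 - x1) * 1.0

    nash_cost = total_cost(1.0)           # selfish agents all take link 1, since c1(x) <= 1
    xs = np.linspace(0.0, 1.0, 100001)
    opt_cost = total_cost(xs).min()       # social optimum splits traffic at x1 = 1/2

    print(nash_cost / opt_cost)           # ~1.3333, the 4/3 price of anarchy for affine costs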

 

Working Papers

Bargaining and International Reference Pricing in the Pharmaceutical Industry, with Ashvin Gandhi and Pierre Dubois
Download Working Paper     Slides

Abstract

The United States spends twice as much per person on pharmaceuticals as European countries, in large part because prices are higher in the US. This has led policymakers in the US to consider legislation for price controls. This paper assesses the effects of a hypothetical US reference pricing policy that would cap prices in US markets at those offered in Canada. We estimate a structural model of demand and supply for pharmaceuticals in the US and Canada, in which Canadian prices are set through a negotiation process between pharmaceutical companies and the Canadian government. We then simulate the impacts of the counterfactual international reference pricing rule, allowing firms to internalize the cross-country effects of their prices both when setting prices in the US and when negotiating prices in Canada. We find that such a policy results in a slight decrease in US prices and a substantial increase in Canadian prices. The magnitude of these effects depends on the particular structure of the policy. Overall, we find modest consumer welfare gains in the US but substantial consumer welfare losses in Canada. Moreover, we find that pharmaceutical profits increase on net, suggesting that reference pricing of this form would constitute a net transfer from consumers to firms.

Buying Data from Consumers: The Impact of Monitoring in US Auto Insurance, with Yizhou Jin  (Yizhou's JMP!)
Download Working Paper

Abstract

This paper develops an empirical framework for direct transactions of consumer data. We use it to study the design and impact of auto-insurance monitoring programs, in which insurers incentivize consumers to opt into having their driving behavior monitored for a short period of time. We acquire proprietary datasets from a major U.S. auto insurer that offers a monitoring program and match them with the price menus of the firm's main competitors. We develop a model of consumers' demand for insurance and for monitoring, as well as the cost of insuring them. Key parameters are estimated using rich variation in insurance claims, prices, contract space, and monitoring status. We then conduct counterfactual simulations using a dynamic pricing model that endogenizes the firm's information set. We find three main results. (i) Data collection changes consumer behavior. Drivers become 30% safer when monitored, which boosts total surplus and alters the informativeness of the data. (ii) Demand for monitoring interacts with the product market. Safer drivers are more likely to opt in, but monitoring take-up is low due to both consumers' innate preference against being monitored and attractive outside options from other insurers. Nonetheless, introducing monitoring raises consumer welfare by 3% of premium per year. (iii) Proprietary data facilitate higher markups but protect the firm's ex-ante incentives to produce the data. A counterfactual equilibrium in which the firm must share monitoring data with competitors harms both profit and consumer welfare, because the firm offers weaker upfront incentives for monitoring opt-in, so fewer drivers are monitored.

Voluntary Disclosure and Personalized Pricing, with Nageeb Ali and Greg Lewis
Download Paper Preview     Download Slides      (Full WP Coming Soon!)

Abstract

A concern central to the economics of privacy is that firms may exploit consumer data to engage in greater price discrimination. A common response to this concern is that consumers should have sovereignty over their own data and choose whether firms access it. Since the market may draw inferences whenever a consumer withholds information about her preferences, the strategic implications of consumer data sovereignty are unclear. This paper investigates whether such measures improve consumer welfare in both monopolistic and competitive environments. We consider the interaction between a consumer, whose preferences are not known to the market, and a market that makes price offers to that consumer based on verifiable disclosures about her type. We show that a consumer can optimally use verifiable information about her preferences to create exclusive partial pools that guarantee gains relative to perfect price discrimination.

Fast Bayesian Inference on Large-Scale Random Utility Logit Models, with James Savage
Draft in Preparation
Notebook with a Worked Example     Sample Slides

Abstract

Random coefficients logit is a benchmark model for discrete choice, widely used in marketing and industrial organization. In "conjoint analysis" conducted in experimental marketing, it has historically been estimated using a Metropolis-within-Gibbs method or with simulated likelihoods (Train 2009). In economic problems where only aggregate data are available, it is estimated via GMM using the BLP algorithm of Berry, Levinsohn, and Pakes (1995). We propose a latent-variable form of the model, as in Yang, Chen, and Allenby (2003), for both individual choice and aggregate data, which can be estimated efficiently using Hamiltonian Monte Carlo (HMC). This offers several benefits over the current standards. Relative to Metropolis-within-Gibbs, HMC handles enormous parameter spaces efficiently, allowing much larger (and even context-dependent choice) models to be fit in hours rather than days or weeks. The proposed approach models aggregate sales rather than shares, so, unlike BLP, it allows for measurement error due to differences in market size. Priors also regularize the loss surface, leading to estimates that are robust when GMM objectives would be susceptible to local minima.
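To fix ideas, here is a minimal sketch of a hierarchical (random coefficients) logit for individual choice data, fit with NumPyro's NUTS sampler, an adaptive HMC variant. The specification and the simulated data are hypothetical and for exposition only; this is not the paper's model or the code in the linked notebook:

    import jax.numpy as jnp
    from jax import random
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import MCMC, NUTS

    def random_coefficients_logit(X, y=None):
        # X: (N, J, K) attributes for N individuals, J alternatives, K attributes
        # y: (N,) chosen alternatives in {0, ..., J-1}
        N, J, K = X.shape
        # Population-level mean and scale of the taste coefficients.
        mu = numpyro.sample("mu", dist.Normal(0.0, 1.0).expand([K]).to_event(1))
        tau = numpyro.sample("tau", dist.HalfNormal(1.0).expand([K]).to_event(1))
        with numpyro.plate("individuals", N):
            # Individual-level coefficients (the "random coefficients").
            beta = numpyro.sample("beta", dist.Normal(mu, tau).to_event(1))  # (N, K)
            utility = jnp.einsum("njk,nk->nj", X, beta)                      # (N, J)
            numpyro.sample("choice", dist.Categorical(logits=utility), obs=y)

    # Simulate a small fake dataset and fit with NUTS.
    N, J, K = 500, 4, 3
    X = random.normal(random.PRNGKey(0), (N, J, K))
    true_beta = jnp.array([1.0, -0.5, 0.25])
    y = dist.Categorical(logits=X @ true_beta).sample(random.PRNGKey(1))

    mcmc = MCMC(NUTS(random_coefficients_logit), num_warmup=500, num_samples=500)
    mcmc.run(random.PRNGKey(2), X, y)
    mcmc.print_summary()

Because HMC draws on gradients of the joint log density, the same latent-utility structure scales to many more individual-level parameters than a random-walk Metropolis-within-Gibbs scheme would handle comfortably.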