Coal mined on federally managed lands accounts for approximately 40% of U.S. coal consumption and 13% of total U.S. energy-related CO2 emissions. The U.S. Department of the Interior is undertaking a programmatic review of federal coal leasing, including the climate effects of burning federal coal. This paper studies the interaction between a specific upstream policy, incorporating a carbon adder into federal coal royalties, and downstream emissions regulation under the Clean Power Plan (CPP). After providing some comparative statics, we present quantitative results from a detailed dynamic model of the power sector, the Integrated Planning Model (IPM). The IPM analysis indicates that, in the absence of the CPP, a royalty adder equal to the social cost of carbon could achieve roughly three-quarters of the emissions reduction that the CPP itself is projected to deliver. If instead the CPP is binding, the royalty adder would reduce the price of tradeable emissions allowances, produce some additional emissions reductions by reducing leakage, and reduce wholesale power prices under a mass-based CPP but increase them under a rate-based CPP. A federal royalty adder increases mining of non-federal coal, but this substitution is limited by a shift toward generation from gas and renewables.
Key words: extraction royalties, social cost of carbon. JEL codes: Q54, Q58, Q38.
When instruments are weakly correlated with endogenous regressors, conventional methods for instrumental variables estimation and inference become unreliable. A large literature in econometrics develops procedures for detecting weak instruments and constructing robust confidence sets, but many of the results in this literature are limited to settings with independent and homoskedastic data, while data encountered in practice frequently violate these assumptions. We review the literature on weak instruments in linear IV regression with an emphasis on results for non-homoskedastic (heteroskedastic, serially correlated, or clustered) data. To assess the practical importance of weak instruments, we also report tabulations and simulations based on a survey of papers published in the American Economic Review from 2014 to 2018 that use instrumental variables. These results suggest that weak instruments remain an important issue for empirical practice, and that there are simple steps researchers can take to better handle weak instruments in applications.
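As a concrete illustration of the robust confidence sets the survey covers, the sketch below inverts a heteroskedasticity-robust Anderson-Rubin test over a grid of candidate coefficients, for the simplest case of one endogenous regressor and one instrument. The function name, the grid, the simulated design, and the HC0 variance choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ar_confidence_set(y, x, z, grid, crit=3.84):
    """Heteroskedasticity-robust Anderson-Rubin confidence set for beta in
    y = x*beta + u with one instrument z. For each candidate b0, regress
    y - x*b0 on (1, z) and keep b0 if the HC0-robust Wald statistic on the
    z coefficient is below the chi2(1) 5% critical value."""
    Z = np.column_stack([np.ones_like(z), z])
    bread = np.linalg.inv(Z.T @ Z)
    accepted = []
    for b0 in grid:
        e = y - x * b0
        coef = bread @ (Z.T @ e)                      # OLS of e on (1, z)
        resid = e - Z @ coef
        meat = Z.T @ (Z * (resid ** 2)[:, None])
        V = bread @ meat @ bread                      # HC0 sandwich variance
        if coef[1] ** 2 / V[1, 1] < crit:
            accepted.append(b0)
    return accepted

# Simulated example: endogenous x (shared error v), fairly strong instrument.
rng = np.random.default_rng(0)
n = 500
z = rng.standard_normal(n)
v = rng.standard_normal(n)
x = z + v                                             # first stage
y = 1.0 * x + v + 0.5 * rng.standard_normal(n)        # true beta = 1
cs = ar_confidence_set(y, x, z, np.linspace(0.0, 2.0, 201))
```

Because the set is built by inverting a test at each candidate value rather than relying on a normal approximation to the 2SLS estimator, its coverage does not deteriorate as the instrument weakens; with a weak instrument the set simply becomes wide, and possibly unbounded.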
This paper reviews the cost of various interventions that reduce greenhouse gas emissions. As much as possible we focus on actual abatement costs (dollars per ton of carbon dioxide avoided), as measured by 50 economic studies of programs over the past decade, supplemented by our own calculations. We distinguish between static costs, which occur over the lifetime of the project, and dynamic costs, which incorporate spillovers. Interventions or policies that are expensive in a static sense can be inexpensive in a dynamic sense if they induce innovation and learning-by-doing.
The classic papers by Newey and West (1987) and Andrews (1991) spurred a large body of work on how to improve heteroskedasticity- and autocorrelation-robust (HAR) inference in time series regression. This literature finds that using a larger than usual truncation parameter to estimate the long-run variance, combined with Kiefer-Vogelsang (2002, 2005) fixed-b critical values, can substantially reduce size distortions, at only a modest cost in (size-adjusted) power. Empirical practice, however, has not kept up. This paper therefore draws on the post-Newey West/Andrews literature to make concrete recommendations for HAR inference. We derive truncation parameter rules that choose a point on the size-power tradeoff to minimize a loss function. If Newey-West tests are used, we recommend the truncation parameter rule S = 1.3T^(1/2) and (nonstandard) fixed-b critical values. For tests of a single restriction, we find advantages to using the equal-weighted cosine (EWC) test, where the long-run variance is estimated by projections onto Type II cosines, using ν = 0.4T^(2/3) cosine terms; for this test, fixed-b critical values are, conveniently, t_ν or F. We assess these rules using first an ARMA/GARCH Monte Carlo design, then a dynamic factor model design estimated using 207 quarterly U.S. macroeconomic time series.
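The two truncation rules are simple to state in code. Below is a minimal numpy sketch, under our own naming conventions rather than the paper's software, that computes the recommended Newey-West bandwidth S = 1.3T^(1/2), the EWC degrees of freedom ν = 0.4T^(2/3), and the corresponding long-run variance estimates for a demeaned series.

```python
import numpy as np

def newey_west_lrv(u, S):
    """Newey-West (Bartlett kernel) long-run variance estimate of a
    demeaned series u with truncation parameter S."""
    T = len(u)
    u = u - u.mean()
    lrv = u @ u / T
    for k in range(1, S + 1):
        w = 1.0 - k / (S + 1.0)                        # Bartlett weight
        lrv += 2.0 * w * (u[k:] @ u[:-k]) / T
    return lrv

def ewc_lrv(u, nu):
    """Equal-weighted cosine (EWC) long-run variance estimate: average of
    squared projections of u onto the first nu Type II cosine terms."""
    T = len(u)
    u = u - u.mean()
    t = np.arange(1, T + 1)
    lam = [np.sqrt(2.0 / T) * np.sum(u * np.cos(np.pi * j * (t - 0.5) / T))
           for j in range(1, nu + 1)]
    return float(np.mean(np.square(lam)))

# The paper's rules, for a sample of length T:
T = 200
rng = np.random.default_rng(0)
u = rng.standard_normal(T)
S = int(np.floor(1.3 * T ** 0.5))         # Newey-West rule: S = 1.3 T^(1/2)
nu = int(np.floor(0.4 * T ** (2.0 / 3)))  # EWC rule: nu = 0.4 T^(2/3)
lrv_nw = newey_west_lrv(u, S)
lrv_ewc = ewc_lrv(u, nu)
```

For T = 200 these rules give S = 18 and ν = 13, so an EWC t-test would be compared against t_13 critical values rather than the standard normal.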
An exciting development in empirical macroeconometrics is the increasing use of external sources of as-if randomness to identify the dynamic causal effects of macroeconomic shocks. This approach – the use of external instruments – is the time series counterpart of the highly successful strategy in microeconometrics of using external as-if randomness to provide instruments that identify causal effects. This lecture provides conditions on instruments and control variables under which external instrument methods produce valid inference on dynamic causal effects, that is, structural impulse response functions; these conditions can help guide the search for valid instruments in applications. We consider two methods, a one-step instrumental variables regression and a two-step method that entails estimation of a vector autoregression. Under a restrictive instrument validity condition, the one-step method is valid even if the vector autoregression is not invertible, so comparing the two estimates provides a test of invertibility. Under a less restrictive condition, in which multiple lagged endogenous variables are needed as control variables in the one-step method, the conditions for validity of the two methods are the same.
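To make the one-step method concrete, here is a minimal numpy sketch of a just-identified IV regression of y_{t+h} on x_t with an external instrument z_t and a lagged control; the simulated design, the function name, and the lag choice are illustrative assumptions, not the lecture's implementation.

```python
import numpy as np

def lp_iv(y, x, z, controls, h):
    """One-step estimate of the dynamic causal effect at horizon h:
    just-identified IV regression of y_{t+h} on x_t, instrumented by z_t,
    with the controls included in both the regressor and instrument sets."""
    n = len(y) - h
    Y = y[h:]
    X = np.column_stack([np.ones(n), x[:n], controls[:n]])
    Z = np.column_stack([np.ones(n), z[:n], controls[:n]])
    beta = np.linalg.solve(Z.T @ X, Z.T @ Y)   # (Z'X)^{-1} Z'Y
    return beta[1]                             # coefficient on x_t

# Simulated example: x is an AR(1) driven by the shock eps, y responds
# one-for-one to x, and z is a noisy external measure of eps.
rng = np.random.default_rng(0)
T = 2000
eps = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + eps[t]
y = x + 0.5 * rng.standard_normal(T)
z = eps + 0.3 * rng.standard_normal(T)
xlag = np.concatenate([[0.0], x[:-1]]).reshape(-1, 1)  # lagged x as control
irf1 = lp_iv(y, x, z, xlag, h=1)   # true effect at h = 1 is 0.5
```

In this design the instrument is a noisy measure of the shock itself, so the one-step estimate recovers the horizon-1 impulse response (0.5) without estimating a vector autoregression.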