Coal mined on federally managed lands accounts for approximately 40% of U.S. coal consumption and 13% of total U.S. energy-related CO2 emissions. The U.S. Department of the Interior is undertaking a programmatic review of federal coal leasing, including the climate effects of burning federal coal. This paper studies the interaction between a specific upstream policy, incorporating a carbon adder into federal coal royalties, and downstream emissions regulation under the Clean Power Plan (CPP). After providing some comparative statics, we present quantitative results from a detailed dynamic model of the power sector, the Integrated Planning Model (IPM). The IPM analysis indicates that, in the absence of the CPP, a royalty adder equal to the social cost of carbon could reduce emissions by roughly three-quarters of the emissions reduction that the CPP is projected to achieve. If instead the CPP is binding, the royalty adder would reduce the price of tradeable emissions allowances, produce some additional emissions reductions by reducing leakage, and reduce wholesale power prices under a mass-based CPP but increase them under a rate-based CPP. A federal royalty adder increases mining of non-federal coal, but this substitution is limited by a shift to electricity generation by gas and renewables.

Key words: extraction royalties, social cost of carbon

JEL codes: Q54, Q58, Q38

Many results in the weak-instruments literature are limited to settings with independent and homoskedastic data, while data encountered in practice frequently violate these assumptions. We review the literature on weak instruments in linear IV regression with an emphasis on results for non-homoskedastic (heteroskedastic, serially correlated, or clustered) data. To assess the practical importance of weak instruments, we also report tabulations and simulations based on a survey of papers published in the American Economic Review from 2014 to 2018 that use instrumental variables. These results suggest that weak instruments remain an important issue for empirical practice, and that there are simple steps researchers can take to better handle weak instruments in applications.

Annual Review of Economics.

Jing Li and James H. Stock (2019), "Cost Pass-Through to Higher Ethanol Blends at the Pump: Evidence from Minnesota Gas Station Data," Journal of Environmental Economics and Management 93, 1-19.

José L. Montiel Olea, James H. Stock, and Mark W. Watson (2018), "Inference in Structural Vector Autoregressions with External Instruments."

Kenneth Gillingham and James H. Stock (2018), "The Cost of Reducing Greenhouse Gas Emissions," Journal of Economic Perspectives 32(4), 53-72.

This paper reviews the cost of various interventions that reduce greenhouse gas emissions. As much as possible we focus on actual abatement costs (dollars per ton of carbon dioxide avoided), as measured by 50 economic studies of programs over the past decade, supplemented by our own calculations. We distinguish between static costs, which occur over the lifetime of the project, and dynamic costs, which incorporate spillovers. Interventions or policies that are expensive in a static sense can be inexpensive in a dynamic sense if they induce innovation and learning-by-doing.

Eben Lazarus, Daniel J. Lewis, James H. Stock, and Mark W. Watson (2018), "HAR Inference: Recommendations for Practice."

The classic papers by Newey and West (1987) and Andrews (1991) spurred a large body of work on how to improve heteroskedasticity- and autocorrelation-robust (HAR) inference in time series regression. This literature finds that using a larger than usual truncation parameter to estimate the long-run variance, combined with Kiefer-Vogelsang (2002, 2005) fixed-b critical values, can substantially reduce size distortions, at only a modest cost in (size-adjusted) power. Empirical practice, however, has not kept up. This paper therefore draws on the post-Newey West/Andrews literature to make concrete recommendations for HAR inference. We derive truncation parameter rules that choose a point on the size-power tradeoff to minimize a loss function. If Newey-West tests are used, we recommend the truncation parameter rule S = 1.3T^(1/2) and (nonstandard) fixed-b critical values. For tests of a single restriction, we find advantages to using the equal-weighted cosine (EWC) test, where the long-run variance is estimated by projections onto Type II cosines, using ν = 0.4T^(2/3) cosine terms; for this test, fixed-b critical values are, conveniently, t_ν or F. We assess these rules using first an ARMA/GARCH Monte Carlo design, then a dynamic factor model design estimated using 207 quarterly U.S. macroeconomic time series.
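The two recommended rules are simple enough to compute directly. Below is a minimal sketch (not the authors' code) of the EWC long-run variance estimator with the ν = 0.4T^(2/3) rule, applied to simulated AR(1) data, alongside the Newey-West truncation rule S = 1.3T^(1/2):

```python
import numpy as np

def ewc_lrv(u):
    """Equal-weighted cosine (EWC) long-run variance estimate for a
    demeaned scalar series u, using the rule nu = 0.4*T^(2/3)."""
    T = len(u)
    nu = max(1, int(round(0.4 * T ** (2 / 3))))
    t = np.arange(1, T + 1)
    # Projections of u onto Type II cosines, j = 1, ..., nu
    lam = np.array([np.sqrt(2 / T) * np.sum(u * np.cos(np.pi * j * (t - 0.5) / T))
                    for j in range(1, nu + 1)])
    # Average of squared projections estimates the long-run variance
    return lam @ lam / nu, nu

rng = np.random.default_rng(0)
T = 200
u = np.zeros(T)
for s in range(1, T):                 # AR(1) errors with rho = 0.5
    u[s] = 0.5 * u[s - 1] + rng.standard_normal()

omega, nu = ewc_lrv(u - u.mean())     # fixed-b inference then uses t_nu critical values
S = 1.3 * T ** 0.5                    # Newey-West truncation rule from the paper
```

The convenient feature noted in the abstract is that the EWC t-statistic built from `omega` can be compared against ordinary Student-t critical values with `nu` degrees of freedom.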

Journal of Business & Economic Statistics 36(4), 541-574.

James H. Stock and Mark W. Watson (2018), "Identification and Estimation of Dynamic Causal Effects in Macroeconomics," Economic Journal 128 (May), 917-948.

An exciting development in empirical macroeconometrics is the increasing use of external sources of as-if randomness to identify the dynamic causal effects of macroeconomic shocks. This approach – the use of external instruments – is the time series counterpart of the highly successful strategy in microeconometrics of using external as-if randomness to provide instruments that identify causal effects. This lecture provides conditions on instruments and control variables under which external instrument methods produce valid inference on dynamic causal effects, that is, structural impulse response functions; these conditions can help guide the search for valid instruments in applications. We consider two methods, a one-step instrumental variables regression and a two-step method that entails estimation of a vector autoregression. Under a restrictive instrument validity condition, the one-step method is valid even if the vector autoregression is not invertible, so comparing the two estimates provides a test of invertibility. Under a less restrictive condition, in which multiple lagged endogenous variables are needed as control variables in the one-step method, the conditions for validity of the two methods are the same.

James H. Stock (2017), "The Effect of a Higher Ethanol Blend RVP Waiver on RIN Prices."

James H. Stock (2017), "Addendum to Comments by James Stock on the Proposed Denial of Petitions for Rulemaking to Change the RFS Point of Obligation."

James H. Stock and Mark W. Watson (2017), "Twenty Years of Time Series Econometrics in Ten Pictures," Journal of Economic Perspectives 31(2), 59-86.

John G. Fernald, Robert E. Hall, James H. Stock, and Mark W. Watson (2017), "The Disappointing Recovery of Output after 2009," Brookings Papers on Economic Activity.

U.S. output expanded only slowly after the recession trough in 2009 even though

the unemployment rate has essentially returned to a pre-crisis, normal level. We use a growth-accounting decomposition to explore explanations for the output shortfall, giving full treatment to cyclical effects that, given the depth of the recession, should have implied unusually fast growth. We find that the growth shortfall has almost entirely reflected two factors: the slow growth of total factor productivity and the decline in labor force participation. Both factors reflect powerful adverse forces largely unrelated to the financial crisis and recession. These forces were in play before the recession.

Through its minerals leasing program, the U.S. government plays a large role in the extraction of oil, natural gas, and coal. This footprint is the largest for coal: 41 percent of U.S. coal is mined under federal leases, and burning this coal accounts for 13 percent of U.S. energy-related carbon dioxide (CO2) emissions. Currently, producers and consumers of this coal do not bear the full social costs associated with its use. At the same time, the threat of climate change has led the international community, including the United States, to pledge significant reductions in CO2 emissions. Over the past two decades Democratic and Republican administrations have taken steps to reduce U.S. CO2 emissions by reducing use of fossil fuels. Despite growing public attention to the climate consequences of fossil fuel extraction, U.S. climate policy so far has not extended to the government’s role as a major source of fossil fuels. We propose to incorporate climate considerations into federal coal leasing by placing a royalty adder on federal coal that is linked to the climate damages from its combustion. The magnitude of the royalty adder should be chosen to recognize both the substitution of nonfederal for federal coal, and the interaction of the royalty adder with other climate policies. A royalty adder set to 20 percent of the social cost of carbon would reduce total power sector emissions, raise the price of federal coal to align with coal mined on private land, increase coal mining employment in Appalachia and the Midwest, and provide additional government revenues to help coal communities. This proposal strikes a middle path between calling for a stop to all federal fossil fuel leasing on the one hand, and relying entirely on imperfect downstream regulation on the other.
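The magnitude of such a royalty adder is a one-line calculation: the adder fraction times the social cost of carbon times the CO2 intensity of the coal. The parameter values below are illustrative assumptions chosen for the example, not figures taken from the proposal:

```python
# Illustrative arithmetic only: the parameter values below are assumptions
# chosen for this example, not figures from the proposal itself.
scc_per_ton_co2 = 40.0     # assumed social cost of carbon, $ per ton CO2
co2_per_ton_coal = 1.8     # assumed tons CO2 released per ton of coal burned
adder_fraction = 0.20      # royalty adder set to 20 percent of the SCC

# $/ton-of-coal adder = fraction x SCC x emissions intensity
royalty_adder = adder_fraction * scc_per_ton_co2 * co2_per_ton_coal
print(f"royalty adder: ${royalty_adder:.2f} per ton of coal")
```

Under these assumed values the adder comes to about $14 per ton of coal; the actual level would depend on the SCC estimate and coal type used.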

The Hamilton Project, December 8, 2016. http://www.hamiltonproject.org/papers/federal_minerals_leasing_reform_and_climate_policy

Kenneth Gillingham, James Bushnell, Meredith Fowlie, Michael Greenstone, Charles Kolstad, Alan Krupnick, Adele Morris, Richard Schmalensee, and James H. Stock (2016), "Reforming the U.S. Coal Leasing Program," Science 354(6316), 1096-1098, December 2, 2016. http://science.sciencemag.org/content/354/6316/1096.full.pdf+html

James H. Stock and Mark W. Watson (2016), "Factor Models and Structural Vector Autoregressions in Macroeconomics."

James H. Stock (2015), "The Renewable Fuel Standard: A Path Forward," Columbia Center on Global Energy Policy.

James H. Stock (2015), "Administering the Cellulosic Requirements under the Renewable Fuel Standard with Increasing and Uncertain Supply."

James H. Stock (2014), Discussion of "Labor Force Participation: Recent Developments and Future Prospects" by Stephanie Aaronson et al., Brookings Papers on Economic Activity, 261-271.

Sophocles Mavroeidis, Mikkel Plagborg-Moller, and James H. Stock (2014), "Empirical Evidence on Inflation Expectations in the New Keynesian Phillips Curve," Journal of Economic Literature 52(1), 124-188.

James H. Stock and Mark W. Watson (2014), "Estimating Turning Points Using Large Data Sets," Journal of Econometrics 178, 368-381.

James H. Stock (2013), Discussion of "Unseasonal Seasonals?" by Jonathan Wright, Brookings Papers on Economic Activity, 111-119.

Robert Kaufmann, Heikki Kauppi, Michael Mann, and James H. Stock (2013), "Does Temperature Contain a Stochastic Trend: Linking Statistical Results to Physical Mechanisms," Climatic Change 118(3-4), 729-743.

Brandon Bates, Mikkel Plagborg-Moller, James H. Stock, and Mark W. Watson (2013), "Consistent Factor Estimation in Dynamic Factor Models with Structural Instability," Journal of Econometrics 177, 289-304.

James H. Stock and Mark W. Watson (2012), "Generalized Shrinkage Methods for Forecasting Using Many Predictors," Journal of Business & Economic Statistics 30(4), 481-493.

Andrea Stella and James H. Stock (2012), "A State-Dependent Model for Inflation Forecasting," unpublished.

José Luis Montiel Olea, James H. Stock, and Mark W. Watson (2012), "Inference in Structural VARs with External Instruments," manuscript.

James H. Stock and Mark W. Watson (2012), "Disentangling the Channels of the 2007-2009 Recession," Brookings Papers on Economic Activity, Spring 2012, 81-135.

James H. Stock and Mark W. Watson (2011), "Dynamic Factor Models," in M. J. Clements and D. F. Hendry (eds.), Oxford Handbook on Economic Forecasting, Oxford: Oxford University Press.

James H. Stock (2011), Discussion of Fuhrer, "The Role of Expectations in Inflation Dynamics."

James H. Stock and Mark W. Watson (2011), Introduction to Econometrics (3rd edition), Addison Wesley Longman.

James H. Stock and Ulrich K. Mueller (2011), "Forecasts in a Slightly Misspecified Finite Order VAR."

James H. Stock (2011), Discussion of Ball and Mazumder, "Inflation Dynamics and the Great Recession," Brookings Papers on Economic Activity, 387-402.

James H. Stock, Robert Kaufmann, Heikki Kauppi, and Michael Mann (2011), "Reconciling Anthropogenic Climate Change with Observed Temperature 1998-2008," Proceedings of the National Academy of Sciences 108(29), 11790-11793.

James H. Stock and Mark W. Watson (2010), "Estimating Turning Points using Large Data Sets," NBER Working Paper.

Robert Kaufmann, Heikki Kauppi, and James H. Stock (2010), "Does temperature contain a stochastic trend? Evaluating conflicting statistical results," Climatic Change 101, 395-405.

James H. Stock and Mark W. Watson (2010), "Indicators for Dating Business Cycles: Cross-History Selection and Comparisons," American Economic Review: Papers and Proceedings.

James H. Stock and Mark W. Watson (2010), "Dynamic Factor Models," in Michael P. Clements and David F. Hendry (eds.), Oxford Handbook of Economic Forecasting, Oxford: Oxford University Press. http://www.economics.harvard.edu/faculty/stock/files/dfm_oup_4.pdf

James H. Stock and Mark W. Watson (2010), "Modeling Inflation After the Crisis," conference proceedings.

In the United States, the rate of price inflation falls in recessions. Turning this observation into a useful

inflation forecasting equation is difficult because of multiple sources of time variation in the inflation process, including changes in Fed policy and credibility. We propose a tightly parameterized model in which the deviation of inflation from a stochastic trend (which we interpret as long-term expected inflation) reacts stably to a new gap measure, which we call the unemployment recession gap. The short-term response of inflation to an increase in this gap is stable, but the long-term response depends on the resilience, or anchoring, of trend inflation. Dynamic simulations (given the path of unemployment) match the paths of inflation during post-1960 downturns, including the current one.
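A recession-gap measure of this kind can be computed from the unemployment series alone. The sketch below uses one common formulation — the current rate minus its minimum over a trailing window — with the window length an assumption chosen for illustration, not necessarily the paper's exact specification:

```python
import numpy as np

def recession_gap(u, window=12):
    """Unemployment recession gap: the current unemployment rate minus its
    minimum over the current and previous window-1 quarters. The trailing
    window length here is an illustrative assumption."""
    u = np.asarray(u, dtype=float)
    gap = np.empty_like(u)
    for t in range(len(u)):
        gap[t] = u[t] - u[max(0, t - window + 1): t + 1].min()
    return gap

quarterly_u = [4.8, 4.9, 5.4, 6.9, 8.7, 9.6, 9.9, 9.7]  # hypothetical rates
gap = recession_gap(quarterly_u)
# The gap opens as unemployment climbs above its recent minimum, then
# closes again once the rate falls back toward that minimum
```

By construction the gap is zero in normal times and positive during downturns, which is what lets it enter the inflation equation stably across episodes.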

We compare the powers of five tests of the coefficient on a single endogenous regressor in instrumental variables regression. Following Moreira (2003), all tests are implemented using critical values that depend on a statistic which is sufficient under the null hypothesis for the (unknown) concentration parameter, so these conditional tests are asymptotically valid under weak instrument asymptotics. Four of the tests are based on k-class Wald statistics (two-stage least squares, LIML, Fuller's (1977), and bias-adjusted TSLS); the fifth is Moreira's (2003) conditional likelihood ratio (CLR) test. The heretofore unstudied conditional Wald tests are found to perform poorly compared to the CLR test: in many cases, the conditional Wald tests have almost no power against a wide range of alternatives. Our analysis is facilitated by a new algorithm, presented here, for the computation of the asymptotic conditional p-value of the CLR test.

This paper establishes the asymptotic distributions of the likelihood ratio (LR), Anderson-Rubin (AR), and Lagrange multiplier (LM) test statistics under "many weak IV asymptotics." These asymptotics are relevant when the number of IVs is large and the coefficients on the IVs are relatively small. The asymptotic results hold under the null and under suitable alternatives. Hence, power comparisons can be made. Provided k³/n → 0 as n → ∞, where n is the sample size and k is the number of instruments, these tests have correct asymptotic size. This holds no matter how weak the instruments are. Hence, the tests are robust to the strength of the instruments. The power results show that the conditional LR test is more powerful asymptotically than the AR and LM tests under many weak IV asymptotics.

This paper considers tests of the parameter on endogenous variables in an instrumental variables regression model. The focus is on determining tests that have some optimal power properties. We start by considering a model with normally distributed errors and known error covariance matrix. We consider tests that are similar and satisfy a natural rotational invariance condition. We determine a two-sided power envelope for invariant similar tests. This allows us to assess and compare the power properties of tests such as the conditional likelihood ratio (CLR), Lagrange multiplier, and Anderson-Rubin tests. We find that the CLR test is quite close to being uniformly most powerful invariant among a class of two-sided tests.

The finite sample results of the paper are extended to the case of unknown error covariance matrix and possibly non-normal errors via weak instrument asymptotics. Strong instrument asymptotic results also are provided because we seek tests that perform well under both weak and strong instruments.
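Of the tests compared across these abstracts, the Anderson-Rubin statistic is the simplest to compute. A minimal sketch for the homoskedastic linear IV model (an illustration of the standard formula, not code from any of these papers):

```python
import numpy as np

def anderson_rubin(y, X, Z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = X @ beta + u,
    with instrument matrix Z (n x k). With fixed instruments and
    homoskedastic normal errors it is F(k, n - k) distributed under H0,
    whatever the strength of the instruments."""
    n, k = Z.shape
    e = y - X @ beta0
    Pe = Z @ np.linalg.solve(Z.T @ Z, Z.T @ e)   # projection of e onto col(Z)
    ssr_fit = Pe @ Pe
    ssr_res = e @ e - ssr_fit
    return (ssr_fit / k) / (ssr_res / (n - k))

rng = np.random.default_rng(1)
n, k = 500, 4
Z = rng.standard_normal((n, k))
v = rng.standard_normal(n)
x = Z @ np.full(k, 0.1) + v                      # weakly relevant endogenous regressor
u = 0.8 * v + 0.6 * rng.standard_normal(n)       # errors correlated with x
y = 1.0 * x + u                                  # true beta = 1
ar = anderson_rubin(y, x.reshape(-1, 1), Z, np.array([1.0]))
# At the true beta, ar behaves like a draw from F(4, 496)
```

Inverting the test — collecting all beta0 values at which the statistic is below the F critical value — yields a confidence set with correct coverage even when the instruments are weak.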

This paper constructs tests for martingale time variation in regression coefficients in the regression model y_t = x_t′β_t + u_t, where β_t is k×1 and Σ_β is the covariance matrix of Δβ_t. Under the null there is no time variation, so H_0: Σ_β = 0; under the alternative there is time variation in r linear combinations of the coefficients, so H_a: rank(Σ_β) = r, where r may be less than k. The Gaussian point optimal invariant test for this reduced rank testing problem is derived, and the test's asymptotic behavior is studied under local alternatives. The paper also considers the analogous testing problem in the multivariate local level model Z_t = μ_t + a_t, where Z_t is a k×1 vector, μ_t is a level process that is constant under the null but is subject to reduced rank martingale variation under the alternative, and a_t is an I(0) process. The test is used to investigate possible common trend variation in the growth rate of per-capita GDP in France, Germany and Italy.

This paper considers VAR models incorporating many time series that interact through a few dynamic factors. Several econometric issues are addressed, including estimation of the number of dynamic factors and tests for the factor restrictions imposed on the VAR. Structural VAR identification based on timing restrictions, long run restrictions, and restrictions on factor loadings is discussed and practical computational methods are suggested. Empirical analysis using U.S. data suggests several (7) dynamic factors, rejection of the exact dynamic factor model but support for an approximate factor model, and sensible results for a SVAR that identifies monetary policy shocks using timing restrictions.

This paper reviews recent developments in methods for dealing with weak instruments (IVs) in IV regression models. The focus is more on tests (and confidence intervals derived from tests) than estimators.

The paper also presents new testing results under "many weak IV asymptotics," which are relevant when the number of IVs is large and the coefficients on the IVs are relatively small. Asymptotic power envelopes for invariant tests are established. Power comparisons of the conditional likelihood ratio (CLR), Anderson-Rubin, and Lagrange multiplier tests are made. Numerical results show that the CLR test is on the asymptotic power envelope. This holds no matter what the relative magnitude of the IV strength to the number of IVs.

This paper provides a simple shrinkage representation that describes the operational characteristics of various forecasting methods that are applicable when there are a large number of orthogonal predictors (such as principal components). These methods include pretest methods, Bayesian model averaging, empirical Bayes, and bagging. We then compare these and other many-predictor forecasting methods in the context of macroeconomic forecasting (real activity and inflation) using 131 monthly predictors with monthly U.S. economic time series data, 1959:1-2003:12. The theoretical shrinkage representations serve to inform our empirical comparison of these forecasting methods.

This paper uses forecast combination methods to forecast output growth in a seven-country quarterly economic data set covering 1959-1999, with up to 73 predictors per country. Although the forecasts based on individual predictors are unstable over time and across countries, and on average perform worse than an autoregressive benchmark, the combination forecasts often improve upon autoregressive forecasts. Despite the unstable performance of the constituent forecasts, the most successful combination forecasts, like the mean, are the least sensitive to the recent performance of the individual forecasts. While consistent with other evidence on the success of simple combination forecasts, this finding is difficult to explain using the theory of combination forecasting in a stationary environment.
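The equal-weight (mean) combination is easy to illustrate: averaging forecasts whose errors are largely idiosyncratic shrinks the noise, so the combination can beat the typical individual forecast even when no single forecast is reliable. A stylized simulation (synthetic data, not the paper's seven-country dataset):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_pred = 120, 10
y = rng.standard_normal(T)                   # stand-in for quarterly output growth

# Stand-in one-step-ahead forecasts from each single-predictor model:
# a weak common signal plus large idiosyncratic noise (purely illustrative)
individual = 0.3 * y[None, :] + rng.standard_normal((n_pred, T))

mean_combo = individual.mean(axis=0)         # equal-weight (mean) combination
mse_combo = np.mean((mean_combo - y) ** 2)
mse_individual = np.mean((individual - y) ** 2, axis=1)
# Averaging cancels much of the idiosyncratic noise, so the mean combination
# typically has lower MSE than the average individual forecast
```

Note that the mean weights do not depend on recent forecast performance at all, which is the insensitivity property the abstract highlights.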

The 2001 recession differed from other recent recessions in its cause, severity, and scope. This paper documents the performance of professional forecasters and forecasts based on leading indicators as the recession unfolded. Professional forecasters found this recession a difficult one to forecast. A few leading indicators (stock prices, term spreads, unemployment claims) predicted that growth would slow, but none predicted the sharp economic slowdown. Several previously reliable leading indicators (housing starts, orders for new capital equipment, consumer sentiment) provided no early warning signals. When combined, the leading indicators performed somewhat better than a benchmark autoregressive forecasting model.

The volatility of economic activity in most G7 economies has moderated over the past 40 years. Also, despite large increases in trade and openness, G7 business cycles have not become more synchronized. After documenting these facts, we interpret G7 output data using a structural VAR that separately identifies common international shocks, the domestic effects of spillovers from foreign idiosyncratic shocks, and the effects of domestic idiosyncratic shocks. This analysis suggests that, with the exception of Japan, a significant portion of the widespread reduction in volatility is associated with a reduction in the magnitude of the common international shocks. Had the common international shocks in the 1980s and 1990s been as large as they were in the 1960s and 1970s, G7 business cycles would have been substantially more volatile and more highly synchronized than they actually were. (JEL: C3, E5)

Weak instruments arise when the instruments in linear instrumental variables (IV) regression are weakly correlated with the included endogenous variables. In generalized method of moments (GMM), more generally, weak instruments correspond to weak identification of some or all of the unknown parameters. Weak identification leads to GMM statistics with nonnormal distributions, even in large samples, so that conventional IV or GMM inferences are misleading. Fortunately, various procedures are now available for detecting and handling weak instruments in the linear IV model and, to a lesser degree, in nonlinear GMM.

We consider both frequentist and empirical Bayes forecasts of a single time series using a linear model with T observations and K orthonormal predictors. The frequentist formulation considers estimators that are equivariant under permutations (reorderings) of the regressors. The empirical Bayes formulation (both parametric and nonparametric) treats the coefficients as i.i.d. and estimates their prior. Asymptotically, when K is proportional to T the empirical Bayes estimator is shown to be: (i) optimal in Robbins' (1955, 1964) sense; (ii) the minimum risk equivariant estimator; and (iii) minimax in both the frequentist and Bayesian problems over a class of non-Gaussian error distributions. Also, the asymptotic frequentist risk of the minimum risk equivariant estimator is shown to equal the Bayes risk of the (infeasible subjectivist) Bayes estimator in the Gaussian case, where the "prior" is the weak limit of the empirical cdf of the true parameter values. Monte Carlo results are encouraging. The new estimators are used to forecast monthly postwar U.S. macroeconomic time series using the first 151 principal components from a large panel of predictors.
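With orthonormal predictors, the OLS coefficients are independent normals centered at the true coefficients, so a parametric empirical Bayes estimator reduces to a single shrinkage factor estimated from the coefficients themselves. A stylized sketch under a normal prior (an illustration of the idea, not the paper's estimator):

```python
import numpy as np

def eb_shrink(b, sigma2):
    """Parametric empirical Bayes shrinkage for OLS coefficients b from a
    regression on K orthonormal predictors, so b ~ N(beta, sigma2 * I).
    Treats the beta_i as i.i.d. N(0, tau2) and estimates tau2 from b."""
    tau2 = max(np.mean(b ** 2) - sigma2, 0.0)   # method-of-moments prior variance
    return (tau2 / (tau2 + sigma2)) * b         # common linear shrinkage factor

rng = np.random.default_rng(3)
K, sigma2 = 100, 1.0
beta = rng.normal(0.0, np.sqrt(0.5), K)         # many small true coefficients
b = beta + rng.normal(0.0, np.sqrt(sigma2), K)  # OLS estimates
beta_hat = eb_shrink(b, sigma2)

mse_ols = np.mean((b - beta) ** 2)
mse_eb = np.mean((beta_hat - beta) ** 2)        # shrinkage lowers risk here
```

Because K is large relative to the information in any single coefficient, the estimated prior variance is precise and the shrinkage factor is close to the infeasible Bayes-optimal one, which is the intuition behind the asymptotic optimality results above.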