Working papers

Medium-frequency cycles

Title: Reconciling near trend-stationary growth with medium-frequency cycles.

NEP-DGE blog featured paper, December 2010, under the title “Products, patents and productivity persistence”

Abstract: Existing models of dynamic endogenous growth generate implausibly large trend breaks in output when augmented with standard business cycle shocks. This paper presents a model without this deficiency, yet still capable of generating large medium-frequency fluctuations around the trend. Ensuring the robustness of the trend requires that we eliminate the scale effects and knife-edge assumptions that plague most growth models. In our model, medium-frequency fluctuations arise from changes in the proportion of industries producing patent-protected products. However, variations in the number of firms within each industry ensure that process improvement incentives remain roughly constant.


Title: Data consistent modelling of medium-frequency cycles and their origins.

Abstract: This paper presents four stylized facts on medium-frequency cycles, then builds and estimates a model capable of replicating both these facts and standard business-cycle ones. We show that GDP returns to trend at long lags; that aggregate mark-ups always lead output and are counter-cyclical only at low frequencies; and that medium-frequency cycles are larger in countries with longer patent protection. Since traditional dynamic endogenous growth models generate large trend-breaks following business-cycle shocks, our model is based on that of Holden (2013a). After estimation, a financial-type shock to the stock of ideas emerges as the key driver of the medium-frequency cycle.


Title: Online appendices to “Reconciling near trend-stationary growth with medium-frequency cycles” and “Data consistent modelling of medium-frequency cycles and their origins”.

Abstract: This paper presents the online appendices to Holden (2013a) and Holden (2013b). We discuss the derivation of the first order and free-entry conditions, the steady state level of relative productivity of non-protected industries, and the nature of the inventor-firm bargaining procedure. We go on to present the full equations of both models considered, details of the data used for estimation, and the results of this estimation procedure.


Inequality constraints

Title: Efficient simulation of DSGE models with inequality constraints (joint with Michael Paetz)

NEP-DGE blog featured paper, August 2012

Abstract: This paper presents a fast, simple and intuitive algorithm for simulation of linear dynamic stochastic general equilibrium models with inequality constraints. The algorithm handles both the computation of impulse responses and stochastic simulation, and can deal with arbitrarily many bounded variables. Furthermore, the algorithm is able to capture the precautionary motive associated with the risk of hitting such a bound. To illustrate the usefulness and efficiency of this algorithm we provide a variety of applications, including models incorporating a zero lower bound (ZLB) on nominal interest rates. Our procedure is much faster than comparable methods and can readily handle large models. We therefore expect this algorithm to be useful in a wide variety of applications.

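To give a flavour of the idea behind algorithms of this kind, here is a deliberately minimal sketch (not the paper's implementation): a ZLB is imposed on a linearised interest-rate path by adding non-negative "shadow" shocks wherever the unconstrained path would violate the bound. All model details below are hypothetical illustrations: backward-looking AR(1) dynamics with persistence rho, so the problem reduces to a linear complementarity problem with a lower-triangular response matrix that a simple forward pass can solve. The paper's algorithm handles fully forward-looking models, which this sketch does not.

```python
import numpy as np

# Hypothetical illustrative set-up (not from the paper):
# the rate follows backward-looking AR(1) dynamics, so a shadow shock at
# period s moves i_t by rho**(t - s) for t >= s, and by zero before s.
rho = 0.8
T = 12
# Unconstrained impulse response: a 1% steady-state rate hit by a large
# negative shock, dipping below zero at first.
q = 1.0 - 3.0 * rho ** np.arange(T)

# Response matrix M: column s is the IRF of the rate to a shadow shock
# arriving at period s (lower triangular by construction).
M = np.tril(rho ** (np.arange(T)[:, None] - np.arange(T)[None, :]))

# Complementarity problem: find alpha >= 0 with q + M @ alpha >= 0 and
# alpha_t > 0 only where the bound binds.  Because M is lower triangular
# with a positive diagonal, a single forward pass solves it.
alpha = np.zeros(T)
for t in range(T):
    y_t = q[t] + M[t] @ alpha
    if y_t < 0.0:
        alpha[t] = -y_t / M[t, t]

i_bounded = q + M @ alpha  # entire path now respects the bound
```

The forward pass works only because of the assumed backward-looking dynamics; with forward-looking expectations the bounded variable responds to future shadow shocks too, M is no longer triangular, and a genuine complementarity solver is needed, which is where the paper's contribution lies.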

For other papers, please see my RePEc profile or my Google Scholar profile.


Title: Learning from learners

Abstract: Traditional macroeconomic learning algorithms are misspecified when all agents are learning simultaneously. In this paper, we produce a number of learning algorithms that do not share this failing, and show that this enables them to learn almost any solution, for any parameters, implying that learning cannot be used for equilibrium selection. As a by-product, we are able to show that when all agents are learning by traditional methods, all deep structural parameters of standard new-Keynesian models are identified, overturning a key result of Cochrane (2009; 2011). This holds irrespective of whether the central bank is following the Taylor principle, irrespective of whether the implied path is or is not explosive, and irrespective of whether agents' beliefs converge. If shocks are observed then this result is trivial, so, following Cochrane (2009), our analysis is carried out in the more plausible case in which agents do not observe shocks.

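As a minimal sketch of the "traditional" adaptive-learning schemes the paper critiques, consider agents who estimate the mean of a variable by decreasing-gain recursive least squares while actual outcomes depend on their own forecast. The model and parameter values below are hypothetical illustrations, not taken from the paper: with self-referential feedback beta < 1, beliefs converge to the rational-expectations fixed point mu / (1 - beta).

```python
import numpy as np

# Hypothetical illustrative model (not from the paper):
#   x_t = mu + beta * a_{t-1} + eps_t,
# where a_{t-1} is the agents' current forecast of the mean of x.
# Agents update a by recursive least squares with decreasing gain 1/t.
rng = np.random.default_rng(0)
mu, beta, sigma = 1.0, 0.5, 0.1

a = 0.0  # initial belief
for t in range(1, 200_001):
    x = mu + beta * a + sigma * rng.standard_normal()
    a += (x - a) / t  # recursive least-squares update of the mean

# Beliefs settle near the rational-expectations fixed point
# mu / (1 - beta) = 2.0 under these illustrative parameter values.
```

The misspecification the paper points to is visible even here: each agent's estimator treats x as exogenous data, ignoring that everyone's simultaneous updating feeds back into x through the beta * a term.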