Edward C. Prescott (2014) - The Revolution in Aggregate Economics

It's good to be here with you fellow scientists. I think economics is a highly successful science, and I'll be talking about some of the great successes. It's unified. It's a hard science. Aggregate economics is now a hard science, I repeat. It has been tested through successful use. Like all hard science, there's a theory. You can't have a deviation from theory if you have no theory; if you can explain anything, you have no theory. And there are deviations from this theory. Progress in the sciences is a back and forth between theory and measurement.

The national accounts really define macro. A goal in the early 1930s was to come up with a measure of the performance of the business sector. Kuznets used prices to measure the value of all final output and came up with one: GNP. His followers measured the inputs, using prices to aggregate the inputs of labour (human capital) and the services of durable goods (tangible capital).

I went to graduate school in the '60s and worked in a different tradition than the one I'll be talking about here: the macroeconometric model. The leader of this was Lawrence Klein. His was a quantitative structure. The national account identities all held, and it was complete. He was a great scientist.

Well, what did this do? Some growth facts came out. The Solow-Swan growth model was incredibly useful; in particular, it proves invaluable in output accounting. I'd call the original of that model classical - here I'm following Ragnar Frisch. That theory is the theory of the income side of the national accounts. This you know about: the aggregate production function gives the maximum output that can be produced given the quantities of factor inputs. There's an aggregation theory underlying the aggregate production function - that goes way back. If factor markets are competitive, profit maximisation results in actual output being equal to the maximum amount that can be produced. So the aggregate production function has empirical content. Further, factor payments exhaust the product. But the model is not neoclassical: there are no household decisions. The saving/investment rate is exogenous, and so is labour supply.

What did they do in the '60s? It was the Hicksian IS-LM. I used that in my dissertation, along with sequential decision theory for optimal control. What they were doing was empirically searching for the law of motion of the dynamic system. We all know about Lucas, who said that if you attempted to exploit the trade-off between inflation and unemployment, it would fail. This model came to prominence because of its great success in forecasting. Klein said there would be no great depression in the '50s. In the '60s he was pretty accurate. The quality of forecasting went up. Then people said, well, this is it, we're going to use it. Nixon said we're all Keynesians now. An attempt to exploit that trade-off was made in the '70s and it failed spectacularly - as predicted by dynamic economic theory.

So what happened in the '70s? A lot of conjecturing. What to do? A lot of storytelling. It was like the pre-Klein era. There were some excellent factual studies. Sims and Sargent, in particular, found independence between nominal and real aggregate variables using some pretty sophisticated time series tools - they're good. But there was no established, tested theory. That theory developed at the very end of the decade. It is neoclassical growth theory. It has an aggregate production function: aggregate total factor productivity (TFP) times a function of the capital and labour inputs. And it has an aggregate household that discounts expected utility flows, which depend upon consumption and leisure.
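In standard notation, the core of the model he is describing looks like this. A minimal sketch; the Cobb-Douglas form at the end is the usual benchmark choice, not something specified in the talk:

```latex
% Aggregate household: discounted expected utility over consumption
% c_t and leisure 1 - h_t, where h_t is the fraction of productive
% time allocated to the market.
\max \; \mathbb{E} \sum_{t=0}^{\infty} \beta^{t}\, u(c_t,\, 1 - h_t)
% Technology: TFP A_t times a function of the capital and labour
% inputs, with capital accumulated out of forgone consumption.
\text{s.t.} \quad c_t + x_t = A_t F(K_t, H_t),
\qquad K_{t+1} = (1 - \delta) K_t + x_t,
\qquad \text{e.g. } F(K, H) = K^{\theta} H^{1-\theta}.
```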
Leisure is the fraction of productive time allocated to non-market activities; some of that time you're working in home production. The model endogenises the saving/investment decision and the labour/leisure decision. Note I said aggregate, not representative, household. We all know that if there are common homothetic preferences, plus some other conditions, there is a representative aggregate household. But micro evidence is grossly at variance with common homothetic preferences, so the representative household construct is useless in aggregate analysis. What was the key observation? The principal margin of adjustment is the fraction of people working - not the hours per worker. And the key is that there have to be non-convexities, which there are; that's been well documented.

I want to emphasise that micro and macro theory are fundamentally different. People who use micro theory to address macro issues fall flat on their faces. Micro theory is an incredibly valuable tool for dealing with small units, and you organise your empirical knowledge around things like supply and demand. When addressing aggregate issues, the appropriate tool is aggregate theory, and things are organised around preferences and technology. In this language, preferences just describe what people will choose when given a choice. But we have gone beyond supply and demand - that's what Sargent said in the late 1970s.

By the way, you hear about business cycles all the time. I got to write a handbook chapter on that, and I said there aren't business cycles. Wesley Mitchell looked at the time series: periods coming down, contractions, are recessions, and then there are periods of expansion; that's the way they represented the time series. He had 500 series pasted all over his wall. But we've learned there's no cycle there. Adelman and Adelman (1959, Econometrica), at the suggestion of Arrow, found that the Klein-Goldberger model was not cyclical: it had a dominant eigenvalue that was real, 0.75 with annual data. By the way, Kydland's and my model in 'Time to Build' is a quarterly model, so it's about the fourth root of 0.75, or about 0.93. Using micro reasoning, most economists incorrectly concluded the growth model could not be used to study aggregate fluctuations. Well, they were wrong - that is what history proved.

Business cycles are the result of movements in real factors. I use the words business cycle and I didn't define them, I apologise. Business cycles are what you define them to be: you fit a smooth line through the series, and the cycle is the deviations from it. These are operational definitions, so you can't say they're right or wrong; it's a question of whether they're useful or not. Things like technology, the regulation and legal system, taxes, terms of trade and demographics are the real factors.
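That smooth line is, in practice, the Hodrick-Prescott filter, the standard operational definition of trend in this literature. A minimal sketch (the function and variable names are mine):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Split a series into a smooth trend and the deviations from it.

    The trend minimises sum((y - tau)^2) + lam * sum((d2 tau)^2),
    where d2 is the second difference; lam = 1600 is the conventional
    smoothing weight for quarterly data.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    # K maps the trend to its second differences: a (T-2) x T matrix.
    K = np.zeros((T - 2, T))
    for t in range(T - 2):
        K[t, t], K[t, t + 1], K[t, t + 2] = 1.0, -2.0, 1.0
    # First-order condition of the minimisation: (I + lam K'K) tau = y.
    trend = np.linalg.solve(np.eye(T) + lam * (K.T @ K), y)
    return trend, y - trend
```

Applied to the log of quarterly real GDP, the second return value is "the business cycle" in the operational sense above, and its statistics are what get compared with the model economy's.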
So what is theory? Given a question, theory is a set of instructions for constructing a fully-articulated model economy from which the quantitative answer to the question at hand can be determined. Models are instruments for drawing scientific inference. They're abstractions; scientists abstract. You have to: reality is too complex. The amazing thing is that simple abstraction is so powerful and useful.

The core of neoclassical growth theory: on the technology side, the aggregate production function; on the preference side, the aggregate household utility function. I said aggregate household. We know about the aggregation that gets us the aggregate production function: you sum up small units with free entry and exit, and there's an aggregate production function. And that's very different - the behaviour of the aggregate is very different from the behaviour of the units being aggregated. Go to, say, McKinsey, and production is described as a couple of activities, both of which are operated. The elasticity of substitution between capital and labour at the aggregate level isn't infinite; at the plant level it is zero - that's the definition of the activities.

Something like this aggregation theory had to be developed for the aggregate household. I'm using that word 'aggregate' intentionally, repeating it. As I said before, aggregation with common homothetic preferences incorrectly predicts that the margin of labour supply adjustment is workweek length. That's just a small part of it; the big thing is the fraction of people working in a given period. Why is the principal margin the number working? Empirically the aggregate construct seems to work, but you'd like to have some micro foundations for your aggregate elements. The key guy there is Richard Rogerson. He developed the aggregation theory when there's a labour indivisibility, and the margin of adjustment is the fraction of people working - it is forced to be that way. That matches observation much better than the representative agent construct. Hansen, in his classic business cycle paper, introduced it into the basic neoclassical growth model and found the model displayed the business cycle facts.

What are the business cycle facts? Investment is a lot more volatile than consumption; the shares of income going to the two factors of production are relatively constant; the capital-output ratio and the consumption and investment shares are relatively stable. But not all the adjustment is in the fraction of people working; some is in average hours worked per worker. You may take one week less vacation or work overtime. Professor Kydland and I developed and used a model with both margins operating. A major point: the aggregate labour supply elasticity is determined by features of both preferences and technology. So the whole language of labour supply, it's not ... (inaudible) labour supply.
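A sketch of the Rogerson aggregation being referred to, in my notation (this is the static logic of the indivisible-labour construct that Hansen put into the growth model):

```latex
% Each person either works \bar{h} hours or not at all - the
% indivisibility. An employment lottery gives each person probability
% e_t of working, with complete insurance of consumption against the
% lottery outcome. The aggregate household's period utility is then
u(c_t) + e_t\, v(1 - \bar{h}) + (1 - e_t)\, v(1),
\qquad \text{with labour input } L_t = e_t\, \bar{h}.
% Utility is linear in the employment rate e_t, so aggregate labour
% supply is highly elastic at this margin no matter how inelastic any
% individual's hours choice is: the margin of adjustment is forced to
% be the fraction of people working.
```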
Using this methodology, economists found monetary policy matters little for the real economy. That isn't what the people in Washington say - and, I guess, wherever the European Central Bank is located. They think that they can ... they're deluded. There's no theoretical or empirical support for the proposition that monetary policy can have a significant effect upon employment and output - or 'consequence', I shouldn't use 'effect'. By the way, with this neoclassical growth model you can introduce all kinds of things into this methodology - it's the RBC methodology. It's not saying that business cycles are all due to the real factors. People took it and introduced things like staggered wage setting, price stickiness and so on. There is an Econometrica paper showing the sticky-price mechanism was inconsistent with the micro and macro empirical facts.

There are a lot of spectacular successes. We used this neoclassical growth model to study the great depressions of the 20th century in a consistent way across countries - I think it's about 30 authors and about 16 depressions. Not only the Great US Depression: the current not-so-great US depression; Japan's growth miracle; Japan's lost decade of growth; the large secular movement in the value of corporations relative to GNP. An extension of the model has been used to deal with issues of foreign direct investment and others. And by the way, you also get investment in innovation in a way that supports Adam Smith. Schumpeter was wrong: you don't need monopoly rents to support investment in technologies that can be used at multiple locations. There can be locational rents to support the investment. And that's been used. And a lot of the big imbalances were just world capital markets working.

For example, why is the US 40% more prosperous than Western Europe? This is detrended GDP per adult, which corrects for population size and for the secular growth due to the increase in the stock of useful knowledge - which you'll be contributing to. People all over the world will be using the tools you develop. Our tools are so much better now than they used to be. We can do so much more; we're not so limited. Using the same basic model, the assumptions you make across applications all have to be mutually consistent, and they have to be consistent with the micro observations as well. It turned out that the intratemporal tax wedge - the effective tax on labour - was important for this big difference. Why Europe? By the way, Europe is not lazy. Europeans work a lot less in the market sector, but if you include home production it's about the same. There's just a distortion. When you compare the US to Japan, the Japanese are not as productive in output per hour as the Europeans and North Americans.

At the early stages we looked at the statistical properties of the time series, fitting that smooth curve as an operational definition of trend and constructing those statistics. By the way, when we build our model economy, we take the national accountants and put them in our model, and have them do what they do, for our model economy. So we're always comparing the statistics they compute - the same statistics. If they don't measure some part of output, the accountant in our model does not measure that part of output in coming up with GDP.

A turning point was the Prescott-Summers debate in 1985. Larry stuck to the serious general equilibrium public finance language. He wanted to know what the shocks were. I wanted to know what the shocks were too; at that point, I suppose, it was just the statistical properties of these time series. But the big methodological advance is that now we can look at predicted equilibrium paths and compare them with actual ones. We can say why the economy went up and down relative to trend. Now aggregate economic science provides him with what he wanted - and what I also wanted, because it's obvious we want that: path analysis. Treat productivity, demographics and tax rates as exogenous; they are too important to be ignored, to be abstracted from. Given the initial capital stocks, compute the perfect foresight equilibrium path. The perfect foresight assumption is incorrect. But tests have been run, and the predicted path, at least for artificial economies, is very close to the actual one. If you give people the correct expectations in this otherwise incorrect expectation scheme, the two paths are very, very close. And this makes it simple. Kydland, when he looked at the Argentine economy, estimated the expectation scheme and plopped in the optimal forecasting scheme, given his estimate and the realisation of the shocks.
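That computation can be sketched concretely. Below is a minimal shooting algorithm for the perfect-foresight path of the one-sector growth model; the parameter values are illustrative rather than calibrated, and the function names are my own:

```python
import numpy as np

# Illustrative parameters: capital share, discount factor, depreciation
# rate, and the horizon over which the path is computed.
alpha, beta, delta, T = 0.33, 0.99, 0.025, 400

def simulate(c0, k0, A):
    """Shoot the Euler equation forward from initial capital k0 and a
    guess c0 for initial consumption; A is the exogenous TFP path."""
    k, c = np.empty(T + 1), np.empty(T + 1)
    k[0], c[0] = k0, c0
    for t in range(T):
        # Resource constraint: next capital = output plus undepreciated
        # capital, less consumption.
        k[t + 1] = A[t] * k[t] ** alpha + (1 - delta) * k[t] - c[t]
        if k[t + 1] <= 0.0:
            return None  # capital exhausted: the guess c0 was too high
        # Euler equation with log utility:
        # c_{t+1}/c_t = beta * (marginal product of capital + 1 - delta)
        c[t + 1] = c[t] * beta * (alpha * A[t + 1] * k[t + 1] ** (alpha - 1) + 1 - delta)
    return k, c

def perfect_foresight(k0, A):
    """Bisect on c0: too much consumption exhausts the capital stock,
    too little over-accumulates it; the boundary is the saddle path."""
    lo, hi = 1e-6, A[0] * k0 ** alpha + (1 - delta) * k0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if simulate(mid, k0, A) is None:
            hi = mid
        else:
            lo = mid
    return simulate(lo, k0, A)

# Example: start below the steady state with a constant TFP path.
A_path = np.full(T + 1, 1.0)
k_path, c_path = perfect_foresight(2.0, A_path)
```

Real applications add the labour/leisure decision, tax rates and demographics as exogenous paths; the logic - pin down the equilibrium path from the initial capital stocks and the assumed future - is the same.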
I say 'deviations from theory'. A big deviation from theory arose in the 1990s: the US boom. I was at the Minneapolis Fed as a consultant at that time, and they were really puzzled. Why were people working so much in the US? You looked at that period and the business cycle facts didn't hold. Productivity, GDP per hour, was low; labour supply was high. The two generally go up and down together. You look at profits: profits, accounting profits, were low.

It was resolved by extending the aggregate technology to include intangible capital, most of which is unmeasured output. Recently, about a year ago I think, the accounts started including the part of it for which they had some prices - about a quarter. Everybody knew intangible capital investment was large; Ellen McGrattan figured out a way to deal with this unobservable. This just shows you the way things work. With no intangible capital: a huge deviation. The red line, actual hours, went way up, while the hours predicted by the simple naive theory that abstracted from intangible investment went way down. With it, there was very little deviation: the predicted and the actual are close. It turned out that the technology shocks were non-neutral with regard to intangibles. Developing new products, starting new businesses: they're all big intangible investments. Intangible investment is as big as tangible investment - you're not talking about little things. The reason it was not included is that people didn't know how to. Now we have to have 2 outputs and 3 inputs. Before the recent revision of GDP, measured output and GDP were essentially the same.

Notice that in the first equation the intangible capital has no sector or activity subscript; the tangible capital - the buildings and vehicles and other things - and the labour do. Add the subscripts 1 and 2 for the 2 activities. We developed microfoundations for this. The one intangible capital can be used in both places. If you have a brand, Wheaties, you can develop a Wheaties substitute very cheaply. If you have some patents for making drugs, that helps: you can use them to make pills to sell to people - that's GDP - and you can use the knowledge to develop new drugs or improve existing ones. The intangible capital is used by both activities.
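Written out, the technology being described takes this form (a sketch in my notation, following the two-activity structure just stated; subscript 1 is final-output production, subscript 2 is the production of new intangible capital):

```latex
% Two outputs, three inputs. Tangible capital k^T and labour l carry
% an activity subscript; the intangible capital k^I does not, because
% it is used by both activities at once (brands, patents, know-how).
y_{1t} = A_{1t}\, F\!\left(k^{T}_{1t},\, l_{1t},\, k^{I}_{t}\right)
\qquad \text{(final output, the measured part of GDP)}
y_{2t} = A_{2t}\, F\!\left(k^{T}_{2t},\, l_{2t},\, k^{I}_{t}\right)
\qquad \text{(new intangible capital, mostly expensed)}
% Non-neutral technology change means A_{1t} and A_{2t} move
% differently, which is what the 1990s episode required.
```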
By the way, there was a puzzle. The basic neoclassical growth model says that growth miracles, like Japan's, should have been a lot faster than they were. Parente and I found that more capital slows the predicted transition down, so we searched all over the place for more capital. We found a lot of human capital acquired on the job and at school - big numbers - but there still was not enough. The intangible capital of businesses was the right additional amount, and the predicted path matched the actual one. That puzzle was resolved.

This intangible capital is crucial in understanding, and in developing, a theory of the fundamental value of the stock market - the same theory. The value of the businesses is equal to the quantities of the various types of capital times their q prices, which depend upon policy in a specific way. And it gave just the right amount; everything seemed to be falling into place. By the way, that accounted for the huge movements in the value of the stock market - or 'corporations', I should say - relative to GNP, in the US and the UK. We did it for the US. The editor told us we had to do it for the UK. We did it for the UK, and it worked better there. That ratio varied by a factor of 2.5 in the US and 3 in the UK over the 1960 to 2000 period - a factor of 3! You look at what profits were, after-tax profits: little variation. What it was, was changes in taxes and regulations. In the 1960s you could not buy back your shares to distribute earnings to your owners. That rule changed in the Carter administration. Why? IBM had a lot of extra cash, and Carter's cabinet had about 7 former IBM people who wanted this permitted. It was a good thing, and it was permitted. Pension funds could not hold stocks before about 1980. If you had a fiduciary responsibility and the stock market went down, you were liable - up until then. Then there was a legal case, and what was considered prudent for an investor changed; holding stocks was allowed from about 1980.

Is the current US depression a deviation from theory? No: the same model we used for the boom in the '90s is consistent with it. The problem is productivity and taxes. There are some cases where future taxes are important, as Kydland and Zarazaga show: higher future taxes on distributions to capital owners. But for the current time, theory is ahead of measurement. By the way, the US has been depressed for 5 years. QE is just much ado about nothing - Modigliani and Miller tell us that. TARP did depress.

Concluding: theory can only predict what will happen now and in the future given future policy regimes and the current state of the economy. Economists' findings are just one input into policy selection. We know the nature of governance is the primary determinant of economic performance. How can a country set up good sustainable governance? That is one of the many big open questions. Thank you for your attention.


Abstract

Neoclassical growth theory has been successfully used to address many aggregate economic questions quantitatively. These questions include the quantitative contribution of various factors to business cycle fluctuations and the large differences in hours worked across countries and over time. In the process of using the theory, deviations from the theory emerged, and in the process of resolving these puzzling deviations, the theory was advanced. One major advance was figuring out a way to incorporate intangible capital produced and owned by businesses. Investment in this form of capital is known to be big, and nearly all of it is expensed; therefore most is not part of GDP. This extension resolved the puzzle of why the U.S. boomed in the 1990s even though GDP per hour and corporate profits were low. The model that resolved that puzzle is also consistent with the 2008-2009 recession and the subsequent depression, which has persisted for over 5 years.
One major problem was how to account for intangible investment in this theory, which assumes competitive markets, and in the aggregates reported in the national income and product accounts. McGrattan and Prescott (2009) developed a way, with technology capital that can be used at any location if permitted by policy. Locational rents, not monopoly rents, are what provide the incentive to innovate. With this extension there is a reason for foreign direct investment (see McGrattan and Prescott 2010). The predicted gains from openness are three times bigger than the standard trade models' predictions, but still only a third of what the empirics indicate. Other factors are important, the leading candidates being a lowering of the incentives to set up barriers to more efficient production and an increase in the rate at which knowledge useful in production diffuses. Determining the quantitative importance of these two factors poses two major open problems in aggregate economic theory.

Readings:
“Technology Capital and the U.S. Current Account,” E. R. McGrattan and E. C. Prescott, American Economic Review, 100 (4), 1493–1522, September 2010.

“Openness, Technology Capital, and Development,” E. R. McGrattan and E. C. Prescott, Journal of Economic Theory, 144, 2454–2476, November 2009.
