Panel Discussion (2014) - Strategic Behavior, Incentives, and Mechanism Design; Panelists Maskin, Mirrlees, Myerson

I should say this is the last academic event that's open for the general public, for all participants, here in Lindau; tomorrow will be in Mainau. This panel continues a tradition that was started three years ago: to have a discussion on the origins and the significance of the research that has been the subject of a prize. In this case it's not just one prize, but two prizes. Namely, the prize in 1996, for analysis of incentives in situations with asymmetric information, and the prize in 2007, for the theory of mechanism design. And I should say right away that there are two other prizes that are also related, namely the one in 1994, on equilibrium in non-cooperative games, and the one in 2001, on markets with asymmetric information. One of the interesting features that might suggest that the split is inappropriate is that there is one condition floating around, known by now as the so-called Spence–Mirrlees condition, which links Mike Spence's work on markets with asymmetric information and Jim Mirrlees' work on optimal taxation. The common feature was incentives, but the 2001 prize was all about markets, whereas the prize here is not about markets, but to some extent about governments, to some extent about welfare economics. I'll explain that, and I think the Laureates themselves will explain it even more as we go along. We should, in this context, remember that for both of those prizes there is one person missing. For 1996 it's William Vickrey, who actually died between the announcement and the ceremony. For 2007 it's Leo Hurwicz, who died, I believe, in the year after he was given the prize. Before I hand over to Jim, to talk about his research, I'll make a few brief remarks on the research of Vickrey and Hurwicz. Vickrey, who wrote in the early 60's, to give you an impression of timing, was very much concerned with what you can do in terms of government policy.
When you don't know people's preferences and people's abilities, how does asymmetric information affect what the government can do, and also the effects of any measure that the government might take? He worked on three different areas: taxation (I suspect that Jim will be linking up on that), public sector pricing, and the thing for which he has become most famous, auction design; he invented the so-called Vickrey auction. The question is: what can a seller of an object get if he puts it up for auction? If the seller knows what the object is worth to the different buyers, he'll go to the buyer with the highest valuation and say "well, you pay me your valuation minus epsilon". So basically the seller collects what the object is worth to the buyer who values it most. If he doesn't know, well, think of an auction. The buyer with the highest valuation, if he knows that he has the highest valuation, what's he going to do? He's going to shade his bid. Is there anything one can do about that? Well, not really. Is there any auction, any other design, that one might use in order to improve what can be collected? Well, not really. But Vickrey had the interesting idea that if the buyers know their valuations, then how much is the buyer with the highest valuation going to shade? Well, down to the level of the second highest valuation, plus perhaps some epsilon that we're going to neglect, because he knows that nobody else is going to overbid him then. So that's the basis of the idea of the Vickrey auction, the so-called second price auction: ask each person how much the object is worth to him, award the object to the person who values it most, and let this person pay the value named by the second highest bidder. It turns out that this procedure has the interesting feature that for every participant, truth telling is an optimal strategy.
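The logic of the second price auction just described can be sketched in a few lines of code. This is a minimal illustration, not anything from the talk itself: the payoff function, the valuation of 10, and the rival bids below are all made-up numbers, chosen only to show that bidding your true value never does worse than any other bid, whatever the rivals do.

```python
# Sketch of Vickrey's second-price auction: the highest bidder wins
# and pays the second-highest bid. All numbers are illustrative.

def second_price_payoff(my_value, my_bid, other_bids):
    """Payoff to one bidder: value minus the highest rival bid if they win, else 0."""
    if my_bid > max(other_bids):
        return my_value - max(other_bids)  # winner pays the runner-up's bid
    return 0.0

# Truthful bidding is weakly dominant: against any profile of rival bids,
# bidding the true value is at least as good as any deviation.
my_value = 10.0
for other_bids in [[7.0, 4.0], [12.0, 4.0], [10.0, 3.0]]:
    truthful = second_price_payoff(my_value, my_value, other_bids)
    for alternative in [0.0, 5.0, 8.0, 11.0, 15.0]:
        assert truthful >= second_price_payoff(my_value, alternative, other_bids)
```

Underbidding can only lose a profitable win, and overbidding can only add an unprofitable one; the price paid never depends on the winner's own bid, which is what removes the incentive to shade.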
And not only that: it's an optimal strategy regardless of what the others do, which is where we link up with the '94 prize. Many of these situations are situations of strategic interdependence, where what one person thinks of doing depends on what this person expects others to be doing. Vickrey's second price auction was the first example of what is now known as a dominant strategy mechanism: a procedure where what each person wants to do is independent of what the others do. And there are some interesting features associated with the second price auction, and with dominant strategy auctions in particular, under constellations of asymmetric information even between the bidders themselves. I expect that Roger Myerson is going to say a little bit more about that. So, let me now turn to Leo Hurwicz. Leo Hurwicz was concerned with the thesis that competitive markets are wonderful mechanisms for processing information about what should be done. This goes back to Hayek, who made precisely this claim. And he asked the question "what does that actually mean?" We're used to drawing, for a single market, a supply schedule and a demand schedule; then we have an intersection, and we have equilibrium transactions and an equilibrium price. Well, if that's all there is to it, why not just have a planner assign everyone the equilibrium transactions and that's it? There's no element of information processing in this. But suppose that beforehand we do not know what supply schedules and demand schedules are. Then the theory tells us we get different points of intersection, and our planner doesn't know which is the right one. So the notion that the planner can tell what should be done doesn't work; Hayek is right on that one. But does a market system know it? By just drawing supply and demand schedules, we haven't provided an apparatus for dealing with that question. So Hurwicz articulated the problem in a different way.
He asked: suppose we have the participants transmit messages to whoever is running this system. And whoever is running the system, think of a big computer programme that gets these messages as input and then provides suitable output, is going to assign some results. What messages will people deliver? How do they depend on how the computer is programmed? That leads to notions of incentive compatibility and notions of implementability. We can only implement certain outcomes, like points of intersection of supply and demand curves, depending on what these curves are, if we have a system of messages such that people find it appropriate to convey messages in such a way that these outcomes are actually realised. Now you can immediately see that the problem of game theoretic interaction enters in a very dramatic way, and we are again back to the issue of game theoretic interdependence. The important contributions of Vickrey and Hurwicz were, for Vickrey, to introduce the problem of policy under asymmetric information together with the construct of the second price auction; for Hurwicz, to articulate the problem of what we mean by the notion that a social system, including of course a market system, processes information appropriately and gives desirable results. On that note, I want to end my sketch of these two scholars' contributions and hand over to Jim, who will talk about his own research on taxation and incentives. I want to mention these very important contributions of Vickrey and Hurwicz straight away. I approached these questions in what seemed to me to be a slightly more concrete situation. Peter Diamond and I were working on optimal taxation. I would, on this occasion, like to put that rather more grandly: we were really on a project to reconstruct welfare economics in a proper sense. Because welfare economics, as it had been taught to us, just talked about how you could achieve something that was Pareto efficient.
It never got as far as maximising a welfare function, as Samuelson certainly did, that is, actually having a criterion that would allow you to select amongst different things. Remember that Pareto efficient allocations can be quite dreadful: you can have people starving and things like that. That wasn't what one wanted. So if you worked with welfare functions, then of course the standard treatments would tell you to use lump sum transfers. There were a number of really interesting papers that got some way away from that. Certainly Paul Samuelson on social indifference curves was one of the papers that stimulated one to ask: "So, if you can't do lump sum transfers, why is that? What were they really? What was the difficulty?" That was of course what led us simply to jump in to thinking about taxation, following Frank Ramsey, who back in the 1920's, at Pigou's stimulation, had started a theory of optimal commodity taxation. But of course none of these things were very explicitly in the same kind of general setting as you would have in Arrow–Debreu. So that was what we were doing, and I think we got quite a long way with that. I'm going to make this a little bit more personal than what you were saying there, because one of the mysteries in what I was up to back then is why it wasn't Peter and I doing this together. It seemed to me then only a relatively small step to go from talking about optimal commodity taxation, and there's material in our joint papers about, effectively, an income tax situation where you might have something more complicated. But still, speaking as somebody who likes differential equations and non-linear functions and things like that, one could have non-linear taxation and be more explicit about asking what sort of shape a non-linear tax schedule could be, if you could do it. No reason why you shouldn't have a non-linear tax schedule.
But the interesting thing was, when one came to try to analyse that question, of course the right sort of context to place it in was a continuous distribution of people, with types spread all over, from low wage to high wage, without any particular limit. At first I thought that would be a perfectly straightforward optimization problem: you have to maximise something, and the obvious thing to use was the integral of the utilities of the people, subject of course to some production constraint. I'm not sure if it was the first thing, but I think, for at least the entertainment value, it should be treated as the first thing. What I did was to say: "What would things be like if we were to use a model which, unlike most of the papers discussing these issues of welfare maximisation that had simply a single thing, like income, and gave arguments for equalising income, for example?" So let's have a model where, as you saw this morning, utility would be a function of consumption and the work that people do, labour supply. Actually, the particular model that I was using was like that, but you took it that people varied in the productivity of their hours of work. So it was hours of work that was in the... [inaudible]... besides consumption. The first thing to do should have been simply to ask: what's the right answer, and what are the lump sum taxes that you'd want? The answer, provided that people had normal labour supply in the technical sense, was that the higher the productivity, the ability of people, the lower should their utility be in the outcome. Well, of course you see, nothing could be less incentive-compatible than that. Why should people reveal the fact that they had high ability, if having high ability would consign you to a lower utility? If you had to pass a test to provide the grade that would be used as the basis for lump sum taxation, you would surely manage to fail any exam that anyone put to you.
I know I'm making the absurd assumption that people are egotistical, selfish, rational people, but I'm not the only economist who's had that assumption. You see, this is driving one straight to incentive compatibility, which is of course what Leo Hurwicz had been doing in another way, but it still didn't immediately get written out that way. So you then go on to the problem, saying: "No, there's just going to be a tax schedule; there's just going to be consumption as a function of the amount that people earn." So you won't use any information about people's types. So that's just a matter of writing down what that means and solving the equations. Write it down. What a mess. At this point I will bring Vickrey into the story, in a way in which he should have played a more important part for me than he did. I knew a number of his papers and I was a great admirer, and he certainly had conveyed these concepts of incentive compatibility and so on in some of the things I had read. But in a journal paper in 1945, he had pretty much defined this optimal income tax problem, which perhaps I've said enough to sketch. And he'd written down some first order conditions and said "I don't see what to do with these, it's a mess." He essentially got the same mess that I got more than 20 years later, but fortunately I wasn't prepared to give up quite so easily. I dare say that was because computers existed by my time, and so I thought, well, at least there's a possibility of computing some answers. Then I realised that the statement that people are maximising subject to a budget constraint, which doesn't involve their type in any way, was equivalent to saying that each person chooses what he's doing in preference to what anyone else is going to be doing in the equilibrium. So that's the public finance equilibrium with incentive compatibility, a term I didn't use, but there it was.
And the point of that was, of course, that it then pretty much instantly gives you an envelope condition that says the derivative of maximised utility with respect to the type of the individual, the ability of the individual, is going to be equal to the partial derivative of the utility with respect to the type of the individual. Then you look at it and say: it's just the same sort of thing, going to be no problem with that. Well, of course, you think a bit and realise there are problems. But I'm sure I've pretty much run out of time, so I can barely mention what the problem... So much of the time in dealing with this paper was spent on proving rigorously that certain things couldn't happen; that indeed it was valid to have the envelope condition under the assumptions that I was prepared to make. Creating a computer programme, that was before I had learned any computer languages. It really worked by getting our computing assistant in Nuffield College to do it, and then I got some research assistants who did it, and nothing would work, so I'd go through it and say "No, no, no, you've got to change it to this." And it worked. But it wasn't simply a straightforward control problem, because you had to check whether perhaps a whole group of people of different skill levels would actually choose the same consumption and labour supply levels. So there were interesting complications, but not of a kind that one would get a Nobel Prize for sorting out. And at last, and here is where I had better end, we got the first computations of what an optimal tax schedule should be like. I mean, 25% of people should be induced not to work. The thought crossed my mind that this might not be a publishable result. What I had done, of course, and I think this is what you might expect to do yourselves if you were engaged in a similar exercise, was that I'd taken the simplest distribution of skills that seemed reasonable.
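The envelope condition described in words above can be written out in the standard notation (the symbols below are assumed for illustration, not taken from the transcript): an individual of type theta chooses an allocation a, for instance a consumption-earnings pair, to maximise utility, and the maximised utility V inherits a simple derivative.

```latex
% Type theta chooses an allocation a to maximise u(a, theta);
% V(theta) is the maximised ("indirect") utility of type theta.
V(\theta) \;=\; \max_{a}\, u(a,\theta),
\qquad
V'(\theta) \;=\; \frac{\partial u}{\partial \theta}\bigl(a^{*}(\theta),\theta\bigr).
\]
\[
% The total derivative of V picks up only the direct effect of theta,
% because a^*(\theta) is already chosen optimally (the envelope step).
```

In the income tax application, theta is the wage or ability and the condition becomes a differential equation in the type, which is what turns the incentive compatibility constraints into something tractable.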
One in which the mean of the logarithm of ability was equal to the standard deviation of the logarithm of ability. If inequality is rising, one day that might be the right answer. When the four of us were deciding how we would organise this panel, we decided that I would talk about Nash implementation. And that's what I'm going to do, but I'd just like to say a word first about how I got into mechanism design, actually into economics in the first place. It was by accident. I was studying math in college, which I enjoyed, but for some reason I wandered into an economics course, which was taught by Kenneth Arrow. I didn't know who Arrow was at the time; I certainly found out later. It so happened that in this course he talked both about Vickrey and about Hurwicz. So I saw the second price auction and I saw Hurwicz's conceptualization of the mechanism design problem almost simultaneously, and I thought this was wonderful. And I changed direction, and I ended up doing a PhD with Ken Arrow and working on mechanism design myself. So, let me turn to Nash implementation. I've prepared some slides, because I wanted to show you an example. Martin mentioned that one of the beautiful properties of the second price auction is that, in that auction, it's a dominant strategy to reveal your true valuation for the good being sold. And that means that the second price auction implements an efficient allocation, because the buyer who has the highest valuation will bid his valuation and therefore actually be allocated the good. So this is a dominant strategy implementation of the efficient social choice function, if you like. Well, unfortunately there are many circumstances in which dominant strategy implementation is not possible, and let me give you such an example. Let's look at a society. It's a very small society, which consists of two people: Alice and Bob.
Alice and Bob are consumers of energy, and there's an energy authority whose job it is to figure out what kind of energy Alice and Bob are going to consume. There are four choices: gas, oil, nuclear power and coal. And the energy authority has to pick just one of these; it's too expensive to have more than one. Now, what the energy authority would like to do is to choose an energy source which accords with Alice and Bob's preferences, but Alice and Bob may have different preferences, and furthermore, the energy authority may not know what those preferences are. Let's imagine that there are two possibilities, two states of the world. In state 1, Alice likes gas the most, and then oil, and then coal, and then nuclear power. And similarly for Bob: in state 1 and in state 2 they have the preference rankings that I've indicated on the slide. That would suggest that in state 1, if the energy authority knew that state 1 was the actual state, it would choose oil; that looks like a pretty good compromise between what Alice and Bob want. Whereas in state 2, gas looks like a pretty good compromise. But the problem, as I said, is that the authority doesn't know the state. Now, it would be nice if the authority could just go to Alice and Bob and ask them which state is the actual state. But the problem is that both Alice and Bob have an incentive to misrepresent, and that's pretty easy to see. Notice that Alice, in both state 1 and state 2, prefers gas to oil. So Alice has an incentive to try to make the authority think that state 2 is the actual state, because then the authority will presumably choose gas. So she'll always say state 2; and you can check that Bob always prefers oil to gas, so he always has the incentive to say state 1. The poor energy authority will have no idea what the actual state is: Bob says state 1, Alice says state 2. So dominant strategy implementability is not possible, but there's still something we can do, and this 2x2 matrix is the solution.
That's a game, or a mechanism, in which Alice chooses rows and Bob chooses columns, and the outcome, the energy source that's actually adopted, is the intersection of their two choices. So if Alice chooses the bottom row and Bob chooses the left column, the outcome will be nuclear power. Now, I'd like to show you that this little mechanism actually implements the energy authority's goal, which is to get oil in state 1 and gas in state 2. And why is that? Well, suppose that state 1 is the actual state. Now, Alice and Bob know their preferences, so they know which state is the actual state. Look at it from Alice's point of view: if she anticipates that Bob is going to choose the left column, then she's going to want to choose the top row. Why? Because in state 1 she prefers oil to nuclear power, and if she chooses the top row she'll get oil; if she chooses the bottom row she'll get nuclear power. So she'll choose the top row. The anticipation that Bob will choose the left column is well-founded, because actually Bob will want to choose the left column regardless of what Alice does. If Alice chooses the top row, Bob will go left, because he prefers oil to coal; and if he thinks that Alice is going to choose the bottom row, he'll go left, because he prefers nuclear power to gas. So, in other words, Alice going top and Bob going left is a Nash equilibrium, leading to the optimal outcome. In fact, you can show, it's not hard to show, that it's the unique Nash equilibrium of this game. And, I won't go through the argument, but you can also check that in state 2, Alice choosing the bottom row and Bob choosing the right column is the unique Nash equilibrium, and that leads to gas, which is the social optimum in state 2. So, in other words: this mechanism implements the designer's goal. Now, this is just an example.
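The equilibrium argument above can be checked mechanically. The outcome matrix below is the one described in the talk (top-left oil, bottom-left nuclear, top-right coal, bottom-right gas), but the transcript only states a handful of pairwise comparisons, so the complete ordinal rankings used here are hypothetical fill-ins chosen to be consistent with everything that is said; with them, a brute-force search confirms that oil is the unique pure Nash equilibrium outcome in state 1 and gas in state 2.

```python
from itertools import product

# The 2x2 mechanism from the slide: Alice picks a row, Bob picks a column.
mechanism = {
    ('top', 'left'): 'oil',        ('top', 'right'): 'coal',
    ('bottom', 'left'): 'nuclear', ('bottom', 'right'): 'gas',
}

# Full ordinal rankings, best first. Only some comparisons appear in the
# transcript; the remaining orderings are hypothetical but consistent with it.
prefs = {
    'state1': {'alice': ['gas', 'oil', 'coal', 'nuclear'],
               'bob':   ['oil', 'coal', 'nuclear', 'gas']},
    'state2': {'alice': ['gas', 'nuclear', 'coal', 'oil'],
               'bob':   ['oil', 'gas', 'nuclear', 'coal']},
}

def nash_outcomes(state):
    """Outcomes of the pure-strategy Nash equilibria of the 2x2 game."""
    rank = {who: {o: i for i, o in enumerate(prefs[state][who])}
            for who in ('alice', 'bob')}
    equilibria = []
    for row, col in product(('top', 'bottom'), ('left', 'right')):
        here = mechanism[(row, col)]
        alice_dev = mechanism[('bottom' if row == 'top' else 'top', col)]
        bob_dev = mechanism[(row, 'right' if col == 'left' else 'left')]
        # Lower rank index = more preferred; equilibrium means neither player
        # gains from a unilateral deviation.
        if (rank['alice'][here] <= rank['alice'][alice_dev]
                and rank['bob'][here] <= rank['bob'][bob_dev]):
            equilibria.append(here)
    return equilibria

print(nash_outcomes('state1'))  # ['oil']  (the state-1 optimum)
print(nash_outcomes('state2'))  # ['gas']  (the state-2 optimum)
```

The same enumeration also shows why the example is delicate: change one of the filled-in rankings and a second equilibrium can appear, breaking uniqueness.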
You might wonder, as I did: if you didn't already know that this mechanism worked, how could you find such a mechanism, other than by experimenting and trying lots of mechanisms out? At the time I came into the subject, there were some brilliant examples of mechanisms which implemented important social choice functions. David Schmeidler had a beautiful implementation of the Walrasian social choice function; that's the social choice function which, given people's preferences and producers' production functions, picks out the competitive equilibrium outcome, and his mechanism implements it in an incentive-compatible way. And Leo Hurwicz had done the same thing for the Lindahl correspondence. But I was wondering: could we look at the question more generally? So here is a general statement of the problem. The social planner's goal, the social choice function to be implemented, is a function from states of the world, which the designer doesn't know, to outcomes. In the little example that I showed you, the possible outcomes were the different choices of energy. And a mechanism is just a game where each player has a strategy set, or a message space, and their choices of strategies or messages lead to some outcome. And a Nash equilibrium of this mechanism, as in the example, is just a configuration of strategies such that each player is maximising his or her payoff given what the others are doing; no unilateral deviation pays. We say that the mechanism implements the social choice function if the set of Nash equilibrium outcomes coincides with the set of optimal outcomes for every possible state. So that's the general problem. That problem, by the way, was stated by Leo Hurwicz. Leo looked at important examples, like Lindahl, but didn't make an attempt at the general question. I went about it by trying out lots and lots of examples over an embarrassingly long period of time.
And at the end of the story, I discovered that the key thing that implementable social choice functions had in common was that they shared a monotonicity property. And this monotonicity property says the following. Suppose that in some state, outcome A is optimal. And now suppose we change the state: we change people's preferences in such a way that outcome A doesn't fall in anybody's preference ordering vis-à-vis any other outcome B. So in state theta, A is optimal; in state theta prime, A doesn't fall vis-à-vis any other outcome B in anybody's preference ordering. Then monotonicity insists that A should also be optimal in state theta prime. And this turns out to be the key to Nash implementability. You can satisfy that equation up at the top only if monotonicity is satisfied; and with a couple of other conditions, relatively weak at least in many circumstances, you can also find a mechanism, and there's an explicit algorithm for finding one, which will satisfy the equation at the top. So that was the story with Nash implementability. Since then, the monotonicity idea has been extended to other equilibrium concepts: subgame perfect equilibrium, ex post equilibrium, trembling-hand perfect equilibrium for game theory aficionados, and also, notably, to Bayesian equilibrium, which I believe Roger will be talking about next. So, thank you. Do you want to say anything about when the brilliant insight on monotonicity came, relative to your thesis being submitted? ... To suddenly see the whole thing clearly, and I thought we should all know that. And then suddenly, what nobody saw before becomes clear. I get to talk about the Bayesian side of the mechanism design subject, as it was honoured in 2007. I should say, Leo really worked principally on the...
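The monotonicity property stated in words above has a standard formal statement, which can be written as follows (the notation is assumed for illustration: f is the social choice function, theta and theta prime are states, and u_i(., theta) is agent i's utility in state theta).

```latex
% Maskin monotonicity: if a is optimal in state theta, and a does not fall
% relative to any outcome b in any agent's ranking when the state moves to
% theta', then a must remain optimal in theta'.
\text{If } f(\theta)=a
\text{ and, for every agent } i \text{ and every outcome } b,
\]
\[
u_i(a,\theta) \;\ge\; u_i(b,\theta)
\;\Longrightarrow\;
u_i(a,\theta') \;\ge\; u_i(b,\theta'),
\]
\[
\text{then } f(\theta')=a.
```

In the energy example, one can check that the rule "oil in state 1, gas in state 2" satisfies this condition under the slide's preferences, which is exactly why a Nash-implementing mechanism exists for it.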
Eric was talking about an example of Nash implementability, where the normal assumption, in what you just heard, was that the individuals in the economy or the society share information, but the coordinator who's trying to help them doesn't know it. The Bayesian approach that I'll talk about now is where the individuals themselves have different information, and the coordinator doesn't know anything either, but is quite devious in communicating with them, or can be potentially devious in approaching them separately. And Leo Hurwicz worked principally on the Nash side; I worked principally on this Bayesian side; and Eric's worked on both. So I get to do this one, to talk about the history... And it's such a pleasure to be able to talk about the history of these ideas. As Martin said, we had precursors. Both Eric and I, look at us, we were the same generation; we were following Leo Hurwicz. I want to also mention that, like Eric, in graduate school I was studying Leo Hurwicz's writings very carefully. I was at the same time, not very far away, not many blocks away, studying also John Harsanyi's writings, and I'll try to emphasise what's important about that. Leo Hurwicz clearly, as Martin said, was influenced by Hayek, who, in his classic 1945 article, noted that the theoretical debates about the superiority of capitalism or socialism didn't seem to be going anywhere, because whatever was said about prices in favour of the free market system, the socialists could reply that socialist planners could just talk about prices also. And that's where Hayek realised: whatever's going on, it's about communication. Information is decentralised in the economy, and the coordination of resources and production must use information from everyone in the economy, information that no one person has.
And perhaps it would be very difficult, certainly before the age of modern computers, for the central ministry to collect all that information, and perhaps there's something more fundamental. By the way, Hayek then went on to say that he thought that among economists, the worst at understanding that it's about communication were the mathematical economists of his day. Which was not a statement about mathematics, but simply about the state of the art in a particular part of the profession. Leo Hurwicz understood that mathematics is very flexible, and if there's something fundamental we are missing about how to think about something, we ought to be looking for a mathematical way to do it, whatever Hayek might think. The other important precursor, as Jim mentioned, was Paul Samuelson's 1954 paper, which observed, in brief, that when you're trying to find out how much of a public good we should produce and you go around asking people how much they want it: well, if it's free, they might exaggerate how much they want it, and if they have to pay the amount that they say, they might understate how much they want it. There seems to be an incentive problem, and he remarked that it's just not going to work; Lindahl and other theories are going to give people the wrong incentives. And that of course was the idea that both you and Hurwicz were clearly inspired by. Leo Hurwicz's 1972 paper, in brief, looked at private good allocations and realised: everybody thought public goods were hard, but private goods, those we know how to do markets for. He said "no, actually, the private good has the same incentive problem, or can have." Certainly, if you alone have the information about the cost structure in your industry, you might have an incentive to misrepresent it, so as to achieve the monopoly price.
So the incentive problem was present in private good markets as well, and that was Leo's brilliant insight: the problem was not just one of communication, but of incentives to communicate, and his phrase "incentive compatibility" crystallised things. And then there was the problem of what incentive compatibility actually meant, and it took a few years, in the 1970's, for Leo and others, like Eric and me and many others, to realise that some of us were interpreting it one way and some the other. The Bayesian approach. Let me just back up and say, on the history, I want to talk about Harsanyi. Actually, it occurs to me, about the Nobel prize: the title "mechanism design theory" is the English translation of what the Swedish Academy of Sciences accused us of participating in. And it's a term, I use that kind of terminology in my papers, as did Leo, and Eric, I know you do. But I had a paper where I boldly said this was about game design; it's about nothing else but designing a game. It occurs to me that the same terminology was more appropriately applied to what Al Roth and Lloyd Shapley won their Nobel prize for: game design, market design. I think what we're talking about is the theory of the design of communication systems, or coordination systems. I like to think of it as a theory of optimal mediation, where a mediator is going to go back and forth between people and enlarge their strategic possibilities, because what you can say to the mediator creates some strategic options: what to say, and the mediator will then react, and perhaps carry some information back to others; perhaps some social decisions might be controlled by the mediator, or perhaps individuals in society have some inalienable effort variables they control. How should we think about this? I would like to say, I think, the part of the subject that I embrace...
I think that it's really about understanding efficiency. Go back to Pareto, someone who didn't win a Nobel prize, because the prizes didn't exist when he lived. Understanding what Pareto efficiency means, when we start talking about transactions among people who have different information and who have difficulty trusting each other. But of course you were just talking about a different part, so now I know why that phrase isn't on your website to describe what mechanism design might mean; but certainly for the Bayesians. I want to emphasise the problem as I saw it as being: "what do we mean by efficiency?" What should economists mean by efficiency of a market or a society or an organisation where some transactions are occurring among people who have different information and have difficulty trusting each other? I want to include in that both having difficulty trusting each other's testimony about what they privately know, and perhaps having difficulty trusting each other about what they promise to do, if I can't see what you're doing, if I can't see how hard you're working and you can't see how hard I'm working. Harsanyi wrote a classic paper in 1967; it's a long three-part paper, and he's gradually exploring what we can mean by games where people, at the time the game is played, actually have different information. And that clearly has to be a general model of transactions among people who have different information. This was long before the signalling and information economics prizes of 2001, long before most economists were talking about people having different information. Vickrey in 1961 is the earliest paper I can think of that seriously, analytically deals with transactions where people have different information. Almost everything in formal economic analysis assumes everybody has the same information, certainly almost everything before 1961. And in the 1970's it becomes something we all want to understand better, and many of us come to it almost independently.
To me, Harsanyi's attempt to formulate a general model of games with incomplete information went as follows. He goes through a number of steps where he says: you could give a person something that they know that other people do not know, but then you have to ask what follows. Player 1 knows something that player 2 does not know, but you have to ask what player 2 believes about what player 1 knows. And what does player 1 believe about what player 2 believes, and what does player 2 believe about what player 1 believes player 2 believes, and so on. The model got more and more complicated, and then he cut the Gordian knot and said: here is the general model. We will have a set of players, which we assume is common knowledge: everyone knows it, everyone knows that everyone knows it, and so on. For each player there is a set of choices that is common knowledge and a set of types that is common knowledge, and everyone's payoff may depend on his or her own and everyone else's choices and types. The type is a random variable that describes the private information of an individual: a player's type is whatever that player knows, at the time the game is played, that is not common knowledge among everyone in the game. Then we have to put a probability structure on it. The simplest way is to put a joint probability distribution on the types of all the players, and that was Harsanyi's general model. To me, William Vickrey's paper does a brilliant analysis of what we later called a Bayesian game, even before Harsanyi. I was not in the profession in the interim, but I suspect that between 1961 and when people started reading Harsanyi, a lot of people would have looked at Vickrey's paper and said: "It's about auctions. I haven't been to an auction lately, so I don't really care very much about it, but perhaps you have."
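Harsanyi's construction, as just described, can be written down quite concretely. The following is a minimal sketch (all names and numbers here are illustrative, not taken from the discussion): common-knowledge type sets and action sets, a joint prior over type profiles, and payoffs that may depend on everyone's types and actions.

```python
# A minimal sketch of Harsanyi's model of a Bayesian game.
# Everything below is a made-up toy example for illustration.
players = ["1", "2"]
types = {"1": ["high", "low"], "2": ["x"]}   # player 2 has only one type
actions = {"1": ["a", "b"], "2": ["a", "b"]}

# Joint prior on type profiles (order: player 1's type, player 2's type).
prior = {("high", "x"): 0.6, ("low", "x"): 0.4}

def payoff_1(type_profile, action_profile):
    # Player 1's payoff may depend on all types and all actions.
    t1, _ = type_profile
    a1, a2 = action_profile
    bonus = 2 if t1 == "high" else 0
    return bonus + (1 if a1 == a2 else 0)

# A strategy maps a player's type to an action.
def strat_1(t1):
    return "a" if t1 == "high" else "b"

# Ex-ante expected payoff for player 1, when player 2 always plays "a".
expected = sum(p * payoff_1(tp, (strat_1(tp[0]), "a"))
               for tp, p in prior.items())
```

The point of the sketch is only structural: once types, actions, a common prior, and type-dependent payoffs are in place, any of the applications mentioned later, from auctions to lemons markets, is an instance of the same object.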
Clearly Vickrey understood it was about information and planning, and after people had studied Harsanyi's general model, auctions, especially with the techniques that Vickrey exhibited so early, were a beautiful example of a general Bayesian game in which prices are formed based on information. The information of the bidders affects their bids, and as a result of the game there will be a price. So here are price and allocation of resources; who wins the object and gets the allocation depends on the rules of the game, and Vickrey began studying those rules. He found the first of many revenue equivalence results: in his case, that the first-price and the second-price auctions, which were so interesting, happened to give the seller the same expected revenue. That was interesting. When you look at it from a general Harsanyi viewpoint, suddenly you are not surprised to see, in George Akerlof's early paper on the market for lemons, that the type of the seller is information about the quality. I am selling my car to you; I am player one, you are player two, who is going to buy my old car, perhaps, if we can agree on a price. But I have private information about how many times it has broken down in the last year. I know something about its quality, so my type is whether I have a high-quality car or a low-quality car, and suddenly, if you buy the car from me, your payoff depends on my type. That does not happen in the private-values model, which is what Vickrey looked at in his classic paper. The private-values model is good, but one can also have common-value auctions, and, putting all those together, winner's-curse effects. Those would not exist in a private-values model, where everyone knows what the thing they are buying is worth to them. When there is uncertainty about the quality of the object being sold, then suddenly people have to worry about what other people knew when they made the bids that they made.
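Vickrey's revenue equivalence for the first-price and second-price auctions can be checked numerically. Here is a Monte Carlo sketch, an illustration under textbook assumptions rather than Vickrey's own argument: with n bidders whose values are i.i.d. uniform on [0, 1], the symmetric first-price equilibrium bid is (n - 1)/n times the value, and both formats give the seller the same expected revenue, 1/3 when n = 2.

```python
import random

# Monte Carlo check of revenue equivalence for i.i.d. uniform[0, 1] values.
random.seed(0)
n, trials = 2, 200_000

first_total = second_total = 0.0
for _ in range(trials):
    values = [random.random() for _ in range(n)]
    # First-price auction: the winner pays their own equilibrium bid,
    # which is (n - 1) / n times their value in the symmetric equilibrium.
    first_total += (n - 1) / n * max(values)
    # Second-price auction: truthful bids, winner pays the second-highest.
    second_total += sorted(values)[-2]

first_rev = first_total / trials    # both should be near 1/3 for n = 2
second_rev = second_total / trials
```

Changing n shows the equivalence is not special to two bidders; what matters, as Myerson's later general result makes clear, is that both formats allocate to the highest value and give the lowest type zero surplus.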
So Harsanyi's general model opened the door to assimilating that and many other things into a common framework: to seeing all these applications, signalling, the insurance adverse-selection models of the Rothschild-Stiglitz classic paper, all of them as cases that could be modelled, if you had finitely many people in the market, as Bayesian games. What Leo Hurwicz then taught us was to think about these as design problems. Under some circumstances, and I think Eric's early paper with Dasgupta and Hammond was one of several discoveries of the revelation principle, and I was independently working on it, they understood that for different solution concepts the revelation principle sometimes applied and sometimes did not. The revelation principle says: for any Bayesian equilibrium you could imagine, of any communication process that a mediator might set up when intervening in a Harsanyi-Bayesian game, there exists an equivalent honest equilibrium. The mediator confidentially asks everyone "what is your type?" and then, if people have private decisions to make about effort, tells them confidentially "this is what you should do" after collecting all of this private information. If there are some general social decisions that the mediator controls, then the mediator makes those choices. The mediator can simulate any other equilibrium of any communication system by using only plans, mechanisms or decision rules that make honesty and obedience an equilibrium: if everyone thinks everyone else will be honest, no one has an incentive to lie. And that is our Bayesian interpretation of Leo Hurwicz's initial idea of incentive compatibility. Why is that important? What is really going on, I think, is that the revelation principle and Hurwicz's definition of incentive compatibility suddenly made tractable the question of finding the best mechanism for whatever social welfare function you might write down.
The social welfare function could simply be "give the most to me": I want the highest expected utility and everyone else just has to live with it. Or we could take an average of our respective payoffs. Whatever social welfare function, greedy or asymmetric or symmetric, we might choose to write down, the best you can do for that social welfare function can be achieved by considering only these honest and obedient plans, where nobody has any incentive to lie to the mediator when providing information or to disobey the mediator's recommendations about effort. You maximise subject to these incentive constraints: the constraints, as I just said, that no one should want to deviate by lying or disobeying. Those incentive constraints are, for a wide class of elementary examples, easy to write down. So, I would argue, suddenly what we had in economics was a formal mathematical structure in which we could see that part of the economic problem was incentive constraints: incentives to get people to provide information that they know privately, and the problem of giving people incentives to exert efforts that no one else can directly observe. In all previous generations, the core of the economic problem was resource constraints. Our means of satisfying the wants of humanity are limited, because we only have so much arable land, only so much clean water, only so much coal that we can dig out, only so many trained, skilled physicians in the world, and so on. Those resource constraints are associated with prices; but there are also the incentive constraints, because we have different information. So in some sense I want to argue that with incentive compatibility, with Leo's idea of incentive compatibility, especially in its Bayesian context where it became rather simple to apply, we were enriching our understanding of the economic problem to include incentive constraints.
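The "honesty is an equilibrium" property behind incentive compatibility is easy to verify in a small example. The sketch below (with made-up discrete values, not from the discussion) checks that the second-price auction, viewed as a direct mechanism, makes truthful reporting a best response for every type against a truthful opponent.

```python
# Check incentive compatibility of the second-price auction as a
# direct mechanism, for a toy set of possible values with a uniform prior.
values = [0, 1, 2, 3]

def payoff(my_value, my_bid, other_bid):
    # Ties are broken in our favour for simplicity; winner pays the
    # other player's bid, losers get zero.
    if my_bid >= other_bid:
        return my_value - other_bid
    return 0.0

def expected_payoff(my_value, my_bid):
    # Opponent reports truthfully; average over its uniform type.
    return sum(payoff(my_value, my_bid, b) for b in values) / len(values)

# Honesty is an equilibrium: no type gains by misreporting.
truthful_is_best = all(
    expected_payoff(v, v) >= expected_payoff(v, b)
    for v in values for b in values
)
```

The same kind of loop over types and possible misreports is exactly what writing down "the incentive constraints" amounts to in these elementary examples: one inequality per type per deviation.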
The other side of it was... Let me just say, if I could put an addendum into the 2007 Nobel citation, I wish I could add a reference to my paper with Bengt Holmström in 1983, where I think we were trying to pull it all together. The first half of that paper specifically addresses the following question. The first idea is Hurwicz's core suggestion, his number-one suggestion: we are not going to talk about a specific allocation, but rather about how to say that a market is efficiently designed. What we want to talk about is how the allocation of resources depends on people's information, what Eric called a social choice function, or you could call it an allocation rule. We can identify that with a class of mechanisms that implement it, and then we say: it is not the allocation that is efficient or inefficient, it is the way allocations depend on information, the function from information space to allocation space, that is or is not efficient. And its efficiency, Bengt and I argued in that paper, should be subject to incentive constraints; and we argued that the right time to evaluate people's welfare, under most interpretations of these models, is when they already have their private information but do not yet know each other's, which is what we call the interim stage. The last thing I want to say is that there were technical results, like the revenue equivalence result in auction theory that I mentioned; one of several results in my paper with Mark Satterthwaite is an impossibility theorem, with an easy integral criterion for determining what kinds of bilateral trades were or were not feasible. But what I think is really more important, really fundamental, was the idea of the informational rent, and this gets back to what I think Jim was talking about: because I have private information, certain of my types cannot be denied a rent.
If you are going to give my worst type, my least skilled type, an incentive to participate, not exploit him or give him an incentive to run off into the wilderness, then my more skilled types are going to have to do better. And if you are going to treat my real type well, then my more skilled types are going to have to be treated even better. So I think the big ideas this was all about are these: it is the mechanism, not the allocation, that is efficient or not; incentive constraints are real and are part of the economic problem; and people's private information may give them rents that society cannot expropriate.

Let me give one reaction, or make one comment, as a user of these results. This is something that is common to all three presentations, except that Roger, you did not actually spell it out. There is a paper in the Journal of Economic Theory by yourself and Mark Satterthwaite which has a version of this: the impossibility result. Many people in this room probably know what is called the Coase theorem: we can deal with any externality whatsoever, if only we have sufficient bargaining; all the problems we have are problems of assigning property rights and making sure that whatever the bargaining game or procedure is, it is sophisticated enough to actually handle the problem. Now, the Myerson–Satterthwaite theorem says that if you have mutually asymmetric information, and if it is not commonly known whether it is worthwhile to have a trade or not, then it may simply not be possible to design a bargaining game that gets you an efficient outcome. Meaning: the Coase theorem presumes that information asymmetry is not a problem. Why am I so impressed by that? Because it is a statement about games that we do not even know.
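The force of the Myerson–Satterthwaite result can be seen numerically in the standard textbook case (a sketch with made-up parameters): seller cost c and buyer value v are i.i.d. uniform on [0, 1]. A posted price is incentive-compatible, individually rational and budget-balanced, yet even the best posted price realizes fewer gains from trade than the first best, E[max(v - c, 0)] = 1/6. The posted price is of course only one mechanism; the theorem's content is that no mechanism with those properties can close the gap entirely.

```python
import random

# Bilateral trade: seller cost c and buyer value v, i.i.d. uniform[0, 1].
random.seed(1)
trials = 50_000
draws = [(random.random(), random.random()) for _ in range(trials)]

# First-best gains from trade: trade whenever v > c.
first_best = sum(max(v - c, 0.0) for c, v in draws) / trials

def posted_price_gains(p):
    # Trade occurs only if the seller accepts (c <= p) and the buyer
    # accepts (v >= p); the realized surplus is then v - c.
    return sum(v - c for c, v in draws if c <= p <= v) / trials

# Search a grid of posted prices; the best (near p = 0.5) yields about 1/8.
best_posted = max(posted_price_gains(p / 100) for p in range(5, 100, 5))
shortfall = first_best - best_posted   # strictly positive
```

The persistent shortfall, roughly 1/6 minus 1/8 here, is a numerical shadow of the theorem: some mutually beneficial trades must be forgone to keep both sides' reports honest.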
The statement is: you cannot design, and I am not underestimating your talents, you cannot design a game that will implement an efficient allocation. And in the "only if" part of Eric's presentation you also had an impossibility theorem: if you have a social choice function that does not satisfy monotonicity, then no matter what you do, you cannot design a game that provides Nash implementation of that social choice function. And of course Jim's results on income taxation also contain impossibility theorems, namely that you cannot have incentive compatibility and first-best redistribution at the same time. The really striking feature, and this is where I think mechanism design is actually a bit more than just applied game theory, is that you get results about arbitrary games, results that try to capture the essence of the incentive constraints, as Roger called them, that must be taken into account in thinking about what a society, a group of people, should do to deal with coordination problems. But I do not want to prolong the discussion up here any longer, and would now like to ask for questions, or comments, from the floor. Can you please go to the microphone?

Who, do the panel think, has the right knowledge and incentives to monitor banks' risk-taking and discipline their behaviour? Is it regulators, is it shareholders, or is it creditors?

Speaking with my researcher's hat on, rather than the policy hat, I would say that you have just defined a very interesting research programme. And I would add the concern to make sure that the design problems we are thinking of are dealt with in a reasonably robust way, so that if the nature of the social interactions changes between today and next week, whatever we design today will not become obsolete. But let me also go back to Peter Diamond's methodological comment this morning. I am saying this now as a person who is both doing theory and working on policy applications.
Each theoretical model, each version of the auction problem, each version of a bargaining problem, each model of competition, is, as I like to call it, one animal in our zoo of models. When I have an applied problem, the first thing I have to decide is which member of the zoo, or which collection of members, is relevant for thinking about this particular problem at hand. Now, for the problem you suggested, I would say that concerns about excessive risk-taking as a function of borrowing might be one of the animals in the zoo that we should be consulting. There are others as well, but I do not actually believe in theory being descriptive. Theory gives us a set of modes of thinking about things, and on that I cannot do better than refer you to the Marshall quotes that Peter Diamond put up this morning. There is also work on incentives for management in general that can be applied to understanding different forms of corporate finance, in terms of optimising incentives for a manager who has inalienable control over something. Obviously capital requirements are an attempt to give the right incentives to owners, and the concern behind no-bail-out policies is to give creditors better incentives to monitor. I argued at breakfast this morning that if you write down a model that assumes a regulator can do something with private information, using the kind of techniques I just talked about with confidential communication, and then interpret the mediator of my model as one of your regulators, you might be in great danger, and I might be in great danger if I follow my own prescription, because there is enough money in the financial system to corrupt any small number of public officials. So in some sense, I would argue, there is another incentive constraint that I do not necessarily know how to model easily.
There is something about transparency and democracy that is needed to ultimately make credible regulatory commitments to do one thing or another as a function of information about the banking system. There may be moral hazard at the regulator level, and if you assume that away, you might be missing something important.

In my talk I stopped before going on to moral hazard, which I think is interesting. What we think we know about moral hazard is that, if some optimum with moral hazard were to be implemented, it would have to be done by incredibly complex contracts with exclusivity clauses, saying you cannot take out another one. But we observe people creating securitised contracts which are then traded, and which are clearly not of the nature of the sort of contract that AIG would have been writing on these things. So the answer to the question as posed is not one that a theorist could come up with, in principle at least. It looks as though there should be a sort of self-policing solution to the existence of moral hazard, but the idea that any of us, in particular managers or mortgage lenders, are going to have their whole activity governed by some contract written with all possible eventualities in it just does not make sense. It is not clear that one can go the kind of route your question suggested.

The fact that we are not going to write these theoretically optimal contracts means that in the end we have to rely on regulation: the regulator has to internalise the externality that arises because the contracts are not optimal. The old insight was that there is a trade-off between insurance and moral hazard, between risk sharing and moral hazard: when I share risks, I have less incentive to work hard. Perhaps before 2008 we should have been worried about seeing such creativity in finding new ways to share mortgage risks so broadly and thinly. It is an incentive-constraint problem.
We are still waiting for the real model; we are just talking.

Although I was trained as an economist a long time ago, it seems to me that moral hazard is being researched in one aspect only; through the financial crisis, perhaps, it should have become evident that there is another aspect of moral hazard: boards delegating, or abandoning, decision-making to executive management, not because of a lack of information, but because of a lack of competence. So one aspect of moral hazard is when you are bound to make a decision and you pass that opportunity to another agent, simply because you are not comfortable enough with making that decision. I wonder whether you could comment on that.

The problem you raise is one that was written about with great effect, many years ago, by Berle and Means, who pointed out that the modern publicly held corporation has a moral hazard problem: the advantage of a widely held corporation is that you can raise lots of capital, and the disadvantage is that it is very difficult to control management if there are lots of little shareholders.

Since you gave very interesting and instructive discussions of the intellectual history of incentive theory, mechanism design, social choice theory and so on, I wonder if you could say a little about what you think still needs to be done in these areas. What are the important things that are not yet known, that would be very useful for people to think about?

When someone has private information and this person is guiding the mechanism design, that is a very subtle problem that Eric and I have both written about. My views have not achieved broad acceptance, and the last time I presented them I confessed that there were some theorems that just seemed so elegant there must be something to them, but I did not exactly understand why either. That topic has come back to prominence recently with the work of Mylovanov. Let me mention areas people are currently working on, where there is a long way to go, that I happen to find exciting.
One is what is sometimes called robust mechanism design. One feature of mechanism design as applied to specific models, specific settings, is that the optimal mechanism can often be rather complicated, and maybe rather sensitive to the particular details of that setting. That means that unless we know the setting with great precision, we are not going to get the mechanism quite right. This is a problem that Bob Wilson pointed out many years ago; it is now called the Wilson critique, and it is one that people have embraced recently. The idea is to design mechanisms which may not be optimal in any given setting, but which perform well in a large variety of settings. For example, there is a series of papers by Dirk Bergemann and Stephen Morris, some of which they have collected together in a volume on robust mechanism design. There is lots more to be done along those lines. Then there is the question of limited, or bounded, rationality. Once you get into Nash implementation or Bayesian implementation, where what you do is highly dependent on what you expect others to do, there is the question of whether real-life agents can actually perform the optimisations required, particularly if the game is reasonably complicated. And so it is important to consider to what extent a mechanism will continue to work, or at least perform reasonably well, when agents are making mistakes or misperceiving what other agents are doing. The bounded-rationality programme in mechanism design is an interesting one too. It is very hard, because our modelling attempts in the area of bounded rationality and behavioural economics are diffuse: for each possible behavioural problem we have a separate little model. So, at the moment, if you wanted to build a mechanism that was robust to all of those possible behavioural difficulties, you would have to employ a dozen different behavioural theories or thereabouts, which is clearly unmanageable.
I think some progress in behavioural economics itself may be necessary before the critical steps in mechanism design can be made, but that is clearly an important programme as well.

People may have noticed that my inclination is to go for, I had better not say realistic situations, but more concrete examples. I am going to try to show a particular instance which has always struck me as a real puzzle I would love to get properly sorted out, and I am not sure what the nature of the sorting out would be. Let us go the next step beyond the income tax theory that I told you all about earlier on. There are different directions you can go, and they all seem to lead to the bounded-rationality problem. I will mention one of them. Instead of dealing with asymmetric information, that is to say, assuming people have private information, so that whatever there is to be known, somebody knows it, with moral hazard we treat the people in the economy as not even knowing their own abilities. That does not initially sound so reasonable, but if you think that many of the real decisions about labour supply are taken a long time in advance, like what subject you are going to do at university, then there is clearly quite a lot of uncertainty, which gives you moral hazard. That is not a really hard and fast distinction, but I think I have said enough to indicate what I mean. If you do the simple thing and take the same particular model that I used for income taxation, and do it with moral hazard instead of asymmetric information, then you find that you can get arbitrarily close to the first best; obviously I mean in the particular specification of the model. And you do this by having threats of very severe punishment if you happen to produce very little.
Well, credit where it is due: Peter Diamond more or less immediately pointed out that this means you are really exploiting the theory of rational behaviour under uncertainty in an unreasonable way. There is an extremely small probability that the individual would be hit very hard, and if that event does not happen, everyone gets the first-best consumption level; everybody gets the same. So people are supposed to do an expected-utility calculation of incredible precision. This is exactly the kind of situation where we can be quite sure that people would exhibit bounded rationality. Well, what does one do about that? I could give you other instances, and indeed I have played around with models of bounded rationality, hoping that they would throw some light on this. It is a fairly standard problem in a lot of these areas that the model quickly gets rather complicated, and I am a believer in simple models; I want to stop when it gets too complicated. But I think this gives you lots of opportunities for interesting work. My recommendation would be to look for a version of this that has a realistic feel to it.

We have been talking about mechanism design from the perspective of some welfare maximiser. Now, there is an important research programme in the economics of institutions and in contract theory, which is to explain institutions and contracts as, quote unquote, "optimal" solutions to incentive problems. The moment we are thinking of that, we are using mechanism design as a tool not of normative analysis, but of descriptive analysis, which of course raises fundamental issues of its own. One aspect I want to point to here is that if you do the mathematics properly, you typically get some crazy stuff: solutions to incentive problems and mechanism design problems are usually much more complicated than the arrangements we see in the real world. So the claim that the analysis is descriptive always requires an additional non-craziness assumption.
Let us assume we use this maximisation subject to incentive constraints and a non-craziness condition. Jensen and Meckling, for instance, used a combined risk-taking and effort-choice problem to explain the way in which a firm is funded; but if you think about the actual optimal incentive contract, it is much more complicated than what they have. Can we make the link between the implicit non-craziness assumptions made in the contract-theoretic and institution-theoretic literature and bounded rationality more explicit? I am not sure we have actually done that yet.

I want to close this panel with a story, which is motivated by a remark that Roger made about the trade-off between incentives and insurance. At some point in the mid-90s I was travelling in the US, and I was talking to people about the changes in housing finance that had taken place there. I was told very enthusiastically, including by some of our colleagues, about this great new device called securitisation. And my response was: I think this is of course wonderful; if you get a bank in Japan or a bank in Germany to share in some of the fundamental risks, such as the interest-rate risks associated with housing finance in the Midwest, that provides for a greater extent of risk sharing. But this bank in Japan does not really know whether the property in Iowa is good, whether the buyer of this property is a good risk or a bad risk. And at that point I received, not just once but a number of times, the answer: "You put lots of such mortgages together in a big package, and then, by the law of large numbers, the risk disappears." Which was the typical linguistic confusion between risk as deviation from the mean and risk as the probability of something bad happening. At some level, I think, we have seen moral hazard at work there.
At another level, I think, we also see the benefits of being very precise about the language we use when we write down incentive problems and the mechanism designs being chosen. And on that note I want to close this session and thank everybody, and in particular the panellists, for your contributions. I have learned a lot.

Panel Discussion (2014)

Strategic Behavior, Incentives, and Mechanism Design; Panelists Maskin, Mirrlees, Myerson

Abstract

Economics has always been concerned with incentives. For a long time, however, most formal analyses of incentive issues were limited to the behavior of people or corporations in markets where the institutional environment was given. Since the 1960s, our understanding of incentives and incentive problems has been revolutionized by research contributions that identified asymmetric information and strategic interdependence as root causes of incentive problems and provided fundamental insights about the scope and the limits for dealing with these problems by institution design. The importance of this research was recognized by the award of the Prize in 1996 to Sir James Mirrlees and William Vickrey for their “fundamental contributions to the economic theory of incentives under asymmetric information” and in 2007 to Leonid Hurwicz, Eric Maskin, and Roger Myerson “for having laid the foundations of mechanism design theory”. The panel will discuss the underlying ideas and their development.
